One of the main features of the Spring framework is that it allows applications to be very loosely coupled by allowing dependencies to be defined in XML configuration files. This allows changing dependencies without having to change a single line of code. In the jasperSpring-servlet.xml file, we define a dependency on the database datasource by declaring the bean with an id of datasource and setting it up as a property of the jasperController bean. The bean with the id of publicUrlMapping maps the report URL to our controller. The bean with the id of viewResolver is an instance of org.springframework.web.servlet.view.ResourceBundleViewResolver. Its purpose is to look up values in a resource bundle to determine what view to use. Its basename property defines the name of the property file containing the keys to look up. In this case, the property file must be named views.properties.

report.class=org.springframework.web.servlet.view.jasperreports.JasperReportsPdfView
report.url=reports/DbReportDS.jasper

Notice that the base name of the keys (report, in this case) must match the name of the controller property defined in the application context for SimpleUrlHandlerMapping. It is in this property file that we actually declare that JasperReports will be used to render the data. In this example, we are using the JasperReportsPdfView class to export to PDF. The Spring framework also supports exporting to CSV, HTML, and Excel. To export to one of these formats, the classes to use would be JasperReportsCsvView, JasperReportsHtmlView, and JasperReportsXlsView, respectively. All of these classes are in the org.springframework.web.servlet.view.jasperreports package. The report.url property defines where to find the compiled report template. In order for the JasperReportsPdfView class to find the compiled report template, it must be located in a directory matching the value of this property.
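As a sketch, the jasperSpring-servlet.xml wiring described above might look like the following. Only the bean ids (datasource, jasperController, publicUrlMapping, viewResolver) and the ResourceBundleViewResolver basename come from the text; the controller and datasource implementation classes and the mapped URL are illustrative assumptions:

```xml
<beans>
  <!-- database datasource; the implementation class here is an assumption -->
  <bean id="datasource" class="org.apache.commons.dbcp.BasicDataSource">
    <!-- driver, url, username, password properties go here -->
  </bean>

  <!-- controller bean receiving the datasource as a property -->
  <bean id="jasperController" class="net.ensode.jasperbook.spring.ReportController">
    <property name="datasource" ref="datasource"/>
  </bean>

  <!-- maps the report URL to the controller; the "report" key must match
       the base name of the keys in views.properties -->
  <bean id="publicUrlMapping"
        class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping">
    <property name="mappings">
      <props>
        <prop key="/report.pdf">jasperController</prop>
      </props>
    </property>
  </bean>

  <!-- resolves view names against views.properties -->
  <bean id="viewResolver"
        class="org.springframework.web.servlet.view.ResourceBundleViewResolver">
    <property name="basename" value="views"/>
  </bean>
</beans>
```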
The report template we will use for this example is the one discussed in the Database Reporting via a Datasource section of Chapter 4. Just as with most MVC frameworks, we never code our servlets directly when writing web applications using Spring MVC; instead, we write controller classes. In this example, our controller class implements the org.springframework.web.servlet.mvc.Controller interface. This interface defines a single method called handleRequest().

package net.ensode.jasperbook.spring;

import java.io.IOException;
http://what-when-how.com/Tutorial/topic-6654l7tl/JasperReports-for-Java-Developers-316.html
I am creating a cruise management system in C++ and I need to check if a phone number entered during registration is valid (valid in Australia), basically if it starts with 04 and is 10 numbers long. I think there was a function for checking the length of a string but I'm unable to recall it. Perhaps something like:

int phonenumber;
cin >> phonenumber;
if (phonenumber.length() == 10 && FirstTwoLetters(phonenumber, 04)) {
    // rest of registering code
}

First of all, I suggest you store the data as a string instead of an integer. In order to do that, you need to include the string library:

#include <string>

After that you can use:

std::string phonenumber;

That's more appropriate because an integer which starts with 0 is semantically equal to the same integer without the leading 0, e.g. 001234 == 1234. That's not true with strings. Once you have stored the information in a string variable, you check the structure this way:

if ((phonenumber.length() == 10) && (phonenumber.substr(0, 2) == "04")) {
    std::cout << "Valid number\n";
}

Finally, in order to make your application more robust, you need to check whether the string is composed of digits only. That's because the user could type:

04asdqwerty1

and the check would pass anyway. The simplest way is to write a function:

bool check_all_digit(const std::string& str)
{
    for (const auto& c : str) {
        if (c < '0' || c > '9') {
            return false;
        }
    }
    return true;
}

Note that this function uses C++11 syntax. The function looks at each character of the string and checks whether it is a decimal digit. So the complete check becomes:

if ((phonenumber.length() == 10) && (phonenumber.substr(0, 2) == "04") && check_all_digit(phonenumber)) {
    std::cout << "Valid number\n";
}
https://codedump.io/share/NNg4uxxXYLT2/1/how-would-i-check-if-a-phone-number-entered-is-a-valid-in-c
This post will show a very simple KeyVault demo application.

Background

Working with a customer, I needed to show them the simplest Azure KeyVault sample possible so they could easily understand what it does. There is a really good sample published to GitHub (Azure KeyVault .NET Sample Code), but it was still quite a bit verbose for someone seeking a simple Hello World type example. I threw this Console application together to show the bare minimum application. The scenario is an application that interacts with multiple Azure services such as Redis, DocumentDB, and Storage. Each service has its own key to access the service. We obviously need our application to access the keys, but we don't want developers to have access to the keys. We also want the ability to centrally update the keys on a regular basis. This is a perfect scenario for KeyVault because you can restrict access to keys and secrets. To show this, I have a storage account named "kirkedemo". Once the account is created, I can access its keys. I copied the key1 value for my storage account and will store that as a secret in KeyVault. The source code for this project is available at.

Create a Vault and Secret

I often show people how to start using the portal because it provides a nice visual way to get started. Of course you can do this using scripting and ARM templates, and you should do that for production to ensure consistent provisioning with no configuration drift. I created a vault and named it "demo". Once the vault is created, I click on Secrets in the portal and create a new secret using the storage account key. In order to secure access for my application, I need to create a service principal in Azure AD.

Register the Application in Azure AD

In the Azure AD blade, select App Registrations and click the Add button to register a new application. Give it a name, select "Native" as the application type, and provide a URL (doesn't have to be a real endpoint, just a URL).
Once created, click on your application, view Settings, and click Properties. Copy the Application ID value; this will be the clientId value used to authenticate to KeyVault. Now click Keys. Create a new key and click Save, then copy the value that was generated. The key you copied will be the clientSecret value used to authenticate to KeyVault.

Important step! Registering an application using the Azure portal doesn't (at the time of this writing) create a service principal. Open PowerShell and run the following commands, replacing the ApplicationId value with the value you just created.

Login-AzureRmAccount
New-AzureRmADServicePrincipal -ApplicationId d80cc6d4-6037-49c5-9c3b-8b304626f3ee

Create the Project

In Visual Studio, create a new Console application. Add the following NuGet packages:

- Microsoft.Azure.KeyVault
- Microsoft.IdentityModel.Clients.ActiveDirectory
- WindowsAzure.Storage

Add a reference to System.Configuration. Edit the app.config file and add the following keys along with the values that you created above.

<appSettings>
  <add key="VaultUrl" value="URL to your vault (ex:)" />
  <add key="storageAccountName" value="Storage account name (ex: kirkedemo)" />
  <add key="clientId" value="Application ID from Azure AD (ex: d80cc6d4-6037-49c5-9c3b-8b304626f3ee)" />
  <add key="clientSecret" value="Generated key from Azure AD (ex: Ova+O90sdfk8435908YKIKGF48395jhdfg=)" />
</appSettings>

Here is my configuration file for reference.

Show Me the Code

My favorite part. If you don't want to read and just want a copy, the source is available at.
using Microsoft.Azure.KeyVault;
using Microsoft.IdentityModel.Clients.ActiveDirectory;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Queue;
using System;
using System.Configuration;
using System.Threading.Tasks;

namespace mykeyvault
{
    class Program
    {
        static void Main(string[] args)
        {
            MainAsync(args).GetAwaiter().GetResult();
        }

        private static async Task MainAsync(string[] args)
        {
            //Get the storage key as a secret in KeyVault
            var storageKey = await GetStorageKey();
            string storageAccountName = ConfigurationManager.AppSettings["storageAccountName"];
            var creds = new StorageCredentials(storageAccountName, storageKey);
            var storageAccount = new CloudStorageAccount(creds, true);
            var queueClient = storageAccount.CreateCloudQueueClient();
            var queue = queueClient.GetQueueReference("samplequeue");
            await queue.CreateIfNotExistsAsync();
            await queue.AddMessageAsync(new CloudQueueMessage("Hello keyvault"));
        }

        private static async Task<string> GetStorageKey()
        {
            var client = new KeyVaultClient(
                new KeyVaultClient.AuthenticationCallback(GetAccessTokenAsync),
                new System.Net.Http.HttpClient());

            var vaultUrl = ConfigurationManager.AppSettings["vaultUrl"];

            var secret = await client.GetSecretAsync(vaultUrl, "storageAccountKey");

            return secret.Value;
        }

        private static async Task<string> GetAccessTokenAsync(
            string authority,
            string resource,
            string scope)
        {
            //clientId and clientSecret are obtained by registering
            //the application in Azure AD
            var clientId = ConfigurationManager.AppSettings["clientId"];
            var clientSecret = ConfigurationManager.AppSettings["clientSecret"];

            var clientCredential = new ClientCredential(
                clientId,
                clientSecret);

            var context = new AuthenticationContext(
                authority,
                TokenCache.DefaultShared);

            var result = await context.AcquireTokenAsync(
                resource,
                clientCredential);

            return result.AccessToken;
        }
    }
}

The one weird thing about this code was the KeyVaultClient.AuthenticationCallback function: it expects a function with three string values. The authority is the Azure AD instance that you are working with (public cloud, govt cloud, Germany cloud, etc.) plus your tenant ID, and the resource parameter has the value "". You don't provide those values; the callback function provides them. You only need to provide the clientId and clientSecret values and the SDK does the rest for you. Now try hitting F5 to make sure it compiles and runs.

See It In Action

"Huh? I did everything you said, Kirk, and when I hit F5 I get an Access Denied exception from KeyVault!" Good! That means it's working. We created a secret in KeyVault, and created an application in Azure AD, but we never told KeyVault that the application was allowed to access the secret. Go back to your KeyVault in the portal and click Access policies. Your application is not listed. Click Add new to add your application. Note: if you don't see your application, make sure to run the New-AzureRmADServicePrincipal cmdlet as mentioned above. Once we've selected the principal, we then select the Secret permissions. We will only allow our application to get a secret; it cannot list, set, or delete a secret in the KeyVault. Save your access policy. Now go run the application again. This time everything should work. Go back to your storage account, and you should now have a new queue.

What Just Happened?

We were able to create an application that stores secrets in KeyVault. An administrator has the ability to set access policies for users and applications. For example, if I have a user, "kirkevans@blueskyabove.onmicrosoft.com", I can set a policy for that user for keys and secrets. If I log in as that user and try to view the secrets in my KeyVault, I will see a message telling me that I am not authorized to view the secrets in the vault.
This lets administrators configure which applications and users are able to work with secrets in the vault. Users or applications that have the ability to Set secrets can then update secret values, such as updating the storage account key. The obvious point that someone will mention is that we have a client ID and client secret in our app.config; that is a secret that is not stored in KeyVault. Great point! What we could have done was to have a *user* log in and use the user's identity to obtain the key from the KeyVault. Another option would be to use a client certificate for the application's credentials instead of a client ID and client secret. This is exactly what the demo at Azure KeyVault .NET Sample Code does; it uses a client certificate to authenticate the application. To see how to perform the operations using PowerShell instead of going to the Azure portal, see.

For More Information

Azure KeyVault .NET Sample Code
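As a rough sketch of what those PowerShell operations could look like with the AzureRM module used earlier in this post: the cmdlet names are real AzureRM cmdlets of that era, but the resource group, location, and secret value shown here are illustrative assumptions, not taken from the post.

```powershell
# Sign in, as earlier in the post
Login-AzureRmAccount

# Create the vault (resource group and location are assumed here)
New-AzureRmKeyVault -VaultName "demo" -ResourceGroupName "demo-rg" -Location "East US"

# Store the storage account key as the secret the code reads
$key = ConvertTo-SecureString "<storage key1 value>" -AsPlainText -Force
Set-AzureKeyVaultSecret -VaultName "demo" -Name "storageAccountKey" -SecretValue $key

# Grant the application's service principal get-only access to secrets
Set-AzureRmKeyVaultAccessPolicy -VaultName "demo" `
    -ServicePrincipalName d80cc6d4-6037-49c5-9c3b-8b304626f3ee `
    -PermissionsToSecrets get
```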
https://blogs.msdn.microsoft.com/kaevans/2016/10/31/using-azure-keyvault-to-store-secrets/
Opened 9 years ago Closed 9 years ago

#7373 closed (invalid) for key, value in some_dict does not work

Description

I have the following code in my template:

{% if form.has_errors %}
errors: <ol>
{% for field, error in form.error_dict %}<li>{{ field }} <p><strong>{{ error }}</strong></p></li>{% endfor %}
</ol>
{% endif %}

That stuff does not work the way I believe it should: all rows contain 'p' as 'field' and 'u' as 'error'. Here's the patch:

Index: django/template/defaulttags.py
===================================================================
--- django/template/defaulttags.py (revision 7574)
+++ django/template/defaulttags.py (working copy)
@@ -138,13 +138,14 @@
             # Boolean values designating first and last times through loop.
             loop_dict['first'] = (i == 0)
             loop_dict['last'] = (i == len_values - 1)
+
+            context[self.loopvars[0]] = item
             if unpack:
                 # If there are multiple loop variables, unpack the item into
                 # them.
-                context.update(dict(zip(self.loopvars, item)))
-            else:
-                context[self.loopvars[0]] = item
+                context[self.loopvars[1]] = values[item][0]
+
             for node in self.nodelist_loop:
                 nodelist.append(node.render(context))
             if unpack:

Change History (4)

comment:1 Changed 9 years ago by

comment:2 Changed 9 years ago by

It's hard to figure out from the current template documentation, sorry :) Thank you!

comment:3 Changed 9 years ago by

I reverted my changes (and svn update) and wrote for field, error in form.error_dict.items as you suggested, no success. Now, I've got a Python list in the error variable:

field: author
error: [u'\u041e\u0431\u044f\u0437\u0430\u0442\u0435\u043b\u044c\u043d\u043e\u0435 \u043f\u043e\u043b\u0435.']

I am a Django newbie. Do I have to add another inner loop to traverse through all the errors, or what? I missed something in the documentation, didn't I?

comment:4 Changed 9 years ago by

The error messages for a field are a list, because a single field may have multiple errors associated with it.
This is covered in the forms documentation; please consult it for further information.

Use {% for field, error in form.error_dict.items %}
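The underlying Python behavior the ticket tripped over can be seen outside Django: iterating a dict directly yields only its keys, not (key, value) pairs, so two-variable unpacking over the dict itself doesn't do what the template expects; .items() is needed, and each value is itself a list of messages. A minimal sketch (the error_dict contents here are made up):

```python
error_dict = {"author": ["Required field."], "title": ["Too long."]}

# Iterating the dict directly yields only the keys.
for key in error_dict:
    print(key)  # author, then title

# .items() yields (key, value) pairs, which unpack correctly.
# Each value is a LIST of messages, so an inner loop is still needed,
# which is exactly what comment:4 explains.
for field, errors in error_dict.items():
    for error in errors:
        print(field, error)
```

In template terms this corresponds to an outer {% for field, errors in form.error_dict.items %} with an inner loop over errors.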
https://code.djangoproject.com/ticket/7373
HTTP Response Status Codes Explained

Any web server will issue HTTP status codes in response to a client's request. This list shows codes from IETF Request for Comments (RFCs) and some additional commonly used codes. The first digit of the HTTP status code specifies one of five standard classes of responses.

1xx Informational

1xx status codes offer a temporary response. They're made up of just the Status-Line and they end with an empty line. There's no need for headers with this kind of status code. Because HTTP/1.0 doesn't offer definitions for any 1xx status codes, servers mustn't send a 1xx response to an HTTP/1.0 client, unless it's for experimental purposes. A client needs to be ready to handle at least one 1xx status response before a regular response, even if the client doesn't expect to receive a 100 Continue status message. A user agent can safely ignore a 1xx status response if it wasn't one that was expected. Proxies have to forward 1xx responses, except in the case of a closed connection between client and proxy, or except when the proxy itself produced the request to generate the 1xx response. (For instance, if a proxy adds an "Expect: 100-continue" field to the forwarded request, then it doesn't need to forward the related 100 (Continue) response(s).)

100 Continue

This status code means that the server has received the request headers and that the client should carry on and send the request body (when the request is one for which a body needs to be included, like with a POST request for instance). With a request body that's large, it's not efficient to send it to a server after the request has already been rejected because the headers aren't appropriate. It's possible to ask the server to check whether the request will be accepted based solely on its headers by sending Expect: 100-continue as a header in the initial request.
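The class-by-first-digit rule introduced at the top of this list can be sketched with a small helper (the function name is illustrative):

```python
def status_class(code: int) -> str:
    """Map an HTTP status code to its standard response class."""
    classes = {
        1: "Informational",
        2: "Success",
        3: "Redirection",
        4: "Client Error",
        5: "Server Error",
    }
    if 100 <= code <= 599:
        return classes[code // 100]  # the first digit selects the class
    raise ValueError(f"not a standard HTTP status code: {code}")

print(status_class(204))  # Success
print(status_class(307))  # Redirection
```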
It will carry on if a 100 Continue status code is received in response, or it will stop if a 417 Expectation Failed response is received.

101 Switching Protocols

This tells us that the server has been asked to change protocols and it's confirming that it will do that. It comprehends the client's request and signals that it's willing to abide by it, using the Upgrade message header field to change the application protocol in use on this connection. The server will change protocols to those defined by the response's Upgrade header field right after it encounters the empty line which ends the 101 Switching Protocols response. The protocol should only be changed when it's beneficial to do this. For instance, changing to a newer version of HTTP from an older one is beneficial, and changing to a synchronous, real-time protocol could be of benefit when resources that use such features need to be delivered.

102 Processing (WebDAV)

The 102 Processing code is an interim response that's used to let the client know that the server has accepted the request but hasn't yet fulfilled it. It should only be issued when there is a reasonable expectation that processing is going to take a while. When the server takes more than 20 seconds to run through a request, it will issue a 102 (Processing) response, and it will send a final response when the request has been completed. Methods can take a while to process, particularly those that support the Depth header. In these instances, the client may suspend the connection while it's waiting for the response to come back, but if it receives a 102 code then the client will know to wait because the request is being dealt with.

2xx Success

This type of status code tells us that the client's request was received, understood, and processed successfully.

200 OK

A response to say that the request succeeded.
The information included along with it will be dependent upon which method the request used, for instance:

- GET: an entity that corresponds to the requested resource is sent in the response
- HEAD: the entity-header fields that correspond to the requested resource are sent in the response with no message-body
- POST: an entity that describes or contains the outcome of the action
- TRACE: an entity that contains the request message as received by the end server

201 Created

This means that the request has been completed and a new resource was produced which can be identified using the URI(s) included in the entity that the response relates to, with the most specific URI for the resource given by a Location header field. The response has to include an entity that contains a list of resource characteristics and location(s). A user or user agent can then use this to select the one that's most appropriate. The entity's format is defined by whichever media type is presented in the Content-Type header field. The origin server has to create the resource before it can issue the 201 Created status code. If the action can't be carried out straight away, the server should issue a 202 (Accepted) response instead. An ETag response header field can be included in a 201 response to indicate the current value of the entity tag for the requested variant that was just created.

202 Accepted

The request was accepted for processing, but the processing hasn't been completed. The request might or might not eventually be acted upon, as it could potentially be disallowed when processing does occur. There is no way to re-send a status code from this kind of asynchronous operation. The 202 response is noncommittal on purpose. It's intended to give a server the chance to pass a request to some other process (maybe a batch-type process that only runs once a day) without any need for the user agent's server connection to persist until the process has completed.
The entity returned with this response has to indicate the current status of the request along with either a link to a status monitor or a rough idea of when the request might be completed.

203 Non-Authoritative Information

The server successfully processed the request, although it's returning information that might be from a different source. Not found in HTTP/1.0 but found in HTTP/1.1. The returned metainformation in the entity-header isn't the definitive set as obtainable from the origin server. Instead, it's been gathered from a local or a third-party copy. The set that's presented CAN be a subset or superset of the original one. For instance, including local annotation information about the resource could result in a superset of the metainformation known by the origin server. Use of this response code isn't required and is only appropriate when the response would otherwise be 200 (OK).

204 No Content

The request has been completed by the server. It doesn't need to return an entity-body, but it might want to return updated metainformation. The response can include new or revised metainformation in the form of entity-headers, and if present these have to be associated with the variant that was requested. When the client is a user agent, it can't alter its document view from the one that caused the request to be sent. The main aim of this response is to let input for actions be entered without causing a change to the user agent's current document view, although any new or updated metainformation has to be applied to the document currently in the user agent's active view. The 204 response mustn't contain a message-body, and it's always ended by the first empty line after the header fields.

205 Reset Content

The server has completed the request and the user agent needs to reset the document view that led to it being sent.
This response exists so that users can provide input that instigates actions, after which the form is cleared of that input ready for the next time. The response mustn't include an entity. This is a little different from a 204 response: with 205, the requester needs to reset the document view itself.

206 Partial Content

The server has finished processing a partial GET request for the resource. The request needs to have included a Range header field (section 14.35) that specifies the preferred range, and was allowed to include an If-Range header field (section 14.27) to make the request conditional. The following header fields have to be included in the response:

- Either a Content-Range header field (section 14.16) indicating the range included with this response, or a multipart/byteranges Content-Type including Content-Range fields for each part. If a Content-Length header field is present in the response, its value has to match the actual number of octets transmitted in the message-body.

If an If-Range request that employed a strong cache validator was what produced the 206 response, then it shouldn't contain any other entity-headers. If it was the result of an If-Range request that employed a weak validator, the response mustn't include other entity-headers; the idea of this is to prevent inconsistencies between updated headers and cached entity-bodies. In other cases, the response needs to contain all of the entity-headers that would have been returned with a 200 (OK) response to an identical request. A cache mustn't combine a 206 response with other previously cached content if the ETag or Last-Modified headers don't match exactly (see 13.5.4). A cache that doesn't support the Content-Range and Range headers mustn't cache any 206 (Partial) responses.

207 Multi-Status (WebDAV)

This provides status information for numerous independent operations. (Section 11 contains more information on this.)
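As a concrete illustration of the 206 headers discussed above, a small parser for a single-range Content-Range value (a hypothetical helper handling bytes ranges only):

```python
import re

def parse_content_range(value: str):
    """Parse a 'bytes first-last/total' Content-Range value.

    Returns (first, last, total), with total None for 'bytes x-y/*'.
    """
    m = re.fullmatch(r"bytes (\d+)-(\d+)/(\d+|\*)", value)
    if m is None:
        raise ValueError(f"unsupported Content-Range: {value!r}")
    first, last = int(m.group(1)), int(m.group(2))
    total = None if m.group(3) == "*" else int(m.group(3))
    return first, last, total

print(parse_content_range("bytes 0-499/1234"))  # (0, 499, 1234)
```

A client could additionally check that any Content-Length it receives equals last - first + 1, matching the consistency rule quoted above.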
208 Already Reported (WebDAV)

This code can be used within a DAV: propstat response element to avoid enumerating the internal members of several bindings to the same collection again and again. For each binding to a collection that falls within the scope of the request, only one is reported with a 200 status, while subsequent DAV: response elements for all supplementary bindings use the 208 status, and no DAV: response elements for their descendants will be included. Members in a DAV binding that have already been counted in an earlier part of the reply to this request aren't going to be included again.

226 IM Used

The server has completed a GET request for the resource, and the response is a representation of the result of one or more instance-manipulations applied to the current instance. The actual current instance might not be available except by combining this response with other previous or future responses, as appropriate for the specific instance-manipulation(s). If so, the headers of the resulting instance are the result of combining the headers from the status-226 response and those of the other instances, following the rules in section 13.5.3 of the HTTP/1.1 specification. The request has to have included an A-IM header field listing a minimum of one instance-manipulation. The response has to include an ETag header field giving the entity-tag of the current instance. A response received with a status code of 226 CAN be stored by a cache and used in reply to a subsequent request, subject to the HTTP expiration mechanism and any Cache-Control headers, and to the requirements in section 10.6. A response received with a status code of 226 CAN be used by a cache, in conjunction with a cache entry for the base instance, to create a cache entry for the current instance.

3xx Redirection

This type of status code shows that the user agent needs to take additional action so that the request can be fulfilled.
The required action CAN be carried out by the user agent automatically so long as the method used in the second request is either HEAD or GET. A client needs to detect endless redirection loops, since loops like this generate network traffic for each redirection. It's worth noting that in the past, the specification recommended five redirections at most. Anyone developing content needs to consider that some clients might still exist that implement this limit.

300 Multiple Choices

Shows a number of choices for the resource that the client can follow, each with its own specific location; agent-driven negotiation information (section 12) is provided so that the user (or user agent) may pick a preferred representation and redirect the request to that location. Unless it was a HEAD request, the response has to include an entity that offers a list of resource characteristics and locations out of which the user agent or user can select the most suitable one. The entity format is defined by the type of media given in the Content-Type header field. Depending on the format and the user agent's capabilities, the most appropriate choice may be selected automatically; this specification does not, however, define any standard for such automatic selection. If the server has a preferred choice of representation, it has to include the specific URI for it in the Location field; user agents CAN use the Location field value for automatic redirection. It's possible to cache this response where permitted.

301 Moved Permanently

A new permanent URI has been given to the requested resource, and any future references to it need to use one of the returned URIs. Where possible, clients that can edit links should automatically re-link Request-URI references to one or more of the new references indicated by the server. This response is cacheable except when otherwise indicated. The new permanent URI has to be given by the Location field in the response.
Except when the request method was HEAD, the response entity has to include a short hypertext note that links to the new URI(s). If a request other than GET or HEAD generates the 301 status code, the user agent mustn't automatically redirect the request except when the user can confirm it, because otherwise it could alter the conditions that led to the request being issued. Note: upon automatic redirection of a POST request after receiving a 301 status code, some existing HTTP/1.0 user agents will mistakenly change it into a GET request.

302 Found

For a limited time, the requested resource resides under a different URI. Because there are times when the redirection might be altered, the client has to continue using the Request-URI for any subsequent requests. This response will only be cacheable if a Cache-Control or Expires header field indicates it. The temporary URI has to be given by the Location field in the response. Except when the request method was HEAD, the response entity needs to contain a short hypertext note with a hyperlink to the new URI(s). Sometimes the 302 status code might be received following a request that isn't GET or HEAD; if this is the case then the user agent mustn't automatically redirect the request except when it can be confirmed by the user, because doing so might alter the conditions under which the request was issued. Note that RFC 1945 and RFC 2068 stipulate that the client isn't allowed to alter the method on the redirected request, but the majority of existing user agent implementations treat 302 as if it were a 303 response, performing a GET on the Location field-value regardless of the initial request method. The status codes 303 and 307 were added so that servers wanting to make it as clear as possible exactly which kind of reaction is expected of the client can do so.
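The redirect-handling rules described above (automatic redirection only for GET and HEAD on 301/302/307, while 303 always switches to a GET) can be sketched as a small decision helper; the function name and return convention are illustrative:

```python
from typing import Optional

def auto_redirect_method(status: int, method: str) -> Optional[str]:
    """Return the method a user agent may use when automatically
    following a redirect, or None if user confirmation is needed."""
    method = method.upper()
    if status == 303:
        # 303 See Other: retrieve the new resource with a GET.
        return "GET"
    if status in (301, 302, 307, 308):
        # Automatic redirection without user confirmation is only
        # allowed for GET and HEAD; 307/308 must keep the method.
        return method if method in ("GET", "HEAD") else None
    return None  # not a redirect this helper knows how to follow

print(auto_redirect_method(301, "GET"))   # GET
print(auto_redirect_method(303, "POST"))  # GET
print(auto_redirect_method(307, "POST"))  # None (ask the user first)
```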
303 See Other

The response to the request can be found under a different URI and needs to be retrieved using a GET method on that resource. The main reason for this code is to let the output of a POST-activated script redirect the user agent to a selected resource. The new URI isn't a substitute reference for the resource that was originally requested. The 303 response mustn't be cached, but the response to the second (redirected) request might be cacheable. The different URI has to be given by the Location field in the response. Except when the request method was HEAD, the response entity has to contain a short hypertext note with a hyperlink to the new URI(s). Note: many pre-HTTP/1.1 user agents don't understand the 303 status. When such clients need to be interoperated with, the 302 status code can be used instead, because most user agents react to a 302 response as described here for 303.

304 Not Modified

Indicates the resource hasn't been modified since it was last requested. If the client has performed a conditional GET request and access is permitted, but the document has not been modified, the server has to respond with this status code. The 304 Not Modified response mustn't contain a message-body and is thus always ended by the first empty line after the header fields. The response has to include the following header fields:

- Date, except when its omission is required by section 14.18.1. If a clockless origin server follows these rules, and proxies and clients add their own Date to any response received without one (as already specified by [RFC 2068], section 14.19), caches will operate correctly.
- ETag and/or Content-Location, if the header would have been sent in a 200 response to the same request
- Expires, Cache-Control, and/or Vary, if the field-value differs from the one sent in any previous response for the same variant
If the conditional GET used a strong cache validator (see section 13.3.3), the response should not include other entity-headers. Otherwise (i.e., if the conditional GET used a weak validator), the response must not include other entity-headers; this prevents inconsistencies between cached entity-bodies and updated headers. If a 304 response indicates an entity not currently cached, then the cache must disregard the response and repeat the request without the conditional. If a cache uses a received 304 response to update a cache entry, the cache must update the entry to reflect any new field values given in the response.

305 Use Proxy
The requested resource must be accessed through the proxy given by the Location field, which provides the URI of the proxy. The recipient is expected to repeat this single request via the proxy. 305 responses must only be generated by origin servers. Note: RFC 2068 was not clear that 305 was intended to redirect a single request, and to be generated by origin servers only. Not observing these limitations has significant security consequences.

306 (Unused)
The 306 status code is no longer used, and the code is reserved.

307 Temporary Redirect
The requested resource resides temporarily under a different URI. Since the redirection may be altered on occasion, the client should continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field. The temporary URI should be given by the Location field in the response. Unless the request method was HEAD, the entity of the response should contain a short hypertext note with a hyperlink to the new URI(s), since many pre-HTTP/1.1 user agents do not understand the 307 status. Therefore, the note should contain the information necessary for a user to repeat the original request on the new URI.
If the 307 status code is received in response to a request other than GET or HEAD, the user agent must not automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued. In this case, the request should be repeated with another URI; however, future requests should still use the original URI. In contrast to how 302 was historically implemented, the request method must not be changed when reissuing the original request. For example, a POST request should be repeated using another POST request.

308 Permanent Redirect (experimental)
This status code indicates that the request, and all future requests, should be repeated using another URI. 307 and 308 parallel the behaviours of 302 and 301, but do not allow the HTTP method to change. So, for example, submitting a form to a permanently redirected resource can continue smoothly.

4xx Client Error
The 4xx class of status code is intended for cases in which the client seems to have erred. Except when responding to a HEAD request, the server should include an entity containing an explanation of the error situation, and whether it is a temporary or permanent condition. These status codes are applicable to any request method. User agents should display any included entity to the user. If the client is sending data, a server implementation using TCP should ensure that the client acknowledges receipt of the packet(s) containing the response before the server closes the input connection. If the client continues sending data to the server after the close, the server's TCP stack will send a reset packet to the client, which may erase the client's unacknowledged input buffers before they can be read and interpreted by the HTTP application.

400 Bad Request
The request cannot be fulfilled due to bad syntax. The client should not repeat the request without modifications. This is a general error, used when fulfilling the request as sent would put the server in an invalid state.
Examples of such errors include domain validation failures and missing data.

401 Unauthorized
Similar to 403 Forbidden, but specifically for use when authentication is possible but has failed or has not yet been provided. The request requires user authentication. The response must include a WWW-Authenticate header field (section 14.47) containing a challenge applicable to the requested resource. The client may repeat the request with a suitable Authorization header field (section 14.8). If the request already included Authorization credentials, then the 401 response indicates that authorization has been refused for those credentials. If the 401 response contains the same challenge as the prior response, and the user agent has already attempted authentication at least once, then the entity given in the response should be presented to the user, since that entity might include relevant diagnostic information. HTTP access authentication is explained in "HTTP Authentication: Basic and Digest Access Authentication".

402 Payment Required
This code is reserved for future use. The original intention was that it might be used as part of some form of digital cash or micropayment scheme, but that has not happened, and the code is not in normal use. As one real-world example, however, Apple's MobileMe service generated a 402 error ("httpStatusCode:402" in the Mac OS X Console log) if the MobileMe account was delinquent.

403 Forbidden
The server understood the request, but is refusing to fulfil it. Unlike a 401 Unauthorized response, authenticating will make no difference, and the request should not be repeated. If the request method was not HEAD and the server wishes to make public why the request has not been fulfilled, it should describe the reason for the refusal in the entity. If the server does not wish to make this information available to the client, the status code 404 (Not Found) can be used instead.

404 Not Found
The server has found nothing matching the Request-URI.
No indication is given of whether the condition is temporary or permanent. The 410 (Gone) status code should be used if the server knows that an old resource is permanently unavailable and has no forwarding address. The 404 status code is commonly used when the server does not wish to reveal exactly why the request has been refused, or when no other response is applicable.

405 Method Not Allowed
The method specified in the Request-Line is not allowed for the resource identified by the Request-URI. The response must include an Allow header containing a list of valid methods for the requested resource.

406 Not Acceptable
The resource identified by the request can only generate response entities whose content characteristics are not acceptable according to the Accept headers sent in the request. Unless it was a HEAD request, the response should include an entity containing a list of available entity characteristics and location(s) from which the user or user agent can choose the most appropriate one. The entity format is specified by the media type given in the Content-Type header field. Depending on the format and the capabilities of the user agent, selection of the most appropriate choice may be performed automatically. Note, however, that the specification does not define any standard for such automatic selection. Note: HTTP/1.1 servers are allowed to return responses which are not acceptable according to the Accept headers sent in the request. In some cases, this may even be preferable to sending a 406 response. User agents are encouraged to inspect the headers of an incoming response to determine whether it is acceptable.

407 Proxy Authentication Required
This code is similar to 401 Unauthorized, but indicates that the client must first authenticate itself with the proxy. The proxy must return a Proxy-Authenticate header field (section 14.33) containing a challenge applicable to the proxy for the requested resource. The client may repeat the request with a suitable Proxy-Authorization header field (section 14.34).
HTTP access authentication is explained in "HTTP Authentication: Basic and Digest Access Authentication".

408 Request Timeout
The server timed out waiting for the request. According to the W3 HTTP specification: "The client did not produce a request within the time that the server was prepared to wait. The client MAY repeat the request without modifications at any later time."

409 Conflict
The request could not be completed due to a conflict with the current state of the resource. This code is only allowed in situations where it is expected that the user might be able to resolve the conflict and resubmit the request. The response body should include enough information for the user to recognize the source of the conflict. Ideally, the response entity would include enough information for the user or user agent to fix the problem; however, that might not be possible, and is not required. Conflicts are most likely to occur in response to a PUT request. For example, if versioning were being used and the entity being PUT included changes to a resource which conflict with those made by an earlier (third-party) request, the server might use the 409 response to indicate that it cannot complete the request. In this case, the response entity would likely contain a list of the differences between the two versions in a format defined by the response Content-Type.

410 Gone
The requested resource is no longer available, no forwarding address is known, and none is likely to be found. If you are able to edit links, you should remove any references to this resource. The 404 (Not Found) status code should be used instead if the server does not know, or has no way of determining, whether the condition is permanent. This response is cacheable unless indicated otherwise. The primary purpose of the 410 response is to assist the task of web maintenance.
It does so by notifying the recipient that the resource is intentionally unavailable and that the server owners want remote links to it to be removed. Such an event is common for limited-time promotional services and for resources belonging to individuals who no longer work at the server's site. It is not necessary to mark all permanently unavailable resources as "gone", or to keep the mark for any length of time; that is entirely at the discretion of the server owner.

411 Length Required
The server refuses to accept the request without a defined Content-Length. The client may repeat the request if it adds a valid Content-Length header field specifying the length of the message-body in the request message.

412 Precondition Failed
The precondition given in one or more of the request-header fields evaluated to false when the server tested it. This response code allows the client to place preconditions on the current resource metainformation (header field data) and thus prevent the requested method from being applied to a resource other than the one intended.

413 Request Entity Too Large
The server is refusing to process a request because the request entity is larger than the server is willing or able to process. The server may close the connection to prevent the client from continuing the request. If the condition is temporary, the server should include a Retry-After header field to indicate that it is temporary and after what time the client may try again.

414 Request-URI Too Long
The server is refusing to service the request because the Request-URI is longer than the server is willing to interpret.
This rare condition is only likely to occur in certain circumstances: when a client has improperly converted a POST request to a GET request with excessively long query information, when the client has descended into a URI "black hole" of redirection (e.g., a redirected URI prefix that points to a suffix of itself), or when the server is under attack by a client attempting to exploit the security holes present in some servers that use fixed-length buffers for reading or manipulating the Request-URI.

415 Unsupported Media Type
The server is refusing to process the request because the request entity is in a format not supported by the requested resource.

416 Requested Range Not Satisfiable
A server should respond with this status code if a request included a Range request-header field (section 14.35), none of the range-specifier values in this field overlap the current extent of the selected resource, and the request did not include an If-Range request-header field. (For byte-ranges, this means that the first-byte-pos of all of the byte-range-spec values were greater than the current length of the selected resource.) When this status code is returned for a byte-range request, the response should include a Content-Range entity-header field specifying the current length of the selected resource (see section 14.16). This response must not use the multipart/byteranges content-type.

417 Expectation Failed
The server cannot meet the requirements of the Expect request-header field, or, if the server is a proxy, it has unambiguous evidence that the request could not be met by the next-hop server.

418 I'm a teapot (RFC 2324)
This code was defined in 1998 as one of the IETF's April Fools' jokes, in RFC 2324, Hyper Text Coffee Pot Control Protocol. It was never really intended to be implemented by actual HTTP servers, but naturally, that hasn't stopped people from trying. The Nginx HTTP server has used this code to simulate goto-like behaviour.
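The byte-range rule for 416 reduces to a simple check: at least one requested range must start inside the current resource length, otherwise the request is unsatisfiable. A hypothetical server-side sketch (written for this article, not taken from any server):

```python
def ranges_satisfiable(byte_ranges, resource_length):
    """Return True if at least one (first, last) byte-range overlaps the resource.

    Per the 416 rule above, the status code applies only when *every*
    first-byte-pos is beyond the current length of the selected resource.
    """
    return any(first < resource_length for first, _last in byte_ranges)

# A 1000-byte resource: bytes=0-499 is satisfiable, bytes=2000-3000 alone is not.
print(ranges_satisfiable([(0, 499)], 1000))      # True  -> serve 206 Partial Content
print(ranges_satisfiable([(2000, 3000)], 1000))  # False -> respond 416
```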
420 Enhance Your Calm (Twitter)
Returned by the Twitter Search and Trends API when the client is being rate limited. The text is a reference to the film 'Demolition Man', and '420' is likely a nod to the number's association with marijuana. Other services may wish to use the 429 Too Many Requests response code instead.

422 Unprocessable Entity (WebDAV)
This status code means the server understands the content type of the request entity (hence a 415 (Unsupported Media Type) status code is inappropriate), and the syntax of the request entity is correct (thus a 400 (Bad Request) status code is also inappropriate), but it was still unable to process the instructions contained in the request. For example, this error condition may occur if an XML request body contains well-formed (i.e., syntactically correct) but semantically erroneous XML instructions.

423 Locked (WebDAV)
The 423 (Locked) status code means the source or destination resource of a method is locked. This response should contain an appropriate precondition or postcondition code, such as 'lock-token-submitted' or 'no-conflicting-lock'.

424 Failed Dependency (WebDAV)
The 424 (Failed Dependency) status code means that the method could not be performed on the resource because the requested action depended on another action, and that action failed. For example, if a command in a PROPPATCH method fails, then, at minimum, the rest of the commands will also fail with 424 (Failed Dependency).

425 Reserved for WebDAV
Slein, J., Whitehead, E.J., et al., "WebDAV Advanced Collections Protocol", Work in Progress.

426 Upgrade Required
Reliable, interoperable negotiation of Upgrade features requires an unambiguous failure signal. The 426 (Upgrade Required) status code allows a server to definitively state the precise protocol extensions a given resource must be served with. The client should switch to a different protocol, such as TLS/1.0.
428 Precondition Required
The 428 status code indicates that the origin server requires the request to be conditional. Its typical use is to avoid the "lost update" problem, where a client GETs a resource's state, modifies it, and PUTs it back to the server, while meanwhile a third party has modified the state on the server, leading to a conflict. By requiring requests to be conditional, the server can assure that clients are working with the correct copies. Responses using this status code should explain how to resubmit the request successfully. The 428 status code is optional; clients cannot rely upon its use to prevent "lost update" conflicts.

429 Too Many Requests
The 429 status code indicates that the user has sent too many requests in a given amount of time ("rate limiting"). The response representations should include details explaining the condition, and may include a Retry-After header indicating how long to wait before making a new request. When a server is under attack, or simply being benignly bombarded with requests from a single party, responding to each with a 429 status code would consume too many valuable resources. Therefore, servers are not required to use the 429 status code; when limiting resource usage, it may be better to do something as simple and efficient as dropping connections.

431 Request Header Fields Too Large
The 431 status code indicates that the server is unwilling to process the request because its header fields are too large. The request may be resubmitted after reducing the size of the request header fields. It can be used both when the set of request header fields in total is too large and when a single header field is at fault. In the latter case, the response should specify which header field was too large. Servers are not required to use the 431 status code when under attack; dropping connections may sometimes be the more suitable option.
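A client honouring the Retry-After header mentioned under 429 (and under 503, later) has to handle both of its forms: delta-seconds and an HTTP-date. Here is a sketch using only Python's standard library (the helper name is made up for this article):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def retry_after_seconds(value, now=None):
    """Seconds to wait, given a Retry-After value (delta-seconds or HTTP-date)."""
    if value.strip().isdigit():
        return int(value)
    when = parsedate_to_datetime(value)      # parses the HTTP-date form
    now = now or datetime.now(timezone.utc)
    return max(0, int((when - now).total_seconds()))

print(retry_after_seconds("120"))  # 120
```

Clamping to zero handles the case where the given date is already in the past.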
444 No Response (Nginx)
An Nginx HTTP server extension. The server returns no information to the client and closes the connection, which can be useful as a deterrent for malware.

449 Retry With (Microsoft)
A Microsoft extension. The request should be retried after performing the appropriate action.

450 Blocked by Windows Parental Controls (Microsoft)
A Microsoft extension. This error appears when Windows Parental Controls have blocked access to a particular webpage.

451 Unavailable For Legal Reasons
As the name suggests, this code is used when access to a resource has been denied for legal reasons. The number was chosen as a reference to Ray Bradbury's classic 1953 dystopian novel Fahrenheit 451, which depicts a society where books are banned and any that are found are destroyed by a dedicated team of "Firemen"; 451°F is cited as the temperature at which paper ignites.

499 Client Closed Request (Nginx)
An Nginx HTTP server extension. This code is used to log the case when the client closes the connection while the HTTP server is processing its request, leaving the server unable to send an HTTP header back.

5xx Server Error
Response status codes beginning with "5" indicate cases in which the server is aware that it has erred or is incapable of performing the request. Except when responding to a HEAD request, the server should include an entity containing an explanation of the error situation, and whether it is a temporary or permanent condition. User agents should display any included entity to the user. These response codes are applicable to any request method.

500 Internal Server Error
A generic error message, given when the server encountered an unexpected condition that prevented it from fulfilling the request. It covers any situation in which the problem lies with the server itself.

501 Not Implemented
The server does not support the functionality required to fulfil the request. This is the appropriate response when the server does not recognize the request method, or cannot support it for any resource.
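The class-based convention described in the 4xx and 5xx introductions is purely mechanical: the leading digit determines the class. A tiny helper (written for illustration only) makes this explicit:

```python
def status_class(code):
    """Map an HTTP status code to the class defined by its leading digit."""
    classes = {
        1: "informational",
        2: "success",
        3: "redirection",
        4: "client error",
        5: "server error",
    }
    return classes[code // 100]

print(status_class(404))  # client error
print(status_class(503))  # server error
```

This is why even an unrecognized code such as 599 can still be handled sensibly: a client that does not know the specific code can fall back to treating it as a generic member of its class.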
502 Bad Gateway
The server, while acting as a gateway or proxy, received an invalid response from the upstream server it accessed in attempting to fulfil the request.

503 Service Unavailable
The server is currently unable to handle the request due to temporary overloading or maintenance. If known, the length of the delay may be indicated in a Retry-After header. If no Retry-After is given, the client should handle the response as it would a 500 response. Note: the existence of the 503 status code does not imply that a server must use it when it becomes overloaded; some servers may simply refuse the connection.

504 Gateway Timeout
The server, while acting as a gateway or proxy, did not receive a timely response from the upstream server specified by the URI (e.g., HTTP, FTP, LDAP) or some other auxiliary server (e.g., DNS) it needed to access in attempting to complete the request. Note: some deployed proxies are known to return 400 or 500 errors when DNS lookups time out.

505 HTTP Version Not Supported
The server does not support, or refuses to support, the HTTP protocol version used in the request message. The server is indicating that it is unable or unwilling to complete the request using the same major version as the client, as described in section 3.1, other than with this error message. The response should contain an entity describing why that version is not supported and what other protocols are supported by that server.

506 Variant Also Negotiates (Experimental)
The 506 status code indicates an internal configuration error on the server: the chosen variant resource is itself configured to engage in transparent content negotiation, and is therefore not a proper endpoint in the negotiation process.

507 Insufficient Storage (WebDAV)
The 507 (Insufficient Storage) status code means the method could not be performed on the resource because the server is unable to store the representation needed to complete the request successfully.
This condition is considered to be temporary. If the request that received this status code was the result of a user action, the request must not be repeated until it is requested by a separate user action.

508 Loop Detected (WebDAV)
The 508 (Loop Detected) status code indicates that the server terminated an operation because it encountered an infinite loop while processing the request. It is sent in place of a 208 code and indicates that the entire operation failed.

509 Bandwidth Limit Exceeded (Apache)
This status code, while used by many servers, is not specified in any RFC.

510 Not Extended
The policy for accessing the resource has not been met in the request; further extensions to the request are required for the server to fulfil it. The server should send back all the information necessary for the client to issue an extended request. It is outside the scope of this specification to specify how the extensions inform the client. If the 510 response contains information about extensions that were not present in the initial request, the client may repeat the request if it has reason to believe it can fulfil the extension policy by modifying the request according to the information provided in the 510 response. Otherwise, the client may present any entity included in the 510 response to the user, since that entity may contain relevant diagnostic information.

511 Network Authentication Required
The 511 status code indicates that the client needs to authenticate to gain network access. The response representation should contain a link to a resource that allows the user to submit credentials (as with an HTML form, for example). Note that the 511 response should not itself contain a challenge or a login interface, because (somewhat confusingly) browsers would show the login interface as being associated with the originally requested URL. Origin servers should not produce 511 responses.
It is intended for use by intercepting proxies that are interposed as a means of controlling access to the network. Responses with the 511 status code must not be cached. This status code exists to minimize the problems that "captive portals" cause for software (especially non-browser agents) that expects a response from the server a request was made to, rather than from the network infrastructure in between. It is not intended to discourage the deployment of captive portals, only to limit the damage they can cause. A network operator wishing to require some authentication, acceptance of terms, or other user interaction before granting access typically does so by identifying clients who have not done so ("unknown clients") using their MAC addresses. Unknown clients then have all traffic blocked, except for traffic on TCP port 80, which is routed to an HTTP server (the "login server") dedicated to "logging in" unknown clients, and of course traffic to the login server itself. In common use, a response carrying the 511 status code will not come from the origin server indicated in the request's URL. This presents many security issues; for example, an attacking intermediary may insert cookies into the original domain's namespace, or may observe cookies or HTTP authentication credentials sent from the user agent. However, these risks are not unique to the 511 status code; a captive portal that does not use this status code introduces the same kinds of difficulties. Also note that captive portals using this status code on an SSL or TLS connection (commonly, port 443) will generate a certificate error on the client.

598 Network read timeout error
Not specified in any RFC, but used by some HTTP proxies to signal, to a client in front of the proxy, a network read timeout behind the proxy.
599 Network connect timeout error
Not specified in any RFC, but used by some HTTP proxies to signal, to a client in front of the proxy, a network connect timeout behind the proxy.
Data clustering with Python
Notice: I just realized this has been linked to from a Stack Overflow question. I recently wrote a new post that uses a different technique and a combination of R and Python. Check it out! Following up on my recent post, I've been looking for alternatives to TMeV. So far I've found the R package pvclust and the Pycluster library, part of BioPython. The first one also performs bootstrapping (I'm not sure whether it's similar to what support trees do, but it's still better than no resampling at all). I've found another Python project, but it is still too basic to perform what I need. Pvclust would be my first choice, but it only plots dendrograms, not heatmaps, and the clustering must be done twice by transposing the data (it only clusters columns). The package's web page shows the various options and what can be done with it. Pycluster, on the other hand, can be used to generate files readable by the Java TreeView program, where you can view a heat map of the results along with their annotations. Although documentation is available, it is not part of the Biopython documentation (as usual, I'd say: lack of documentation is a plague for Biopython). In any case, doing a cluster analysis is rather simple, but remember that we need two cluster runs (one for genes, the other for experiments). Here I show an example with hierarchical clustering, but the documentation (Python part, chapter 8) also has examples of other methods, such as SOMs or k-means.
from Bio.Cluster import *

# Load data, in Cluster format
data = DataFile("somefile.txt")

# Clustering using Pearson's correlation and average linkage
gene_clustering = data.treecluster(method="a", dist="c", transpose=0)

# Same as above, but clustering samples
exp_clustering = data.treecluster(method="a", dist="c", transpose=1)

# Save the results to a series of files to view in Java TreeView
data.save("name", gene_clustering, exp_clustering)

Java TreeView is a program to view trees and heat maps. Unlike its counterpart TreeView, it's truly cross-platform (Java) and GPLed, a nice added bonus. You can load the files directly and display the results like in this picture, taken with the sample data available on the project page. It's still not perfect (no data shown on the main map page, only in the detailed view), but a good start nevertheless. I'll investigate whether I can complement TMeV usage with these tools.

Luca Beltrame
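As an aside, the hierarchical step Pycluster performs is conceptually simple. Here is a dependency-free, naive single-linkage sketch written for illustration only (it is far slower than Pycluster's C implementation, and none of the names come from Pycluster):

```python
def single_linkage(items, distance):
    """Naive agglomerative single-linkage clustering.

    Returns the merge history as (cluster_a, cluster_b, distance) tuples,
    which is essentially the information a dendrogram encodes.
    """
    clusters = {i: [i] for i in range(len(items))}
    merges = []
    while len(clusters) > 1:
        best = None
        for a in clusters:
            for b in clusters:
                if a < b:
                    # single linkage: distance between the closest members
                    d = min(distance(items[i], items[j])
                            for i in clusters[a] for j in clusters[b])
                    if best is None or d < best[0]:
                        best = (d, a, b)
        d, a, b = best
        clusters[a] += clusters.pop(b)
        merges.append((a, b, d))
    return merges

print(single_linkage([0, 1, 10], lambda x, y: abs(x - y)))
# [(0, 1, 1), (0, 2, 9)]
```

Running this on genes and then on the transposed matrix of experiments mirrors the two-pass approach described above.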
Intro
This is a guide to creating a Rust DLL and calling it from C#. We will cover native return values as well as structs. The guide assumes a Windows environment, and also assumes that you have installed Rust and set up a C# development environment.

Rust Project Setup and Dependencies
It's pretty simple to create a Rust library and get it to compile into a DLL. First, navigate to the folder where you want your project and run the following command:

cargo new cs_call_rst

This will create a new folder named cs_call_rst and initialize it with a 'src' folder and a cargo.toml file. We can build our new project by changing into the newly created cs_call_rst folder and running:

cargo build

After running this command, you'll notice that there is now a new folder named 'target' containing a folder named 'debug', which holds the output of our build. However, there's a problem: we didn't build a DLL, we built a .rlib file. To tell the Rust compiler to create a DLL, open up cargo.toml and make it look like the following:

[package]
name = "cs_call_rst"
version = "0.1.0"
authors = ["Jeremy Mill <jeremymill@gmail.com>"]

[lib]
name = "our_rust"
crate-type = ["dylib"]

[dependencies]

The [package] section tells the compiler some metadata about the package we're building, like who we are and what version this is. The next section, [lib], is where we tell the compiler to create a DLL and name it 'our_rust'. (Note: on recent Rust toolchains, "cdylib" is the more appropriate crate type for a library exposed through a C ABI, though "dylib" also produces a DLL for this example.) When you run cargo build again, you should now see our_rust.dll in the output directory.

First external rust function
Now that we've got our project all set up, let's add our first Rust function, then call it from C#. Open up lib.rs and add the following function:

#[no_mangle]
pub extern fn add_numbers(number1: i32, number2: i32) -> i32 {
    println!("Hello from rust!");
    number1 + number2
}

The first line, #[no_mangle], tells the compiler to keep the name add_numbers so that we can call it from external code.
Next we define a public, externally available function that takes two 32-bit integers and returns a 32-bit integer. The function prints a 'hello world' and returns the two numbers added together. Run cargo build to build our DLL, because we'll be calling this function in the next step.

C# project setup
I'm going to assume that you're using Visual Studio for C# development, and that you already have a basic knowledge of C# and setting up a project. So, with that assumption, go ahead and create a new C# console application in Visual Studio. I'm naming mine rust_dll_poc. Before we write any code, we need to add our DLL to the project. Right click on the project and select Add -> Existing Item -> our_rust.dll. Next, in the bottom-right properties window (with the DLL highlighted), make sure to change 'Copy to Output Directory' from 'Do not copy' to 'Copy always'. This makes sure that the DLL is copied to the build directory, which will make debugging MUCH easier. Note: you will need to redo this step (or script it) with every change you make to the DLL. Next, add the following using statement to the top of our application:

using System.Runtime.InteropServices;

This namespace lets us load our DLL and call into it. Next, add the following private static member to the Program class:

[DllImport("our_rust.dll")]
private static extern Int32 add_numbers(Int32 number1, Int32 number2);

This declares that we're importing an external function named add_numbers, gives its signature, and specifies where we're importing it from. You may know that C# normally treats int as a 32-bit signed integer; however, when dealing with foreign functions, it is MUCH safer to be explicit about exactly what data type you're expecting on both ends. So we declared, explicitly, that we're expecting a 32-bit signed integer returned, and that the inputs should be 32-bit signed integers. Now, let's call the function.
Add the following code into Main:

static void Main(string[] args)
{
    var addedNumbers = add_numbers(10, 5);
    Console.WriteLine(addedNumbers);
    Console.ReadLine();
}

You should see the following output:

Hello from rust!
15

Note!: If you see a System.BadImageFormatException when you try to run the above code, you (probably) have a mismatch between the build targets of our Rust DLL and our C# application. C# and Visual Studio build for x86 by default, while rustup will install a 64-bit compiler by default on a 64-bit architecture. You can build a 64-bit version of our C# application by following the steps outlined here.

Returning a simple struct

Ok, awesome, we now know how to return basic values. But how about a struct? We will start with a basic struct that requires no memory allocation. First, let's define our struct, and a method that returns an instance of it, in lib.rs by adding the following code:

#[repr(C)]
pub struct SampleStruct {
    pub field_one: i16,
    pub field_two: i32,
}

#[no_mangle]
pub extern fn get_simple_struct() -> SampleStruct {
    SampleStruct {
        field_one: 1,
        field_two: 2
    }
}

Now we need to define the corresponding struct in C# that matches the Rust struct, import the new function, and call it! Add the following into our Program.cs file:

Edit: As Kalyanov Dmitry pointed out, I missed adding a StructLayout annotation. This annotation ensures that the C# compiler won't rearrange our struct and break our return values.

namespace rust_dll_poc {
    [StructLayout(LayoutKind.Sequential)]
    public struct SampleStruct {
        public Int16 field_one;
        public Int32 field_two;
    }

    class Program {
        [DllImport("our_rust.dll")]
        private static extern SampleStruct get_simple_struct();
...

and then we call it inside of Main:

static void Main(string[] args)
{
    var simple_struct = get_simple_struct();
    Console.WriteLine(simple_struct.field_one);
    Console.WriteLine(simple_struct.field_two);
....
You should see the following output (you remembered to move your updated DLL into the project directory, right?):

1
2

What about Strings?

Strings are, in my opinion, the most subtly complicated thing in programming. This is doubly true when working between two different languages, and even MORE true when dealing with an interface between managed and unmanaged code. Our strategy will be to store a static string on the heap and return, in a struct, a char * to the memory address. We will store this address in a static variable in Rust to make deallocation easier. We will also define a function free_string which, when called by C#, will signal to Rust that we're done with the string and it is OK to deallocate that memory. It's worth noting here that this is VERY oversimplified and most definitely NOT thread safe. How this should 'actually' be implemented is highly dependent on the code you're writing. Let's first add use statements for the required standard library types:

//external crates
use std::os::raw::c_char;
use std::ffi::CString;

Next we're going to create a mutable static variable which will hold the address of the string we're putting onto the heap:

static mut STRING_POINTER: *mut c_char = 0 as *mut c_char;

It's important to know that any time we access this static variable, we will have to mark the block as unsafe. More information on why can be found here. Next we're going to edit our struct to have a c_char field:

#[repr(C)]
pub struct SampleStruct {
    pub field_one: i16,
    pub field_two: i32,
    pub string_field: *mut c_char,
}

Now, let's create two helper methods: one that stores strings on the heap and transfers ownership (private), and one that frees that memory (public).
Information on these methods, and REALLY important safety considerations, can be found here.

fn store_string_on_heap(string_to_store: &'static str) -> *mut c_char {
    //create a new raw pointer
    let pntr = CString::new(string_to_store).unwrap().into_raw();
    //store it in our static variable (REQUIRES UNSAFE)
    unsafe {
        STRING_POINTER = pntr;
    }
    //return the raw pointer
    return pntr;
}

#[no_mangle]
pub extern fn free_string() {
    unsafe {
        let _ = CString::from_raw(STRING_POINTER);
        STRING_POINTER = 0 as *mut c_char;
    }
}

Now, let's update get_simple_struct to include our code:

#[no_mangle]
pub extern fn get_simple_struct() -> SampleStruct {
    let test_string: &'static str = "Hi, I'm a string in rust";
    SampleStruct {
        field_one: 1,
        field_two: 2,
        string_field: store_string_on_heap(test_string),
    }
}

Awesome! Our Rust code is all ready! Let's edit our C# struct next. We will need to use the IntPtr type for our string field. We're supposed to be able to use the 'MarshalAs' data attributes to automatically turn this field into a string, but I have not been able to make it work.

[StructLayout(LayoutKind.Sequential)]
public struct SampleStruct {
    public Int16 field_one;
    public Int32 field_two;
    public IntPtr string_field;
}

And if we add the following line into Main below the other Console.WriteLines, we should be able to see our text:

Console.WriteLine(Marshal.PtrToStringAnsi(simple_struct.string_field));

Finally, we need to tell Rust that it's OK to deallocate that memory, so we import the free_string method just like we did with the other methods and call it: free_string();

The output should look like this:

1
2
Hi, I'm a string in rust

I hope all of this was useful to you! The complete C# code can be found here and the complete Rust code can be found here. Good luck, and happy coding!

Discussion

I followed your article to bind up some common code, but got a problem with bool always being true. Anything special with exposing a function using bool?
Yes, I had issues with bools as well; I didn't cover it in the article because it's weird. What I ended up doing was setting them up in C# as a byte and evaluating whether it was a 1 or a 0, which SUCKS, but is totally doable. I tried getting a bunch of the marshal-as methods to work, but as of yet, none of them have. If you figure it out, let me know! example: rust: c#:

Thanks, it worked. I had a solution of sending and receiving a u8, but with the byte at least I don't have to change anything on the Rust side. Some suggested to use types from libc like c_bool, but I need to get a bit better at Rust. Have just started out. I'll let you know if I find a good solution.

I've been doing Rust FFI at work for a few more months since I wrote this post. There are some things that I'll probably need to go back and update when I get the time. c_bool is a possible solution; there's also some shorthand in C# that may end up working, but I'll make sure to let you know if/when I get it working!

Thanks :-)

C# bools are marshalled as Win32 BOOLs, which are 32-bit signed integers, for historical reasons. Still, MarshalAs bool "should" work, correct?

Use bool on the Rust side but byte on the C# side, or (better) make a user-defined struct on the C# side, e.g. "ByteBool", that holds a single byte value and implements the conversions to/from System.Boolean.

[StructLayout(LayoutKind.Sequential)]
public struct test {
    public ByteBool isbool;
}

You should annotate the structs (on the C# side) with [StructLayout(LayoutKind.Sequential)] - otherwise the CLR is free to reorder fields or add/remove padding, which will break the ABI.

You're totally correct. I have that in the production code this is modeled after and I forgot it. I'll add it in, thanks!

Hey, thanks for the post. As you noted, the string handling is not ideal. I suggest you allocate a Box / Vec for the string and pass it to C#. From there, you just copy it into its native String type and call a Rust-defined free_string function.
For users who are inexperienced with memory management / unsafety, the additional overhead seems justified to me. Another minor thing I've noticed is the unidiomatic return in one function (it can be skipped at the end) ;)

Hey, thanks for the reply. I have a lot more experience with this now than when I wrote this. It definitely needs to be updated; I'll try to get around to it sooner rather than later.

Any links on how to do that?

Don't have a link at hand, but I'd just return the char pointer directly (instead of storing it in a global) and ask for it again in the free function.

Hey, hope it's not too late for a question. I tried your tutorial, but I have a problem calling a second function. It throws an error saying it can't find an entry point for the second function. Have you any idea how I can call another function in the same DLL? Or do you maybe have an example?

Did you try to use thread_local! instead of static mut?

Nope, I'll look into it though. Thanks!

What if we use a fixed size char (byte) array? Would that make passing a string simpler? Do you know how to do that?

I haven't done it yet, though I can think of no reason it wouldn't work. I'll see if I can throw together an example sometime today.

The explicit Int32ing makes no sense. You're not being 'explicit' about the type - int is 100% always shorthand for Int32. It's as meaningless as writing Object instead of object.
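One commenter above suggests returning the char pointer directly and taking it back in the free function, instead of the static-variable approach used in the article. Below is a minimal Rust sketch of that idea; the function names (get_string, free_string) are mine, and this is an illustration rather than a drop-in replacement for the article's code:

```rust
use std::ffi::{CStr, CString};
use std::os::raw::c_char;

// Hand the string to the caller; ownership of the allocation travels
// with the returned pointer, so no global mutable state is needed.
#[no_mangle]
pub extern "C" fn get_string() -> *mut c_char {
    CString::new("Hi, I'm a string in rust")
        .expect("string must not contain interior NUL bytes")
        .into_raw()
}

// The caller passes the same pointer back; re-wrapping it in a CString
// lets Rust's allocator reclaim the memory when the CString drops.
#[no_mangle]
pub extern "C" fn free_string(ptr: *mut c_char) {
    if ptr.is_null() {
        return; // freeing null is a harmless no-op
    }
    unsafe {
        let _ = CString::from_raw(ptr);
    }
}

fn main() {
    // Round-trip the pointer from the Rust side as a sanity check.
    let p = get_string();
    let text = unsafe { CStr::from_ptr(p) }.to_str().unwrap().to_owned();
    assert_eq!(text, "Hi, I'm a string in rust");
    free_string(p);
}
```

On the C# side, free_string would then be imported as taking an IntPtr, and the pointer received in the struct would be passed back once the string has been copied out with Marshal.PtrToStringAnsi.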
https://dev.to/living_syn/calling-rust-from-c-6hk
These options can be applied in addition to the relevant Basic options to use asymmetric key encryption. In order to encrypt the payload, the marshal processor needs to be applied on the route, followed by the secureXML() tag. In order to decrypt the payload, the unmarshal processor needs to be applied on the route, followed by the secureXML() tag. Given below are several examples of how marshalling could be performed at the Document, Element, and Content levels. A namespace prefix that is defined as part of the camelContext definition can be re-used in context within the data format secureTag attribute of the secureXML element. This data format is provided within the camel-xmlsecurity component.
http://camel.apache.org/xmlsecurity-dataformat.html
Lots of people smarter than me have talked about primitive obsession in C#. In particular I found these resources by Jimmy Bogard, Mark Seemann, Steve Smith, and Vladimir Khorikov, as well as Martin Fowler's Refactoring book. I've recently started looking into F#, in which this is considered a solved problem as far as I can tell!

An example of the problem

To give some context, below is a very basic example of the problem. Imagine you have an eCommerce site, in which Users can place an Order. An Order for this example is very basic, just the following few properties:

public class Order
{
    public Guid Id { get; set; }
    public Guid UserId { get; set; }
    public decimal Total { get; set; }
}

You can create and read the Orders for a User using the OrderService:

public class OrderService
{
    private readonly List<Order> _orders = new List<Order>();

    public void AddOrder(Order order)
    {
        _orders.Add(order);
    }

    public Order GetOrderForUser(Guid orderId, Guid userId)
    {
        return _orders.FirstOrDefault(
            order => order.Id == orderId && order.UserId == userId);
    }
}

This trivial implementation stores the Order objects in memory, and has just two methods:

AddOrder(): Add a new Order to the collection
GetOrderForUser(): Get an Order with the given Id and UserId.
Finally, we have an API controller that can be called to create a new Order or fetch an Order:

[Route("api/[controller]")]
[ApiController, Authorize]
public class OrderController : ControllerBase
{
    private readonly OrderService _service;

    public OrderController(OrderService service)
    {
        _service = service;
    }

    [HttpPost]
    public ActionResult<Order> Post()
    {
        var userId = Guid.Parse(User.FindFirstValue(ClaimTypes.NameIdentifier));
        var order = new Order { Id = Guid.NewGuid(), UserId = userId };
        _service.AddOrder(order);
        return Ok(order);
    }

    [HttpGet("{orderId}")]
    public ActionResult<Order> Get(Guid orderId)
    {
        var userId = Guid.Parse(User.FindFirstValue(ClaimTypes.NameIdentifier));
        var order = _service.GetOrderForUser(userId, orderId);
        if (order == null)
        {
            return NotFound();
        }
        return order;
    }
}

This ApiController is protected with an [Authorize] attribute, so users have to be logged in to call it. It exposes two action methods:

Post(): Used to create a new Order. The new Order object is returned in the response body.
Get(): Used to fetch an order with the provided ID. If found, the Order is returned in the response body.

Both methods need to know the UserId for the currently logged in user, so they find the ClaimTypes.NameIdentifier from the current User claims and parse it into a Guid. Unfortunately, the API controller code above has a bug. Did you spot it? I don't blame you if not, I doubt I would.

The bug - All GUIDs are interchangeable

The code compiles and you can add a new Order successfully, but calling Get() always returns a 404 Not Found. The problem is on this line in OrderController.Get(), where we're fetching the Order from the OrderService:

var order = _service.GetOrderForUser(userId, orderId);

The signature for that method is:

public Order GetOrderForUser(Guid orderId, Guid userId);

The userId and orderId arguments are inverted at the call site!
This example might seem a little contrived (requiring the userId to be provided feels a bit redundant) but this general pattern is something you'll probably see in practice many times. Part of the problem is that we're using a primitive object (System.Guid) to represent two different concepts: the unique identifier of a user, and the unique identifier of an order. The problem of using primitive values to represent domain concepts is called primitive obsession.

Primitive obsession

"Primitives" in this case refer to the built-in types in C#: bool, int, Guid, string, etc. "Primitive obsession" refers to over-using these types to represent domain concepts that aren't a perfect fit. A common example might be a ZipCode or PhoneNumber field that is represented as a string (or even worse, an int!)

A string might make sense initially; after all, you can represent a Zip Code as a string of characters. But there are a couple of problems with this. First, by using a built-in type (string), all the logic associated with the "Zip Code" concept must be stored somewhere external to the type. For example, only a limited number of string values are valid Zip Codes, so you will no doubt have some validation for Zip Codes in your app. If you had a ZipCode type you could encapsulate all this logic in one place. Instead, by using a string, you're forced to keep the logic somewhere else. That means the data (the ZipCode field) and the methods for operating on it are separated, breaking encapsulation.

Secondly, by using primitives for domain concepts, you lose a lot of the benefits of the type system. C# won't let you do something like the following:

int total = 1000;
string name = "Jim";
name = total; // compiler error

But it has no problem with this, even though this would almost certainly be a bug:

string phoneNumber = "+1-555-229-1234";
string zipCode = "1000 AP";
zipCode = phoneNumber; // no problem!
You might think this sort of "mis-assignment" is rare, but a common place to find it is in methods that take multiple primitive objects as parameters. This was the problem in the original GetOrderForUser() method. So what's the solution to this primitive obsession? The answer is encapsulation. Instead of using primitives, we can create custom types for each separate domain concept. Instead of a string representing a Zip Code, create a ZipCode class that encapsulates the concept, and use the ZipCode type throughout your domain models and application.

Using strongly-typed IDs

So coming back to the original problem, how do we avoid the transposition error in GetOrderForUser?

var order = _service.GetOrderForUser(userId, orderId);

By using encapsulation! Instead of using a Guid representing the ID of a User or the ID of an Order, we can create strongly-typed IDs for both. So instead of a method signature like this:

public Order GetOrderForUser(Guid orderId, Guid userId);

You have a method like this (note the method argument types):

public Order GetOrderForUser(OrderId orderId, UserId userId);

An OrderId cannot be assigned to a UserId, and vice versa, so there's no way to call the GetOrderForUser method with the arguments in the wrong order - it wouldn't compile!

So what do the OrderId and UserId types look like? That's up to you, but in the following section I show an example of one way you could implement them.

An implementation of OrderId

The following example is an implementation of OrderId:

public readonly struct OrderId : IComparable<OrderId>, IEquatable<OrderId>
{
    public Guid Value { get; }

    public OrderId(Guid value)
    {
        Value = value;
    }

    public static OrderId New() => new OrderId(Guid.NewGuid());

    public bool Equals(OrderId other) => Value.Equals(other.Value);
    public int CompareTo(OrderId other) => Value.CompareTo(other.Value);

    public override bool Equals(object obj) => obj is OrderId other && Equals(other);
    public override int GetHashCode() => Value.GetHashCode();
    public override string ToString() => Value.ToString();

    public static bool operator ==(OrderId a, OrderId b) => a.Equals(b);
    public static bool operator !=(OrderId a, OrderId b) => !a.Equals(b);
}

OrderId is implemented as a struct here - it's a simple type that just wraps a Guid, so a class would probably be overkill. That said, if you're using an ORM like EF 6, using a struct might cause you problems, so a class might be easier. That also gives the option of creating a base StronglyTypedId class to avoid some of the boilerplate. There are some other potential issues with using a struct, for example implicit parameterless constructors.
Vladimir has a discussion about these problems here. The only data in the type is held in the property, Value, which wraps the original Guid value that we were previously passing around. We have a single constructor that requires you to pass in the Guid value. Most of the functions are overrides of the standard object methods, and implementations of the IEquatable<T> and IComparable<T> methods to make it easier to work with the type. There are also overrides for the equality operators. I've written a few example tests demonstrating the type below. Note: as Jared Parsons suggests in his recent post, I marked the struct as readonly for performance reasons. You need to be using C# 7.2 at least to use readonly struct.

Testing the strongly-typed ID behaviour

The following xUnit tests demonstrate some of the characteristics of the strongly-typed ID OrderId. They also use a (similarly defined) UserId to demonstrate that they are distinct types.

public class StronglyTypedIdTests
{
    [Fact]
    public void SameValuesAreEqual()
    {
        var id = Guid.NewGuid();
        var order1 = new OrderId(id);
        var order2 = new OrderId(id);
        Assert.Equal(order1, order2);
    }

    [Fact]
    public void DifferentValuesAreUnequal()
    {
        var order1 = OrderId.New();
        var order2 = OrderId.New();
        Assert.NotEqual(order1, order2);
    }

    [Fact]
    public void DifferentTypesAreUnequal()
    {
        var userId = UserId.New();
        var orderId = OrderId.New();
        //Assert.NotEqual(userId, orderId); // does not compile
        Assert.NotEqual((object) userId, (object) orderId);
    }

    [Fact]
    public void OperatorsWorkCorrectly()
    {
        var id = Guid.NewGuid();
        var same1 = new OrderId(id);
        var same2 = new OrderId(id);
        var different = OrderId.New();

        Assert.True(same1 == same2);
        Assert.True(same1 != different);
        Assert.False(same1 == different);
        Assert.False(same1 != same2);
    }
}

By using strongly-typed IDs like these we can take full advantage of the C# type system to ensure different concepts cannot be accidentally used interchangeably.
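As a cross-language aside (not from the article), the same guarantee is cheap to get in any language with lightweight wrapper types. Here is an illustrative Rust sketch of the equivalent newtype pattern, with a u128 standing in for a Guid; all names here are invented for the example:

```rust
// Each ID concept gets its own type. Deriving the comparison traits
// gives us value equality for free, like the C# struct above.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct OrderId(u128);

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct UserId(u128);

// The compiler now rejects transposed arguments outright.
fn get_order_for_user(order_id: OrderId, user_id: UserId) -> String {
    format!("order {:x} for user {:x}", order_id.0, user_id.0)
}

fn main() {
    let order = OrderId(0xA1);
    let user = UserId(0xB2);
    println!("{}", get_order_for_user(order, user));
    // get_order_for_user(user, order); // <- does not compile: mismatched types
}
```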
Using these types in the core of your domain will help prevent simple bugs like incorrect argument order issues, which can be easy to introduce and tricky to spot! Unfortunately, it's not all sunshine and roses. You may be able to use these types in the core of your domain easily enough, but inevitably you'll eventually have to interface with the outside world. These days, that's typically via JSON APIs, often using MVC with ASP.NET Core. In the next post I'll show how to create some simple converters to make working with your strongly-typed IDs simpler.

Summary

C# has a great type system, so we should use it! Primitive obsession is very common, but you should do your best to fight against it. In this post I showed a way to avoid issues with incorrectly using IDs by using strongly-typed IDs, so the type system can protect you. In the next post I'll extend these types to make them easier to use in an ASP.NET Core app.
https://andrewlock.net/using-strongly-typed-entity-ids-to-avoid-primitive-obsession-part-1/
I finished a first step. I have refactored the AntUnit task in order to extract the logic of running the antunit tests from the logic related to the interaction with the container project. Feedback is welcome.

I'm +1 to migrate antunit completely to 1.5 (I know, again the same discussion...).

Gilles Scokart

2009/1/19 Kevin Jackson <foamdino@gmail.com>:
>>> I already some code prepared to support this. The adaptation layer :
>>
>>> public class CompilePath extends TestCase {
>>
>>> public static TestSuite suite() {
>>> File script = new
>>> File("src/test/java/net/sourceforge/deco/ant/test-compilepath.xml");
>>> return new AntUnitSuite(script);
>>> }
>>
>> Looks good.
>>
>>> Before check-in the code, I would like to have a feed back to know if
>>> I'm the only one to find it useful, and if that doesn't contradict
>>> with the current antunit philosophy.
>>
>> You are talking about adding it to AntUnit, right?
>>
>> +1 from me.
>
> +1 - looks quite handy - at the moment I'm a shell luddite with
> antunit and this would be nice to try in IntelliJ and Eclipse
>
> Kev
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org
> For additional commands, e-mail: dev-help@ant.apache.org
>
http://mail-archives.apache.org/mod_mbox/ant-dev/200901.mbox/%3Ce9d8a2610901271337s13b4f41cm9f8c35369d3c16f8@mail.gmail.com%3E
{-# LANGUAGE RecordWildCards, DeriveDataTypeable #-}

-- |A module for automatic, optimal protocol pipelining.
--
-- Protocol pipelining is a technique in which multiple requests are written
-- out to a single socket without waiting for the corresponding responses.
-- The pipelining of requests results in a dramatic improvement in protocol
-- performance.
--
-- [Optimal Pipelining] uses the least number of network packets possible
--
-- [Automatic Pipelining] means that requests are implicitly pipelined as much
-- as possible, i.e. as long as a request's response is not used before any
-- subsequent requests.
--
-- We use a BoundedChan to make sure the evaluator thread can only start to
-- evaluate a reply after the request is written to the output buffer.
-- Otherwise we will flush the output buffer (in hGetReplies) before a command
-- is written by the user thread, creating a deadlock.
--
-- # Notes
--
-- [Eval thread synchronization]
--   * BoundedChan performs better than Control.Concurrent.STM.TBQueue
--
module Database.Redis.ProtocolPipelining (
    Connection,
    connect, disconnect, request, send, recv,
    ConnectionLostException(..),
    HostName, PortID(..)
) where

import Prelude hiding (catch)
import Control.Concurrent (ThreadId, forkIO, killThread)
import Control.Concurrent.BoundedChan
import Control.Exception
import Control.Monad
import Data.Attoparsec
import qualified Data.ByteString as S
import Data.IORef
import Data.Typeable
import Network
import System.IO
import System.IO.Unsafe

data Connection a = Conn
    { connHandle  :: Handle        -- ^ Connection socket-handle.
    , connReplies :: IORef [a]     -- ^ Reply thunks.
    , connThunks  :: BoundedChan a -- ^ See note [Eval thread synchronization].
    , connEvalTId :: ThreadId      -- ^ 'ThreadID' of the eval thread.
    }

data ConnectionLostException = ConnectionLost
    deriving (Show, Typeable)

instance Exception ConnectionLostException

connect :: HostName -> PortID -> Parser a -> IO (Connection a)
connect host port parser = do
    connHandle <- connectTo host port
    hSetBinaryMode connHandle True
    rs          <- hGetReplies connHandle parser
    connReplies <- newIORef rs
    connThunks  <- newBoundedChan 1000
    connEvalTId <- forkIO $ forever $ readChan connThunks >>= evaluate
    return Conn{..}

disconnect :: Connection a -> IO ()
disconnect Conn{..} = do
    open <- hIsOpen connHandle
    when open (hClose connHandle)
    killThread connEvalTId

-- |Write the request to the socket output buffer.
--
-- The 'Handle' is 'hFlush'ed when reading replies.
send :: Connection a -> S.ByteString -> IO ()
send Conn{..} = S.hPut connHandle

-- |Take a reply from the list of future replies.
--
-- 'head' and 'tail' are used to get a thunk of the reply. Pattern matching (:)
-- would block until a reply could be read.
recv :: Connection a -> IO a
recv Conn{..} = do
    rs <- readIORef connReplies
    writeIORef connReplies (tail rs)
    let r = head rs
    writeChan connThunks r
    return r

request :: Connection a -> S.ByteString -> IO a
request conn req = send conn req >> recv conn

-- |Read all the replies from the Handle and return them as a lazy list.
--
-- The actual reading and parsing of each 'Reply' is deferred until the spine
-- of the list is evaluated up to that 'Reply'. Each 'Reply' is cons'd in front
-- of the (unevaluated) list of all remaining replies.
--
-- 'unsafeInterleaveIO' only evaluates its result once, making this function
-- thread-safe. 'Handle' as implemented by GHC is also thread-safe, it is safe
-- to call 'hFlush' here. The list constructor '(:)' must be called from
-- /within/ unsafeInterleaveIO, to keep the replies in correct order.
hGetReplies :: Handle -> Parser a -> IO [a]
hGetReplies h parser = go S.empty
  where
    go rest = unsafeInterleaveIO $ do
        parseResult <- parseWith readMore parser rest
        case parseResult of
            Fail{}       -> errConnClosed
            Partial{}    -> error "Hedis: parseWith returned Partial"
            Done rest' r -> do
                rs <- go rest'
                return (r:rs)

    readMore = do
        hFlush h -- send any pending requests
        S.hGetSome h maxRead `catchIOError` const errConnClosed

    maxRead       = 4*1024
    errConnClosed = throwIO ConnectionLost

catchIOError :: IO a -> (IOError -> IO a) -> IO a
catchIOError = catch
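The module's comments describe the core trade: writes are buffered, and the buffer is only flushed when a reply is actually demanded, so many requests can share one network round trip. As a language-neutral illustration (a toy model, not how hedis is implemented), here is a small Rust sketch in which the "transport" is just a closure that answers a whole batch of requests at once:

```rust
use std::collections::VecDeque;

// A toy pipelined client: `send` only buffers; the buffered batch is
// handed to the transport once, on the first `recv` that needs a reply.
struct Pipeline<F: FnMut(&[String]) -> Vec<String>> {
    transport: F,              // stands in for one socket round trip
    pending: Vec<String>,      // requests written but not yet flushed
    replies: VecDeque<String>, // replies received but not yet consumed
    round_trips: usize,        // how many times we actually hit the wire
}

impl<F: FnMut(&[String]) -> Vec<String>> Pipeline<F> {
    fn new(transport: F) -> Self {
        Pipeline { transport, pending: Vec::new(), replies: VecDeque::new(), round_trips: 0 }
    }

    fn send(&mut self, req: &str) {
        self.pending.push(req.to_string()); // no I/O happens here
    }

    fn recv(&mut self) -> Option<String> {
        if self.replies.is_empty() && !self.pending.is_empty() {
            // Flush everything in one round trip, like hFlush in hGetReplies.
            self.round_trips += 1;
            let batch = (self.transport)(&self.pending);
            self.replies.extend(batch);
            self.pending.clear();
        }
        self.replies.pop_front()
    }
}

fn main() {
    // A mock server that "answers" each request by prefixing it with '+'.
    let echo = |reqs: &[String]| reqs.iter().map(|r| format!("+{}", r)).collect();
    let mut p = Pipeline::new(echo);
    for cmd in ["PING", "GET k", "SET k v"] {
        p.send(cmd);
    }
    assert_eq!(p.recv().as_deref(), Some("+PING"));
    assert_eq!(p.recv().as_deref(), Some("+GET k"));
    assert_eq!(p.recv().as_deref(), Some("+SET k v"));
    assert_eq!(p.round_trips, 1); // three requests, one round trip
}
```

The real module adds what the toy omits: lazy parsing of replies, an eval thread, and the BoundedChan synchronization described in the header comment.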
http://hackage.haskell.org/package/hedis-0.5.1/docs/src/Database-Redis-ProtocolPipelining.html
The following C program uses recursion to find the binary equivalent of a decimal number entered by the user. The user enters a decimal (base 10) number, and the program evaluates its binary (base 2) equivalent. Here is the source code of the C program to find the binary equivalent of the decimal number. The C program is successfully compiled and run on a Linux system. The program output is also shown below.

/*
 * C Program to Convert a Number from Decimal System to Binary System using Recursion
 */
#include <stdio.h>

int convert(int);

int main()
{
    int dec, bin;

    printf("Enter a decimal number: ");
    scanf("%d", &dec);
    bin = convert(dec);
    printf("The binary equivalent of %d is %d.\n", dec, bin);
    return 0;
}

int convert(int dec)
{
    if (dec == 0)
    {
        return 0;
    }
    else
    {
        return (dec % 2 + 10 * convert(dec / 2));
    }
}

$ cc pgm31.c
$ a.out
Enter a decimal number: 10
The binary equivalent of 10 is 1010.
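One caveat worth adding: because the "binary" result is stored as an ordinary integer whose decimal digits mimic bits, it overflows quickly - with a 32-bit int, inputs above 1023 already produce garbage. As a hedged sketch, here is the same recursion in Rust with a u64 result, which extends the safe range to inputs below 2^20:

```rust
// Same recursion as the C version: each binary digit of the input
// becomes one decimal digit of the result (dec % 2 is the low bit).
// A u64 holds at most 20 such digits, so inputs must stay below 2^20.
fn convert(dec: u64) -> u64 {
    if dec == 0 {
        0
    } else {
        dec % 2 + 10 * convert(dec / 2)
    }
}

fn main() {
    println!("The binary equivalent of 10 is {}.", convert(10));
    println!("The binary equivalent of 255 is {}.", convert(255));
}
```

For inputs beyond that range, building the result as a string of '0'/'1' characters avoids the overflow entirely.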
http://www.sanfoundry.com/c-program-number-decimal-to-binary-recursion/
> So whenever I play this scene in the editor - or in the built version - I open up a canvas object and all of a sudden everything in the canvas gets tinted blue. But then when I go to other scenes, all the canvases there are tinted blue as well. It does not reset until I quit and reopen Unity. It is triggered when a certain panel is enabled in just this one scene, but it affects the rest of the canvases in the project. Any ideas? I am on the latest version of Unity, 5.2.1f1. In the picture below, notice how yellow text appears dark blue.

Answer by MikeChurvis · Mar 06, 2016 at 02:49 AM

@Brainrush1 Howdy! Not sure if you're still having this issue, but the same thing just happened to me. Somewhere in a script, I had done this:

void SetColor(Color newColor)
{
    canvasElement.material.color = newColor;
    ...
}

The problem with this is twofold:

Canvas elements have a default shared material called "none" (yes, "none" is an actual material, not the absence of one). When you modify canvasElement.material.color you're changing the "none" material's color, meaning you're changing the color of every canvas element with the "none" material.

This change persists even after you've quit out of the game, meaning it doesn't automatically reset like most other in-game changes do.

To fix this:

Comment out or remove any code that's similar to the problem code above (tip: Command/Control + F for "material.color").

Put this C# script on an empty GameObject in the scene:

using UnityEngine;
using UnityEngine.UI;

public class MaterialFixer : MonoBehaviour
{
    // When the game starts,
    void Awake()
    {
        // Add an Image component to this GameObject
        Image canvasElement = gameObject.AddComponent<Image>();
        // and change the Image's material, shared between all canvas elements, back to white.
        canvasElement.material.color = Color.white;
    }
}

Hit the Play button. The color of all Canvas elements should return to normal and stay that way. You can now safely remove the script.
Let me know if this helps, or if you have any other issues. Cheers! - Mike Churvis

Thanks for the thorough response, as it turns out, the default material on the canvas was being set by some script an old coworker had written - took away that script and everything was ok. Thanks a lot though, Mike. -Pablo Leon-L.
https://answers.unity.com/questions/1079501/canvas-error-everything-tinted-blue.html
26 July 2012 08:47 [Source: ICIS news] SINGAPORE (ICIS)--The first phase of the expansion, which ramped up the plant's original 300,000 dry metric tonne (dmt)/year caustic soda capacity to 400,000 dmt/year, was completed at the end of March. The second phase, which consists of an additional 100,000 dmt/year caustic soda capacity, was initially expected to start up in July. The source did not specify the reason for the delay. The producer is currently operating its 400,000 dmt/year caustic soda facility at around 60-70%, the source said.

Additional reporting by Sikee
http://www.icis.com/Articles/2012/07/26/9581125/chinas-ningbo-donggang-delays-second-chlor-alkali-unit.html
Python Code To Generate Random Numbers

Problem Statement

A random number sequence is a series of numbers with an unpredictable order. The following code will generate random numbers in Python.

Solution

- Import the "random" library, which will be used to generate random numbers.
- Create a function that takes as input the count of random numbers to be generated.
- In the function, use a for loop that runs as many times as the number of random numbers to be generated.
- In the loop, append a random number to a list, then return the list.

How Does the Python Random Generator Work?

The code below will print "x" random values between "a" and "b". In the for loop, range(x) determines how many random numbers you'll get. If you want 20 values, use range(20). Use range(5) if you only want 5 values returned, etc. The code will then select a random integer between "a" and "b" (inclusive on both ends) for you.

NOTE: "a" and "b" must be integers with a <= b.

Python Program/Code To Generate Random Numbers

import random # import statement

def random_num(no, a, b): # function definition
    l = [] # create an empty list
    for x in range(no): # loop once per random number to be generated
        l.append(random.randint(a, b)) # append a random number to the list
    return l # return the list

Output: random_num(10, 1, 100)

- 97
- 75
- 47
- 88
- 24
- 58
- 13
- 50
- 73
- 58
https://officetricks.com/python-random-number-generator-code/
One of my favorite parts of the summer is attending music festivals. Most festivals offer "early bird" tickets for a significantly lower price than general admission; however, they typically sell out well before the actual event. Whether it is laziness, lack of money, or just plain stupidity, I never seem to purchase these early bird tickets on time and have to look to different options. In recent years I have found success using Craigslist last minute, around a week before the festival, and getting tickets around or even lower than the early bird/pre-sale prices. This year, instead of sitting on Craigslist day after day refreshing, I decided to try and automate the process. After looking at the structure of the Craigslist results page and messing around with BeautifulSoup I came up with the following script.

import requests
from bs4 import BeautifulSoup
from urlparse import urljoin

URL = ''
BASE = ''

response = requests.get(URL)
soup = BeautifulSoup(response.content)

for listing in soup.find_all('p', {'class': 'row'}):
    if listing.find('span', {'class': 'price'}) != None:
        price = listing.text[2:6]
        price = int(price)
        if price <= 250 and price > 100:
            print listing.text
            link_end = listing.a['href']
            url = urljoin(BASE, link_end)
            print url
            print "\n"

Requests is used to get all of the data from the webpage, and then Beautiful Soup parses out everything I was interested in. Once the script is run, it returns the most recently posted tickets between $100 and $250 with the price, listing title, location, and link. Using this script in conjunction with something like cron or OS X's launchd, you can have the script run a few times a day and the output emailed to you. In the future I think it would be interesting to keep track of third-party ticket sales as the event approaches on websites like eBay, StubHub, and Craigslist, and see when the best time to buy is. Something similar to this study on when to book a flight.
Also a web app that allows you to search all three at the same time could prove interesting. Any feedback is appreciated, you can reach me on Twitter or email me at danforsyth1@gmail.com.
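For anyone on Python 3 without BeautifulSoup installed, the same parsing idea can be sketched with the standard library's html.parser. The markup below is a simplified stand-in for Craigslist's real listing HTML, and the class and variable names are my own, not from the original script:

```python
from html.parser import HTMLParser


class ListingParser(HTMLParser):
    """Collects (price, link) pairs from listing markup."""

    def __init__(self):
        super().__init__()
        self._in_price = False
        self._last_link = None
        self.listings = []  # (price, link) pairs

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a":
            self._last_link = attrs.get("href")
        elif tag == "span" and attrs.get("class") == "price":
            self._in_price = True

    def handle_data(self, data):
        if self._in_price:
            price = int(data.strip().lstrip("$"))
            self.listings.append((price, self._last_link))
            self._in_price = False


# Hypothetical sample markup, mimicking the <p class="row"> structure.
sample = """
<p class="row"><a href="/tix/1.html">Festival pass</a>
  <span class="price">$150</span></p>
<p class="row"><a href="/tix/2.html">VIP pass</a>
  <span class="price">$400</span></p>
"""

parser = ListingParser()
parser.feed(sample)
deals = [(p, l) for p, l in parser.listings if 100 < p <= 250]
print(deals)  # [(150, '/tix/1.html')]
```

Fetching the page itself could then be done with urllib.request instead of requests if you want to stay dependency-free.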
http://www.danielforsyth.me/finding-the-best-ticket-price-simple-web-scraping-with-python/
Moved!

Posted November 3, 2009 - In: .NET | C# 4.0

If you have dealt with COM interop before, then you probably know what indexed properties are. If you don't, hang on with me and you will find out in the coming few lines. Consuming indexed properties is a new feature in C# 4.0 Beta 2. It is used to improve syntax like the following:

```csharp
var excel = new Microsoft.Office.Interop.Excel.ApplicationClass();
excel.get_Range("A1");
```

This syntax can now be improved to get rid of get_Range("A1") and use an indexer accessor instead. Here's what C# 4.0 Beta 2 can do for you:

```csharp
var range = excel.Range["A1"];
```

So now, every time you use COM interop and have to call get_x() and set_x(), you can replace this with the new indexer syntax. I have to tell you (well, you might have guessed it) that this is just syntactic sugar; the compiler will still emit calls to get_x() and set_x() ultimately. I think this little syntax improvement is pretty neat. However, people shouldn't ask the very expected question: "Well, now we can consume indexed properties in C#, why can't we create them? We want to create indexed properties! Indexed properties are a legal right! blah blah ..". If C# allowed us to create indexed properties, I think this would add an ambiguity that isn't worth anything here. I mean, take a look at the following code and tell me what it would mean to you if C# enabled you to create indexed properties:

```csharp
obj.SomeProperty[1]
```

Does it mean that SomeProperty is a type that implements an indexer, or that SomeProperty is a property that requires an index? See the ambiguity? What are your thoughts on this, dear reader?

When is the increment happening?

Posted August 20, 2009 - In: .NET | C# 3.0 | General | tech

In my last post I linked to a very interesting article on Eric Lippert's blog. Now I want to follow up on the subject of the post-increment (++) operator.
Personally, I used to think that the post-increment operator increments the value of its operand at the very end of the current statement (i.e. just before the terminating ";"). So according to my understanding, the expression (x+(y++)+y) would evaluate to 0 (of course, if both x and y were initialized to 0). If you try this example with the C# compiler you will find that the result of the expression is 1! (Yes, not 0 :S.) Obviously I was wrong! Here's how I thought the sequence of evaluating the expression (z = x+(y++)+y;) would go:

First, x is added to y and the result is stored somewhere (a, for example).
Second, the result of the first step is added to y again (note: y here is still 0, the increment hasn't happened yet) and the result of this evaluation is stored somewhere (b, for example).
Third, the assignment happens, assigning the result of the second step (b) to whatever is on the left side of the "=" operator.
Fourth, now (and only now) y is incremented; y = 1 now.

If this were right, I would get 0 as the result of Console.WriteLine(z); but it is NOT. This flow of evaluation is not correct. My understanding of the post-increment operator was not right! (I'm doomed, eh!?) So I checked the C# documentation, and still couldn't find any specific statement about when exactly the increment happens. After an hour on the internet, I turned to my favorite site, my third place now, Stackoverflow.com. I logged in and posted a question. Go ahead, click the link and check the accepted answer; if you're lazy, here's my summary of it. First you have to know that the ++ operator has an operation to perform with a specified precedence and a SIDE EFFECT. In short, the ++ operator has higher precedence than (* + - /). See the specs. This means that the operator will execute instantly! Yes, instantly! So the increment will be the first thing to happen (whether it's a post-increment or a pre-increment). The difference between pre and post comes from the side effect.
When post-incrementing, the old value is stored, then the increment happens (i.e. the value of the variable increments), and the value that's used to continue evaluating the expression is (yeah, you guessed it) the old value. So to get back to the example (z = x + (y++) + y;), the order of evaluating this expression is as follows:

- Store the value of y somewhere, say a. a now = 0.
- Increment y (remember, the increment has the highest precedence).
- Add x to the value of a (which is the old value of y; that's the side-effect part of the operator) and save the result of the addition somewhere, say b. b now = 0.
- Add the value of b to the value of y (what's the value of y now? Yes, you're right, it's 1), and store the result somewhere, say c. c now = 1.
- Assign the value of c to z. z now = 1.

As you can see, the increment happens instantly, and its side effect forces the use of the old value of the incremented variable for evaluating the very next part of the expression. After that, when dereferencing the incremented variable to get its value, you actually get the incremented value. Makes perfect sense to me now. What about you, dear reader?

A little while ago, Eric Lippert took a stand against the programming myths that people have about C#, starting with his awesome "The stack is an implementation detail", then "Not everything derives from object" and today's "Precedence vs Order, Redux". I have to say it was an awesome job, very nice articles, just like everything Eric writes. I strongly recommend that you scan through these articles; a lot of mysteries are unveiled there. I'm not going to repeat what Eric said because I'm too proud to do so! Well, not really! It's just that I'm not as good as Eric at thinking of providing new versions of his articles (what new would these new versions have anyway?). I just want to stop by the last article, Precedence vs Order, Redux. I had a nice 12-part conversation on Facebook with one of my friends.
At first I posted a Twitter post with a question and a link to Eric's article. I got a comment from one of my friends, who stands as one of the very best students at Century 21, IBM. The man's comment was "value = 1". Nice and easy! Well, that wasn't what I expected from this friend (I know the answer, I ran the sample code and saw the value of value, and I read Eric's article). What I expected him to say was 0, because as far as I knew, this program should work this way:

1. Get the value of arr[0] (the inner expression arr[0]); 0 is returned.
2. Get the value of arr[0] again (the outer arr[arr[0]]); still 0 is returned.
3. Now assign the value of 0 to the variable value.
4. Now, and not before now, increment arr[0].

Apparently what I knew was wrong. The increment didn't happen the moment I expected it to happen (after assigning the value); it actually happened before assigning the value. Frankly, I didn't understand why! I tried to search the C# spec and couldn't find anything stating strictly when the increment should happen in the case of post-increment and pre-increment. Another interesting and confusing case is a snippet I saw in a comment by Peter Ibboston on the same article by Eric:

The one that confused me is this C# code which gives different results in C & C#:

```csharp
int[] data = {11, 22, 33};
int i = 1;
data[i++] = data[i] + 5;
```

I couldn't find a bit in the C# spec that said anything about when the increment should happen. I had presumed that the rvalue would be evaluated first, followed by the lvalue. Any ideas where I missed it?

To finalize, I would say there's no documentation that specifically tells when the increment will happen (AFAIK, of course).
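Python has no ++ operator, but the ordering described above can be mimicked with a small helper that performs the increment immediately and hands back the old value. The Ref holder and helper names below are my own sketch, not code from either post, and the array case follows C# semantics specifically (in C this snippet is famously unsequenced):

```python
class Ref:
    """A tiny mutable holder, standing in for a C# variable."""
    def __init__(self, value):
        self.value = value


def post_inc(ref):
    old = ref.value   # remember the old value...
    ref.value += 1    # ...increment immediately (the side effect)...
    return old        # ...and the expression continues with the old value


# Mirrors z = x + (y++) + y; with x = y = 0.
x, y = Ref(0), Ref(0)
z = x.value + post_inc(y) + y.value
print(z)  # 1, matching the C# result

# Mirrors Peter's snippet under C# rules: the lvalue's index (and its
# side effect) is evaluated before the right-hand side is read.
data = [11, 22, 33]
i = Ref(1)
index = post_inc(i)              # index = 1, i is now 2
data[index] = data[i.value] + 5  # data[1] = data[2] + 5 = 38
print(data)  # [11, 38, 33]
```

Writing the side effect out as an explicit helper makes the "increment first, use the old value" rule impossible to misread.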
Eric emphasized an interesting statement in his article too. He said:

I emphasize that the logical sequence is well-defined because the compiler, jitter and processor are all allowed to change the actual order of events for optimization purposes, subject to the restriction that the optimized result must be indistinguishable from the required result in single-threaded scenarios.

The compiler, the jitter, and the processor are all able to modify the logical sequence for optimization purposes. The optimization shouldn't affect the output in a single-threaded application; however, in a multi-threaded app, it might be observable. Yeah, I see. I still don't know why it works this way. When exactly is the increment happening? What do you think, dear reader? Please help me clarify the facts and eliminate these confusions.

Yeah, I'm back. I know it's been a long time since I made my last blog post. Actually there's no reason why I stopped blogging, besides being lazy of course. However, I'm back, and I'm back with big news. As you've read in the title of this post, Google announced Chrome OS! (See the exclamation mark?) I'm not going to claim, like others, that I was not shocked; actually I was. Why yet another operating system? Ain't Mac, Linux, and Windows enough already? Actually there have been a lot of explanations of this move by Google since yesterday. Kent Beck wrote an interesting article about his vision of Chrome OS. Kent's article is very interesting; I'm sure you can judge that by reading only the title, "Chrome OS is Worse, That's the Point". Kent started his discussion with Chrome, the browser (it's important to be specific from now on :)). He mentioned that Chrome's advantages outweigh its disadvantages, so he uses it:

So I decided to try it, to simulate using Chrome as my operating system. I made it my default browser (in spite of Microsoft's periodic attempts to change my preference) and expanded it to full screen.
From then on I did everything I could on the web.

The key idea here, according to Kent, is that it doesn't matter whether the new invention is better or worse; what matters is the alternatives that this new invention provides for people to do things they care about (I think email, and all web stuff in general, is something most people care about nowadays):

Innovations that start out worse need to be better at something new that matters. Imagine never having to install an application again. Never having to back up. Never having to reinstall the OS because it's just gotten way too weird. I'd give up a lot to gain that. That was the point of telling you about my experiment: I've seen the future and it's not so bad.

And here's what Kent thinks Microsoft, Apple, and Linux should do to stay strong in the desktop world:

To remain strong in desktop operating systems, though, Apple or Microsoft or the Linux desktops would have to abandon their current profit model, find a fresh ultra-simplification twist, and run the new business far from rational-but-doomed headquarters (Merlin, Oregon has a lovely abandoned sawmill site ready for development, in case you're interested). They aren't likely to do so, though, because it makes no sense.

The question about the potential success or failure of Google Chrome OS will remain alive for a long time, as Google Chrome OS will only be shipped later next year. I'm not sure! What do you think, dear reader?

Anonymous Methods and Lambdas

Posted January 10, 2009

A while ago, when I posted my last post on this blog, which was titled "Delegates and Events – A Look Back", I stated that it was an introduction to lambda expressions. Though it took me more than a month to blog again, here I'm continuing the journey through the C# 3.0 language enhancement features. The first thing we will discuss here is anonymous methods. As you know, anonymous methods are just a shorthand way of subscribing to events and providing handlers all in one shot.
The following code example shows the old way to subscribe to a Click event on a normal System.Windows.Forms.Button object. This is the classic way of subscribing to an event: you simply provide a pre-defined delegate (in this case EventHandler) that points to a function matching a specific signature (in this case, one that returns void and accepts two input parameters, a System.Object parameter and a System.EventArgs parameter). Using anonymous methods, life can be easier. Check out the next code sample. As you can see, all I needed to do in order to subscribe to the Click event on the Button object was simply write the code that handles the event, without having to define a whole new function just to handle this event, and also without having to know the event delegate's signature.

Lambdas

Lambda expressions are another new way in C# 3.0 to substitute for delegates in certain places. Now consider the following example: you have a List<int> and you want to filter this list and get only the odd numbers out of it. One solution that comes in handy is to use the FindAll method of your generic List<int>. FindAll expects one argument, which is of type System.Predicate<T>. System.Predicate<T> is a delegate that can point to any method that returns bool and takes a single argument T. The point is that when FindAll was designed, it was designed like this: take each item, check it, and then tell me whether it should be included in the result set of the call. The following example illustrates the use of the smart FindAll method; take a look. As you see, I've defined a method called IsOdd that takes a single integer parameter and returns a boolean value indicating whether the passed-in parameter is odd or not. I then called the FindAll method on my list of integers (numbers), passing in a new Predicate delegate that points to IsOdd.
What is going to happen here is that my numbers list will take each of its elements, pass it to IsOdd (the method that the Predicate delegate points to), and check the returned value: if true, the item is added to the result set; if false, the item is ignored. If you run the above code you will get the following result. Now, what if you don't want to define this whole IsOdd method that does a really tiny job here and will not be reused by any other piece of code? Well, you guessed it: use the anonymous method syntax. A lambda expression is another handy way of providing an anonymous method. The syntax might seem clumsy at first, but once you get it, you never quit it ;) What you see here is a lambda expression in action. The FindAll method expects a delegate, and this time, instead of passing it a delegate or an anonymous method, I passed in, well, a lambda: one that operates on a single integer parameter and returns the result of the expression x % 2 != 0. The thing that most people find uncomfortable about lambdas is that they can't pronounce them; yeah, they write them, but they can't pronounce them. Our example above would be pronounced as follows: "my only parameter will be processed this way, as what's between the {} states". A lambda expression can be in one of two forms: a single-line form and a code-block form. In our example we wrote a single statement between two curly brackets; in fact I could have written any number of statements as needed. I will show you how lambda expressions can appear in a single-line form. Well, that's it for lambdas; for any questions or suggestions, feel absolutely free to leave a comment.

- In: C# 3.0

Every .NET programmer has used delegates in one way or another and hopefully appreciated the flexibility delegates provide. A delegate is simply a type-safe, object-oriented function pointer: a pointer that can point to a function and can be used to dynamically invoke methods.
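Both ideas above, predicates passed to a filter and names bound to functions, come for free in Python, where functions are first-class objects. Here is my own sketch of the odd-number filter and of an Add/Subtract chooser (with the user input replaced by a plain argument), not a translation of any specific listing from the post:

```python
numbers = list(range(10))


# Named predicate, like the IsOdd method...
def is_odd(x):
    return x % 2 != 0


# ...or an inline lambda, like C#'s x => x % 2 != 0.
odds = [n for n in numbers if is_odd(n)]
odds_lambda = list(filter(lambda x: x % 2 != 0, numbers))
print(odds)  # [1, 3, 5, 7, 9]


def add(x, y):
    return x + y


def subtract(x, y):
    return x - y


def calculate(answer, first, second):
    # Binding a name to a function plays the role of funcPtr = Add;
    # an unbound name raises NameError instead of jumping to random memory.
    func = add if answer == 'a' else subtract
    return func(first, second)


print(calculate('a', 5, 6))  # 11
print(calculate('s', 5, 6))  # -1
```

The comprehension and the lambda produce the same result; which one to use is purely a readability call.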
Take a look at the following example of how to use the old-fashioned function pointer.

```cpp
#include <cstdio>
#include <iostream>
using namespace std;

// Function prototypes.
int Add(int, int);
int Subtract(int, int);

int main()
{
    int firstNumber = 0;
    int secondNumber = 0;
    int result = 0;
    char answer = 'a';

    // The declaration of the function pointer.
    int (*funcPtr)(int, int);

    cout << "Plz enter the first number !" << endl;
    cin >> firstNumber;
    cout << "Plz enter the second number !" << endl;
    cin >> secondNumber;
    cout << "Add or Subtract (A/S) ?" << endl;
    cin >> answer;

    if (answer == 'a')
        funcPtr = Add;      // Assigning a function to the pointer.
    else
        funcPtr = Subtract;

    // Calling the function using the function pointer.
    result = funcPtr(firstNumber, secondNumber);
    cout << "The result is " << result << endl;

    getchar();
    return 0;
}

// Actual function implementations.
int Add(int x, int y)
{
    return x + y;
}

int Subtract(int x, int y)
{
    return x - y;
}
```

That's a complete C++ program demonstrating the use of a function pointer. As you can see, I've declared a function pointer funcPtr that is ready to point to any function that returns an integer and takes two integers as parameters:

```cpp
int (*funcPtr)(int, int);
```

After declaring the pointer and prompting the user for input, it's now time to assign a value to this pointer. As this pointer is declared to point to a function, the value assigned to it is a function. The syntax used to assign a value to the function pointer is PointerName = FunctionName; (i.e. funcPtr = Add;). Now we have everything set: the function pointer is declared and is pointing to a valid function. We can now call that function using the pointer's name (remember, pointers are variables that hold the memory address of other variables or functions in the program, and can be used to indirectly access those variables' values or call the target functions)
as follows:

```cpp
result = funcPtr(firstNumber, secondNumber);
```

This line indirectly invokes the function that funcPtr is pointing to, passing in two integer variables (firstNumber and secondNumber) and storing the function's return value in yet another integer variable called result. The benefit of this technique (regardless of the clumsy syntax) is that you can determine function calls according to user input in a concise way. Delegates basically do more or less the same job, but in a much cleaner way. If you look at the above example (especially at the line int (*funcPtr)(int, int);) you will notice that I have declared a function pointer funcPtr that can point to any function that returns an integer and accepts two integer parameters. The dangerous thing about that line is that I have not initialized the pointer, which leaves it pointing to a random place in memory; if I called it right away without assigning a value to it, it would most probably cause my program to crash. (PS: you can use the following syntax to initialize the pointer to NULL at declaration: int (*funcPtr)(int, int) = NULL;.) If you tried to use the pointer (funcPtr) before assigning a value to it, the compiler would simply let you go; it would not generate an error to warn you of the potential risk. This is the really bad thing about function pointers, and pointers in general: they're flexible and powerful, but they are NOT safe. A delegate is a safe, object-oriented replacement for the old function pointer. You can use delegates to indirectly invoke methods in both synchronous and asynchronous mode. As you may know, delegates in C# are types, like classes, interfaces, structs, and enums. They can be declared at namespace level so you can reuse them in all the types in that namespace, and maybe even across namespaces inside the boundary of one assembly, or across assemblies. To declare a delegate you use the following syntax.
```csharp
public delegate int BinaryDel(int x, int y);
```

Here we declared a delegate (a modern function pointer) called BinaryDel that can point to any method that returns an integer and takes two integer parameters. To put this delegate into action, take a look at the following example:

```csharp
class Program
{
    static void Main(string[] args)
    {
        // Instantiating a new BinaryDel object.
        BinaryDel myDel = new BinaryDel(Add);
        int result = myDel(5, 6);
        Console.WriteLine("The result is {0}", result);
        Console.ReadLine();
    }

    static int Add(int x, int y)
    {
        return x + y;
    }

    static int Subtract(int x, int y)
    {
        return x - y;
    }
}
```

As you can see, the first thing I've done in order to use BinaryDel is instantiate a new instance of the delegate using the new keyword, passing a method name to the constructor. This is the name of the method that myDel (the instance of BinaryDel) will point to. So far, delegates do not have much more to offer than function pointers, except for the type safety provided through the compile-time check. But delegates are more than that: a delegate can point to more than one method and invoke them all in order. Each delegate is itself a class that extends the System.MulticastDelegate class; this class has a collection called the invocation list, which is a collection of, well, delegates. You can use the += operator on any instance of a delegate to add method addresses to that invocation list, and later you can call them all.
```csharp
class Program
{
    static void Main(string[] args)
    {
        // Instantiating a new BinaryDel object.
        BinaryDel myDel = new BinaryDel(Add);
        myDel += new BinaryDel(Subtract);
        myDel += Multiply;
        int result = myDel(5, 6);
        Console.ReadLine();
    }

    static int Add(int x, int y)
    {
        Console.WriteLine("Inside Add method");
        return x + y;
    }

    static int Subtract(int x, int y)
    {
        Console.WriteLine("Inside Subtract method");
        return x - y;
    }

    static int Multiply(int x, int y)
    {
        Console.WriteLine("Inside Multiply method");
        return x * y;
    }
}
```

That's an update to the first example. In this example I've updated the Add and Subtract methods to print a line to the console indicating that each of them has been called. I also added a brand-new method called Multiply that also returns an integer and takes two integer parameters (i.e. matches the delegate signature). I then used the += operator to add both the Subtract and Multiply methods to the invocation list of myDel. Now myDel points to three methods: Add, which was passed to it in the constructor, and Subtract and Multiply, which were added to it using the += operator. PS: when using the += operator on a delegate instance, it expects another instance of the same delegate to be added to the invocation list of the delegate on the left side of the assignment. So the first time, I used the syntax (myDel += new BinaryDel(Subtract);) to add a new instance of BinaryDel pointing to Subtract to the invocation list of my current instance myDel. This is the expected behavior because, as you may recall, I mentioned that each delegate's invocation list is simply a list of items of the same delegate type, each of which points to one or more methods. However, the syntax I used next looks different (myDel += Multiply;). This is the same as the previous syntax: the compiler will create a new instance of BinaryDel pointing to Multiply and then add that instance to the invocation list of myDel.
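A multicast delegate's invocation list can be sketched in Python as a plain list of callables, invoked in order, with the last return value winning. This is a simplified model of my own, not the real System.MulticastDelegate machinery:

```python
def add(x, y):
    return x + y


def subtract(x, y):
    return x - y


def multiply(x, y):
    return x * y


class Multicast:
    """A toy multicast 'delegate': a list of callables with one signature."""

    def __init__(self, *funcs):
        self.invocation_list = list(funcs)

    def __iadd__(self, func):
        # Mimics myDel += Multiply; by appending to the invocation list.
        self.invocation_list.append(func)
        return self

    def __call__(self, *args):
        result = None
        for func in self.invocation_list:  # called in order
            result = func(*args)
        return result  # the last method's return value wins


my_del = Multicast(add)
my_del += subtract
my_del += multiply
print(my_del(5, 6))  # 30, the return value of multiply
```

Just as in the C# version, the call return value is whatever the last entry in the invocation list produced.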
It's also worth mentioning that you can remove method addresses from any delegate's invocation list using the -= operator; consult the .NET Framework documentation for more details. When calling these methods through myDel, like so, myDel(5, 6);, they will be called in order, meaning the Add method will be called first, then Subtract, then Multiply (you may now guess that the invocation list of myDel is implemented as a queue... smart cookie :)). Now the question is: "What is the value of result?" The answer is that the value of result is the return value of the last method in the invocation list, in our case Multiply. One more feature of delegates is that they provide the ability to invoke methods asynchronously through the delegate's built-in BeginInvoke method. BeginInvoke will invoke the methods on a new background thread while the main thread keeps executing at the same time; then, when you are ready to receive the return value of the method you called through the delegate, you can call EndInvoke on the very same delegate. You can also provide an AsyncCallback delegate that points to some method (say A) as a parameter to BeginInvoke; this method (A) will be called automatically after the execution on the background thread ends. For further details consult the .NET Framework documentation. Delegates can also be used as function parameters and function return types. However, the primary usage for delegates in the .NET universe is subscribing to events. An event is a flag raised by an object to indicate the various phases of the object's life cycle. Assume we have a class called Employee that represents an actual employee. We will use events to flag some key points in the employee's life cycle: points like when the employee gets hired, is picked for an external mission, or gets fired. The code for the Employee class may more or less look like this:
```csharp
// Custom delegate to use for the employee events.
public delegate void EmployeeDel(Employee e, string action);

public class Employee
{
    // Creating the events.
    public event EmployeeDel OnGettingARaise = null;
    public event EmployeeDel OnPromoting = null;

    public int Age { get; set; }
    public float Salary { get; set; }
    public string Name { get; set; }
    public string Position { get; set; }

    public void ShowDirectMangerSomeLove()
    {
        Console.WriteLine("Employee>> Mr Manager .. you're the best :) .. !");
        Console.WriteLine("Manager>> thanks {0} you got a raise !", this.Name);
        // Increasing the salary by 5.
        this.Salary += 5;
        // Raising the event if there are any subscribers.
        if (OnGettingARaise != null)
            OnGettingARaise(this, "Having a raise");
    }

    public void ShowCEOSomeLove()
    {
        Console.WriteLine(">>Employee: Mr CEO .. You're leading us right to the top");
        Console.WriteLine(">>CEO: thanks {0} .. you know what .. I really think you should be in a higher position", this.Name);
        this.Position = "Manager";
        this.Salary += 200;
        // Raising the event if there are any subscribers.
        if (this.OnPromoting != null)
            OnPromoting(this, "Got promoted !");
    }
}
```

Here I have declared a custom delegate called EmployeeDel that can point to any method that returns void and takes an Employee and a string as parameters. Then, inside the Employee class declaration, I created two events, OnGettingARaise and OnPromoting, both using the EmployeeDel delegate. In the ShowDirectMangerSomeLove() method I raise the OnGettingARaise event, passing in the current object and a string indicating that the employee got a raise; I then increase the employee's salary as a result of him showing his manager some love :). As you notice, because the OnGettingARaise event is declared of type EmployeeDel, the required parameters for EmployeeDel (Employee, string) have to be passed in when it is raised.
To see this Employee class in action, take a look at the following:

```csharp
class Program
{
    static void Main(string[] args)
    {
        Employee shankl = new Employee()
        {
            Name = "Shankl Shankool",
            Age = 30,
            Salary = 2000f,
            Position = "Sales Man"
        };

        shankl.OnGettingARaise += new EmployeeDel(HandleEmployeeGettingARaise);
        shankl.ShowDirectMangerSomeLove();
    }

    static void HandleEmployeeGettingARaise(Employee e, string action)
    {
        Console.WriteLine(
            "{0} who works as {1} has got a raise .. \nhis salary now is {2}",
            e.Name, e.Position, e.Salary);
    }
}
```

The Program class containing the Main method defines a method called HandleEmployeeGettingARaise that matches the EmployeeDel delegate signature, as it returns void and takes two input parameters (Employee, string). This method can be used to handle the OnGettingARaise and OnPromoting events of the Employee class, as it matches the type of these events (EmployeeDel). In Main I've declared a new Employee with the name "Shankl Shankool", 30 years of age, a $2000 salary, and the position "Sales Man", using the object initialization syntax. The next line is where I subscribe to the OnGettingARaise event of the Employee object (shankl) and provide a delegate of type EmployeeDel that points to my HandleEmployeeGettingARaise method. This means that when this event is raised, my handler (the HandleEmployeeGettingARaise method) will be executed. Run the code and examine it yourself.
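The same publish/subscribe pattern is easy to sketch in Python with a list of handler callables standing in for the event. This is a hypothetical mini-Employee of my own, not a line-by-line translation of the C# class:

```python
class Employee:
    def __init__(self, name, salary):
        self.name = name
        self.salary = salary
        self.on_getting_a_raise = []  # the "event": a list of handlers

    def show_direct_manager_some_love(self):
        self.salary += 5
        # Raise the event if there are any subscribers.
        for handler in self.on_getting_a_raise:
            handler(self, "Having a raise")


log = []


def handle_raise(employee, action):
    # Plays the role of HandleEmployeeGettingARaise.
    log.append(f"{employee.name}: {action}, salary now {employee.salary}")


shankl = Employee("Shankl Shankool", 2000)
shankl.on_getting_a_raise.append(handle_raise)  # like += in C#
shankl.show_direct_manager_some_love()
print(log)  # ['Shankl Shankool: Having a raise, salary now 2005']
```

Appending to (and removing from) the handler list corresponds directly to the += and -= event accessors in C#.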
https://halwagy.wordpress.com/
Macish Control Panel for Plone.

Project description

Introduction

collective.mcp is a Plone product that helps create a custom control panel for the site's users. mcp stands for Mac Control Panel, as the base theme is inspired by the Mac OS X control panel. The goal is not to replace Plone's control panel but to help create a new one dedicated to the users. This might be useful for web applications based on Plone. You can see some screenshots of the product at this page:

This product does not magically create the pages for your site; it only provides an API to create them, as we'll see later in this README.

Compatibility

This has been tested with Plone 3.3.5.

Installing collective.mcp

In your buildout, add collective.mcp to the eggs list. Run buildout, start your instance again, and add the product using the quick_installer (or the Plone equivalent). You can now access the control panel by accessing. As you have not added any pages yet, you will get a message telling you that there is nothing you can manage.

>>> from collective.mcp import categories, pages
>>> categories
[]
>>> pages
[]

Viewing samples

You can load the samples.zcml file from collective.mcp to get some samples. For example, in the configure.zcml file of your theme:

<include package="collective.mcp" file="samples.zcml" />

Restart the instance, reload the control panel page, and you should see some pages ready to be used.

Implementing your control panel

collective.mcp provides a place for the control panel, which you can find at. Normally, the page will tell you "There is nothing you can manage.", as you have not added any page yet. To simplify, we'll consider that you already have a Plone product for which you want to add a control panel. This product has a 'browser' package. Inside the browser package, create a 'control_panel' package containing __init__.py and configure.zcml.
The __init__.py file should look like this (except you replace the message factory with your own product's message factory):

>>> from collective.mcp import Category, register_category, register_page
>>> from collective.mcp import McpMessageFactory as _

And the configure.zcml file like this:

<configure xmlns="" xmlns:
</configure>

In the configure.zcml file of the browser package, include your new package:

<include package=".control_panel" />

Now that you have the base, we can start adding things to the control panel.

Creating categories

The first step is to create the categories to which the pages will belong. If you have a look at the screenshots in the docs folder or on the project wiki, there are four categories:

- personal preferences
- clients
- templates
- settings

In our example, we'll only create the first and last categories. To do so, in the __init__.py, we'll add the following code:

>>> register_category(
...     Category('personal',
...              _(u'label_cpcat_personal_prefs',
...                default=u'Personal preferences')))
>>> register_category(
...     Category('settings',
...              _(u'label_cpcat_settings',
...                default=u'Settings'),
...              after='personal'))

As you can see, we specified that the 'settings' category will appear after the 'personal' one. We could also have specified that 'personal' is before 'settings' and gotten the same result. If you reload the control panel now, nothing has changed. That is normal: the system does not display categories for which there is no page (or for which the user cannot use any of the pages).

>>> categories
[Category: personal, Category: settings]
>>> pages
[]

Creating a simple page

collective.mcp is based on collective.multimodeview. For the pages, we will rely on a view defined in the samples called 'multimodeview_notes_sample'. If you have already activated the samples for multimodeview, you do not have to do anything.
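As an aside, register_category essentially just has to keep an ordered list that honours the after/before hints. Here is a simplified stand-alone sketch of such a registry, my own approximation rather than collective.mcp's actual implementation:

```python
categories = []


class Category:
    def __init__(self, cat_id, title, after=None, before=None):
        self.id = cat_id
        self.title = title
        self.after = after
        self.before = before

    def __repr__(self):
        return "Category: %s" % self.id


def register_category(category):
    # Insert relative to an already-registered category when asked,
    # otherwise append at the end.
    ids = [c.id for c in categories]
    if category.after in ids:
        categories.insert(ids.index(category.after) + 1, category)
    elif category.before in ids:
        categories.insert(ids.index(category.before), category)
    else:
        categories.append(category)


register_category(Category('personal', u'Personal preferences'))
register_category(Category('settings', u'Settings', after='personal'))
print(categories)  # [Category: personal, Category: settings]
```

Registering 'settings' with after='personal' (or 'personal' with before='settings') yields the same final ordering, which is what the doctest above shows.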
In the other case, add the following lines to your configure.zcml file:

<browser:page

The first page we will create allows updating the 'home message', using the API provided by the view declared above. The API is pretty simple and does not really need explanation:

- get_home_message()
- set_home_message(msg)

This message is not displayed anywhere. It could be, but that's not covered by this README. To create our page, we'll first create a new Python file in the control panel package, called 'home_message.py', that contains the following code:

from collective.mcp.browser.control_panel_page import ControlPanelPage

class HomeMessage(ControlPanelPage):
    category = 'settings'
    zcml_id = 'collective_mcp_home_message'
    widget_id = 'collective_mcp_home_message'

    modes = {'back': {},
             'default': {'submit_label': 'Update home message',
                         'success_msg': 'The home message has been updated'}}
    default_mode = 'default'

    @property
    def notes_view(self):
        return self.context.restrictedTraverse('@@multimodeview_notes_sample')

    def _check_default_form(self):
        return True

    def _process_default_form(self):
        self.notes_view.set_home_message(
            self.request.form.get('msg', ''))
        return 'back'

Let's have a look at what we defined:

- 'category': this is the category to which our new page belongs.
- 'zcml_id': this is the name of the page, as defined in the zcml file (we'll see it later).
- 'widget_id': this is a unique identifier for your page. Here we used the same one as the zcml_id just to avoid any conflict, but it could have been 'home_message' for example.
- modes: this dictionary defines the list of modes the page can be in. We defined a 'back' mode, which means that when the form is submitted or the user cancels, the home of the control panel is shown instead of the form again. For the default mode, we also defined the label of the submit button and the message displayed on success. Have a look at collective.multimodeview's README file to see more options you can define for modes.
- notes_view: just a helper property to easily get the view with the API.
- _check_default_form: a function that checks that the submitted form does not contain errors. Here we do not check anything, so it's pretty quick; the second example will show more (see collective.multimodeview for more explanation).
- _process_default_form: the function called if no errors were found by the previous method. As you can guess from the name, it processes the form (here it updates the home message).

Now we need a template for our view:

<form method="post" tal:
  <div class="field">
    <label for="msg">Message:</label>
    <input type="text" name="msg" tal:
  </div>
  <span tal:
</form>

There is nothing fancy here, except the use of two methods from multimodeview:

- view/get_form_action: gives the action for the form.
- view/make_form_extra: generates some HTML code with some hidden input fields and the submit buttons.

Once again, have a look at collective.multimodeview for more explanations.

The last step is to declare our view in the zcml file and register it. First, in the __init__.py file:

>>> from collective.mcp.samples.home_message import HomeMessage
>>> register_page(HomeMessage)

This makes the page appear in the pages list:

>>> pages
[<class 'collective.mcp.samples.home_message.HomeMessage'>]

Then in the ZCML file:

<browser:page

Now you can restart the server and reload the control panel. The 'settings' category will appear, containing one page with a question mark icon.

>>> self.browser.open('')
>>> 'There is nothing you can manage.' in self.browser.contents
False
>>> '<span class="spacer">Settings</span>' in self.browser.contents
True
>>> '<span class="spacer">Personal preferences</span>' in self.browser.contents
False

First, let's solve the icon problem.
In the sample directory you will find two icons taken from this set:

Let's declare the home.png file in the zcml:

<browser:resource

And now in our view, we will use this icon:

class HomeMessage(ControlPanelPage):
    icon = "++resource++collective_mcp_home.png"

The second problem is that our page does not have a title; this can easily be solved too:

class HomeMessage(ControlPanelPage):
    title = 'Home message'

The image now appears in the control panel and the title is also displayed:

>>> '<img src="++resource++collective_mcp_home.png"' in self.browser.contents
True
>>> '<span>Home message</span>' in self.browser.contents
True

If we click on the icon, the main page is not displayed anymore and we see our form instead:

>>> self.browser.getLink('Home message').click()
>>> self.browser.url
''
>>> '<img src="++resource++collective_mcp_home.png"' in self.browser.contents
False
>>> '<label for="msg">Message:</label>' in self.browser.contents
True

We can fill in the home message and validate. We get a success message and we are back on the control panel home page:

>>> self.browser.getControl(
>>> self.browser.getControl(name='form_submitted').click()
>>> "<dd>The home message has been updated</dd>" in self.browser.contents
True

If we had cancelled, we would have got a different message (which is the default cancel message inherited from collective.multimodeview):

>>> self.browser.getLink('Home message').click()
>>> self.browser.getControl(name='form_cancelled').click()
>>> "<dd>Changes have been cancelled.</dd>" in self.browser.contents
True

And that's all: you have your first page of the control panel working. OK, it's not really useful, but it's a good start. In Prettig personeel (the website for which this product has been developed), there are many pages based on the same principle (two modes: default and back), such as changing the password, setting the user's theme, managing contact information, etc.
But now we want to do something a bit harder: create a page to manage multiple objects.

Creating a multi-object managing page

If you had a look at the 'collective_multimodeview_notes_samples' page, you saw that its main goal is to manage a list of notes attached to the portal of the site. We will create a control panel page to manage those notes. To do so, create notes.py and notes.pt in the control_panel package.

The notes.py will look like this:

from collective.mcp.browser.control_panel_page import ControlPanelPage

class Notes(ControlPanelPage):
    category = 'settings'
    zcml_id = 'collective_mcp_notes'
    widget_id = 'collective_mcp_notes'

    icon = "++resource++collective_mcp_notes.png"
    title = 'Notes'

    modes = {'add': {'success_msg': 'The note has been added',
                     'error_msg': 'Impossible to add a note: please correct the form',
                     'submit_label': 'Add note'},
             'edit': {'success_msg': 'The note has been edited',
                      'submit_label': 'Edit note'},
             'delete': {'success_msg': 'The note has been deleted',
                        'submit_label': 'Delete note'}
             }
    default_mode = 'edit'
    multi_objects = True

    @property
    def notes_view(self):
        return self.context.restrictedTraverse('@@multimodeview_notes_sample')

    def list_objects(self):
        notes = self.notes_view.get_notes()
        return [{'id': note_id,
                 'title': note_text}
                for note_id, note_text in enumerate(notes)
                if note_text]

    def _get_note_id(self):
        notes = self.notes_view.get_notes()
        note_id = self.current_object_id()

        try:
            note_id = int(note_id)
        except:
            # This should not happen, something wrong happened
            # with the form.
            return

        if note_id < 0 or note_id >= len(notes):
            # Again, something wrong happened.
            return

        if notes[note_id] is None:
            # This note has been deleted, nothing should be done
            # with it.
            return

        return note_id

    def get_note_title(self):
        """ Returns the title of the note currently edited.
        """
        if self.errors:
            return self.request.form.get('title')

        if self.is_add_mode:
            return ''

        note_id = self._get_note_id()
        if note_id is None:
            # This should not happen.
            return ''

        return self.notes_view.get_notes()[note_id]

    def _check_add_form(self):
        if not self.request.form.get('title'):
            self.errors['title'] = 'You must provide a title'
        return True

    def _check_edit_form(self):
        if self._get_note_id() is None:
            return
        return self._check_add_form()

    def _check_delete_form(self):
        return self._get_note_id() is not None

    def _process_add_form(self):
        self.notes_view.add_note(self.request.form.get('title'))
        self.request.form['obj_id'] = len(self.notes_view.get_notes()) - 1

    def _process_edit_form(self):
        self.notes_view.edit_note(
            self._get_note_id(),
            self.request.form.get('title'))

    def _process_delete_form(self):
        self.notes_view.delete_note(self._get_note_id())
        self.request.form['obj_id'] = None

So let's see what is different from the previous page (obviously a lot):

- modes: there is no more 'back' mode, so when submitting the form, we will still see the same page. Some extra modes appear to manage the notes.
- default_mode: it is set to 'edit'. It means that the page will try, by default, to edit the first object found.
- multi_objects: it is set to True. That means that this page can be used to manage multiple objects. A sidebar will be shown to display the list of objects.
- list_objects: when setting 'multi_objects' to True, you have to define this method. It returns a list of dictionaries having two keys: one defines the id of the object and the second one the title displayed.

The _check_xxx_form and _process_xxx_form methods are quite similar to what we saw previously. One point to look at is the fact that we modify the 'obj_id' entry of the request in both _process_add_form and _process_delete_form. In the first case, we do that so the note that has just been added will be considered as the current one. In the second case, we delete the entry so the system will not consider the deleted note as the current one (as it does not exist anymore) and will pick the first available one.
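The 'obj_id' bookkeeping described above can be sketched independently of Zope. This is a standalone illustration, not collective.mcp's actual code: after an add, the new note becomes current; after a delete, the first remaining note is picked.

```python
# Standalone sketch (not collective.mcp code) of the 'obj_id'
# bookkeeping: a None entry marks a deleted note, as in _get_note_id.
notes = []

def add_note(title):
    notes.append(title)
    return len(notes) - 1          # the new note becomes the current obj_id

def delete_note(note_id):
    notes[note_id] = None          # keep indexes stable, mark as deleted
    for i, text in enumerate(notes):
        if text is not None:
            return i               # fall back to the first available note
    return None                    # no notes left

current = add_note("first")
current = add_note("second")
assert current == 1                # newly added note is selected
current = delete_note(current)
assert current == 0                # selection falls back to "first"
assert notes == ["first", None]
```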
Now let's create a template for our page:

<tal:block tal:
  <form method="post" tal:
    <tal:block tal:
      <div tal:
        <label for="title">Title</label>
        <div class="error_msg" tal:
        <input type="text" name="title" tal:
      </div>
    </tal:block>

    <tal:block tal:
      <p>Are you sure you want to delete this note ?</p>
      <p class="discreet" tal:
    </tal:block>

    <input type="hidden" name="obj_id" tal:
    <span tal:
  </form>

  <p tal:
    There is no note to manage, click the '+' button to create a new one.
  </p>
</tal:block>

In this template, we can see three important things:

- the use of view/is_xxx_mode: this is a helper provided by collective.multimodeview to know what to display depending on what you are doing.
- there is a hidden field called 'obj_id'. This is important, as it is used to know which object you are currently editing.
- there is a default message displayed when there are no notes. Do not forget it: if your page renders an empty string, the system will show the home page of the menu instead.

Now let's register our page. First in the __init__.py file:

>>> from collective.mcp.samples.notes import Notes
>>> register_page(Notes)
>>> pages
[<class 'collective.mcp.samples.home_message.HomeMessage'>,
 <class 'collective.mcp.samples.notes.Notes'>]

and in the configure.zcml:

<browser:page

Restart your server and reload the control panel: you now have two pages available.

>>> self.browser.open('')
>>> self.browser.getLink('Notes').click()
>>> self.browser.url
''

As you have not played with the notes yet, the list on the right is empty and you get a message telling you to add some notes:

>>> import re
>>> re.search('(<ul class="objects">\s*</ul>)', self.browser.contents).groups()
('<ul class="objects">...</ul>',)
>>> "There is no note to manage, click the '+' button to create a new one." in self.browser.contents
True

collective.mcp automatically added a '+' and a '-' button that will trigger the add and delete modes of your new page.
We'll click on the add button, which will display the form to create a note:

>>> self.browser.getLink('+').click()
>>> self.browser.url
''
>>> '<label for="title">Title</label>' in self.browser.contents
True

You can also notice that, when adding a new object, a new line appears in the objects list and is shown as selected:

>>> re.search('(<li\s*\s*<a>...</a>\s*</li>)', self.browser.contents).groups()
('<li class="current">...<a>...</a>...</li>',)

Now we'll add a note object:

>>> self.browser.getControl(
>>> self.browser.getControl(name='form_submitted').click()

This time we are not redirected to the control panel home page but to the edit page of the object we just added, and we get a success message:

>>> '<dd>The note has been added</dd>' in self.browser.contents
True
>>> re.search('(<li class="current">\s*<a href=".*">A new note</a>\s*</li>)', self.browser.contents).groups()
('<li class="current">...<a href="...">A new note</a>...</li>',)
>>> re.search('(<input type="text" name="title"\s*)', self.browser.contents).groups()
('<input type="text" name="title" value="A new note" />',)

We now add a second note:

>>> self.browser.getLink('+').click()
>>> self.browser.getControl(
>>> self.browser.getControl(name='form_submitted').click()

When saving, this note is selected by default:

>>> re.search('(<li class="current">\s*<a href=".*">My second note</a>\s*</li>)', self.browser.contents).groups()
('<li class="current">...<a href="...">My second note</a>...</li>',)
>>> re.search('(<input type="text" name="title"\s*)', self.browser.contents).groups()
('<input type="text" name="title"...',)

More documentation

You will find more documentation in collective/mcp/doc. There are several extra documents there:

- modes.rst - some extra explanation about the ``modes`` attribute of the class.
- restriction.rst - explains the different methods to restrict access to the pages.
- multiobjects.rst - going a bit deeper with the multi-objects views.
- defect.rst - some examples of what you should not do.
- theming.rst - some hints for theming the control panel.

Changelog

0.5 (2015-08-27)

- Code cleanup. [maurits]

0.4 (2013-09-24)

- Moved to github. Cleanup a bit. [maurits]

0.3 (2012-10-30)

- Better display of the buttons in the left panel. [vincent]

0.2 (2011-12-15)

- Added the possibility to set a custom CSS class on the subpage. By default, it has the class 'mcp_widget_XXX' where XXX is the widget id. [vincent]

0.1 (2011-02-25)

- Initial release. [vincent]
By Tom Christiansen, Nathan Torkington

Only undef is undefined. All other values are defined, even numeric 0 and the empty string. Definedness is not the same as Boolean truth, though; to check whether a value is defined, use defined.

The substr function lets you read from and write to specific portions of the string:

$value = substr($string, $offset, $count);
$value = substr($string, $offset);

substr($string, $offset, $count) = $newstring;
substr($string, $offset, $count, $newstring);  # same as previous
substr($string, $offset) = $newtail;

Use unpack or substr to access individual characters or a portion of the string.

Use the || or ||= operator, which work on both strings and numbers:

# use $b if $b is true, else $c
$a = $b || $c;

# set $x to $y unless $x is already true
$x ||= $y;

Because 0, "0", and "" are all false, if those are valid values for your variables, use defined instead:

# use $b if $b is defined, else $c
$a = defined($b) ? $b : $c;

# the "new" defined-or operator from future perl
use v5.9;

It's often convenient to arrange for your program to care about only true or false values, not defined or undefined ones.

Use chr and ord to convert between a character and its corresponding ordinal value.

Put use charnames at the top of your file, then freely insert "\N{CHARSPEC}" escapes into your string literals. The use charnames pragma lets you use symbolic names for Unicode characters. These are compile-time constants that you access with the \N{CHARSPEC} double-quoted string sequence. Several subpragmas are supported. The :full subpragma grants access to the full range of character names, but you have to write them out in full, exactly as they occur in the Unicode character database, including the loud, all-capitals notation. The :short subpragma gives convenient shortcuts.
Any import without a colon tag is taken to be a script name, giving case-sensitive shortcuts for those scripts.

use charnames ':full';
print "\N{GREEK CAPITAL LETTER DELTA} is called delta.\n";
Δ is called delta

use charnames ':short';
print "\N{greek:Delta} is an upper-case delta.\n";
Δ is an upper-case delta

use charnames qw(cyrillic greek);
print "\N{Sigma} and \N{sigma} are Greek sigmas.\n";
print "\N{Be} and \N{be} are Cyrillic bes.\n";
Σ and σ are Greek sigmas
Б and б are Cyrillic bes

Two functions, charnames::viacode and charnames::vianame, can translate between numeric code points and the long names. The Unicode documents use the notation U+XXXX to indicate the Unicode character whose code point is XXXX, so we'll use that here in our output.

use charnames qw(:full);
for $code (0xC4, 0x394) {
    printf "Character U+%04X (%s) is named %s\n",
        $code, chr($code), charnames::viacode($code);
}

Character U+00C4 (Ä) is named LATIN CAPITAL LETTER A WITH DIAERESIS

Use split with a null pattern to break up the string into individual characters, or use unpack if you just want the characters' values:

@array = split(//, $string);     # each element a single character
@array = unpack("U*", $string);  # each element a code point (number)

while (/(.)/g) {
    # . is never a newline here
    # $1 has character, ord($1) its number
}

Beware: /X*/ matches all possible strings, including the empty string. Odds are you will find others when you don't mean to.

Here are the unique characters in "an apple a day", sorted in ascending order:

%seen = ( );
$string = "an apple a day";
foreach $char (split //, $string) {
    $seen{$char}++;
}
print "unique chars are: ", sort(keys %seen), "\n";
unique chars are: adelnpy

The split and unpack solutions give an array of characters to work with.
If you don't want an array, use a pattern match with the /g flag in a while loop, extracting one character at a time:

%seen = ( );
$string = "an apple a day";
while ($string =~ /(.)/g) {
    $seen{$1}++;
}
print "unique chars are: ", sort(keys %seen), "\n";

Use the reverse function in scalar context for flipping characters:

$revchars = reverse($string);

Use reverse in list context with split and join:

$revwords = join(" ", reverse split(" ", $string));

The reverse function is two different functions in one. Called in scalar context, it joins together its arguments and returns that string in reverse order. Called in list context, it returns its arguments in the opposite order. When using reverse on strings that may contain combining characters, match a complete combined character with \X in a regular expression:

$string = "fac\x{0327}ade";     # "façade"
$string =~ /fa.ade/;            # fails
$string =~ /fa\Xade/;           # succeeds

@chars = split(//, $string);    # 7 letters in @chars
@chars = $string =~ /(.)/g;     # same thing
@chars = $string =~ /(\X)/g;    # 6 "letters" in @chars

Consider "\x{E7}", a character right out of Latin1 (ISO 8859-1). These characters might be encoded into a two-byte sequence under the UTF-8 encoding that Perl uses internally, but those two bytes still only count as one single character. That works just fine.

Now consider "\x{0327}". Code point U+0327 is a non-spacing combining character that means to go back and put a cedilla underneath the preceding base character.

This matters to functions such as substr and length, and to regular expression metacharacters, such as in /./ or /[^abc]/.

To compare strings that may differ only in composition, use the NFD( ) function from the Unicode::Normalize module:

use Unicode::Normalize;
$s1 = "fa\x{E7}ade";
$s2 = "fac\x{0327}ade";
if (NFD($s1) eq NFD($s2)) { print "Yup!\n" }

$string = v231.780;     # LATIN SMALL LETTER C WITH CEDILLA
                        # COMBINING CARON

$string = v99.807.780;  # LATIN SMALL LETTER C
                        # COMBINING CEDILLA
                        # COMBINING CARON

$string = v99.780.807;  # LATIN SMALL LETTER C
                        # COMBINING CARON
                        # COMBINING CEDILLA

Use NFD( ) for canonical decomposition and NFC( ) for canonical decomposition followed by canonical composition.
No matter which of these three ways you used to specify your string, the normalization functions reduce them to the same form.

The use bytes pragma and the Encode module affect whether functions such as length check the UTF-8 flag and give character or octet semantics accordingly. For example, HTTP uses a Content-Length header that specifies the size of the body of a message in octets. You can't simply use Perl's length function to calculate the size, because if the string you're calling length on is marked as UTF-8, you'll get the size in characters.

To expand tabs:

while ($string =~ s/\t+/' ' x (length($&) * 8 - length($`) % 8)/e) {
    # spin in empty loop until substitution finally fails
}

Or use the standard Text::Tabs module:

use Text::Tabs;
@expanded_lines  = expand(@lines_with_tabs);
@tabulated_lines = unexpand(@lines_without_tabs);

If you're worried about the cost of $& and $`, you could use a slightly more complicated alternative that uses the numbered variables for explicit capture; this one expands tabstops to four each instead of eight:

1 while s/^(.*?)(\t+)/$1 . ' ' x (length($2) * 4 - length($1) % 4)/e;

Another way is to use the @+ and @- arrays. This also expands to four-space positions:

1 while s/\t+/' ' x (($+[0] - $-[0]) * 4 - $-[0] % 4)/e;

If you've looked at those 1 while loops and wondered why they couldn't have been written as part of a simple s///g, it's because each substitution changes the positions of later tabs, which must be recomputed.

Suppose a string contains "You owe $debt to me." To expand variables in the string, replace $debt with its value:

$text =~ s/\$(\w+)/${$1}/g;

Use /ee if they might be lexical (my) variables:

$text =~ s/(\$\w+)/$1/gee;

If $1 contains the string somevar, ${$1} will be whatever $somevar contains. This won't work if the use strict 'refs' pragma is in effect because that bans symbolic dereferencing.

our ($rows, $cols);
no strict 'refs';            # for ${$1}/g below
my $text;

($rows, $cols) = (24, 80);
$text = q(I am $rows high and $cols long);  # like single quotes!
$text =~ s/\$(\w+)/${$1}/g;
print $text;
I am 24 high and 80 long

The replacement can also be done with an /e substitution.

# titlecase each word's first character, lowercase the rest
$text = "thIS is a loNG liNE";
$text =~ s/(\w+)/\u\L$1/g;
print $text;
This Is A Long Line

A more careful tc( ) titlecasing function:

INIT {
    our %nocap;
    for (qw(
        a an the
        and but or
        as at but by for from in into of off on onto per to with
    )) {
        $nocap{$_}++;
    }
}

sub tc {
    local $_ = shift;

    # put into lowercase if on stop list, else titlecase
    s/(\pL[\pL']*)/$nocap{$1} ? lc($1) : ucfirst(lc($1))/ge;

    s/^(\pL[\pL']*) /\u\L$1/x;   # first word guaranteed to cap
    s/ (\pL[\pL']*)$/\u\L$1/x;   # last word guaranteed to cap

    # treat parenthesized portion as a complete title
    s/\( (\pL[\pL']*) /(\u\L$1/x;
    s/(\pL[\pL']*) \) /\u\L$1)/x;

    # capitalize first word following colon or semi-colon
    s/ ( [:;] \s+ ) (\pL[\pL']* ) /$1\u\L$2/x;

    return $_;
}

A lazier version simply does:

s/(\w+\S*\w*)/\u\L$1/g;

Here the tc function is as defined in the Solution:

# with apologies (or kudos) to Stephen Brust, PJF,
# and to JRRT, as always.
@data = (
    "the enchantress of \x{01F3}ur mountain",
    "meeting the enchantress of \x{01F3}ur mountain",
    "the lord of the rings: the fellowship of the ring",
);

$mask = "%-20s: %s\n";

sub tc_lame {
    local $_ = shift;
    s/(\w+\S*\w*)/\u\L$1/g;
    return $_;
}

for $datum (@data) {
    printf $mask, "ALL CAPITALS", uc($datum);
    printf $mask, "no capitals", lc($datum);
    printf $mask, "simple titlecase", tc_lame($datum);
    printf $mask, "better titlecase", tc($datum);
    print "\n";
}

To interpolate a function call into a string:

$phrase = "I have ${\( count_em( ) )} guanacos.";

The patterns use \s, meaning one whitespace character, which will also match newlines. This means they will remove any blank lines in your here document. If you don't want this, replace \s with [^\S\n] in the patterns.
use Text::Wrap;
@output = wrap($leadtab, $nexttab, @para);

use Text::Autoformat;
$formatted = autoformat $rawtext;

Text::Wrap exports the wrap function, shown in Example 1-3, which takes a list of lines and reformats them into a paragraph.

To trim leading and trailing whitespace:

s/^\s+//;   # trim left
s/\s+$//;   # trim right

return @out == 1
    ? $out[0]   # only one to return
    : @out;     # or many

To remove the last character from a string, use the chop function. Be careful not to confuse this with the similar but different chomp function, which removes the last part of the string contained within that variable if and only if it is contained in the $/ variable, "\n" by default. These are often used to remove the trailing newline from input:

# print what's typed, but surrounded by > < symbols
while (<STDIN>) {
    chomp;
    print ">$_<\n";
}

To parse comma-separated values with quotes like \"this\", use the standard Text::ParseWords module and this simple code:

use Text::ParseWords;

sub parse_csv0 {
    return quotewords("," => 0, $_[0]);
}

If quotes are doubled like ""this"", you could use a procedure from Mastering Regular Expressions, Second Edition, or the Text::CSV module:

use Text::CSV;

sub parse_csv1 {
    my $line = shift;
    my $csv = Text::CSV->new( );
    return $csv->parse($line) && $csv->fields( );
}

Or tie the file to an array:

tie @data, "Tie::CSV_File", "data.csv";

for ($i = 0; $i < @data; $i++) {
    printf "Row %d (Line %d) is %s\n", $i, $i+1, "@{$data[$i]}";
    for ($j = 0; $j < @{$data[$i]}; $j++) {
        print "Column $j is <$data[$i][$j]>\n";
    }
}

For constants, the use constant pragma will work:

use constant AVOGADRO => 6.02252e23;
printf "You need %g of those for guac\n", AVOGADRO;

Or alias a typeglob to a reference to a literal:

*AVOGADRO = \6.02252e23;
print "You need $AVOGADRO of those for guac\n";

Or use a tie class whose STORE method raises an exception:

package Tie::Constvar;
use Carp;

sub TIESCALAR {
    my ($class, $initval) = @_;
    my $var = $initval;
    return bless \$var => $class;
}

sub FETCH {
    my $selfref = shift;
    return $$selfref;
}

sub STORE {
    confess "Meddle not with the constants of the universe";
}

The use constant pragma is the easiest to use, but has a few drawbacks.
The biggest one is that it doesn't give you a variable that you can expand in double-quoted strings. Another is that it isn't scoped; it puts a subroutine of that name into the package namespace:

sub AVOGADRO( ) { 6.02252e23 }

use subs qw(AVOGADRO);
local *AVOGADRO = sub ( ) { 6.02252e23 };

To compare words by sound:

use Text::Soundex;
$CODE  = soundex($STRING);
@CODES = soundex(@LIST);

use Text::Metaphone;
$phoned_words = Metaphone('Schwern');
public class LogRecord extends java.lang.Object

If a LogRecord is passed between threads and it wishes to subsequently obtain method name or class name information, it should call one of getSourceClassName or getSourceMethodName to force the values to be filled in.

Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait

public LogRecord(Level level, java.lang.String msg)

public java.lang.String getLoggerName()

public void setLoggerName(java.lang.String name)
    name - the source logger

public void setSourceClassName(java.lang.String sourceClassName)
    sourceClassName - the source class name (may be null)

public void setSourceMethodName(java.lang.String sourceMethodName)
    sourceMethodName - the source method name (may be null)

public void setMessage(java.lang.String message)
    message - the raw message string (may be null)

public java.lang.Object[] getParameters()

public void setParameters(java.lang.Object[] parameters)

public java.lang.Throwable getThrown()
    If the event involved an exception, this will be the exception object. Otherwise null.

public void setThrown(java.lang.Throwable thrown)
    thrown - a throwable (may be null)
XML (see Resources). Writing years before the specific technology existed, Gabriel identifies the virtues of XML-RPC perfectly. I have written a moderately popular module for Python called xml_pickle. The purpose of this module (discussed previously in this column, see Resources) is to serialize Python objects, using an interface that's mostly the same as those of the standard cPickle and pickle modules. The only difference is that in my module, the representation is in XML. My intention all along with xml_pickle was to create a very lightweight format that could also be read from other programming languages (and across Python versions). A DTD accompanies the module for users who want to validate XML pickles, but feedback from users has suggested that formal validation is rarely a concern. A recurrent question I have received from users of xml_pickle is whether XML-RPC would be a better choice, given its more widespread use and existing implementations in many programming languages. While the answer to the narrow question probably favors xml_pickle, the comparison is worthwhile -- and it raises some points about data-type richness. On first pass, XML-RPC seems to do something different from xml_pickle: XML-RPC calls remote procedures and gets results back. The typical usage example in Listing 1 appears at the XML-RPC Web site and in the Programming Web Services with XML-RPC book (see Resources):

Listing 1. Python shell example of XML-RPC usage
Moreover, the .dumps() method of xmlrpclib shares its name with an xml_pickle method (both inspired by several standard modules), and does the same thing -- writes the XML serialization without performing an actual call. On first examination, xml_pickle and xmlrpclib appear to be functionally interchangeable, at least if one only cares about the serialization aspect. But as we will see, a closer look reveals some differences. Let's create an object, then serialize it using two different approaches. Some contrasts will come to the fore:

Listing 2. Python shell example of XML-RPC serialization

You should note a few things already. First, the whole XML document has a root <methodCall> element which is irrelevant to our current purposes. Other than a few bytes extra, however, the additional enclosing element is unimportant. Likewise, the <methodName> is superfluous, but the example gives a name that indicates the role of the document. Moreover, a call to xmlrpclib.dumps() accepts a tuple of objects, but we are only interested in "pickling" one (if there were others, they would have their own <param> element). But other than some wrapping, the attributes of our object are well-contained within the <struct> element's <member> elements. Now let's look at what xml_pickle does (the object is the same as above):

Listing 3. Python shell example of xml_pickle serialization

There is both less and more to the xml_pickle version (the actual sizes of both are comparable). Notice that even though Python does not have a built-in Boolean type, when you use a class to represent a new type, xml_pickle adjusts readily (albeit more verbosely). XML-RPC, by contrast, is limited to serializing its eight data types, and nothing else. Of course, two of those types, <array> and <struct>, are themselves collections and can be compound.
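The wrapping just described is easy to reproduce with the standard library marshaller. A minimal sketch, assuming the Python 3 spelling of the module (xmlrpc.client rather than xmlrpclib); the class and method names here are my own, not from the article's listings:

```python
# Sketch of the <methodCall>/<struct> wrapping discussed above,
# using the stdlib marshaller (xmlrpclib in Python 2 is named
# xmlrpc.client in Python 3).  "pickle_one" is an arbitrary name.
import xmlrpc.client

class Marble:
    def __init__(self):
        self.color = "blue"
        self.size = 14

xml = xmlrpc.client.dumps((Marble(),), "pickle_one")

# The object's attributes end up as <member> entries of a <struct>,
# wrapped in the <methodCall>/<methodName>/<params> scaffolding:
assert "<methodCall>" in xml
assert "<methodName>pickle_one</methodName>" in xml
assert "<struct>" in xml
assert "<name>color</name>" in xml
```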
In addition, xml_pickle can point multiple collection members to the same underlying object; this is absent by design from XML-RPC (and introduced in later versions of xml_pickle also). As a small matter, xml_pickle contains only a single numeric type attribute, but the actual pattern of the value attribute allows for decoding to integer, float, complex, and so on. No real generality is lost or gained by these strategies, although the XML-RPC style will appeal aesthetically to programmers working with statically typed languages. The problem with XML-RPC as an object-serialization format is that it just plain does not have enough types to handle the objects in most high-level programming languages. Listing 4 illustrates this shortcoming. Listing 4. Python shell example of XML-RPC overloading In Listing 4, two things are serialized -- an object instance and a dictionary. While it is fair to say that Python objects are particularly dictionary-like, you lose a lot of information by representing a dictionary and an object in exactly the same way. Additionally, the excessively generic meaning for <struct> in XML-RPC affects pretty much any OOP language, or at least any language that has native hash/dictionary constructs; it is not a Python quirk here. On the other hand, failing to distinguish Python tuples and lists within the <array> type of XML-RPC is a fairly Python-specific limitation. xml_pickle handles all the Python types much better (including data types defined by user classes, as we saw). Actually, there is no direct pickling of dictionaries in xml_pickle, basically because no one has asked for this (it would be easy to add). But dictionaries that are object attributes get pickled, as shown in Listing 5. Listing 5. Python shell example of xml_pickle dictionaries Another virtue of the xml_pickle approach that is implied in the example is that dictionary keys need not be strings. In XML-RPC <struct> elements, <name> keys are always strings. 
However, Perl, PHP, and most languages are closer to the XML-RPC model in this respect:

Listing 6. xml_pickle version of

Conclusion: Where to go from here? Neither XML-RPC nor xml_pickle is entirely satisfactory as a means of representing the object instances of popular programming languages. But they both come pretty close. Let me suggest some approaches to patching up the short gap between these protocols and a general object serialization format. "Fixing" xml_pickle is actually amazingly simple -- just add more types to the format. For example, since xml_pickle was first developed, the UnicodeType has been added to Python. Adding complete support for it took exactly four lines of new code (although this was simplified slightly by the fact that XML is natively Unicode). Or again, at the request of users, the numeric module's ArrayType was added with little more work. Even if a type is not present in Python, a custom class can be defined within xml_pickle to add the behavior of that type -- for example, REBOL's "e-mail address" type may be supported with a fragment like this: Once unpickled, either xml_pickle could just treat "email" as a synonym for "string," or we could implement an EmailAddress class with some useful behaviors. One such behavior, if we took the latter route, would be pickling into the above xml_pickle fragment. "Fixing" XML-RPC is more difficult. It would be easy to suggest simply adding a bunch of new data types, and from a purely technical point of view there would be no particular problem with this. But as a social matter, XML-RPC's success makes it difficult to introduce incompatible changes: A hypothetical "data-enhanced" XML-RPC would not play nice with all the existing implementations and installations. Actually, some implementors have felt sufficiently bothered by the lack of a "nil" type that they have added a nonstandard (or at best semi-standard) type to correspond to Java null, Python None, Perl undef, SQL NONE, and the like.
But the addition of many more types that only some programming languages use is not going to fly.

One approach to enhancing XML-RPC as an object serializer is to co-opt the <struct> element to do double duty. Everything that is incompletely typed by standard XML-RPC could be wrapped in a <struct> with a single <member>, where the <name> indicates the special type. While existing XML-RPC libraries do not do this, the XML-RPC protocol and DTD are so simple that adding this behavior is fairly trivial (but in most cases requires that the libraries be modified, not just wrapped).

For example, XML-RPC cannot natively describe the difference between Python lists and tuples. So the fragment in Listing 7 is incomplete as a description of a Python object.

Listing 7. XML-RPC fragment for either list or tuple

One could substitute the following representation, which is valid XML-RPC, and a suitable implementation could restore it to a specific Python object:

Listing 8. XML-RPC fragment for a tuple

A true <struct> can be represented in two (or more) ways. First, every <struct> can be wrapped in another <struct> (maybe with the <name> OLDTYPE:struct, or the like). For Python, this is probably best anyway, since dictionaries and object instances are both NEWTYPEs. Second, the namespace-like prefix NEWTYPE: can be reserved for this special usage (accidental collision seems unlikely).

Resources

- Userland's XML-RPC home page () is, naturally, the place to start investigating XML-RPC. Many useful resources can be found there.
- While at the XML-RPC home page, it is particularly worthwhile to investigate the tutorial and article links they provide ().
- Kate Rhodes has written a nice comparison called "XML-RPC vs. SOAP" (). In it, she points to a number of details that belie SOAP's description as a "lightweight" protocol.
- Richard P. Gabriel wrote the rather famous paper "Lisp: Good News, Bad News, How to Win Big" ().
What everyone reads and refers to is the section called "The Rise of 'Worse is Better'".
- The O'Reilly title Programming Web Services with XML-RPC (), by Simon St. Laurent, Joe Johnston, and Edd Dumbill, is quite excellent. Its spirit matches that of XML-RPC itself.
- xml_pickle can be found at: .
- The associated DTD lives at: .
- Secret Lab's xmlrpc Python module can be found at: .
- If you want to know how IBM's WebSphere Application Server (WAS) supports XML development, see this technical background info on XML in the WAS Advanced Edition 3.5 online help.
- Find out more on the WebSphere Developer Domain Studio zone.
- Find other articles in David Mertz's XML Matters column.

David Mertz puts the "lite" in "lightweight." David may be reached at mertz@gnosis.cx; his life pored over at.
http://www.ibm.com/developerworks/xml/library/x-matters15.html
dbm_clearerr, dbm_close, dbm_delete, dbm_error, dbm_fetch, dbm_firstkey, dbm_forder, dbm_nextkey, dbm_open, dbm_setpblksiz, dbm_store - Database subroutines

#include <ndbm.h>

typedef struct {
    void *dptr;
    size_t dsize;
} datum;

int dbm_clearerr( DBM *db);
void dbm_close( DBM *db);
int dbm_delete( DBM *db, datum key);
int dbm_error( DBM *db);
datum dbm_fetch( DBM *db, datum key);
datum dbm_firstkey( DBM *db);
long dbm_forder( DBM *db, datum key);
datum dbm_nextkey( DBM *db);
DBM *dbm_open( const char *file, int flags, mode_t mode);
int dbm_setpblksiz( DBM *db, int size);
int dbm_store( DBM *db, datum key, datum content, int store_mode );

The following declarations do not conform to current standards and are supported only for backward compatibility:

typedef struct {
    char *dptr;
    int dsize;
} datum;

DBM *dbm_open( char *file, int flags, int mode );

Interfaces documented by this reference page conform to industry standards as follows:

XPG4-UNIX: dbm_clearerr, dbm_close, dbm_delete, dbm_error, dbm_fetch, dbm_firstkey, dbm_nextkey, dbm_open, dbm_store

Refer to the standards(5) reference page for more information about industry standards and associated tags.

content
    Specifies a value associated with key.
db
    Points to a database structure that has been returned from a call to the dbm_open() function.
file
    Specifies the file to be opened. If the file parameter refers to a symbolic link, the dbm_open() function opens the file pointed to by the symbolic link. See the open(2) reference page for further details.
flags
    Specifies the type of access, special open processing, the type of update, and the initial state of the open file. The parameter value is constructed by logically ORing special processing flags described in the fcntl.h header file. See the open(2) reference page for further details.
key
    A datum that has been initialized by the application program to the value of the key that identifies the record that the program is handling.
mode
    Specifies the read, write, and execute permissions of the file to be created (requested by the O_CREAT flag). If the file already exists, this parameter is ignored. This parameter is constructed by logically ORing values described in the sys/mode.h header file. See the open(2) reference page for further details.
size
    The new page file block size set by dbm_setpblksiz(). This function forces values to a minimum setting of 1024 bytes and a maximum setting of 32,768 bytes. It also rounds values up to a multiple of 1024.
store_mode
    Specifies one of the following flags to dbm_store():
    DBM_INSERT
        Only insert new entries into the database. Do not change an existing entry with the same key.
    DBM_REPLACE
        Replace an existing entry if it has the same key.

The dbm_open(), dbm_close(), dbm_fetch(), dbm_store(), dbm_delete(), dbm_firstkey(), dbm_nextkey(), dbm_forder(), dbm_setpblksiz(), dbm_error(), and dbm_clearerr() functions maintain key/content pairs in a database. The functions handle very large databases (a billion blocks) and access a keyed item in one or two file system accesses. Arbitrary binary data, as well as normal ASCII strings, are allowed.

The database is stored in two files. One file is a directory containing a bit map and has .dir as its suffix. The second file contains all data and has .pag as its suffix.

Before a database can be accessed, it must be opened by the dbm_open() function. The dbm_open() function opens (and if necessary, creates) the file.dir and file.pag files, depending on the flags parameter. The flags parameter has the same meaning as the oflag parameter of open() except that a database opened for write-only access opens the files for read and write access. Once open, the data stored under a key is accessed by the dbm_fetch() function and data is placed under a key by the dbm_store() function. The store_mode parameter controls whether dbm_store() replaces any preexisting record whose key matches the key specified by the key parameter.
The dbm_delete() function deletes a record and its key from the database. The dbm_firstkey() and dbm_nextkey() functions can be used to make a linear pass through all keys in a database, in an (apparently) random order. The dbm_firstkey() function returns the first key in the database. The dbm_nextkey() function returns the next key in the database. The order of keys presented by the dbm_firstkey() and dbm_nextkey() functions depends on a hashing function. The following code traverses the database:

    for (key = dbm_firstkey(db); key.dptr != NULL; key = dbm_nextkey(db))

The dbm_setpblksiz() function sets the page file block size, which is 1024 bytes by default. This function should only be called immediately after a call to dbm_open() and prior to calls to other ndbm functions. For an existing database, dbm_open() automatically sets the page file block size to the size set at the time of its creation.

The dbm_error() function returns the error condition of the database. The dbm_clearerr() function clears the error condition of the database.

[Digital] The dbm_forder() function returns the block number in the .pag file to which the specified key maps.

[Digital] When compiled in the X/Open UNIX environment, calls to the dbm_delete(), dbm_fetch(), dbm_firstkey(), dbm_forder(), dbm_nextkey(), and dbm_store() functions are internally renamed by prepending _E to the function name. When you are debugging a module that includes any of these functions and for which _XOPEN_SOURCE_EXTENDED has been defined, use _Ename to refer to the name() call. For example, use _Edbm_delete to refer to the dbm_delete call. See standards(5) for further information.

When using key structures containing gaps, make sure that the whole structure, including gaps, is initialized to a known value; otherwise, the keys may not match.

Upon successful completion, all functions that return an int return a value of 0 (zero). Otherwise, a negative value is returned.
Functions that return a datum indicate errors with a null (0) dptr. The dbm_store() function returns 1 if it is called with a flags value of DBM_INSERT and the function finds an existing entry with the same key.

If any of the following conditions occurs, the dbm_open(), dbm_delete(), and dbm_store() functions set errno to the value that corresponds to the condition:

[Digital] Insufficient space to allocate a buffer.

[Digital] An attempt was made to store or delete a key (and its associated contents) in a database that was opened read-only.

[Digital] An attempt was made to store a key whose size exceeds the page block size limit as defined by PBLKSIZ in /usr/include/ndbm.h or a key whose size plus the size of its associated contents exceeds the page block size limit set by dbm_setpblksiz().

Functions: dbm(3), open(2)

Standards: standards(5)
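The open/store/fetch/iterate life cycle documented above can also be exercised from Python, whose standard dbm module wraps this family of libraries (a sketch, not part of this manual page; the file path and key are arbitrary, and Python may fall back to a compatible backend if the ndbm C library is absent):

```python
import dbm
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo")

db = dbm.open(path, "c")        # "c" opens read/write, creating if needed
db[b"color"] = b"blue"          # comparable to dbm_store with DBM_REPLACE
print(db[b"color"])             # fetch by key, like dbm_fetch

# Linear pass over all keys in unspecified order,
# like dbm_firstkey/dbm_nextkey.
for k in db.keys():
    print(k, db[k])

db.close()                      # dbm_close
```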
http://backdrift.org/man/tru64/man3/ndbm.3.html
The Sun BabelFish Blog

Don't panic!

Serialising Java Objects to RDF with Jersey

Jersey is the reference implementation of JSR311 (JAX-RS). With So(m)mer's @rdf annotation one can remove the hurdle of having to create yet another format, and do this in a way that should be really easy to understand. I have been wanting to demonstrate how this could be done since the JavaOne 2007 presentation on Jersey. Last week I finally got down to writing up some initial code with the help of Paul Sandoz whilst in Grenoble. It turned out to be really easy to do. Here is a description of where this is heading.

Howto

The code to do this is available from the so(m)mer subversion repository, in the misc/Jersey directory. I will refer and link to the online code in my explanations here.

Annotate one's classes

The Person class can be written out like this:

@rdf(foaf+"Person")
public class Person extends Agent {
    static final String foaf = "";
    ...
}

Map the web resources to the model

Next one has to find a mapping for web resources to objects. This is done by subclassing the RdfResource<T> template class, as we do three times in the Main class. Here is a sample:

@Path("/person/{id}")
public static class PersonResource extends RdfResource<Employee> {
    public PersonResource(@PathParam("id") String id) {
        t = DB.getPersonWithId(id);
    }
}

This just tells Jersey to publish any Employee object on the server at the local /person/{id} url. When a request for some resource, say /person/155492, is made, a PersonResource object will be created whose model object can be found by querying the DB for the person with id 155492. For this of course one has to somehow link the model objects (Person, Office, ... in our example) to some database. This could be done by loading flat files, querying an ldap server, or an SQL server, or whatever... In our example we just created a simple hard coded java class that acts as a DB.

Map the Model to the resource

An object can contain pointers to other objects.
In order for the serialiser to know what the URLs of objects are, one has to map model objects to web resources. This is done simply with the static code in the same Main class [looking for improvements here too]:

static {
    RdfResource.register(Employee.class, PersonResource.class);
    RdfResource.register(Room.class, OfficeResource.class);
    RdfResource.register(Building.class, BuildingResource.class);
}

Given an object, the serialiser (RdfMessageWriter) can then look up the resource URL pattern, and so determine the object's URL. So to take an example, consider an instance of the Room class. From the above map, the serialiser can find that it is linked to the OfficeResource class, from which it can find the /building/{buildingName}/{roomId} URI pattern. Using that it can then call the two getters on that Room object, namely getBuildingName() and getRoomId(), to build the URL referring to that object.

Knowing the URL of an object means that the serialiser can stop its serialisation at that point if the object is not the primary topic of the representation. So when serialising /person/155492 the serialiser does not need to walk through the properties of /building/SCA22/3181. The client may already have that information and if not, the info is just a further GET request away.

Running it on the command line

If you have downloaded the whole repository you can just run from the command line:

$ ant run

This will build the classes, recompile the @rdf annotated classes, and start the simple web server. You can then just curl a few of the published resources like this:

" .

The representation returned is not a very elegant serialisation of the Turtle subset of N3. This makes the triple structure of RDF clear - subject, relation, object - and it uses relative URLs to refer to local resources. Other serialisers could be added, such as for rdf/xml. See the todo list at the end of this article.
The representation says simply that this resource <> has as primary topic the entity named by #HS in the document. That entity's name is "Henry Story" and it knows a few people, one of whom is referred to via a global URL, and the other via a local URL /person/528#JG. We can find out more about the /person/528#JG thing by making the following request:

" .

... where we find out that the resource named by that URL is James Gosling. We find that James has an office named by a further URL, which we can discover more about with yet another request:

" .

Here we have a Location that has an Address. The address does not have a global name, so we give it a document-local name, _:2828781, and serialise it in the same representation, as shown above. Because every resource has a clear hyperlinked representation, we don't need to serialise the whole virtual machine in one go. We just publish something close to the Concise Bounded Description of the graph of objects.

Browsing the results

Viewing the data through a command line interface is nice, but it's not as fun as viewing it through a web interface. For that it is best currently to install the Tabulator Firefox plugin. Once you have that, you can simply click on our first URL. This will show up something like this:

If you then click on JG you will see something like this:

This, it turns out, is a resource naming James Gosling. James knows a few people, including a BT. The button next to BT is in blue, because that resource has not yet been loaded, whereas the resource for "Henry Story" has. Load BT by clicking on it, and you get:

This reveals the information about Bernard Traversat we placed in our little Database class. Click now on the i and we get:

Now we suddenly have a whole bunch of information about Tim Berners-Lee, including his picture, some of the people he has listed as knowing, where he works, his home page, etc... This is information we did not put in our Database! It's on the web of data.
One of the people Tim Berners-Lee knows is our very own Tim Bray. And you can go on exploring this data for an eternity. All you did was put a little bit of data on a web server using Jersey, and you can participate in the global web of data.

Todo

There are of course a lot of things that can be done to improve this Jersey/so(m)mer mapper. Here are just a few I can think of now:

- Improve the N3 output. The code works with the examples but it does not deal well with all the literal types, nor does it yet deal with relations to collections. The output could also be more human readable by avoiding repetitions.
- Refine the linking between model and resources. The use of getters sounds right, but it could be a bit fragile if methods are renamed...
- Build serialisers for rdf/xml and other RDF formats.
- Deal with publication of non information resources, such as http://xmlns.com/foaf/0.1/Person which names the class of Persons. When you GET it, it redirects you to an information resource. This should also be made easy and foolproof for Java developers.
- Make it industrial strength...
- An RdfSerialiser interface to make it easy to access the private fields.
- One may want to add support for serialising @rdf annotated getters.
- Add some basic functionality for POSTing to collections or PUTting resources. This will require some thought.

Posted at 05:31PM Sep 24, 2008 [permalink/trackback] by Henry Story in Java |...

In the InfoQ article "JSR 311 Final: Java API for RESTful Web Services", Paul Sandoz and Mark Hadley answer some questions and refer to this article.

Posted by Henry Story on September 24, 2008 at 09:10 PM CEST #
http://blogs.sun.com/bblfish/entry/serialising_java_objects_to_rdf
Exploring Recurrent Neural Networks

We explore recurrent neural networks, starting with the basics, using a motivating weather modeling problem, and implement and train an RNN in TensorFlow. By Packtpub.

In this tutorial, taken from Hands-On Deep Learning with TensorFlow by Dan Van Boxel, we'll be exploring recurrent neural networks. We'll start off by looking at the basics, before looking at RNNs through a motivating weather modeling problem. We'll also implement and train an RNN in TensorFlow.

In a typical model, you have some X input features and some Y output you want to predict. We usually consider our different training samples as independent observations. So, the features from data point one shouldn't impact the prediction for data point two. But what if our data points are correlated? The most common example is that each data point, Xt, represents features collected at time t. It's natural to suppose that the features at time t and time t+1 will both be important to the prediction at time t+1. In other words, history matters.

Now, when modeling, you could just include twice as many input features, adding the previous time step to the current ones, and computing twice as many input weights. But, if you're going through all the effort of building a neural network to compute transform features, it would be nice if you could use the intermediate features from the previous time step in the current time step network.

RNNs do exactly this. Consider your input, Xt, as usual, but add in some state, St-1, that comes from the previous time step as additional features. Now you can compute weights as usual to predict Yt, and you produce a new internal state, St, to be used in the next time step. For the first time step, it's typical to use a default or zero initial state. Classic RNNs are literally this simple, but there are more advanced structures common in literature today, such as gated recurrent units and long short-term memory circuits.
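The state update just described can be sketched in a few lines of NumPy. This is an illustration of the classic RNN recurrence, not the TensorFlow code used later in this tutorial; the weight matrices are random stand-ins, though the sizes match the weather model below:

```python
import numpy as np

def rnn_step(x_t, s_prev, Wx, Ws, b):
    # New state from the current input and the previous state;
    # tanh keeps the state values bounded.
    return np.tanh(x_t @ Wx + s_prev @ Ws + b)

rng = np.random.default_rng(0)
num_inputs, state_size = 5, 11
Wx = rng.normal(size=(num_inputs, state_size))
Ws = rng.normal(size=(state_size, state_size))
b = np.zeros(state_size)

s = np.zeros(state_size)            # zero initial state for the first step
for t in range(3):                  # walk three time steps
    x_t = rng.normal(size=num_inputs)
    s = rnn_step(x_t, s, Wx, Ws, b)
print(s.shape)                      # (11,)
```

The only thing the "recurrent" part adds over a plain dense layer is the extra `s_prev @ Ws` term feeding the previous state back in.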
These are beyond the scope of this tutorial, but work on the same principles and generally apply to the same types of problems.

Modeling the weights

You might be wondering how we'll compute weights with all these dependents on the previous time step. Computing the gradients does involve recursing back through the time computation, but fear not, TensorFlow handles the tedious stuff and lets us do the modeling:

# read in data
filename = 'weather.npz'
data = np.load(filename)
daily = data['daily']
weekly = data['weekly']

num_weeks = len(weekly)
dates = np.array([datetime.datetime.strptime(str(int(d)), '%Y%m%d')
                  for d in weekly[:,0]])

To use RNNs, we need a data modeling problem with a time component. The font classification problem isn't really appropriate here. So, let's take a look at some weather data. The weather.npz file is a collection of weather station data from a city in the United States over several decades. The daily array contains measurements from every day of the year. There are six columns to the data, starting with the date. Next is the precipitation, measuring any rainfall in inches that day. After this come two columns for snow - the first is measured snow currently on the ground, while the latter is snowfall on that day, again, in inches. Finally, we have some temperature information, the daily high and the daily low in degrees Fahrenheit.

The weekly array, which we'll use, is a weekly summary of the daily information. We'll use the middle date to indicate the week, then we'll sum up all rainfall for the week. For snow, however, we'll average the snow on the ground, since it doesn't make sense to add snow from one cold day to the same snow sitting on the ground the next day. Snowfall, though, we'll total for the week, just like rain. Finally, we'll average the high and low temperatures for the week respectively.

Now that you've got a handle on the dataset, what shall we do with it?
One interesting time-based modeling problem would be trying to predict the season of a particular week using its weather information and the history of previous weeks. In the Northern Hemisphere, in the United States, it's warmer during the months of June through August and colder during December through February, with transitions in between. Spring months tend to be rainy, and winter often includes snow. While one week can be highly variable, a history of weeks should provide some predictive power.

Understanding RNNs

First, let's read in the data from a compressed NumPy array. The weather.npz file happens to include the daily data as well, if you wish to explore your own model; np.load reads both arrays into a dictionary and will set weekly to be our data of interest; num_weeks is naturally how many data points we have, here, several decades worth of information:

num_weeks = len(weekly)

To format the weeks, we use a Python datetime.datetime object, reading the storage string in year month day format:

dates = np.array([datetime.datetime.strptime(str(int(d)), '%Y%m%d')
                  for d in weekly[:,0]])

We can use the date of each week to assign its season. For this model, because we're looking at weather data, we use the meteorological season rather than the common astronomical season. Thankfully, this is easy to implement with a Python function. Grab the month from the datetime object and we can directly compute the season. Spring, season zero, is March through May, summer is June through August, autumn is September through November, and finally, winter is December through February. The following is the simple function that just evaluates the month and implements that:

def assign_season(date):
    '''
    Assign season based on meteorological season.

    Spring - from Mar 1 to May 31
    Summer - from Jun 1 to Aug 31
    Autumn - from Sep 1 to Nov 30
    Winter - from Dec 1 to Feb 28 (Feb 29 in a leap year)
    '''
    month = date.month
    # spring = 0
    if 3 <= month < 6:
        season = 0
    # summer = 1
    elif 6 <= month < 9:
        season = 1
    # autumn = 2
    elif 9 <= month < 12:
        season = 2
    # winter = 3
    elif month == 12 or month < 3:
        season = 3
    return season

Let's note that we have four seasons and five input variables and, say, 11 values in our history state:

# There are 4 seasons
num_classes = 4
# and 5 variables
num_inputs = 5
# And a state of 11 numbers
state_size = 11

Now you're ready to compute the labels:

labels = np.zeros([num_weeks, num_classes])
# read and convert to one-hot
for i, d in enumerate(dates):
    labels[i, assign_season(d)] = 1

We do this directly in one-hot format, by making an all-zeroes array and putting a one in the position of the assigned season. Cool! You just summarized decades of time with a few commands.

As these input features measure very different things, namely rainfall, snow, and temperature, on very different scales, we should take care to put them all on the same scale. In the following code, we grab the input features, skipping the date column of course, and subtract the average to center all features at zero:

# extract and scale training data
train = weekly[:,1:]
train = train - np.average(train, axis=0)
train = train / train.std(axis=0)

Then, we scale each feature by dividing by its standard deviation. This accounts for temperatures ranging roughly 0 to 100, while rainfall only changes between about 0 and 10. Nice work on the data prep! It isn't always fun, but it's a key part of machine learning and TensorFlow.
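A quick sanity check of the season boundaries is worthwhile; the function is restated here in condensed form so the snippet runs on its own:

```python
import datetime

def assign_season(date):
    # meteorological seasons, by month
    month = date.month
    if 3 <= month < 6:
        return 0   # spring
    elif 6 <= month < 9:
        return 1   # summer
    elif 9 <= month < 12:
        return 2   # autumn
    else:
        return 3   # winter (Dec-Feb)

print(assign_season(datetime.datetime(2021, 5, 31)))   # 0, last day of spring
print(assign_season(datetime.datetime(2021, 12, 1)))   # 3, first day of winter
```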
Let's now jump into the TensorFlow model:

# These will be inputs
x = tf.placeholder("float", [None, num_inputs])
# TF likes a funky input to RNN
x_ = tf.reshape(x, [1, num_weeks, num_inputs])

We input our data as normal with a placeholder variable, but then you see this strange reshaping of the entire data set into one big tensor. Don't worry, this is because we technically have one long, unbroken sequence of observations. The y_ variable is just our output:

y_ = tf.placeholder("float", [None, num_classes])

We'll be computing a probability for every week for each season. The cell variable is the key to the recurrent neural network:

cell = tf.nn.rnn_cell.BasicRNNCell(state_size)

This tells TensorFlow how the current time step depends on the previous one. In this case, we'll use a basic RNN cell. So, we're only looking back one week at a time. Suppose that it has a state size of 11 values. Feel free to experiment with more exotic cells and different state sizes.

To put that cell to use, we'll use tf.nn.dynamic_rnn:

outputs, states = tf.nn.dynamic_rnn(cell, x_,
                                    dtype=tf.float32, initial_state=None)

This intelligently handles the recursion rather than simply unrolling all the time steps into a giant computational graph. As we have thousands of observations in one sequence, this is critical to attain reasonable speed. After the cell, we specify our input x_, then dtype to use 32 bits to store decimal numbers in a float, and then the empty initial_state. We use the outputs from this to build a simple model.
From this point on, the model is almost exactly as you would expect from any neural network: We'll multiply the output of the RNN cells, some weights, and add a bias to get a score for each class for that week:

W1 = tf.Variable(tf.truncated_normal([state_size, num_classes],
                                     stddev=1./math.sqrt(num_inputs)))
b1 = tf.Variable(tf.constant(0.1, shape=[num_classes]))
# reshape the output for traditional usage
h1 = tf.reshape(outputs, [-1, state_size])
# score for each class for that week (this step is implied by the
# text but missing from the original excerpt)
y = tf.matmul(h1, W1) + b1

Our categorical cross_entropy loss function and train optimizer should be very familiar to you:

# Climb on cross-entropy
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(y + 1e-50, y_))
# How we train
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
# Define accuracy
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

Great work setting up the TensorFlow model! To train this, we'll use a familiar loop:

# Actually train
epochs = 100
train_acc = np.zeros(epochs//10)
for i in tqdm(range(epochs), ascii=True):
    if i % 10 == 0:
        # Record summary data, and the accuracy
        # Check accuracy on train set
        A = accuracy.eval(feed_dict={x: train, y_: labels})
        train_acc[i//10] = A
    train_step.run(feed_dict={x: train, y_: labels})

Since this is a fictitious problem, we'll not worry too much about how accurate the model really is. The goal here is just to see how an RNN works. You can see that it runs just like any TensorFlow model. If you do look at the accuracy, you can see that it's doing pretty well; much better than the 25 percent random guessing, but it still has a lot to learn.

We hope you enjoyed this extract from Hands-On Deep Learning with TensorFlow. If you'd like to learn more visit packtpub.com.

Related:
- A Guide For Time Series Prediction Using Recurrent Neural Networks (LSTMs)
- Going deeper with recurrent networks: Sequence to Bag of Words Model
- 7 Steps to Mastering Deep Learning with Keras
https://www.kdnuggets.com/2017/12/exploring-recurrent-neural-networks.html
Hi, I've stumbled upon a weird error in ghpython: as I'm trying to execute a very simple piece of code (a basic use of scipy.optimize.curve_fit):

def func(x, k1, k2):
    return k1 * x + k2

xdata = [661.657, 1173.228, 1332.492, 511.0, 1274.537]
ydata = [242.604, 430.086, 488.825, 186.598, 467.730]
popt, pcov = spo.curve_fit(func, xdata, ydata)

it shows me the following error:

Runtime error (Exception): <function func at 0x0000000000003949> is not a Python function

The same code in VSC works without any problem. I guess there is something with IronPython, or? I didn't find any info in google and will be very glad if someone could explain me what's wrong. I use ghpythonremote for scipy, Rhino 7.
https://discourse.mcneel.com/t/function-is-not-a-python-function/145584
Provides Currency and Money classes for use in your Python code.

Project description

A customized fork of py-moneyed which adds the ability to extend the core Money class.

The need to represent instances of money frequently arises in software development, particularly in any financial/economics software. To address that need, the py-moneyed package provides the classes Money and Currency, at a level more useful than just using Python's Decimal class, or ($DEITY forbid) the float primitive. The package is meant to be stand-alone and easy to either use directly or subclass further. py-moneyed is BSD-licensed.

Some of the py-moneyed code was first derived from python-money, available via this URL:
Because that Google Code version has been inactive since May 2008, I forked it and modified it for my needs in 2010. Compared to python-money, major changes here in py-moneyed include separating it from Django usage, tightening type handling in operators, a complete suite of unit tests, PEP8 adherence, providing a setup.py, and local currency formatting/display.

Usage

On to the code! The Money class is instantiated with:

- An amount, which can be of type int, string, float, or Decimal. It will be converted to a Decimal internally. Therefore, it is best to avoid float objects, since they do not convert losslessly to Decimal.
- A currency, which usually is specified by the three-capital-letters ISO currency code, e.g. USD, EUR, CNY, and so on. It will be converted to a Currency object.

For example:

from moneyed import Money
sale_price_today = Money(amount='99.99', currency='USD')

You then use Money instances as a normal number. A currency code (e.g. 'USD') maps to a Currency instance with ISO numeric code, canonical name in English, and countries using the currency. Thanks to the python-money developers for their (possibly tedious) data entry of the ISO codes! All of these are available as pre-built Currency objects in the moneyed module.
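The warning about float amounts is easy to demonstrate with the standard library alone (this is plain decimal, not py-moneyed code):

```python
from decimal import Decimal

# A float cannot represent 0.1 exactly, and Decimal exposes the error:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# A string converts losslessly, which is why the Money(amount='99.99', ...)
# spelling above is the safe one:
print(Decimal('0.1'))
# 0.1
```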
You can also pass in the arguments to Money as positional arguments. So you can also write:

>>> from moneyed import Money, USD
>>> price = Money('19.50', USD)
>>> price
19 USD
>>> price.amount
Decimal('19.50')
>>> price.currency
USD
>>> price.currency.code
'USD'

Formatting

You can print a Money object as follows:

>>> from moneyed.localization import format_money
>>> format_money(Money(10, USD), locale='en_US')
'$10.00'

Testing

Unit tests have been provided, and can be run with tox (recommended) or just py.test. If you don't have tox installed on your system, it's a modern Python tool to automate running tests and deployment; install it to your global Python environment with:

sudo pip install tox

Then you can activate a virtualenv (any will do - by design tox will not run from your globally-installed Python), cd to the py-moneyed source directory, then run the tests at the shell:

cd where/py-moneyed-source/is
tox

If you do not have all versions of Python that are used in testing, you can use pyenv. After installing pyenv, install the additional plugin pyenv-implict. The py-moneyed package has been tested with Python 2.6, 2.7, 3.2, 3.3 and PyPy 2.1.

Future

Future versions of py-moneyed may provide currency conversions or other capabilities, dependent on feedback and usage.
https://pypi.org/project/ud-py-moneyed/
CoreCoder
LANGUAGES: C# ASP.NET
VERSIONS: 3.5

Inject Services in ASP.NET Pages
When a Page Depends on One or More External Components, Code Injection Is a Good Thing
By Dino Esposito

In software (and specifically in Web software), the term injection has an ominous overtone. You immediately think of SQL injection or perhaps script injection. There is, however, a form of injection in software that is absolutely benign and helpful. It is often referred to as dependency injection (DI). Admittedly, the term dependency injection is high-sounding, maybe even a bit bombastic. But it sounds cool, doesn't it? When you say "I use a lot of dependency injection" you sound esoteric; when you say "I write event handlers" you sound pragmatic, at best. But under the hood in both cases you're applying the same object-oriented principle: the Dependency Inversion Principle (DIP). In this article I'll first introduce DIP and provide a few sample programming scenarios where you use plenty of it. Next, I'll move on to consider a popular pattern that springs from the principle: dependency injection. I'll also clarify some of the confusion surrounding the terms, and finish by showing a practical example of how to leverage DI in ASP.NET pages.

The Principle of Inversion

In the theory of object-oriented design (OOD), DIP states that in the design of a class you should isolate dependencies and design the class in a way that makes it independent from any low-level module. For example, the Task class may depend on your implementation of a Logger class, and a Membership class may depend on a validation module that checks credentials. These dependencies are part of the contract, so they cannot be avoided or skipped. However, as a good designer you should design the class in a way that makes an instance of the internal module just a parameter.
Like many of you, I guess, I was scared at first by the name "dependency injection", but I did know all too well the principle of creating functions and classes in a parametric way. It is the same old concept, just revamped and given a new, fancier name. Although in different forms, this principle has been around since the late 1980s; it was formalized as DIP by Robert Martin in the mid-1990s (see for more information).

Why is the term inversion being used to describe the principle? The term inversion refers to the fact that during the implementation you proceed in a top-down manner and are interested in reusing the biggest possible outermost container rather than a small piece of functionality. Suppose you have a Membership class with a ValidationChecker member. What would you like to reuse? The validation checker or the Membership class? Reusing the ValidationChecker class is straightforward; it's already a reusable class. The challenge is in being able to reuse the outermost container. But the outermost container, the Membership class, depends on the validation checker. If you can break this dependency, you'll win. DIP contrasts the vision according to which you build larger classes by aggregating smaller pieces of code in a bottom-up approach. With DIP, you go top-down both in design and implementation. This is the inversion.

The Dependency Injection Pattern

Dependency injection is merely a pattern that implements DIP. It consists of designing classes with well-known injection points where any required dependency can be passed in as an argument. A real-world example of DI are Windows shell extensions or perhaps ASP.NET HTTP modules. In both cases, you have some mainstream code that at some point expects to receive input from an external dependency, a registered shell extension, or a registered HTTP module. It looks around, loads the dependency, and interacts with that through a known interface. Wait a moment. Doesn't this sound a lot like plug-ins?
What's the difference, if any? Plugin and DI are two related patterns for making a piece of code independent from some low-level modules in a way that allows you to change the module at will (even on the fly) without affecting the behavior and functionality of the outermost container. I'll return to plug-ins in a moment. Before going any further, I need to clarify how DI relates to Inversion of Control (IoC). Many consider DI and IoC synonymous and use the terms interchangeably; others differentiate. My guess is that anybody who wants to intervene in the debate probably understands that there should be a general principle and a practical pattern. The problem is merely with naming. In many articles and books, I've seen IoC presented as the principle and DI as the pattern. In those references there's no mention of DIP. It seems to me that, instead, IoC originates in the Java community as the pattern that applies DIP to classes. DI is a relatively newer term coined a few years ago by Martin Fowler (see). According to Fowler, DI is a more precise term than IoC and expresses better what's going on. With DI being so similar to DIP, the confusion is warranted. And a key fact that often goes unnoticed is that the I in DI stands for injection; the I in DIP stands for inversion. In the end, given this description, I consider IoC and DI synonymous, but both patterns of DIP.

Plugin vs. DI/IoC

What's the difference between the Plugin pattern and the DI/IoC pattern? With DI/IoC you can spot dependencies right from the constructor of the class. If you use a Plugin pattern, instead, you must inspect the source code of the class to spot dependencies. Some people argue that DI/IoC is preferable because it makes a solution easier to test. Honestly, I don't see the point. In both cases, you can provide a fake or a mock-up for testing purposes. Another point that is often made in the debate is that a class designed with DI/IoC in mind is inherently more reusable across applications.
A class that accepts all of its dependent components via, say, the constructor is easier to manage than a class that uses an internal factory to locate and instantiate all dependencies. With DI/IoC you have more self-contained solutions. Finally, I'd say that the Plugin+Factory pair works better when you have a list of potential components to load, but you don't know how many of them are actually registered. A DI/IoC works better when you know exactly how many components you're going to load. That said, both Plugin and DI/IoC can be used where appropriate. The biggest, and indisputable, benefit associated with DI/IoC solutions is the availability of ad hoc frameworks. When you get to use one of these frameworks, as you'll see in a moment, you configure some mappings between interfaces and actual types, then instruct the framework to return the actual instance of a class that implements a given interface. Let's examine a practical case. Figure 1 lists some popular DI/IoC frameworks.

Figure 1: Popular DI/IoC frameworks.

Injection in Action

Microsoft Unity comes with Microsoft's Enterprise Library 4.0; I'll be using that for the purposes of this article. All DI/IoC frameworks are built around a container object. Bound to some configuration information, the container resolves dependencies. The caller code instantiates the container and passes the desired interface as an argument. In response, the IoC/DI framework returns a concrete object that implements that interface. A class designed for DI/IoC needs to have an injection point for its dependencies. Each dependency is represented with an interface. There are three ways to inject dependencies into a class: using the constructor, a setter property, or an interface. More in detail, you can add a new constructor to the class and make it accept as many dependencies as required. As mentioned, each dependency is an interface.
Here's an example:

public class Task
{
    private IDependency1 _dep1 = null;
    private IDependency2 _dep2 = null;

    // Inject the dependency via the ctor
    public Task(IDependency1 dep1, IDependency2 dep2)
    {
        this._dep1 = dep1;
        this._dep2 = dep2;
    }
    :
}

Alternatively, you can define a setter property (or method) on the class and pass dependencies through that. A third possibility is represented by a combination of methods and properties. In this case, the class implements the interface and supports injection through the members. All techniques are valid; the choice is up to you. Most of the time you would use a constructor or a setter.

Applying DI to ASP.NET Pages

To begin, add to your ASP.NET project a reference to Unity. Next, edit the web.config file as shown in Figure 2.

<configuration>
  <configSections>
    <section name="unity"
      type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection,
            Microsoft.Practices.Unity.Configuration, Version=1.1.0.0,
            Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
  </configSections>
  <unity>
    <containers>
      <container>
        <types>
          <type type="Samples.IDataFinder, base"
                mapTo="Samples.NWindFinder, appNWind" />
        </types>
      </container>
    </containers>
  </unity>
</configuration>

Figure 2: Registering a DI/IoC mapping.

The <types> section lists the type that the DI/IoC container should look for:

<types>
  <type type="Samples.IDataFinder, base"
        mapTo="Samples.NWindFinder, appNWind" />
  :
</types>

The type attribute indicates the interface on which a class depends. The mapTo attribute indicates the actual type to instantiate whenever the container requests the specified type. For example, whenever the DI/IoC container requests a type IDataFinder from an assembly named base.dll, it actually returns an instance of NWindFinder from the assembly appNWind. Let's see what else is required in an ASP.NET application.
To smooth the propagation of DI/IoC capabilities throughout the pages of the site, I created a base page class and exposed a Resolve method, as shown in Figure 3.

public class MyBasePage : System.Web.UI.Page
{
    protected IUnityContainer container = null;

    public MyBasePage()
    {
        // Initialize the DI/IoC container
        container = new UnityContainer();

        // Instruct the container to read information from web.config
        UnityConfigurationSection section;
        section = ConfigurationManager.GetSection("unity")
            as UnityConfigurationSection;
        section.Containers.Default.Configure(container);
    }

    public IDataFinder DataFinder { get; set; }

    public T Resolve<T>()
    {
        // Take type T and look in the configuration for matching types
        return container.Resolve<T>();
    }
}

Figure 3: The DI/IoC container in action.

You typically create an assembly where you put the base page class and the interfaces on which the application depends, such as IDataFinder. Next, you derive from the common base page each page where you need DI/IoC capabilities (see Figure 4).

public partial class TestPage : Samples.MyBasePage
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Retrieve the data finder object to use
        this.DataFinder = this.Resolve<IDataFinder>();
        if (this.DataFinder == null)
            return;

        // Use the data finder object
        var data = this.DataFinder.GetData();
        GridView1.DataSource = data;
        GridView1.DataBind();
    }
}

Figure 4: A page that uses DI/IoC.

From a design perspective, all you do is decide on which external services your page depends and define a property for each. Next, you resolve the dependency through the container and go. To change the dependency, you don't need to touch the code; you simply edit the configuration file.

Conclusion

Dependency injection has perhaps a high-sounding name, but hides a very handy practice. With the help of DI/IoC containers, applying dependency injection is really easy and effective. You do some configuration work and the framework of choice does the rest.
I haven't mentioned some of the additional (and really powerful) features that some containers include; I'll save that for another article.

Personal Anecdote

It was 1994 and I was working on my first important project. It was a standalone Windows 3.1 application doing some nice stuff with image-based documents. We had (too) many customers asking regularly for various forms of customization. In the end, it was good business for the company. Customers were first buying a good number of licenses, then paying for extra customization. But it was a nightmare for the development team as we had to maintain slightly different versions of the codebase. One Sunday morning I was shaving in front of the mirror when I realized my razor was broken. I had to locate and set up a second razor. And it worked just fine. So in the end, for just my face, I could use two razors interchangeably. This inspired me to consider a couple of new options for the software. (Again, it was 1994 and the DIP was still to come.) Monday morning I shared the shaving experience with the rest of the team and we promptly arranged two equally valid options. In a nutshell, the problem was extending some menus with extra features, but in a way that we could maintain one codebase and sell specific pluggable extensions. The first option we considered was defining a CreateMenu function (no classes and no OOP at the time for us) reading the standard menu from resources and extra items from a configuration file. The second option was passing the list of extra items to the CreateMenu function. Basically, the point was: where should the loading code go? We opted for what we considered the most packaged solution: CreateMenu does it all. We opted for Plugin versus Dependency Injection. Today, the burden of the loading code is being taken care of by DI/IoC containers.
http://www.itprotoday.com/web-development/inject-services-aspnet-pages
Are you sure you're in a beginner's course? I didn't learn stacks, queues, or linked lists until my third semester. Have you started the programs? What have you got done so far?

I have no clue how to do the first assignment. Last semester I took intro to CS and now I'm taking CS II, both of which are really introductory courses. It's my professor who decided to 'jump' ahead with learning more about java. So far this is what I have for the second assignment:

import java.lang.*;
import java.util.*;

public class testSingllyLink {
    public static void main(String[] a) {
        SLinkedList s1 = new SLinkedList();
        // sl containes no nodes at this moment
        // how toadd new node into it.
        s1.addFirst(new Node("Math", null));
        s1.addFirst(new Node("CS", null));
        System.out.println("" + sl);
    }
}

class Node {
    private String element;
    private Node next;

    public Node(String s, Node n) {
        element = s;
        next = n;
    }

    public String getElement() {
        return element;
    }

    public Node getNext() {
        return next;
    }

    public void setElement(String newElem) {
        element = newElem;
    }

    public void setNext(Node newNext) {
        next = newNext;
    }
}

class SLinkedList {
    protected Node head;
    protected Node size;

    public SLinkedList() {
        head = null;
        size = 0;
    }

    void addFirst(Node v) {
        v.setNext(head);
        head = v;
        size++;
    }

    void removeFirst() {
        if (!(head == null)) {
            Node temp;
            temp = head;
            head = head.getNext();
            temp.setNext(null);
            size--;
        }
    }

    public String toString() {
        String convertion = "";
        Node temp = head;
        for (int i = 0; i < size; i++) {
            convertions += temp.getElement();
            temp = temp.getNext();
        }
        return convertion;
    }
}

When I run the program I keep getting an error. I'm having a problem with putting L and M into L'.

It is best to have an idea of where you are going before you start your trip. Develop a map of how you plan to get there. This is your "algorithm". So, what is your algorithm for attacking each problem? As to the code you listed - what error messages are you seeing?

your node class looks pretty good.
i love it to see getters and setters for private members. but the SLinkedList is not that proper.

bug: protected Node size; this should be int, not Node.

then your list has the following methods:

void addFirst(Node v)
void removeFirst()

that's not that pretty. you shouldn't operate on the first node. a list grows at the tail. usually a list should have the following methods:

- void add(String value)
  this will internally generate a node with the given value and attach it to the tail (end) of the linked list. to optimize, you also can keep track of the tail of your list like you do with the head. so adding new values will increase in speed to O(1) instead of O(n).
- void remove(String value)
  this will iterate through the list to find a node with the given value (using equals to determine if two values are equal). if found, it will remove it properly by linking the previous node to the next one. also think of the size of the list to be corrected. this will remove the first occurrence of the value in the list (since lists can contain a value twice or more times). this method is not needed for your assignment.
- void remove(int i)
  the same as get(i), but removing the node at the index.
- int size()
  returns the amount of nodes in the list.
- boolean isEmpty()
  optional, since size() will be enough. but it's easy: return size()==0.
- String get(int i)
  will return the value at position i, if i<size (since i starts at 0). to find the correct value, you will have to iterate through the list until you reach the given position.
- void clear()
  this will remove all nodes from the list by just setting size=0 and head=null. then the nodes will be garbage collected.

having done this class, it's really easy to append one list to another:

List a, List b;  // assume you have two filled lists
for (int i = 0; i < b.size(); i++) {
    a.add(b.get(i));
}
b.clear(); // only if you want b to be discarded

and the non plus ultra: you decided to use Strings as values.
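The tail-tracking add() described above can be sketched like this (a minimal illustration of my own, not the poster's assignment code; class and field names are mine):

```java
// Minimal singly linked list with O(1) add at the tail, as suggested above.
class SimpleList {
    private static class Node {
        String element;
        Node next;
        Node(String e) { element = e; }
    }

    private Node head, tail;
    private int size; // note: an int, not a Node

    void add(String value) {
        Node n = new Node(value);
        if (head == null) { head = n; }      // first node: head and tail coincide
        else              { tail.next = n; } // link the old tail to the new node
        tail = n;
        size++;
    }

    String get(int i) {
        Node cur = head;                     // walk i links from the head: O(n)
        for (int k = 0; k < i; k++) cur = cur.next;
        return cur.element;
    }

    int size() { return size; }
}
```

Keeping a tail reference is what turns each add from an O(n) walk into a single pointer update.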
lists usually are more general and thus use Object as their values.

for the palindromes, the easier way is using a stack. you will put the first half of the string as characters onto the stack. then compare the second half of the string with the (reversed order of the) characters in the stack. e.g. abcba:

1. 1st half = ab; c is discarded since it doesn't change the palindrome property
2. put it on the stack: push(char(0)), push(char(1))
3. the stack now contains b (top), a (bottom)
4. compare the second half (ba) with the stack: char(3)==pop() (pop will give b), char(4)==pop() (pop will give a)
5. if all chars from the 2nd half and the stack are equal, you have a palindrome
6. instead of iterating the second half by hand, use a queue for it: push(char(3)), push(char(4))
7. the queue now has b (first), a (last)
8. compare stack with queue: queue.pop() (pop will give b) == stack.pop() (pop will give b), queue.pop() (pop will give a) == stack.pop() (pop will give a)

that's it.

I have the palindrome assignment completed and running. Thanks a lot graviton. I'm currently working on the code above. Thanks all!
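The stack idea above can be sketched in Java (my own illustration, not the course solution; the standard ArrayDeque stands in for a hand-rolled stack):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class PalindromeCheck {
    // Push the first half onto a stack, then compare it against the second
    // half read left to right. The middle char of an odd-length string is
    // skipped, since it cannot break the palindrome property.
    static boolean isPalindrome(String s) {
        Deque<Character> stack = new ArrayDeque<>();
        int n = s.length();
        for (int i = 0; i < n / 2; i++) stack.push(s.charAt(i));
        for (int i = (n + 1) / 2; i < n; i++) {
            if (stack.pop() != s.charAt(i)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isPalindrome("abcba")); // true
        System.out.println(isPalindrome("abca"));  // false
    }
}
```

Popping the stack yields the first half reversed, which is exactly what the step-by-step comparison above relies on.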
http://forums.devx.com/showthread.php?152570-java-help&mode=hybrid
26 September 2012 19:28 [Source: ICIS news]

CAMBRIDGE, Maryland (ICIS)--The US economy will see continued if slow growth into next year but will turn down in the second half of 2013 and slide into a new recession in 2014, a forecasting firm said on Wednesday.

Brian Beaulieu, executive director at the Institute for Trend Research (ITR), told a specialty chemicals business conference that his firm expects the US economy's slow growth to continue into next year.

"Banks are lending, retail sales are up, non-residential construction is improving," Beaulieu said, adding: "And residential construction, which is 9% of the nation's economy and was dead in the water a year ago, now is improving."

He also noted that new construction by US chemicals producers is up by 50% from a year earlier.

However, because of some fundamental structural problems in the US economy, including the highest level of national debt since World War II, Beaulieu said ITR is forecasting that the slow upward momentum of growth likely will peak in mid-year 2013, then turn down in the second half of next year before falling into recession for full year 2014.

"If you are 43 years old or younger now, life is going to get a bit more difficult in the future," he told industry executives. "If you're over 43, then 2017 would be a good time to retire."

He said there are three "mega-trends" that chemicals producers and other businesses should anticipate and prepare to meet.

"The first is demographics. If you live and work in a country whose population is growing, you and your company win," he said. "If you're doing business in a country, such as

Second, he said, inflation will rise and later interest rates will climb sharply as the US Federal Reserve Board moves to confront the inflationary gain. "Your business planning should anticipate this and model to meet it," he said.

Third, he said, "Taxes are going to go up, and it doesn't matter who is in the White House, Obama or Romney, taxes are going to go up."

In addition, Beaulieu dismissed widespread worries among chemical producers and other businesses about the possibility of a second term for President Barack Obama. "Here's our take on the election," he said. "We don't care."

Beaulieu spoke on the first day of the two-day SOCMA Leadership Conference, which is sponsored
http://www.icis.com/Articles/2012/09/26/9598941/forecasting-firm-sees-new-us-recession-in-2014.html
We all like pyramids for the way they are built and for their enormous size. Today we shall apply that to our programming and print the numbers of our desired range. The output of today's program may look like this.

Let us analyze the output. From the output shown we can decide that we need two for loops: one for moving to the next line, and the other for the number of spaces to leave from the starting place. To print the numbers we use a while loop, so that the number printed increases on each line. To construct the program we ask the user for the number of lines desired in the pyramid, then print that many lines, each line holding one more element than the last.

import java.util.Scanner;

class pyramid {
    public static void main(String arg[]) {
        int space, k = 1, z = 1;
        Scanner in = new Scanner(System.in);
        System.out.println("Enter how many lines you need in pyramid:");
        int n = in.nextInt();
        for (int i = n; i > 0; i--) {
            space = i;
            for (int j = 0; j < space; j++)
                System.out.print(" ");
            k = 1;
            while (z >= k) {
                System.out.print(z + " ");
                k++;
            }
            z++;
            System.out.print("\n");
        }
    }
}

The output of the above program is:
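The same shape can also be produced without user input by building the text in memory first (my own variant of the program above; the class and method names are mine):

```java
public class PyramidDemo {
    // Build the pyramid as a single string: line number `line` gets
    // (n - line) leading spaces, then its own number repeated `line` times.
    static String pyramid(int n) {
        StringBuilder sb = new StringBuilder();
        for (int line = 1; line <= n; line++) {
            for (int s = 0; s < n - line; s++) sb.append(' ');
            for (int j = 0; j < line; j++) sb.append(line).append(' ');
            sb.append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(pyramid(4));
    }
}
```

Returning a string instead of printing directly makes the shape easy to test and reuse.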
https://letusprogram.com/2013/08/01/java-program-to-print-numbers-in-pyramid-design/
Run-obj

LabTalk Object Type:

The run object executes script from a script file. Origin tools use the run.section() method to execute sections of the appropriate *.OGS file.

Add a file to the System folder of Code Builder's Origin C Workspace. After successfully adding the file, compile and link it.

General mechanism for tool launching. OGOName is the tool OGO file name. DLLObject is the object (existing in an external DLL) that contains source values for the tool sub-objects. (For a full discussion of "source" and tool sub-objects, see the draw.set() method.) The run.dialog() method is equivalent to the following script:

dotool -w OGOName;
run.section(OGSName, INIT);
DLLObject.reset();
draw.set(OGOName, DLLObject);

(where OGSName is named the same as OGOName). The run.dialog() method returns 0 if there is no error. It returns 1 if there is an error.

Execute the specified LabTalk script file. This is the preferred mechanism for running older user-defined script files without sections (versus using the run command). If the file has an OGS extension, then the extension is optional. If the file is located in your User Files Folder then the path is optional; otherwise you must specify the full path.

Load and compile an Origin C function or workspace with optional bit arguments.

The err return value:
The myfile argument:
The option argument:

Note: users are recommended to use the "16" option only. Other options are provided for internal use. These are bit values and can be combined using the bitwise OR operator |. If option is not specified then zero is used.

run.python(arg, [option])

Minimum Origin Version Required: 2015 SR0

Execute Python statement(s) or files, or evaluate Python expression(s). The arg should be a string variable containing the Python statement, or a file name and path, or a Python expression, depending on the option. You can view the examples in the Run Python section below for sample usage.

Execute the named section of the specified LabTalk script file.
You can specify a path in fileName. If no path is specified, Origin first looks in the location of the open project, then the user files path, then the path to the Origin application file. Script file sections are separated by [section names]. For example, [Graph]. This method can pass up to five arguments to a section. Arguments in the script can be referred to using the temporary string variables: %1, %2, %3, %4, and %5.

Note: When filename is not included, the current running script file is assumed. For example:

run.section( , Sub1);

executes a subroutine Sub1 which is included in the currently running script file. This is a useful way to make a modular structure within a script file.

An Origin C source file often has dependent C files. The following example shows how you can compile just the main file and automatically compile its dependents with option 16.

if (run.LoadOC(Originlab\AscImpOptions, 16) == 0)
    run -oc AscImportFDLOGFiles(%H);

If you open this file (AscImpOptions), you will see it has includes like

#include "fu_utils.h"
#include "Import_Utils.h"

Option 16 tells the Origin C compiler to scan the first 50 lines of code (after all comments are stripped) to find such header include lines, and then find the corresponding C files in the same folder and add them to the workspace as well. In this case, fu_utils.c and Import_Utils will be automatically loaded as well.

You may also notice that the Origin C function AscImportFDLOGFiles was called with the LabTalk command run -oc. This technique allows the reduction of Origin C based function names to be directly visible from LabTalk, so as to prevent accidental name collisions. You will need to use #pragma labtalk(2) in the Origin C file to indicate subsequent functions are to be callable only via run -oc, while #pragma labtalk(0) is used to completely prevent Origin C functions from being visible from LabTalk.

The following examples show how to use the run.python() method.
Note that you need to make sure the Script Execution is set to LabTalk in the Script Window in order to run these scripts.

This LabTalk script directly executes the Python command print('Hello Origin'):

run.python("print('Hello Origin')"); // This will print "Hello Origin" in the Script Window

This script evaluates the Python expression 2 ** 4:

run.python(" 2 ** 4 ", 1); // this will print 16 in the Script Window

This script first sets the file path to be <Origin EXE folder>\Samples\Python, and then executes the ListMember.py file under that folder.

This script suppresses the output so nothing will be output after execution:

run.python("print('Hello World')", 16);

This script executes the script in the [SimpleFit] section of the LR.OGS file and performs a linear fit of the data in the current graph window:

run.section(LR, SimpleFit);

This script loads a sequence of files into the User Folder Workspace, deferring linking to the end:

// 9 combines 1 (load to User Folder) and 8 (defer linking)
err = run.loadoc(originlab\initcode.c, 9);
// OR the errors together
err = err | run.loadoc(originlab\maincode.cpp, 9);
err = err | run.loadoc(originlab\supportcode.c, 9);
if (err == 0)
    run.loadoc(); // This triggers the linking
else
    printf("Some error(s) occurred. Linking not done.\n");

This script loads a file selected by file dialog into the User Folder in the Workspace:

fdlog.UseGroup(OriginC);
fdlog.Open(A);
%B = fdlog.Path$;
string fname$ = %B%A;
fname$=;
err = run.loadoc(%(fname$), 1);
if (err == 0)
    type -a Successfully load;

See also: LabTalk:OGSFileName (command), LabTalk:OGSFileName.SectionName (command)
http://cloud.originlab.com/doc/LabTalk/ref/Run-obj
How to combine a pandas Series with a scalar using the Series.combine() method?

The combine() method in pandas Series combines two Series objects, or combines a Series with a scalar, according to the specified function. The combine() method takes two required positional arguments. The first argument is another Series object or a scalar, and the second argument is a function. The method pairs each element of the Series with the value from its first argument, applies the specified function to each pair, and returns a new Series object.

Example 1

import pandas as pd

# create pandas Series
series = pd.Series({'i': 92, 'j': 70, 'k': 88})
print("Series object:", series)

# combine series with scalar
print("combined series:", series.combine(75, max))

Explanation

In this example, we combine the Series with a scalar value using the max function. The max function takes two elements, one from the Series and the other being the scalar value, compares the two, and returns whichever is larger.

Output

Series object: i    92
j    70
k    88
dtype: int64
combined series: i    92
j    75
k    88
dtype: int64

In the above block, we can see the resultant Series object created by the combine() method.

Example 2

import pandas as pd

# create pandas Series
series = pd.Series([50, 40, 70, 30, 60, 20])
print("Series object:", series)

# combine series with scalar
print("combined series:", series.combine(50, min))

Explanation

Initially, we created a pandas Series from a Python list of integer values, then applied the combine() method with the scalar value 50 and the min function.

Output

Series object: 0    50
1    40
2    70
3    30
4    60
5    20
dtype: int64
combined series: 0    50
1    40
2    50
3    30
4    50
5    20
dtype: int64

The Series.combine() method returns a Series that keeps each element that is smaller than the scalar; otherwise the element is replaced by the scalar value. For this example, the elements at index positions 2 and 4 are replaced by the scalar value 50.
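Under the hood, combining with a scalar is an element-wise operation; the idea can be mimicked in plain Python (a sketch of the semantics only, not of pandas internals):

```python
# Element-wise combine of a mapping with a scalar, mirroring
# series.combine(75, max) from Example 1 above.
values = {'i': 92, 'j': 70, 'k': 88}
scalar = 75

combined = {key: max(v, scalar) for key, v in values.items()}
print(combined)  # {'i': 92, 'j': 75, 'k': 88}
```

Each element is paired with the same scalar and the function decides which value survives, exactly as in the pandas output above.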
https://www.tutorialspoint.com/how-to-combine-a-pandas-series-with-scalar-using-series-combine-method
How to Use Class Declarations in Java

In Java programming, a class is defined by a class declaration, which is a piece of code that follows this basic form:

[public] class ClassName {class-body}

The public keyword indicates that this class is available for use by other classes. Although it’s optional, you usually include it in your class declarations so that other classes can create objects from the class you’re defining.

The ClassName provides the name for the class. You can use any identifier you want to name a class, but the following three guidelines can simplify your life:

- Begin the class name with a capital letter. If the class name consists of more than one word, capitalize each word: for example, Ball, RetailCustomer, and GuessingGame.
- Whenever possible, use nouns for your class names. Classes create objects, and nouns are the words you use to identify objects. Thus, most class names should be nouns.
- Avoid using the name of a Java API class. No rule says that you absolutely have to, but if you create a class that has the same name as a Java API class, you have to use fully qualified names (such as java.util.Scanner) to tell your class apart from the API class with the same name.

The class body of a class is everything that goes within the braces at the end of the class declaration, which can contain the following elements:

- Fields: Variable declarations define the public or private fields of a class.
- Methods: Method declarations define the methods of a class.
- Constructors: A constructor is a block of code that’s similar to a method but is run to initialize an object when an instance is created. A constructor must have the same name as the class itself, and although it resembles a method, it doesn’t have a return type.
- Initializers: These stand-alone blocks of code come in two types: static initializers, which run once when the class is first loaded, and instance initializers, which run each time an instance of the class is created.
- Other classes: A class can include another class, which is then called an inner class or a nested class.

A public class must be written in a source file that has the same name as the class, with the extension .java. A public class named Greeter, for example, must be placed in a file named Greeter.java. You can’t place two public classes in the same file. For example, you can’t have a source file that looks like this:

public class Class1 {
    // class body for Class1 goes here
}

public class Class2 {
    // class body for Class2 goes here
}

The compiler will generate an error message indicating that Class2 is a public class and must be declared in a file named Class2.java. In other words, Class1 and Class2 should be defined in separate files.
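A sketch pulling the elements above together in one declaration may help; the member names and greeting strings are my own illustration, not from the article:

```java
// A public class with a field, an instance initializer, a constructor,
// a method, and a nested class. Must live in a file named Greeter.java.
public class Greeter {
    // Field: per-object state.
    private String greeting;

    // Instance initializer: runs each time an object is created,
    // before the constructor body.
    {
        greeting = "Hello";
    }

    // Constructor: same name as the class, no return type.
    public Greeter(String greeting) {
        this.greeting = greeting; // overwrites the initializer's value
    }

    // Method: behavior of the class.
    public String greet(String name) {
        return greeting + ", " + name + "!";
    }

    // Nested class (static, so usable without a Greeter instance).
    public static class DefaultGreeter {
        public String greet(String name) {
            return "Hello, " + name + "!";
        }
    }

    public static void main(String[] args) {
        System.out.println(new Greeter("Howdy").greet("World"));
        System.out.println(new DefaultGreeter().greet("World"));
    }
}
```

Note that the constructor body runs after the instance initializer, so the "Hello" assigned by the initializer is replaced by whatever the constructor receives.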
https://www.dummies.com/programming/java/how-to-use-class-declarations-in-java/
CC-MAIN-2019-30
refinedweb
471
68.1
This page is a snapshot from the LWG issues list, see the Library Active Issues List for more information and the meaning of C++11 status.

Section: 19.4 [pairs] Status: C++11 Submitter: Doug Gregor Opened: 2008-03-14 Last modified: 2016-02-10 Priority: Not Prioritized

View all other issues in [pairs]. View all issues with C++11 status.

Discussion:

#include <utility>

int main()
{
    std::pair<char *, char *> p (0,0);
}

I just got a bug report about that, because it's valid C++03, but not C++0x. The important realization, for me, is that the emplace proposal---which made push_back variadic, causing the push_back(0) issue---didn't cause this break in backward compatibility. The break actually happened when we added this pair constructor as part of adding rvalue references into the language, long before variadic templates or emplace came along:

template<class U, class V> pair(U&& x, V&& y);

Now, concepts will address this issue by constraining that pair constructor to only U's and V's that can properly construct "first" and "second", e.g. (from N2322):

template<class U, class V>
  requires Constructible<T1, U&&> && Constructible<T2, V&&>
  pair(U&& x, V&& y);

[ San Francisco: ]

Suggested to resolve using pass-by-value for that case. Side question: Should pair interoperate with tuples? Can construct a tuple of a pair, but not a pair from a two-element tuple. Related to 885.

[ 2009-07-28 Reopened by Alisdair. No longer solved by concepts. ]

[ 2009-10 Santa Cruz: ]

Leave as open. Howard to provide wording.

[ 2010-02-06 Howard provided wording. ]

[ 2010-02-09 Moved to Tentatively Ready after 6 positive votes on c++std-lib. ]

Rationale:

[ San Francisco: ]

Solved by N2770.

[ The rationale is obsolete. ]

Proposed resolution:

Add a paragraph to 19.4 [pairs]:

template<class U, class V> pair(U&& x, V&& y);

6 Effects: The constructor initializes first with std::forward<U>(x) and second with std::forward<V>(y).
https://cplusplus.github.io/LWG/issue811
CC-MAIN-2018-30
refinedweb
329
57.16
ObjectDataSource's underlying data/schema
Discussion in 'ASP .Net' started by Dick, Mar

For XML Schema gurus - populate 2 combo boxes from schema data
Joyce, Mar 1, 2005, in forum: XML
- Replies: 0
- Views: 640
- Joyce, Mar 1, 2005

[XML Schema] Including a schema document with absent target namespace to a schema with specified tar
Stanimir Stamenkov, Apr 22, 2005, in forum: XML
- Replies: 3
- Views: 1,428
- Stanimir Stamenkov, Apr 25, 2005

ObjectDataSource Refresh Schema Error
Jul 4, 2007, in forum: ASP .Net
- Replies: 0
- Views: 720

ObjectDataSource method as another ObjectDataSource
David Thielen, Mar 21, 2006, in forum: ASP .Net Web Controls
- Replies: 3
- Views: 324
- Steven Cheng[MSFT], Mar 23, 2006
http://www.thecodingforums.com/threads/objectdatasources-underlying-data-schema.120613/
CC-MAIN-2015-27
refinedweb
114
58.21
In my last post I introduced the all-new Apex Metadata API. In this post we will take a look at the security aspects of this new feature. When discussing this new API with customers, partners, and Salesforce employees, security is often the first topic raised. The Apex Metadata API empowers developers to automate changes to an org’s configuration. Changing the structure of an org is a big deal, and also has big implications for the data in that org. Trust is our number one value at Salesforce, and the Apex Metadata API is built to be a trusted interface.

Three features provide secure access to an org’s metadata: support for a limited set of metadata types, restrictions on which code can execute a deploy, and an audit trail of every metadata change. With these features you can “trust, but verify.” The first two help you trust the functionality of apps using the Apex Metadata API. The third lets you verify the app’s behavior.

While we intend to support many more types of metadata than the two in this debut of the Apex Metadata API, we will not expose the entire Metadata API in Apex. This assures customers that packages they install can only modify safe metadata types — types that get modified in predictable ways. For example, we will not provide the ability to create Apex classes, Visualforce pages, or Lightning components via Apex. If managed packages have the ability to write code in a subscriber org, it becomes difficult for Salesforce to review their security profile. To assure customers that apps they install only modify metadata types in predictable ways, we will not support automated code generation.

In addition, we’re limiting which packages can execute a metadata deploy via Apex. The Apex Metadata API can be executed in three scenarios: by a certified managed package, by unmanaged code in the org itself, or by an uncertified managed package that the subscriber has explicitly allowed. These restrictions ensure that the deploy is coming from a trusted entity. Metadata changes can be made by a certified managed package, which is provided by a known, registered ISV. Partner apps in AppExchange that can make metadata changes in a subscriber org will alert subscribers.
Partner apps must include this notification to pass the AppExchange security review. Metadata changes can also be made by code that is known to the org in question. The latter can be unmanaged code developed or installed and vetted in the org itself. Or it can be uncertified managed code, but only if the subscriber has explicitly allowed it. Uncertified managed packages can only do metadata operations if the subscriber has set the Apex setting shown on the right. With this setting ISVs can test managed packages that aren’t yet certified, and enterprises can use managed packages to manage their apps. These scenarios are summarized in the following table that shows which permissions and settings are needed to use Apex Metadata API. All metadata operations using the Apex Metadata API are tracked and the namespace of the code performing the deploy is recorded in the setup audit trail. You always know which namespace made what changes and when. This is where you verify the behavior of your trusted apps. Apex Metadata API deploys can modify metadata outside its own namespace. This is necessary to support many important use cases. But it makes some people nervous, so let’s look at the implications. Knowing what metadata can get updated and how will help you make the right choices about how to use this API. But before we dig into the details, it’s important to understand that managed Apex manipulates metadata in a subscriber org in the same way that unmanaged Apex does, with two exceptions: A managed package’s code does a deploy on behalf of the subscribing org. If you remember this, everything else is intuitive. All metadata created by a managed package’s Apex is created in the subscriber namespace. Managed Apex never creates metadata that has the same namespace as the package the code is running from. 
A managed package’s Apex can update any metadata in the org where the metadata is subscriber-controlled and the metadata is visible from within the managed package’s namespace. Therefore, it can update any public subscriber-controlled metadata, whether it’s in the same package, the subscriber org, or a different managed package. It can also update private subscriber-controlled metadata in its own namespace. If you are a managed package developer, this makes the Apex Metadata API a great tool for securing more of your app. You can now hide your app configurations as protected metadata and still manipulate them with Apex. Apex in a managed package can update developer-controlled metadata only if it’s in the subscriber org namespace. For example, if Apex in the managed package creates a record of a custom metadata type, that record will be in the subscriber namespace. Code in the managed package can update any of the fields. However, Apex cannot update developer-controlled fields of records contained in its own package, even though they’re in the same namespace. That metadata can only be updated with a package upgrade. Some of this may seem counterintuitive. If so, remember that aside from its ability to access protected metadata, the code acts like non-namespaced code, but with an audit trail showing the namespace of the Apex that made the change. With the exception of protected metadata, all the capabilities and limitations I just described are the same capabilities and limitations an admin in the subscriber org has. Here’s how that plays out in all the scenarios you may encounter: The Metadata API itself adds an additional layer of trust. Metadata API permissions are respected by an Apex Metadata API deployment. While Apex lets you write code that enables end users to enqueue a deployment, that deployment will fail. Only users who can already do a Metadata API by other means will be able to do it in Apex. 
In contrast, retrieve calls from the Metadata API work for any users your app has granted access. As more metadata types are exposed in Apex, this will be a handy way to provide read access to info not available in metadata describes. Some developers leverage remote site settings and call the Metadata API from their app. This provides the same capabilities as the Apex Metadata API, but lacks most of the security controls. The Summer ‘17 release marks the beginning of the end to this approach (or the complete end if you only rely on remote site settings to update custom metadata records or page layouts). Using the Apex Metadata API has many advantages over code that relies on remote site settings. The Apex Metadata API enables you to: In addition to the security benefits, the Apex Metadata API is much easier to use! Wrapping the Metadata API and calling it from Apex requires a lot more code than using this new native solution. And remote site settings can be challenging for partners with a large, low-touch customer base. It isn’t difficult to guide a few large customers through the remote site setting setup. But if you have thousands of customers, this is a manual step that many admins can overlook or do incorrectly, which can prevent your code from functioning properly. If you’re a developer in an ISV, there are a few things to keep in mind as you use the Apex Metadata API: As with any great power, this one comes with great responsibility. We have therefore provided many features to maximize the safety of the Apex Metadata API. These features enable you to trust the apps you and others build, but verify their behavior. Check out my previous post for an overview of the Apex Metadata API. Keep an eye out for follow up posts diving deeper into the setup UI and post install script use cases. And join us at TrailheaDX June 28-29 to see the new Apex Metadata API in action! 
Then go forth, build some cool stuff, and tell us all about it in the Success Community’s Apex Metadata API group. And be safe!
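For concreteness, a hedged Apex sketch of what such a deploy looks like, using the Metadata namespace: the custom metadata type (Config__mdt) and field (Value__c) are hypothetical, and the deploy callback is passed as null for brevity. This is my illustration, not code from the post.

```apex
// Build a custom metadata record to deploy. The record is created in the
// subscriber namespace, as described above.
Metadata.CustomMetadata record = new Metadata.CustomMetadata();
record.fullName = 'Config.Default';
record.label = 'Default';

Metadata.CustomMetadataValue value = new Metadata.CustomMetadataValue();
value.field = 'Value__c';
value.value = 'example';
record.values.add(value);

// Add the record to a deploy container and enqueue the deployment.
// The namespace of the code doing this is recorded in the setup audit trail.
Metadata.DeployContainer container = new Metadata.DeployContainer();
container.addMetadata(record);
Id jobId = Metadata.Operations.enqueueDeployment(container, null);
```

In a real app you would pass a Metadata.DeployCallback implementation instead of null so you can react to the deploy result.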
https://developer.salesforce.com/blogs/engineering/2017/06/apex-metadata-api-security.html
CC-MAIN-2021-31
refinedweb
1,331
61.56
Building the Stats and Counter Pure Component
3:24 with Guil Hernandez and Beau Palmquist

In this video, we're going to build the next set of components for our app – the stats and counter components. Both are pure (or stateless) components, so they will be easier and faster to implement.

- 0:00 Okay, so in this video, we're going to build the next set of components for
- 0:04 our app, the stats and counter components.
- 0:06 Both are pure or stateless components, so they will be much easier and
- 0:10 faster to implement.
- 0:12 Let's start by adding a file called Stats.js to our components folder.
- 0:18 In Stats.js, we'll add import React,
- 0:22 { PropTypes } from 'react';.
- 0:26 You'll notice that this time we're not importing component here.
- 0:31 This is because we're not going to be extending a class and
- 0:33 we'll instead be defining this component as a pure function.
- 0:36 So if we open up the file Scoreboard.js and
- 0:39 take a look at the stats implementation here about line 81.
- 0:42 You'll notice that it's already defined as a function.
- 0:46 In fact, if we copy the entire stats implementation
- 0:50 including its prop types and paste it into our Stats.js file.
- 0:55 There are only a couple of modifications we'll need to make.
- 0:58 First, we'll update the function
- 1:02 syntax to const Stats = props => {.
- 1:07 And at the end of the file, we need to make sure we export
- 1:13 our stats function with export default Stats;.
- 1:19 So that's it for the stats component.
- 1:20 So let's do the same thing for the counter component.
- 1:23 Once again, I'll start by adding a new file inside the components folder.
- 1:28 This time, I'll name the file Counter.js.
- 1:32 Inside the new file, I'll add,
- 1:35 import React, { PropTypes } from 'react';.
- 1:43 Next, just like we did with the stats component,
- 1:47 we're going to copy the counter implementation including the prop
- 1:52 types from Scoreboard.js, paste it into the Counter.js file.
- 1:58 And we'll change the function syntax to const Counter = props => {.
- 2:06 And then let's export our counter, just like we did with stats,
- 2:09 at the bottom of the file with export default Counter;.
- 2:15 And that's it.
- 2:16 But let's go back and clean up our Scoreboard.js file.
- 2:20 We're going to remove the stats and counter implementations from this file.
- 2:34 Once again, if you run the application now, it won't work because the scoreboard
- 2:38 component doesn't know how to resolve the stats or counter components.
- 2:42 So let's be sure to import them just like we
- 2:44 did earlier with the stopwatch component.
- 2:47 At the top of Scoreboard.js, we'll first import Counter.
- 3:00 Then we'll import the stats component.
- 3:07 So now, we should be able to fire up the dev server and
- 3:10 our application should work, just like it did before, but with our counter and
- 3:14 stats component implemented as their own separate modules.
- 3:17 So, in the next video, we'll implement the last component that doesn't have any
- 3:20 component dependencies, the Add Player form component.
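The finished Counter.js module the video describes has roughly this shape. Only the import, the arrow-function syntax, the propTypes declaration, and the default export follow the transcript; the component body and prop names are my illustration, since the video copies the real implementation out of Scoreboard.js.

```javascript
import React, { PropTypes } from 'react';

// Pure (stateless) component: a plain function of props, no class,
// no internal state.
const Counter = (props) => {
  return (
    <div className="counter">
      <div className="counter-score">{props.score}</div>
    </div>
  );
};

// PropTypes still work on function components.
Counter.propTypes = {
  score: PropTypes.number.isRequired,
};

export default Counter;
```

Stats.js follows the identical pattern: const Stats = props => { ... }; followed by export default Stats;.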
https://teamtreehouse.com/library/building-the-stats-and-counter-pure-component
CC-MAIN-2020-24
refinedweb
628
82.34
Python recipe: Fetch all the days between two dates

I've added a new function to my python-math library that will return a list—technically a generator—of all of the days between two dates. There might be a more obvious way to do this, but I don't know it. If you do, please let me know.

import datetime

def date_range(start_date, end_date):
    """
    Returns a generator of all the days between two date objects.

    Results include the start and end dates.

    Arguments can be either datetime.datetime or date type objects.

    Example usage:

        >>> import datetime
        >>> import calculate
        >>> dr = calculate.date_range(datetime.date(2009,1,1), datetime.date(2009,1,3))
        >>> dr
        <generator object date_range at 0x...>
        >>> list(dr)
        [datetime.date(2009, 1, 1), datetime.date(2009, 1, 2), datetime.date(2009, 1, 3)]
    """
    # If a datetime object gets passed in,
    # change it to a date so we can do comparisons.
    if isinstance(start_date, datetime.datetime):
        start_date = start_date.date()
    if isinstance(end_date, datetime.datetime):
        end_date = end_date.date()
    # Verify that the start_date doesn't come after the end_date.
    if start_date > end_date:
        raise ValueError('You provided a start_date that comes after the end_date.')
    # Jump forward from the start_date...
    while True:
        yield start_date
        # ... one day at a time ...
        start_date = start_date + datetime.timedelta(days=1)
        # ... until you reach the end date.
        if start_date > end_date:
            break
http://palewi.re/posts/2009/09/01/python-recipe-fetch-all-days-between-two-dates/
CC-MAIN-2019-09
refinedweb
222
63.56
If you have ever played with Django's class-based views, you will know how tangled the view code can be when it comes to customizing the view. In recent projects when working with class-based views, I tend to look at the Django source code directly to determine the best place to put specific code snippets. A commenter on my previous class-based views article didn't understand why I couldn't just modify the object, rather than creating the object from an uncommitted form, then modifying the object, then finally saving it to the database. Class-based views tend to bring with them some complexities, since there is a specific order of operations, much like in math. This article will explain this order of operations when it comes to class-based views, and how you should go about overriding specific callables in the class.

Let's start with my most used class-based view, DetailView. This is how a request is handled when a DetailView class is being used:

- View.as_view.view(request, *args, **kwargs) - This is the main entrance point for the view code. as_view() is the class constructor, and assigns self as the generated class with the keyword arguments sent in from the urls.py. It returns the output of the next function in this list.
- View.dispatch(request, *args, **kwargs) - This function sets up the class instance variables self.request, self.args, and self.kwargs. This function checks to see if the request.method is in the list of available methods for this particular view, and calls its function.
- BaseDetailView.get(request, *args, **kwargs) - The HTTP method function called from View.dispatch. This function assigns the instance variable of self.object by calling get_object(). It also obtains the context by calling get_context_data(object=self.object). Finally this function returns the result from render_to_response(context).
- SingleObjectMixin.get_object(queryset=None) - This function has the task of obtaining a model instance by looking at the various resources available, such as variables from the URL. Its query can be limited by sending it a different queryset. If queryset is None, which is normally the case, then it will call get_queryset() to obtain the queryset to use from the current class. It then checks for either the pk or slug from the self.kwargs dict, which is normally set from the URL. It will then apply a filter on the queryset using both the PK and SLUG, if they are available. It will use both. Then it does a queryset.get() and returns the object found back to BaseDetailView.get().
- SingleObjectMixin.get_queryset() - This provides the initial queryset for the DetailView. It basically checks to see if your subclass has a queryset model instance variable and uses that, otherwise provides an appropriate error message about being misconfigured. It is very handy to override this to limit what users can and cannot access.
- SingleObjectMixin.get_context_data(**kwargs) - Here is where a context data Python dictionary is generated for the view. First it takes in all the kwargs and uses it as the initial dictionary. Since the get() function provides the context with object=self.object, it generates an initial dictionary with an object key, similar to how function-based generic views worked. This is also where it adds the context for the model instance's object name, such as todo or entry. It then returns the dictionary back to get().
- TemplateResponseMixin.render_to_response(context, **response_kwargs) - Almost all class-based views use this Mixin to render the final response. This function calls response_class(request=self.request, template=self.get_template_names(), context=context, **response_kwargs). It then returns the result from this back to get(). You can override the response_class in your subclass if you choose; the default is TemplateResponse.
- TemplateResponseMixin.get_template_names() - This function checks for and uses the template_name variable which should be in the class instance. It returns a list, with one element being the template_name.

There you have an entire class-based view, right from the request, down to where it renders the template for the end-user's browser. A great use of response_class would be to return different document types, such as JSON, XML, or even PDF generated data. You just need to create your own response class and make sure it accepts what the TemplateResponseMixin sends to it as variables.

Okay, now that we got a fairly simple class-based view out of the way (yes, this is one of the simpler ones), let's tread into CreateView, which uses two HTTP methods and therefore branches conditionally.

- View.as_view.view(request, *args, **kwargs) - Does the same thing as the previous view mentioned, since both view types subclass View.
- View.dispatch(request, *args, **kwargs) - Same as before, since we haven't left the View class yet; this will dispatch between get() and post() this time around.
- BaseCreateView.get(request, *args, **kwargs) - This method obviously works much differently than the one included with DetailView. Instead, during the get request, it sets self.object to None so that the rest of the class knows that we are going to be creating a new object. It then returns the super of this, which we continue...
- ProcessFormView.get(request, *args, **kwargs) - This function first gets the form_class by calling get_form_class(). It then takes the returned form_class and hands it over to get_form(form_class). The context is generated the same way it is in the previous view mentioned, get_context_data(form=form). Finally, the context is handed over to render_to_response(context), which is eventually sent to the browser.
- ModelFormMixin.get_form_class() - This function is used to get the class which is used to render the form in the browser. This should return a valid ModelForm with the Model you're attempting to create an instance of. By default it obtains the class from the class instance variable form_class. If this is not set, then it attempts to use the provided Model as the form to display in the browser.
- FormMixin.get_form(form_class) - This function uses the form_class and creates an instance of it using the result from get_form_kwargs() as the keyword variables to instantiate the form class.
- FormMixin.get_form_kwargs() - You can use this to customize the form instance, but by default it does plenty of what you'll need. It uses the result from calling get_initial() to provide the form with some initial data, and places the data and files into the form instance if the HTTP method is either POST or PUT.
- FormMixin.get_initial() - This does a copy of the class instance's variable of initial. You can override this function to provide some Python logic to dynamically generate a dictionary for the initial data.
- ModelFormMixin.get_context_data(**kwargs) - This function does the exact same thing as the get_context_data() for SingleObjectMixin.
- TemplateResponseMixin.render_to_response(context, **response_kwargs) - This View also uses the same function as the previous view.
- SingleObjectTemplateResponseMixin.get_template_names() - Unlike the DetailView, this function does a little more, which includes using a suffix such as _form for the template name.
- BaseCreateView.post(request, *args, **kwargs) - Here is where we start the POST method; this is only called when the user actually submits the form. Firstly, it also sets self.object to None like the get() method does. Then it calls super and returns the result to the browser.
- ProcessFormView.post(request, *args, **kwargs) - For the most part, this is very similar to get(): it obtains a form_class, and then a form to use. Once the form is obtained, this is where things change.
The function then checks the result of form.is_valid(), very standard Django stuff from normal views. If the form is valid, it calls and returns the result of form_valid(form). If the form is not valid, it calls and returns the result of form_invalid(form).

- ModelFormMixin.form_valid(form) - If the form is valid, this function is called and it sets the variable self.object to the return of form.save(), again very standard Django form handling stuff. Once the object is returned, it does a super.
- FormMixin.form_valid(form) - This function does one thing only, and that is return an HttpResponseRedirect object, which is the result of get_success_url().
- ModelFormMixin.get_success_url() - The task of this function is to return a valid URL to direct the user to after the form is valid. First it tries to use self.success_url if it is available. If it is not available, then it attempts to use self.object.get_absolute_url(), and if this fails, returns a misconfigured error. You can override this in your class to generate the URL dynamically using Python code.
- FormMixin.form_invalid(form) - This is only called if the form was not valid; it basically returns the form back to the template and provides appropriate validation errors. An idea would be to override this to set other variables on self.object and attempt to re-validate. An example is setting the request.user on an object's field. I personally override form_valid() and it makes more sense to me.

As you can see from these two very different view classes, the Django class-based views system is very modular and expandable. It comes with many base classes, and plenty of mixins to make using class-based views work for almost any scenario. To transition a customized function-based view over to a class-based view is easier than you may think:

def function_view(request):
    # Do something fancy here.
    if request.method == 'POST':
        # Do something with the POST data.
        pass
    elif request.method == 'GET':
        # Do something with GET.
        pass
    else:
        raise Http404

# Transition to a class-based solution:
class ClassView(TemplateResponseMixin, View):
    template_name = "custom_view.html"

    def get_context_data(self, **kwargs):
        # This allows this View to be easily subclassed in the future
        # to interchange context data.
        context = kwargs
        return context

    def dispatch(self, request, *args, **kwargs):
        # Do something fancy here.
        return super(ClassView, self).dispatch(request, *args, **kwargs)

    def get(self, request, *args, **kwargs):
        # Do something with GET.
        context = self.get_context_data(**kwargs)
        return self.render_to_response(context)

    def post(self, request, *args, **kwargs):
        # Do something with the POST data.
        context = self.get_context_data(**kwargs)
        return self.render_to_response(context)

Made correction from commenter Visa, thanks!

Most POST data in Django will be through forms, so to limit all your work, just subclass the Form class-based views to minimize the work overhead. The above can definitely apply to custom GET requests where you are returning calculated scientific data. If you need more than one view to calculate different datasets, using classes will be much easier, since you just need to create a base class and subclass this for each calculation/dataset. Class-based views allow you to make your view code much more modular and to be more DRY when developing custom views. Although at first class-based views might seem like more work, they will pay off in very large projects which use the same logic in multiple views. You can place a large chunk of your business logic into a custom Mixin, and don't need to worry about if you need to pass around a request variable or other data. Since the business logic will be part of your class when the Mixin is applied, it will have immediate access to the request and kwargs, and other data. This can lower the overall work and better organize your code and logic.
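To see why ModelFormMixin.form_valid() can do its part and then simply call super, with FormMixin finishing the job, here is a self-contained sketch (no Django required) of that cooperative super() chain. The class and method names mirror Django's, but the bodies are simplified stand-ins, not the real implementations:

```python
class FormMixin(object):
    def form_valid(self, form):
        # FormMixin only knows how to redirect after success.
        return 'redirect:' + self.get_success_url()


class ModelFormMixin(FormMixin):
    def form_valid(self, form):
        # Stand-in for self.object = form.save(), then defer to super.
        self.object = form['instance']
        return super(ModelFormMixin, self).form_valid(form)

    def get_success_url(self):
        # Stand-in for self.object.get_absolute_url().
        return '/objects/%s/' % self.object['pk']


class CreateView(ModelFormMixin):
    pass


view = CreateView()
# MRO: CreateView -> ModelFormMixin -> FormMixin, so the save happens
# first and the redirect is built from the saved object.
print(view.form_valid({'instance': {'pk': 42}}))  # redirect:/objects/42/
```

Overriding form_valid() in a subclass and calling super, as suggested above for setting request.user, slots neatly into this same chain.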
http://pythondiary.com/blog/Nov.11,2012/mapping-out-djangos-class-based-views.html
CC-MAIN-2014-52
refinedweb
1,874
56.66
Created on 2014-02-12 15:13 by flox, last changed 2015-04-07 22:13 by gregory.p.smith. This issue is now closed.

I had this sporadic traceback in a project:

  File "test.py", line 62, in <module>
    result = do_lqs(client, str(dnvn))
  File "test.py", line 25, in do_lqs
    qualif_service_id = client.create('ti.qualif.service', {})
  File "/srv/openerp/.buildout/eggs/ERPpeek-1.4.5-py2.6.egg/erppeek.py", line 894, in wrapper
    return self.execute(obj, method, *params, **kwargs)
  File "/srv/openerp/.buildout/eggs/ERPpeek-1.4.5-py2.6.egg/erppeek.py", line 636, in execute
    res = self._execute(obj, method, *params)
  File "/srv/openerp/.buildout/eggs/ERPpeek-1.4.5-py2.6.egg/erppeek.py", line 361, in <lambda>
    wrapper = lambda s, *args: s._dispatch(name, args)
  File "/usr/lib/python2.6/xmlrpclib.py", line 1489, in __request
    verbose=self.__verbose
  File "/usr/lib/python2.6/xmlrpclib.py", line 1235, in request
    self.send_content(h, request_body)
  File "/usr/lib/python2.6/xmlrpclib.py", line 1349, in send_content connection 1112, in connect
    sock = socket.create_connection((self.host, self.port), self.timeout)
  File "/usr/lib/python2.6/socket.py", line 561, in create_connection
    raise error, msg
socket.error: [Errno 4] Interrupted system call

It seems that the EINTR should be caught by the standard library in all cases: But it's not the case for the "socket.create_connection" method (verified in 3.3 and 2.7 source code).

> It seems that the EINTR should be caught by the standard library in all cases:

Yes, it should.

> But it's not the case for the "socket.create_connection" method (verified in 3.3 and 2.7 source code).

It's not the case for *almost all* syscalls. See

We encountered the same problem, this is in the context of using PyQt (specifically QProcess) or twisted. They both rely on SIGCHLD for their notification framework. I've attached a httplib EINTR patch for 2.6.4 & 2.7.3.
Here is a reproducible test case:

import threading
import signal
import os
import httplib

def killer():
    while 1:
        os.kill(os.getpid(), signal.SIGINT)

def go():
    signal.signal(signal.SIGINT, lambda x, y: None)
    thread = threading.Thread(target=killer)
    thread.start()
    while 1:
        connection = httplib.HTTPConnection("localhost:80")
        connection.connect()
        connection.close()

if __name__ == '__main__':
    go()

Which gives:

Traceback (most recent call last):
  File "./repro1.py", line 22, in <module>
    go()
  File "./repro1.py", line 18, in go
    connection.connect()
  File ".../lib/python2.7/httplib.py", line 757, in connect
    self.timeout, self.source_address)
  File ".../lib/python2.7/socket.py", line 571, in create_connection
    raise err
socket.error:

As said in a previous comment, we don't want to have EINTR handling code everywhere. The right way to do this is to handle it at the syscall level.

@tholzer, to clarify Charles-François's comment: EINTR should be handled in socket.connect() and socket.getaddrinfo() (the two syscalls called by create_connection(), AFAIR).

No problem, I've attached a patch for socket.py for Python 2.7.3. A few notes: getaddrinfo (and gethostbyname, etc.) are already immune to this bug, so I've just fixed the connect() call. The socket does need to be closed after EINTR, otherwise an EINPROGRESS might get returned on subsequent connect() calls.

I've also attached a potential patch for the C module Modules/socketmodule.c inside internal_connect(). A few notes: This seems to work both without time-out and with time-out sockets (non-blocking). One concern would be a signal storm prolonging the operation beyond the time-out. Should we keep track of the actual time taken in this loop and check it against the 'timeout' parameter? Also, I don't think we can call PyErr_CheckSignals() in this context. Does this need to happen at all?

The smtplib.py also has the same problem. The EINTR cannot be handled properly. @neologix, May I attach the patch file of smtplib.py for review?
@meishao Previous comments answer your question:

@flox Thank you for your comment. So we just only modify the socket.py to handle the system level call, is it right? Please let me attach the patch file of socket.py for 2.7.2.

Oops, I missed a break statement at the end of socket_2.7.3_eintr_patch.py. I've fixed this now in the attached patch.

@meishao Could you please also update your socket_2_7_2_patch.py and add the missing break statement?

@tholzer I've updated socket_2_7_2_patch.py and added the missing break statement.

And a test case for smtplib:

import threading
import signal
import os
import smtplib

def go():
    running = True
    pid = os.getpid()

    def killer():
        while running:
            os.kill(pid, signal.SIGINT)

    signal.signal(signal.SIGINT, lambda x,y: None)
    thread = threading.Thread(target=killer)
    thread.start()
    while 1:
        try:
            smtplib.SMTP('localhost')
        except Exception, ex:
            running = False
            raise

if __name__ == '__main__':
    go()

Fails with:

socket.error: [Errno 4] Interrupted system call

Something like the patch I'm attaching to socketmodule.c is what I would prefer. I haven't looked at or tried tests for it yet.

The issue is just an example of the main issue #18885 which proposes to retry interrupted syscalls. I hesitate to mark it as duplicate, but it contains an interesting patch.

See also PEP 475 and issue #23285 for the general fix in Python 3.

This issue was fixed in Python 3.5 as part of PEP 475: see issue #23618 which modified socket.socket.connect() to handle EINTR. It's not as simple as retrying connect() in a loop. You must respect the socket timeout (it's better to have a monotonic clock here; that's now always the case in Python 3.5). When connect() returns EINTR, the connection runs asynchronously; you have to call select() to wait until the connection completes or fails. Then you have to call getsockopt() to get the socket error code.
In Python 3.5, socket.socket.connect() still raises InterruptedError if the socket is non-blocking.

issue20611-connect-eintr-gps01.diff calls connect() again if connect() failed with EINTR. According to issue #23618, it might work on some platforms, but it's not portable.

For Python 2.7 and 3.4, instead of fixing socket.socket.connect(), which requires complex code, we may only work around the issue in create_connection(). If connect() raises OSError(EINTR), drop the socket and retry with a fresh connection in a loop until the connection completes or raises a different exception. (And do that for each address.)

I'm moving this to the more recent issue as I like the patch in that one better.
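The create_connection()-level workaround described in the comments (drop the socket on EINTR and retry with a fresh one, checking the overall deadline against a monotonic clock) can be sketched roughly as follows. The helper name is hypothetical; this only illustrates the approach discussed above, not the actual patch attached to the issue, and on Python 3.5+ the loop is unnecessary since PEP 475 makes connect() itself resume after EINTR.

```python
import errno
import socket
import time

def create_connection_retrying_eintr(address, timeout=None):
    """Hypothetical sketch: retry socket.create_connection() when the
    underlying connect() is interrupted by a signal (EINTR).
    timeout=None means block indefinitely."""
    deadline = None if timeout is None else time.monotonic() + timeout
    while True:
        remaining = None
        if deadline is not None:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                raise socket.timeout("timed out")
        try:
            # create_connection() closes its socket on failure, so every
            # retry uses a fresh socket (avoiding the EINPROGRESS that a
            # re-connect on an interrupted socket can return).
            return socket.create_connection(address, remaining)
        except OSError as exc:
            if exc.errno != errno.EINTR:
                raise
            # Interrupted by a signal: loop and try again.
```
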
https://bugs.python.org/issue20611
Talk:Proposed features/changing table

This page is meant to discuss this proposal before it goes to the voting process.

Contact with downstream users

I recommend gathering previously contacted users here and considering contacting some more major ones; feel free to edit. Bkil (talk) 21:27, 20 April 2019 (UTC)

Done: - - - - - (Vespucci) - - -
TODO: - ...
- StreetComplete has a quest too: [1] Rorym (talk) 15:50, 21 April 2019 (UTC)
- Done --Valor Naram (talk) 17:29, 21 April 2019 (UTC)

I'm not sure whether these words or the following scheme is the best, but I hope you get the idea of what I'm trying to map:
- nappy_changing:vending/dispensing=nappies;rash_cream;powder;wet_wipe;gloves;hand_sanitizer;pad;motrin;tylenol;... Bkil (talk) 21:27, 20 April 2019 (UTC)
- I appreciate your feedback, but currently I am not sure how to embed this in this proposal. See the "In otherwise covered locations" section at Tag:amenity=vending_machine. The suggestion made by the users there fits best with our mapping technique. --Valor Naram (talk) 21:58, 20 April 2019 (UTC)
- You should consider creating another proposal, because this is unrelated to changing a nappy. You ask about the possibility of vending/disvending equipment needed for changing nappies. --Valor Naram (talk) 10:42, 23 April 2019 (UTC)

Is nappy_changing:location=toilets equivalent to nappy_changing:location=unisex? Bkil (talk) 21:27, 20 April 2019 (UTC)

I would say "No", because we have separated restrooms for each gender except for "divers" (it's not really a one-hit gender; instead it's a mix of the variations between male and female), and just "toilet" means toilet generally speaking AND NOT gender-specific. It also doesn't imply the availability of a unisex toilet, since just saying "toilet" mostly means that there's a restroom for men and women. But saying there's a "unisex toilet" states that there's a unisex toilet available for all genders, not just toilets for men and women.
Note: I don't use the name "nappy_changing_table" for the feature. I use "baby_changing_table" instead. --Valor Naram (talk) 22:07, 20 April 2019 (UTC)
- Well, the place I had in mind only had a unisex toilet: a single, unsegregated entrance with possibly multiple booths. In this case, the two are equivalent. However, if there exist two entrances, one unisex and one wheelchair, then it is reasonable to differentiate the two. Bkil (talk) 10:01, 22 April 2019 (UTC)
- See the "location" subkey. --Valor Naram (talk) 10:43, 23 April 2019 (UTC)

Tag key should be "changing_table"

The purpose of the table is to facilitate changing the baby's nappies, not the change of the baby. And it makes the keys shorter. --voschix (talk) 09:12, 21 April 2019 (UTC)
- In English, "to change the baby" or "baby changing" means to change the baby's nappy, not actually swapping the baby for another baby. Rorym (talk)
- Done --Valor Naram (talk) 10:45, 23 April 2019 (UTC)

Don't just limit to tables

The current proposal is baby_changing_table=*, but the common values for diaper=* include things like diaper=room, diaper=table, diaper=bench, so I think limiting this tag to just tables is bad. May I suggest baby_changing_facilities=* instead? Rorym (talk) 15:48, 21 April 2019 (UTC)
- Changing table is the official name for this facility. See also and --Valor Naram (talk) 17:45, 21 April 2019 (UTC)
- Yes, "table" is the name of the changing table. But what if a venue provides a baby changing table inside a separate baby changing room (as opposed to a baby changing table in a semi-secluded place)? Don't we want to map what kind, if any, of baby changing facilities there are, not just tables? Rorym (talk) 18:34, 21 April 2019 (UTC)
- Take a look at the 'changing_table:location' subkey. There's a value that indicates that a changing table is in a 'semi-secluded' place or, as I call it, a 'dedicated room'.
If the proposal succeeds (gets approved), you will set "changing_table:location" to "dedicated_room" in order to tag this. --Valor Naram (talk) 19:10, 21 April 2019 (UTC)

I'm not sure whether these words or the following scheme is the best, but I hope you get the idea of what I'm trying to map:

nappy_changing:services=bench;shelf;potty;pillow;pad;straps;tilting;... Bkil (talk) 19:40, 21 April 2019 (UTC)
- Related to --Valor Naram (talk) 19:48, 21 April 2019 (UTC)

Sorry, could you please explain how it is related, other than that the word "services" is accidentally present in both blocks of text? Let me rephrase: these sections aren't asking to use a specific key; I'm asking about possible subkeys. So my question is fully isomorphic to asking what you think about this one:

changing_table:features=bench;shelf;potty;pillow;pad;straps;tilting;...

Not sure what other features can be found near changing tables; maybe there could be more useful ones. Bkil (talk) 20:46, 21 April 2019 (UTC)
- Of course. See Rorym's answer: "[...] but the common values for diaper=* include things like diaper=room, diaper=table, diaper=bench [formatting added], so I think limiting this tag to just tables is bad." He mentioned adding the "services" like you said. Bench is just an example he's given. --Valor Naram (talk) 07:23, 22 April 2019 (UTC)
- Alright then. I wasn't sure that you were considering extra services. I thought that Rorym used this for arguing against the main key "baby_changing_table" and wanted something more general. Some may want to extend the main key values yes/no/limited with the above, but I'd probably like to see a separate subkey for this. Bkil (talk) 09:42, 22 April 2019 (UTC)
- Implemented a subkey "features" for the purpose you mentioned. --Valor Naram (talk) 12:14, 22 April 2019 (UTC)

changing table in shop

I'd be glad to see an example of how to map a changing table which is part of a shop and so bound to its opening hours.
Should the tag be a separate node? Could it be added to the shop node? This is how it looks, and it's very common now in drugstores:
- Yes, this proposal makes adding to existing nodes from e.g. shops possible. Possible tagging after proposal approval:
  - shop=chemist
  - changing_table=yes
  I may need to enhance the "changing_table:location" subkey --Valor Naram (talk) 18:36, 22 April 2019 (UTC)
- changing_table:location=shop, wall, sales_area? Bkil (talk) 20:25, 22 April 2019 (UTC)
- Added "sales_area" to the value list of subkey "location" --Valor Naram (talk) 12:30, 23 April 2019 (UTC)
- It would be useful to know if diapers are available (for free / for a fee). In the picture above you see that they are offered as a free service to take. --Panoramedia (talk) 18:42, 22 April 2019 (UTC)
- See the above suggestions for services/vending/dispensing. For example, this could include changing_table:dispensing=nappy;gloves;hand_sanitizer;paper_towel + changing_table:features=pad;waste_basket. If I understood correctly, Valor Naram recommended vending_machine=nappy;gloves;hand_sanitizer;paper_towel + vending_machine:fee=no instead (there also exists a proposal for amenity=dispenser). Bkil (talk) 20:25, 22 April 2019 (UTC)
- The proposal is located at Proposed features/Dispenser. --Tigerfell (Let's talk) 12:01, 9 November 2019 (UTC)
- I would suggest creating another proposal which extends Template:Key:vending. This isn't related to changing the nappy of a baby but to buying/getting equipment for changing nappies. --Valor Naram (talk) 10:38, 23 April 2019 (UTC)

Overcomplicated

Can I suggest splitting the introduction of a byzantine tagging scheme (changing_table:capacity=*, changing_table:features=*) from deprecating diaper=* (and introducing a replacement)? Mateusz Konieczny (talk) 19:31, 22 April 2019 (UTC)
- People also wanted to tag capacity/count when using diaper, so suggesting a replacement is a must: - - - - -
Could you please clarify your exact concern with specifying features?
Do you think that it could block this proposal from acceptance? We had a question about it just recently: #changing_table_in_shop. Bkil (talk) 20:57, 22 April 2019 (UTC)
- I think that it is not necessary for a new tagging scheme to allow tagging every single subdetail, including ones unlikely to ever actually be used on a wider scale. Mateusz Konieczny (talk) 11:58, 23 April 2019 (UTC)
- This proposal has two goals: providing a replacement for "diaper" and improving it by adding more details which are purely optional to tag. Each mapper should decide on their own how many details they want to tag. If you're one of them and think that just tagging "changing_table=yes" and "changing_table:location=female_toilet" is enough, then just do it. No one forces you to tag every detail.
- Note: I also suggested that I can add a section as a simple replacement solution like "diaper=yes|no" --> "changing_table=yes|no" --Valor Naram (talk) 12:23, 23 April 2019 (UTC)
- Agree with Mateusz Konieczny, this proposal is way too overcomplicated. --Westnordost (talk) 21:41, 22 April 2019 (UTC)
- I could add a section for "diaper" users where I just explain the replacements (simplifying) --Valor Naram (talk) 05:06, 23 April 2019 (UTC)

why?

Why is this change needed? Looking at the arguments, I don't get it. The wheelchair issue is covered by diaper:wheelchair (see also taginfo); it's just, as so often, not documented in the wiki. Also, all other proposed details can be done with diaper:* as well. Most of them are even already done. The confusion point is also a bit far-fetched. Anybody having contact with a diaper change knows that there are changing tables and that isolated dispensing/disposal sites for diapers essentially don't exist. And if they did, they would be tagged by vending/disposal. As for the BE/AE argument: that's a good rule if you look at ou vs. o spelling, but it is imho not meant as something to artificially pick terms where most of the world just looks at you astounded.
But even if so: what's the benefit of changing a tag that is known, implemented, and coherently used on many sites?? --Morray (talk) 12:59, 26 April 2019 (UTC)
- Because the key 'diaper' has often been misunderstood, and there's the possibility that it can also be used wrongly. The mailing list stated some misunderstandings like "tagging a place where you can buy/get diapers", "a place where you have to wear a diaper" (this one is ironic but shows how absurd the name is) and so on... --Valor Naram (talk) 13:15, 26 April 2019 (UTC)
- Re: "isolated dispensing/disposal sites for diapers essentially don't exist" -> but there do exist changing tables where diapers (nappies) are being dispensed, as mentioned in #changing table in shop. If the main tag was simply "diaper" or "nappy", such namespacing would be confusing: disregarding the exact tagging, imagine something like diaper:diaper=yes, diaper:vending:diaper=yes, diaper:vending=diaper, diaper:dispensing=diaper, diaper:features=diaper, etc. See the other optional tags section for features that we could map which wouldn't make sense if namespaced under diaper: diaper:features=pad (the changing table is padded, not the diapers), diaper:features=straps (the changing table has straps, not the diapers being vended), diaper:capacity=8 (there are 8 changing tables, not 8 diapers stocked), etc. -Bkil (talk) 00:25, 27 April 2019 (UTC)
- Not quite. Looking at the mailing list, I see comments that indicate that the commenters are not really in contact with diapers. If you have children and/or are a person needing diapers, there is no way to misunderstand this. Now the argument could be that the tag should also be usable for people who are not "affected" by the tag. But imho I guess you will need to read a lot of documentation anyhow because most tags are not self-explaining. Putting all this aside: changing_table is too limited, as there are many other settings for changing diapers.
If a change is made, it should become something like diaper_changing_location. --Morray (talk) 05:16, 27 April 2019 (UTC)
- The sad fact is: the demographics of mappers and wiki editors are skewed towards young males, possibly without children. Treating the majority as an outgroup is not a proper argument. After we agree on a common tagging scheme that can feasibly represent various options, we can create editor presets that can be described with keywords, headlines and a description of your desire. It is good practice to create tags to be self-explanatory in their respective context - it is expected that one looking at a toilet node will be able to tell from its tags all needed information. Could you please clarify what other settings we need to address? --Bkil (talk) 09:43, 27 April 2019 (UTC)
- Closing this for not providing any other good reason why we shouldn't replace the key `diaper` --Valor Naram (talk) 19:47, 30 April 2019 (UTC)

This discussion part is meant for listing all answers by me.
- "changing_table:vending/dispensing" won't be implemented in this proposal, in order to keep it from getting too complicated. Vending and dispensing are also their own topic. You can buy/get things needed for changing the nappy of the baby, but that isn't related to the possibility of actually changing nappies. Please consider extending the key described at by just editing its wiki page or, even better, by a proposal. Valor Naram (talk) 06:16, 27 April 2019 (UTC)
- Some users said this proposal was overcomplicated. To address these concerns, I've made the suggestion to split up the resulting wiki page into at least two parts. The first part will simply compare the old tagging with the new one so mappers can easily switch from Key:diaper to Key:changing_table. The second one will explain the additional tagging, which is fully optional to achieve. Mappers can decide on their own if they want to tag it or not.
- The name of the key "changing table" cannot be misunderstood.
The definition for "changing table": "A changing table is a small raised platform designed to allow a person to change someone's diaper." Source: Similar definitions can be found on other dictionary sites.
- Datasets containing the old data won't be deleted. There will be a transition period in which we have to consider how to treat the old key "diaper" and how to replace it with the new key.

what happens to the old data?

What do you plan to do with the old data? Is your point that the tag used by several thousand people (!) atm is so confusing that you have to trash all this data? Or are you planning some kind of rechecking campaign? I think losing this data is not acceptable! --Morray (talk) 07:42, 1 May 2019 (UTC)
- I agree losing this data is not acceptable. Overwriting the old data via bot is also not an option, because we cannot know how mappers use the key; some work with taginfo only, and therefore there's the possibility of wrong use. That's why we need a transition period in which we negotiate how to correct the old data. Rechecking is also a reasonable consideration. I will work together with the community, and I'm optimistic that we will find an appropriate solution. --Valor Naram (talk) 09:21, 1 May 2019 (UTC)
- Maybe I missed something. Where was "trash all this data" proposed? Mateusz Konieczny (talk) 17:35, 1 May 2019 (UTC)
- Again! No data will be deleted --Valor Naram (talk) 17:56, 1 May 2019 (UTC)
- Yes, I agree that previous data should be preserved. Furthermore, I think we may decide to convert some of the clear-cut cases (like diaper=yes to changing_table=yes); this can be done manually with Overpass Turbo and JOSM, though I'd probably double-check the input first (to only convert nodes where no other diaper:*=* tag combination was present). Bkil (talk) 19:13, 13 May 2019 (UTC)

Default values for changing_table:fee and changing_table:access

[...]
do we assume some values when the keys changing_table:fee=* or changing_table:access=* are missing? --Skorbut (talk) 05:44, 13 May 2019 (UTC)
- I think we should assume a default value similar to how fee=* and access=* are interpreted, i.e., it is free and access is not restricted. The same should be assumed for toilets as well. If a fee is needed to use the toilet or an access restriction is in order (like access=customers), I always mark it as such. 19:09, 13 May 2019 (UTC)

Description of changing_table:features=*

[...] I miss the descriptions for the values of the key changing_table:features=* [...] --Skorbut (talk) 05:44, 13 May 2019 (UTC)
- bench: a bench is present
- shelf: shelves are available, preferably integrated into the table within reaching distance
- potty: a chamber pot is available
- pillow: a pillow is integrated into the table or available to place behind the head of the baby
- pad: a (soft) pad is placed behind the back of the baby (the difference between a mat and a pad is that a mat is thinner, more rugged and more about surface protection, while a pad is thicker, softer, more about shock absorbing and comfort - a non-native speaker)
- straps: straps can be used to immobilize and stabilize the baby against falling
- tilting: the angle of the changing table against the ground can be adjusted
- adjustable_height: the height of the table can be adjusted
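Putting the pieces of this page together, an object mapped under the proposed scheme might carry tags like the following. This is an illustrative sketch only, combining keys and values discussed above; the authoritative key and value lists live on the proposal page itself:

```
amenity=toilets
changing_table=yes
changing_table:location=dedicated_room
changing_table:fee=no
changing_table:capacity=1
changing_table:features=pad;straps;shelf
```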
https://wiki.openstreetmap.org/wiki/Talk:Proposed_features/changing_table
Package: aptitude
Version: 0.6.1.3-3
Severity: serious
Tags: patch
Justification: Broken package manager, broken d-i, etc.
User: debian-bsd@lists.debian.org
Usertags: kfreebsd

Hi,

for quite a while, we've had broken GNU/kFreeBSD d-i images, users complaining about installation being stuck at 1%. For quite a while too, aptitude started not working at all on GNU/kFreeBSD (which I didn't notice initially since I'm mainly using cupt on my porter box). So I took some time to check various settings, and found out that with current sid libraries, sid's aptitude wasn't working, while testing's was. Not working means, among other things:
- "aptitude update" doesn't do anything but wait.
- "aptitude" alone cleans up the screen, and then does nothing at all.

So I started "bisecting" versions between testing (0.4.11.11-1) and sid's, and determined that DEBIAN_aptitude_0.5.9rc2-1 was OK while DEBIAN_aptitude_0.5.9rc3-1 was not. Looking at the log between both, the changeset below sounded like a good candidate, so I used DEBIAN_aptitude_0.5.9rc3-1 and reverted it, which gave me a working aptitude. I then got back to sid's, applied the attached patch (which acts as a revert, but only for GNU/kFreeBSD), and aptitude seems to be working fine:
- "aptitude update" is alright.
- "aptitude install foo" is alright.
- "aptitude" and then:
  * u-update
  * U-upgrade
  * C-changelog
  are alright.

Changeset:

| $ hg log -r 3246 -p
| changeset:   3246:ebf77e8755f5
| parent:      3241:11f5f723d2c4
| user:        Daniel Burrows <dburrows@debian.org>
| date:        Wed Sep 23 08:58:29 2009 -0700
| summary:     Block SIGWINCH by default to ensure that cwidget is able to sigwait() on it.
| (Closes: #547212)
|
| diff -r 11f5f723d2c4 -r ebf77e8755f5 src/main.cc
| --- a/src/main.cc	Sun Sep 13 09:05:49 2009 -0700
| +++ b/src/main.cc	Wed Sep 23 08:58:29 2009 -0700
| @@ -502,6 +502,27 @@
|
|  int main(int argc, char *argv[])
|  {
| +  // Block signals that we want to sigwait() on by default and put the
| +  // signal mask into a known state. This ensures that unless threads
| +  // deliberately ask for a signal, they don't get it, meaning that
| +  // sigwait() should work as expected. (the alternative, blocking
| +  // all signals, is troublesome: we would have to ensure that fatal
| +  // signals and other things that shouldn't be blocked get removed)
| +  //
| +  // In particular, as of this writing, log4cxx doesn't ensure that
| +  // its threads block signals, so cwidget won't be able to sigwait()
| +  // on SIGWINCH. (cwidget is guilty of the same thing, but that
| +  // doesn't cause problems for aptitude)
| +  {
| +    sigset_t mask;
| +
| +    sigemptyset(&mask);
| +
| +    sigaddset(&mask, SIGWINCH);
| +
| +    sigprocmask(SIG_SETMASK, &mask, NULL);
| +  }
| +
|    srandom(time(0));
|
|    using namespace log4cxx;

I guess that even if the original changeset was meant to fix a bug, this very bug can stay around on GNU/kFreeBSD until somebody proposes a better solution than just disabling this codepath. It would be nice to have a working d-i again ASAP (although I didn't build an image to check, I already spent a long time building and building again aptitude, and I'm not yet used to d-i image building); and even if that's not sufficient, getting back a working aptitude would be nice.

Thanks for considering this quickly. (I'm Cc-ing debian-bsd@, in case somebody has an idea about what's going on exactly.)

Mraw, KiBi.

--- a/src/main.cc
+++ b/src/main.cc
@@ -528,6 +528,9 @@ int main(int argc, char *argv[])
   // its threads block signals, so cwidget won't be able to sigwait()
   // on SIGWINCH.
   (cwidget is guilty of the same thing, but that
   // doesn't cause problems for aptitude)
+  //
+  // Do not do that on GNU/kFreeBSD, that totally breaks aptitude:
+#if !defined(__GLIBC__)
   {
     sigset_t mask;
@@ -537,6 +540,7 @@ int main(int argc, char *argv[])
     sigprocmask(SIG_SETMASK, &mask, NULL);
   }
+#endif

   srandom(time(0));
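For anyone wanting to experiment with the signal-mask setup this changeset introduces without building aptitude, the same POSIX primitive is exposed in Python as signal.pthread_sigmask() (Unix-only, Python 3.3+). This is only a sketch of the "block SIGWINCH so a dedicated thread can sigwait() on it" idea, not a reproduction of the kFreeBSD failure:

```python
import signal

# Put the signal mask into a known state, blocking only SIGWINCH,
# just as aptitude's main() does with sigprocmask(SIG_SETMASK, ...).
signal.pthread_sigmask(signal.SIG_SETMASK, {signal.SIGWINCH})

# Threads spawned from now on inherit this mask, so SIGWINCH is only
# delivered to whichever thread deliberately sigwait()s for it.
blocked = signal.pthread_sigmask(signal.SIG_BLOCK, set())  # query current mask
print(signal.SIGWINCH in blocked)  # prints True
```
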
https://lists.debian.org/debian-bsd/2009/12/msg00053.html
This is the README for bzip2, a block-sorting file compressor, version 1.0.3.

This version is fully compatible with the previous public releases, versions 0.1pl2, 0.9.0, 0.9.5, 1.0.0, 1.0.1 and 1.0.2.

bzip2-1.0.3 is distributed under a BSD-style license. For details, see the file LICENSE.

Complete documentation is available in Postscript form (manual.ps), PDF (manual.pdf) or html (manual.html). A plain-text version of the manual page is available as bzip2.txt. A statement about Y2K issues is now included in the file Y2K_INFO.

HOW TO BUILD -- UNIX

Type `make'. This builds the library libbz2.a and then the programs bzip2 and bzip2recover. Six self-tests are run. If the self-tests complete ok, carry on to installation: to install in /usr/bin, /usr/lib, /usr/man and /usr/include, type `make install'.

HOW TO BUILD -- UNIX, shared library libbz2.so

Do 'make -f Makefile-libbz2_so'. This Makefile seems to work for Linux-ELF (RedHat 7.2 on an x86 box), with gcc. I make no claims that it works for any other platform, though I suspect it probably will work for most platforms employing both ELF and gcc.

bzip2-shared, a client of the shared library, is also built, but not self-tested. So I suggest you also build using the normal Makefile, since that conducts a self-test. A second reason to prefer the version statically linked to the library is that, on x86 platforms, building shared objects makes a valuable register (%ebx) unavailable to gcc, resulting in a slowdown of 10%-20%, at least for bzip2.

Important note for people upgrading .so's from 0.9.0/0.9.5 to version 1.0.X: all the functions in the library have been renamed, from (eg) bzCompress to BZ2_bzCompress, to avoid namespace pollution. Unfortunately this means that the libbz2.so created by Makefile-libbz2_so will not work with any program which used an older version of the library. Sorry.
I do encourage library clients to make the effort to upgrade to use version 1.0, since it is both faster and more robust than previous versions.

HOW TO BUILD -- Windows 95, NT, DOS, Mac, etc.

It's difficult for me to support compilation on all these platforms. My approach is to collect binaries for these platforms, and put them on the master web page (). Look there. However (FWIW), bzip2-1.0.X is very standard ANSI C and should compile unmodified with MS Visual C.

If you have difficulties building, you might want to read README.COMPILATION.PROBLEMS. At least using MS Visual C++ 6, you can build from the unmodified sources by issuing, in a command shell:

   nmake -f makefile.msc

(you may need to first run the MSVC-provided script VCVARS32.BAT so as to set up paths to the MSVC tools correctly).

VALIDATION

Correct operation, in the sense that a compressed file can always be decompressed to reproduce the original, is obviously of paramount importance. To validate bzip2, I used a modified version of Mark Nelson's churn program. Churn is an automated test driver which recursively traverses a directory structure, using bzip2 to compress and then decompress each file it encounters, and checking that the decompressed data is the same as the original.

Please read and be aware of the following:

WARNING: This program (attempts to) compress data by performing several non-trivial transformations on it. Unless you are 100% familiar with *all* the algorithms contained herein, and with the consequences of modifying them, you should NOT meddle with the compression or decompression machinery. Incorrect changes can and very likely *will* lead to disastrous loss of data.

DISCLAIMER: I TAKE NO RESPONSIBILITY FOR ANY LOSS OF DATA ARISING FROM THE USE OF THIS PROGRAM, HOWSOEVER CAUSED.

impossible to rule out the possibility of bugs remaining in the program.
DO NOT COMPRESS ANY DATA WITH THIS PROGRAM UNLESS YOU ARE PREPARED TO ACCEPT THE POSSIBILITY, HOWEVER SMALL, THAT THE DATA WILL NOT BE RECOVERABLE.

That is not to say this program is inherently unreliable. Indeed, I very much hope the opposite is true. bzip2 has been carefully constructed and extensively tested.

PATENTS: To the best of my knowledge, bzip2 does not use any patented algorithms. However, I do not have the resources to carry out a patent search. Therefore I cannot give any guarantee of the above statement.

End of legalities.

WHAT'S NEW IN 0.9.0 (as compared to 0.1pl2) ?

* Approx 10% faster compression, 30% faster decompression
* -t (test mode) is a lot quicker
* Can decompress concatenated compressed files
* Programming interface, so programs can directly read/write .bz2 files
* Less restrictive (BSD-style) licensing
* Flag handling more compatible with GNU gzip
* Much more documentation, i.e., a proper user manual
* Hopefully, improved portability (at least of the library)

WHAT'S NEW IN 0.9.5 ?

* Compression speed is much less sensitive to the input data than in previous versions. Specifically, the very slow performance caused by repetitive data is fixed.
* Many small improvements in file and flag handling.
* A Y2K statement.

WHAT'S NEW IN 1.0.0 ?

See the CHANGES file.

WHAT'S NEW IN 1.0.2 ?

See the CHANGES file.

WHAT'S NEW IN 1.0.3 ?

See the CHANGES file.

I hope you find bzip2 useful. Feel free to contact me at jseward@bzip.org if you have any suggestions or queries. Many people mailed me with comments, suggestions and patches after the releases of bzip-0.15, bzip-0.21, and bzip2 versions 0.1pl2, 0.9.0, 0.9.5, 1.0.0, 1.0.1 and 1.0.2, and the changes in bzip2 are largely a result of this feedback. I thank you for your comments.

At least for the time being, bzip2's "home" is (or can be reached via)

Julian Seward
jseward@bzip.org
Cambridge, UK.
18 July 1996 (version 0.15)
25 August 1996 (version 0.21)
 7 August 1997 (bzip2, version 0.1)
29 August 1997 (bzip2, version 0.1pl2)
23 August 1998 (bzip2, version 0.9.0)
 8 June 1999 (bzip2, version 0.9.5)
 4 Sept 1999 (bzip2, version 0.9.5d)
 5 May 2000 (bzip2, version 1.0pre8)
30 December 2001 (bzip2, version 1.0.2pre1)
15 February 2005 (bzip2, version 1.0.3)
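The programming interface mentioned under "WHAT'S NEW IN 0.9.0" is the same BZ2_-prefixed 1.0 API that Python's standard-library bz2 module links against. A tiny round-trip check, offered purely as an illustration of a library client:

```python
import bz2

# Compress and decompress a block of repetitive data, the kind of
# input whose slow-compression pathology was fixed in 0.9.5.
original = b"bzip2 is a block-sorting file compressor. " * 200
compressed = bz2.compress(original, compresslevel=9)
restored = bz2.decompress(compressed)

assert restored == original
print("%d bytes -> %d bytes" % (len(original), len(compressed)))
```
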
http://read.pudn.com/downloads69/sourcecode/windows/network/249397/DependentExtensions/bzip2-1.0.3/README__.htm
Introduction

In one of the previous articles, Build Your Own Directive, I showed you how to develop a custom highlight attribute that highlights the text background in yellow. We made use of the Renderer service to render an element with the background style color set to yellow. The element reference (which contained the text) was obtained using the ElementRef type. We bound the style.background attribute of that element to the value yellow using the setElementStyle method of the Renderer service.

In this article I will use the same TypeScript class and show you an alternate way to highlight the hosting element. We will use something called the @HostBinding decorator, or metadata.

Using @HostBinding

The @HostBinding decorator can be used to bind a value to an attribute of the hosting element, in our case the element hosting the highlight directive. Let's look at the code that makes use of the @HostBinding decorator.

import { Directive, HostBinding } from '@angular/core';

@Directive({
  selector: '[highlight]'
})
export class HighlightDirective {

  private color = "yellow";

  @HostBinding('style.backgroundColor')
  get getColor() {
    return this.color;
  }

  constructor() { }
}

The above is the same directive class with the highlight attribute. The only difference is that we are now rendering the element color as yellow using the @HostBinding decorator instead of the Renderer service. Let's walk through the code. First we define a property named color and set its default value to yellow. Next we apply the @HostBinding decorator, which is part of the core Angular package. The decorator accepts the name of the attribute on the hosting element to which we want to bind the value. The attribute in this case is style.backgroundColor, because we need to set the background color. The get is a built-in concept of TypeScript which acts as a getter for a property. The getColor() method acts as that getter and simply returns the value of the color property.
The getColor() method name has nothing to do with the color property that we have defined. The method name can be anything, and it should return the value which is eventually bound to the hosting element. Upon running the application, you should see the text highlighted in yellow.
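For completeness, applying the directive is unchanged from the Renderer-based version of the article: any element in a template that carries the attribute becomes the host (assuming, as before, that HighlightDirective is declared in the application's module):

```html
<!-- The <p> element is the host; @HostBinding writes to its
     style.backgroundColor, so the text shows on a yellow background. -->
<p highlight>This text is highlighted in yellow.</p>
```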
http://techorgan.com/javascript-framework/angularjs-2-series-binding-the-host-element-with-hostbinding/
There is a bug in the plugin-config Grails plugin at version 0.1.5. It has already been fixed in version 0.1.8. We are using the quartz2 plugin, which forced version 0.1.5 onto our app, so I excluded plugin-config from quartz2 and added the latest version.

url = "jdbc:h2:file:prodDb;MVCC=TRUE;LOCK_TIMEOUT=10000" refers to an H2 file database. Can you try using an in-memory prodDb instead, if the intention was not to refer to a file database?

url = "jdbc:h2:prodDb;MVCC=TRUE;LOCK_TIMEOUT=10000"

I spent several hours today with this and found a solution minutes after posting here. In NetBeans I had to delete Tomcat from the list of servers, then manually add it back with all the same information. NetBeans must save details about the server the first and only time it connects.

The cursorInfo command should work. If there are no more cursors, then it's OK to shut off the mongos. Any connections that still exist will simply fail over to another mongos through the load balancer when they try to reconnect (assuming they have an appropriate reconnection policy in place). The only thing you need to worry about is cursors, since they have state, and that is taken care of by cursorInfo.

Your problem is your operating system; you need to tune it for that kind of load. Have you considered Linux instead? It's significantly more powerful and stable than Windows, and unlike Windows (which is purposefully "crippled" with low TCP default limits to encourage upsell), Linux comes out of the box ready for much higher workloads and can also be tuned for 10x better performance.

I found a way to count the number of DB2 connections per database per IP address:

db2 list applications for database <databaseName> | grep db2jcc_applica | grep -c <ip_address>

This will show all connections to that particular database coming from an ip_address. If the DB server is on your local machine, the ip_address would be "127.0.0.1".

There was a connection-leaking issue affecting the mongo connector.
It has recently been addressed and will be part of the next release.

Change the maxThreads value to your hit count in the server.xml file and check:

<Connector port="****" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" />

According to the documentation, exec doesn't block. Try using exec! instead. Alternatively, keep exec and call ssh1.loop to wait for the command to finish.

It appears the problem is that a user created via the method described in the Mongo docs does not have permission to connect to the default database (test), even if that user was created with the "userAdminAnyDatabase" and "dbAdminAnyDatabase" roles.

The service provider mechanism determines which class is used as the implementation of XPathFactory. A JAR file on the class path that contains the file META-INF/services/javax.xml.xpath.XPathFactory can replace the default implementation in the JRE. Most likely, the class path differs between your development and production environments. To check which implementation is used, you can print XPathFactory.newInstance().newXPath().getClass(). The internal implementation in the JRE is com.sun.org.apache.xpath.internal.jaxp.XPathExpressionImpl.

Solved with hacks described in

I would suggest doing a grails clean and also deleting the .slcache folder, which you can find under ~/.grails/.slcache. Regards

ProxyPass is a directive used by mod_proxy, not mod_jk.
If you want to use mod_jk, use:

JkMount /appName/* workerApp

For this to work you need to configure the module (/etc/apache2/mods-available/jk.load):

LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so
JkWorkersFile /etc/apache2/workers.properties
JkLogFile /var/log/apache2/mod_jk.log
JkLogLevel debug
JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "

Then add a worker (/etc/apache2/workers.properties):

workers.tomcat_home={PATH_TO_TOMCAT}
workers.java_home={PATH_TO_JVM}
ps=/
worker.list=workerApp
worker.workerApp.port=8009
worker.workerApp.host=localhost
worker.workerApp.type=ajp13
worker.workerApp.lbfactor=1

Be sure this line is not commented out in Tomcat's server.xml:

<Connector port="8009" address="127.0.0.1" enableLookups="…

From what I see it could be:

- The database server is down
- The database server was up, but there was a network problem causing your application to be unable to contact the database server (cable disconnected, firewall, etc.)

You can instantiate Tomcat's connection pool in your servlet. It should look like this:

public class YourServlet extends HttpServlet {
    private DataSource ds;

    public void init(ServletConfig config) throws ServletException {
        org.apache.tomcat.jdbc.pool.PoolProperties prop;
        prop = new org.apache.tomcat.jdbc.pool.PoolProperties();
        prop.setUrl("jdbc:mysql://localhost:3306/foo");
        prop.setDriverClassName("com.mysql.jdbc.Driver");
        prop.setUsername("user");
        prop.setPassword("password");
        org.apache.tomcat.jdbc.pool.DataSource dataSource;
        dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
        dataSource.setPoolProperties(prop);
        ds = dataSource;
    }

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        // ...
    }
}

Perhaps you encountered this bug: it was fixed in the latest Grails release.

In the code, I see that you are saving uploaded files into your WEB-INF folder.
Although you can do that, when you restart the server or redeploy the application, the content of your web-app folder will be replaced, and hence the uploaded files will be removed as well. That's how Tomcat works. I think the better way is to save your uploaded files to a folder which is NOT inside your web container (Tomcat) folder. That way, they won't be deleted when you: 1) redeploy your web application, or 2) restart Tomcat for whatever reason. A practice my team used is to create a custom folder in the home folder of the current user; that way we don't have to worry much about the privileges needed to save files.

You could implement your job so that it periodically checks whether it's allowed to continue. This is best practice for long-running jobs anyway. If that's in place, you can easily provide a UI for the feature, be it on application restart or individually per job.

I found out that child ResourceDescriptors are references to themselves, so I had to add:

PROP_RU_IS_REFERENCE = true
PROP_RU_REFERENCE_URI = uriString

for it to work!

Tomcat, as the client, has to be considered a client which does not run in the same application-server container. So you have to package all relevant classes/interfaces into the .war file which is deployed on Tomcat. Looking at the stack trace, the JBoss server does respond; it's the JBoss client that reports that a class is missing:

at org.jboss.ejb.client.naming.ejb.EjbNamingContext.createEjbProxy

The actual error (the missing class) is packaged in a NamingException. So you have to put the relevant classes into the .war file for Tomcat. It's probably your remote interface, recc.business.login.LoginBeanRemote. I always keep the EJB classes (implementation) and their interfaces in two separate packages, and the EJB client gets a .jar file which contains just the interfaces (and possibly ex…

If you're merely using Tomcat, you needn't build it; just download a version, unzip it, set CATALINA_HOME, and run $CATALINA_HOME/bin/catalina.sh start.
The only reason you'd want to build it yourself is if you were a developer, rather than a mere user. Leave a comment if you have further problems.

g:message was changed because of an XSS vulnerability (GRAILS-7170). See the issue for a workaround for continuing to use HTML arguments in certain cases (such as your use case).

The problem was that the updateOnStartFileNames property takes an Array, not a String, so the configuration should be:

grails.plugin.databasemigration.updateOnStartFileNames = ['changelog.groovy']

The plugin iterates over this list, executing each entry in turn, but when given a String it iterates over each character and executes it. Liquibase then throws the exception because it doesn't recognise the suffix of the first letter, in this case 'c'.

The image is very small, but on closer investigation you have two Java processes. The first process has a connection to itself (there is a connection for each end, ports 52209 and 52210). It also has a connection from the second process on port 1001. The second process is the client you are running, with one connection to port 1001.

Many questions:

1) Reusing a connection is faster than establishing a new connection for every use. Depending on your code, this will speed up your application a little. But reusing connections is more complex; that's the reason why many people use connection pools.

2) If your program has a short runtime, you can work with one connection, e.g. in a global variable. If your application is a server application (long running), then you need to maintain your connection, because the server can close the connection if it thinks that nobody is using it because no traffic runs over it. This can happen at night with server applications. The connection-maintenance function is part of connection pools.

Summary: if your application is a simple, not multi-threaded, not server app…

The Tomcat Maven plugin has been moved under the Apache umbrella and has been significantly updated since v1.1.
I tried reverting to the mojo you're using, above, and could not get it to deploy either. There's a Tomcat7-specific mojo that works just fine. Try using

<plugin>
    <groupId>org.apache.tomcat.maven</groupId>
    <artifactId>tomcat7-maven-plugin</artifactId>
    <version>2.1</version>
    <configuration>
        ...
    </configuration>
</plugin>

instead. You can find all the documentation at

I don't think your package name is valid. I've never officially looked it up for Groovy, but I believe Groovy follows Java naming conventions for packages, which state that package names can't start with a digit.

I can't make sense of it. Too much info. Call me the next time the problem occurs. OK, now I get it. Odd... In short: when deployed packed as a war (unpackWar=false), the application cannot access some beans. I'll have a look at your settings the next time it occurs.

You could always have a look at a very handy tool called Process Monitor. It was developed by Mark Russinovich and is now part of a Microsoft toolset. It allows you to monitor lots of events, including thread creation, registry access, file access, and network activity. All of these events are on a per-process basis, so you should be able to use the filters to see what process is connecting to a specific local port. I'm not sure whether this would include UDP (which is generally connectionless), but a quick test certainly shows TCP Connect, Disconnect, Send, and Receive events. It's a very handy tool to add to your programming toolbox anyway.

I can't help you debug that issue, but I can suggest using connect-redis instead. The Redis To Go nano instance on Heroku is free and should support automatically expiring unused sessions, so you won't need a pricier option.
sub = [767220, 769287, 770167, 770276, 770791, 770835, 771926, 1196500, 1199789, 1201485, 1206331, 1206467, 1210929, 1213184, 1213204, 1213221, 1361867, 1361921, 1361949, 1364886, 1367224, 1368005, 1368456, 1368982, 1369000, 1370365, 1370434, 1370551, 1371492, 1471407, 1709408, 1710264, 1710308, 1710322, 1710350, 1710365, 1710375]

def runningMean(seq, n=0, total=0):  # function called recursively
    if not seq:
        return []
    total = total + int(seq[-1])
    if int(seq[-1]) < total / float(n + 1) * 0.9:
        # Check your condition to see if it's time to stop averaging.
        return []
    return runningMean(seq[:-1], n=n+1, total=total) + [total / float(n + 1)]

avg = runningMean(sub, n=0, total=0)
print avg
print sub[-len(avg):]

You are not returning any value from the validate method; you were returning 'error' from the each() callback, not from validate.

//My default strings from another place
MyApp.strings.defaults = {
    firstName: 'first name'
};

//Model validate function
validate: function(attr) {
    var error;
    jQuery.each(attr, function(key, value) {
        var defaultValue = MyApp.strings.defaults[key];
        if (defaultValue) {
            defaultValue = jQuery.trim(defaultValue.toLowerCase());
            if (value.toLowerCase() == defaultValue) {
                console.log(value, defaultValue); //fires, and outputs both as being the same
                error = 'error';
                return false;
            }
        }
    });
    return error;
}

I think you need to add a custom controller to render the pages defined in web.xml. See the description here: Custom error pages in Tomcat with Spring.

The Commons DBCP pool is very good, but the Tomcat pool is more flexible and has higher performance. The initial blog posts from tomcatexpert.com are a bit dated but should still be very relevant, and if anything the numbers should be better now. Note that in 2.3 we've replaced Commons DBCP with Tomcat JDBC, so it makes sense to start using it now.

If the connections are coming from a different machine, the connections can't be pooled: a pooled connection requires both endpoints to stay the same.
If you are using connection pooling correctly, applications instantiate a connection (from the pool), use the connection, then drop it as soon as the exchange is complete. If you are writing a single-threaded desktop app, another common and simple strategy is to open a connection and just leave that connection open as long as the application is running. You can control how many connections are created, etc.; see the MS article for more details related to connection pooling. IIRC, connection pools are not shared unless the connection string is identical, either.

The easiest thing would be to add dbCreate = "update" to your DataSource.groovy. A better option would be to use the database-migration plugin for your app. Regarding manually creating the tables, the convention Grails follows by default is to underscore camelCase names. For example, given the following domains:

class User {
    String firstName
    String lastName
    static hasMany = [addresses: Address]
}

class Address {
    static belongsTo = [user: User]
}

you would end up with the following tables:

user: id, version
address: id, version, user_id

You could use my lib for this, called SignalR.EventAggregatorProxy. Install it using NuGet:

Install-Package SignalR.EventAggregatorProxy

It takes care of the hub and connection details for you under the hood, so you only need to subscribe to and publish events. Check the wiki for how to set it up. Once set up, all you need to do to get it to listen to a back-end event from JavaScript is:

ViewModel = function() {
    signalR.eventAggregator.subscribe(MyApp.Events.TestEvent, this.onTestEvent, this);
};

You can listen to as many events as you want on only one connection and one hub.
http://www.w3hello.com/questions/Grails-Mongo-Tomcat-fails-to-stop-creating-connections
Base class for any astro image with a fixed position.

#include <StelSkyImageTile.hpp>

Definition at line 64 of file StelSkyImageTile.hpp.

Constructors and destructor:
- Default constructor.
- Constructor.
- Constructor.
- Destructor.

Member functions:
- Draw the image on the screen. Implements StelSkyLayer.
- Return the absolute path/URL to the image file. Definition at line 99 of file StelSkyImageTile.hpp.
- Return the dataset credits to use in the progress bar. Definition at line 86 of file StelSkyImageTile.hpp.
- Return an HTML description of the image to be displayed in the GUI. Reimplemented from StelSkyLayer. Definition at line 102 of file StelSkyImageTile.hpp.
- Return the server credits to use in the progress bar. Definition at line 89 of file StelSkyImageTile.hpp.
- Definition at line 116 of file StelSkyImageTile.hpp.

Protected members:
- Whether the texture must be blended. Definition at line 122 of file StelSkyImageTile.hpp.
- The credits for the data set. Definition at line 113 of file StelSkyImageTile.hpp.
- The image luminance in cd/m^2. Definition at line 119 of file StelSkyImageTile.hpp.
- Minimum resolution of the data of the texture in degree/pixel. Definition at line 134 of file StelSkyImageTile.hpp.
- True if the tile is just a list of other tiles without a texture for itself. Definition at line 125 of file StelSkyImageTile.hpp.
- The credits of the server where this data comes from. Definition at line 110 of file StelSkyImageTile.hpp.
- List of all the polygons. Definition at line 128 of file StelSkyImageTile.hpp.
- The texture of the tile. Definition at line 131 of file StelSkyImageTile.hpp.
http://stellarium.org/doc/head/classStelSkyImageTile.html
From: Ed Brey (edbrey_at_[hidden])
Date: 2001-09-08 16:20:02

Following is my review of Boost.Threads. My role as reviewer is independent of my role as review manager. All comments expressed herein are mine only and do not reflect a summary of all the reviews (which I haven't read yet :-).

Overall, the design and documentation are excellent. I especially like the well-written introductions to each page of the documentation, plus the timely danger indications in the reference material. The interface is sound, with many good design decisions, but could be improved by a few tweaks. I have read most of the documentation and some of the code, and I have compiled and run the test. Based on my review (only), I believe that Boost.Threads should be accepted into Boost. However, its acceptance should be conditioned on fixing compilation errors, getting the test to run successfully, and eliminating meaningful warnings. Additionally, acceptance should be conditioned on having a coordinated interface to functions that affect the current thread, including documented rationale (details of current concerns and suggestions are in the body of the review).

Interface design:

semaphore: When would up() ever be called such that it would raise count above max? I would have expected count <= max to be a precondition to calling up(). There's probably a good reason that I'm not thinking of; it might be good to document what it is.

Since condition::wait and thread::join affect the current thread rather than the objects they are part of, I think the interface would be clearer if they were free functions that took the item to wait for as a parameter. True, the user would need to qualify the name, i.e. boost::wait(cond), which wouldn't otherwise be necessary; however, I think there is greater value in avoiding a form that makes it look like the thread or condition is doing the waiting (or joining), something that is not specified in the syntax.
Moreover, if waiting on a condition, thread, and thread group were named consistently, e.g. all using a function called wait, then with a single using declaration the user could have the nice syntaxes of wait(my_thread), wait(my_thread_group), or wait(my_condition). Likewise, sleep and yield can be pulled out into free functions for convenience, since there is no worry I can see of a name clash. Given these changes, wait, sleep, and yield would all be convenient and consistent in syntax.

add_thread should take an auto_ptr. As it is, the client has to take special precautions to avoid a leak when push_back fails within add_thread. Also, the fact that add_thread may throw should be documented.

The thread subdirectory is inconsistent with the lack of a general thread subnamespace.

Naming:

Semaphore::up and down are adjectives doing a verb's work. More in the spirit of C++ would be the use of operator++, operator--, and operator+= where no other parameters are needed, and increment and decrement otherwise.

join: When a fresh user, not yet having picked up any jargon, asks "Why did you name this function such-and-such?", the answer should be "because that's what it does," and the fresh user, having read a simple function description, should be able to say "Yes, you're right. I see that is what it does." To this end, wait is clear, and has the advantage of terminology consistent with condition::wait. Are there any disadvantages that I'm forgetting from the naming thread a while back?

xlock.hpp: The header name doesn't mean anything to me. I would have expected just "lock.hpp". Complicating this issue is that the documentation for scoped_lock et al. neglects to mention that they are tucked within a detail namespace. They should either be true details and not shown with any header or namespace (only accessible via the mutex classes), or they should become first-class citizens of Boost.Threads.

The subnamespace for Boost.Threads is thread, whereas for Boost.Tuples it is tuples.
The singular vs. plural usage works against the cohesiveness of Boost and lessens the chances for standardization. I like the singular, although that is difficult without changing the naming convention to allow upper case. I don't have the answer to this issue, but it is one that Boost needs to address.

Implementation:

Inlining should be used in some cases to increase efficiency, for example with functions that just call another function (with few parameters), such as semaphore::down(). I realize that this may mean including <windows.h> in the header, but this isn't a big problem. How many users won't have it already? Plus, with precompiled headers, the time difference is negligible. (I realize that PCHs can trip up the compiler sometimes, but it is fine to just turn them off for the translation units that trigger the bugs.)

In condition::notify_all, line 125, signals is assigned with no effect.

thread_group::m_threads: Why not a vector? You are already paying O(N) to do the linear search on removal. Since size() will generally be small, a vector will tend to be more efficient.

Documentation:

boost:: in boost::noncopyable is superfluous.

Private sometimes precedes public in examples. IMHO, they should be reversed.

Documentation of constructors where you don't have anything to say should be removed. For example, documentation of condition's constructor and destructor should be removed, both in the members section and in the synopsis. This way, whether condition actually has a (manually created) constructor or destructor can be left as an implementation detail and changed at any time.

semaphore: Plain English descriptions of up and down are hard to find. They are buried in the introduction paragraph, but the reference section only provides a "just read the source code" description.
There is room for improvement in organizing the documentation to steer the end user right away toward the material he is most likely to find helpful, bearing in mind that he is likely to take a top-down approach to getting a handle on the library. Unfortunately, I don't have any specific suggestions in this regard; only that perhaps the thread creation and mutex classes should tend toward the top of the opening page.

Examples that use thread::create and join_all are out of date.

Mutex: I'm probably missing something obvious, but it doesn't look like the example will work. The count in global_data is initialized to 0, and so each of the 4 calls to down() will see that it is 0 and block indefinitely.

scoped_lock: Rather than including the unreferenced abbreviation mx, it would be better to just omit the token altogether, since it adds nothing and actually detracts by giving the reader one extra item to process.

scoped_lock: lock_it is an independent clause doing an adjective's work. initially_locked would be more suitable for a boolean. I realize each of these points about naming is, by itself, a nit; however, taken as a whole, there is considerable value in using parts of speech consistently. The part of speech subtly reinforces the meaning of the name.

scoped_lock::lock: The effect of the unchecked locking strategy is not listed in the table.

condition::wait: The asterisk in *this->notify_one() et al. is illegal.

The enumeration of the various locking strategies and scheduling policies does an excellent job of putting the nature of boost unspecified policies in a clear light. The only exception is that the strategy on unlocking is ambiguous. Only the checked strategy mentions this; recursive, unchecked, and unspecified are silent. I would have expected at least the last two, if not all three remaining, not to check, i.e. to make m_locked == true a precondition.
Currently, it is not a precondition, as the code for scoped_lock::unlock always checks whether the lock is locked, even though the policy is "unchecked". This seems inconsistent.

Coding details:

scoped_lock: The friendship declaration exists in the public section. Wouldn't it be more appropriate in the private section?

test_thread.cpp: There are many unreferenced dummy parameters, which trigger warnings. The corresponding variables should be removed.

Compiling the test and implementation code generates many warnings under VC6. The code should be improved so that it compiles without warnings under level 4, given that warnings that are not helpful are turned off, e.g.: 4097 4127 4284 4290 4554 4505 4511 4512 4514 4706 4710 4786 4800 4355 4250 4291 4251 4275.

tss.cpp has an error in line 128. An assignment is attempted on the constant value dereferenced from the set iterator. A map may be a better choice here, or perhaps a structure with a mutable member instead of pair. (The frequent usage of pair makes this code hard to read anyway.) Given a quick-fix const_cast for line 128, the test compiles. Running the test, however, generates many assertions on line 78 of tss. Let me know if this is a new problem to you and you would like more info. My platform is VC6 SP5, STLport 4.5b8, December 1999 Platform SDK.

Typos:

~timed_mutex Dangers: extra period. Same in the condition introduction. It probably wouldn't hurt to grep everything for "..</p>".

The existence of <hr> is inconsistent between the overloads of condition::wait versus condition::timed_wait.

thread::thread: Postcondition should be Precondition.

thread example: iostram should be iostream.

Again, good job! The practice of addressing almost exclusively the bad points can be misleading; the good far outweigh them.

- Ed

Boost list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
http://lists.boost.org/Archives/boost/2001/09/17151.php
Hi, could you please let me know exactly how pre-JIT, econo-JIT, and normal-JIT are implemented? A descriptive answer is welcome. Many thanks...

Thanks for your post. Econo-JIT has a faster compilation speed and less compiler overhead than standard-JIT. However, standard-JIT generates more optimized code than econo-JIT and includes verification of the MSIL code. Please take a look here.

Pre-JIT compiles the whole program into native code in a single compilation. It can be used to reduce program startup delay. The implementation of the commercial .NET platform is not publicly available, but we can use the SSCLI (Shared Source CLI) to get a peek into the standard-JIT and econo-JIT details. The following links could be helpful:

1. The CLR x86 JIT, an overview
2. Introduction to .NET (search for "Compile code")

Pre-JIT can be tuned with types from the System.Runtime.CompilerServices namespace. The following link could give you some idea: Pre-compile (pre-JIT) your assembly on the fly, or trigger JIT compilation ahead of time.
https://social.msdn.microsoft.com/Forums/en-US/c90e04f0-a666-4651-be97-e4d299813bda/about-the-three-jits?forum=clr
Objective

This demo provides a working example of function pointers in action. Function pointers are not frequently used in C programming (perhaps due to their strange syntax), but can be extremely useful in some circumstances.

Software Tools

Exercise Files

Procedure

1. Open the Project
Start MPLAB® X IDE, then click on the Open Project icon. Navigate to the folder where you saved the exercise files for this class. Click on the Lab13.X folder and select Open Project.

2. Debug the Project
Click on the Debug Project button. Click on the Continue button. Click on the Halt button.

3. What just happened?
As was done earlier in the class, we opened a pre-configured MPLAB® workspace with a complete, working program. We then compiled the code and ran it long enough for it to complete its task. This program uses a function pointer to pass the address of a mathematical function to another function that will compute its integral. The integral example was adapted from one published on Wikipedia.

The integral function takes three parameters: the lower and upper bounds of the integral, and the address of the function it is to evaluate. The function's header looks like:

float integral(float a, float b, float (*f)(float))

Note that the third parameter is defined as a function pointer. When we call this function, we only need to provide the name of the function we want to integrate. For example:

y2 = integral(0, 1, xsquared);

The function xsquared() is a simple mathematical function defined as:

float xsquared(float x)
{
    return (x * x);
}

There are other functions that may be passed to the integral() function as well. When you are finished, close the project from the Projects window by selecting Close.

Conclusions

Function pointers, while not frequently used, can provide a very convenient mechanism for passing a function to another function. Many other possible applications exist:
- Jump tables
- Accommodating multiple calling conventions
- Callback functions (used in Windows™)
- Calling different versions of a function under different circumstances
http://microchipdeveloper.com/tls2101:lab13
Let's consider the interest rate of a bank. If the bank's interest rate is updated today, all calculations should use the new value immediately. In OOP, this can be done by declaring a static member variable in a class. If the value of this static variable is updated, all of the class's instances see the new value. So, in a way, there is only one interestRate object, shared by all the Account objects. To access the static variable, we use the scope operator :: as shown in the example below. Note that a static member function such as rate() can only access static members of the class.

#include <iostream>
using namespace std;

class Account {
public:
    static double rate() { return interestRate; }
    static double interestRate;
};

double Account::interestRate = 12;

int main() {
    cout << Account::rate();
    return 0;
}
http://codecrawl.com/2015/02/02/cplusplus-static-variable/
Type: Posts; User: navy1991

The correct solution still eludes me. Everything seems to work correctly. I tried debugging line by line too. The method tries to find the correct combination by backtracking. So even after...

At GCDEF: you are absolutely correct. I will try to use the debugger to interrupt execution, follow the execution, and do all the fun stuff. Regards.

The specific problem is this. This is the BASIC code of the CheckPoss function:

FUNCTION CheckPoss (u)
tot = tot + 1
f = 0
SELECT CASE u
CASE 2

Hi OReubens, I did as you told me. I got code like this:

throw -- terminate on thrown exception REPLACEABLE
#define _HAS_EXCEPTIONS 0
#include <cstdio>
#include <cstdlib>
...

When I debugged it after removing the comments, I got this in the command window:

DEBUG2: m = 3 i = 0 {f} = 2 {u} 2
DEBUG2: m = 2 i = 0

The bug is exactly here at the end of...

Hello Victor, this is the reply that I get in my console window:

188 possibilities checked. 2147 million possibilities discarded. 0 solutions found.
Press any key to continue . . .

Hello Paul, I will follow your advice from now on. Regards, Naveen.

Hello Victor, please check the code below:

#include <iostream>
#include <math.h>
#include <string>
#include <cstdlib>

Hi Paul, my plan was to follow the logic as explained for the BASIC program. With this in mind, I started writing my code. In BASIC, arrays start from 1, but in C++ they start from 0...

Hello Victor and Paul, here you go: this is the formatted code which I am trying:

#include <iostream>
#include <math.h>
#include <string>
#include <cstdlib>

Hello Paul, I figured out that in the CheckPoss function, case 8 is not being checked. I tried to find the value of 'f' by inserting the comments, which you can see. But I am still not...

#include <iostream>
#include <math.h>
#include <string>
#include <cstdlib>

using namespace std;

long tot, tsol;
int P[8][6];
int A[8][1];

Hello 2Kaud, it works perfectly fine.
In fact, I solved my problem and got the result (around 0.02). At my place, I am trying to build / include an FBA solver on the C++ platform (which...

Finally, I have found one. It is called Clp: It seems to be very detailed. I will try to implement with this one. If I encounter any problem, I will...

Sorry. This was the link which had the LP solver: It has an implementation in many languages except C++. I was trying to use this one...

Yes, I will ask him. In the meantime, I downloaded another LP solver: . It has source code on the Java platform. I want to...

31799 Please have a look at this attachment too. It is better to parse as it has whitespace in between. I retrieved it from one of the file formats posted on that website. Regards, Naveen.

Hi 2kaud, I did not write the code in Matlab. It was written by Jonathan. I am working in a different place but on the same topic: FBA (flux balance analysis) for whole cell...

Hi 2Kaud, I want to parse it like the representation shown in this header file: /*! \internal Representation of a LP constraint like:

Hello Paul, I do not have the Matlab software suite. It is not open source either. I have 2 ideas: 1. To build a parser by splitting it (the string or line) and by assigning the values...

Hello Paul, The file was created using this Matlab code it seems: ...

Hi Paul and 2kaud, I have learnt about formal grammar rules in Theory of Computation or Formal Languages. I guess we have a prebuilt lexer and parser to do this at my place. But,... I will also keep trying.

Hi Paul, Ok, if you have a close look at the file, we can easily delete the comments and have a structure like I posted in the first post: Maximize: obj: 3e-06 A - 3e-06 B + 2.7e-01 F

If it is not possible with C++, then I will try to use Python for text processing. But ultimately, I need to parse this file into C++ because the linear programming solver is coded in C++.
http://forums.codeguru.com/search.php?s=6f29854d87dc7e7419dd10b288f92787&searchid=2757643
In this project we will be looking at two different types of posts: Ask HN and Show HN. Ask HN is where users submit posts to ask the Hacker News community a question. Show HN is where users submit posts to show the Hacker News community a project, a product, or something interesting that they found. In this project we will be specifically looking at: It is important to note that the dataset being worked with has been significantly reduced from 300,000 rows to 20,000 due to the removal of submissions without comments and random sampling of the remaining submissions.

import csv

opened_file = open('hacker_news.csv')
hn = list(csv.reader(opened_file))

headers = hn[0]
hn = hn[1:]
print(headers)

ask_posts = []
show_posts = []
other_posts = []

for row in hn:
    title = row[1]
    if title.lower().startswith("ask hn"):
        ask_posts.append(row)
    elif title.lower().startswith("show hn"):
        show_posts.append(row)
    else:
        other_posts.append(row)

print(len(ask_posts))
print(len(show_posts))
print(len(other_posts))

1744
1162
17194

total_ask_comments = 0
for row in ask_posts:
    total_ask_comments += int(row[4])
avg_ask_comments = total_ask_comments / len(ask_posts)
print(avg_ask_comments)

total_show_comments = 0
for row in show_posts:
    total_show_comments += int(row[4])
avg_show_comments = total_show_comments / len(show_posts)
print(avg_show_comments)

14.038417431192661
10.31669535283993

On average, ask posts receive approximately 14 comments whereas show posts receive 10. As ask posts receive more comments, we will focus on them going forward.
import datetime as dt

result_list = []
for row in ask_posts:
    result_list.append([row[6], int(row[4])])

comments_by_hour = {}
counts_by_hour = {}
date_format = "%m/%d/%Y %H:%M"

for row in result_list:
    date = row[0]
    comment = row[1]
    time = dt.datetime.strptime(date, date_format).strftime("%H")
    if time not in counts_by_hour:
        comments_by_hour[time] = comment
        counts_by_hour[time] = 1
    else:
        comments_by_hour[time] += comment
        counts_by_hour[time] += 1

comments_by_hour

{'00': 447, '01': 683, '02': 1381, '03': 421, '04': 337, '05': 464, '06': 397, '07': 267, '08': 492, '09': 251, '10': 793, '11': 641, '12': 687, '13': 1253, '14': 1416, '15': 4477, '16': 1814, '17': 1146, '18': 1439, '19': 1188, '20': 1722, '21': 1745, '22': 479, '23': 543}

avg_by_hour = []
for hour in comments_by_hour:
    avg_by_hour.append([hour, comments_by_hour[hour] / counts_by_hour[hour]])

avg_by_hour

[['21', 16.009174311926607], ['12', 9.41095890410959], ['03', 7.796296296296297], ['23', 7.985294117647059], ['20', 21.525], ['16', 16.796296296296298], ['05', 10.08695652173913], ['07', 7.852941176470588], ['10', 13.440677966101696], ['01', 11.383333333333333], ['15', 38.5948275862069], ['17', 11.46], ['22', 6.746478873239437], ['00', 8.127272727272727], ['02', 23.810344827586206], ['11', 11.051724137931034], ['08', 10.25], ['04', 7.170212765957447], ['19', 10.8], ['09', 5.5777777777777775], ['13', 14.741176470588234], ['14', 13.233644859813085], ['06', 9.022727272727273], ['18', 13.20183486238532]]

swap_avg_by_hour = []
for row in avg_by_hour:
    swap_avg_by_hour.append([row[1], row[0]])
print(swap_avg_by_hour)

sorted_swap = sorted(swap_avg_by_hour, reverse = True)
sorted_swap

[[16.009174311926607, '21'], [9.41095890410959, '12'], [7.796296296296297, '03'], [7.985294117647059, '23'], [21.525, '20'], [16.796296296296298, '16'], [10.08695652173913, '05'], [7.852941176470588, '07'], [13.440677966101696, '10'], [11.383333333333333, '01'], [38.5948275862069, '15'], [11.46, '17'], [6.746478873239437, '22'],
[8.127272727272727, '00'], [23.810344827586206, '02'], [11.051724137931034, '11'], [10.25, '08'], [7.170212765957447, '04'], [10.8, '19'], [5.5777777777777775, '09'], [14.741176470588234, '13'], [13.233644859813085, '14'], [9.022727272727273, '06'], [13.20183486238532, '18']]

[[38.5948275862069, '15'], [23.810344827586206, '02'], [21.525, '20'], [16.796296296296298, '16'], [16.009174311926607, '21'], [14.741176470588234, '13'], [13.440677966101696, '10'], [13.233644859813085, '14'], [13.20183486238532, '18'], [11.46, '17'], [11.383333333333333, '01'], [11.051724137931034, '11'], [10.8, '19'], [10.25, '08'], [10.08695652173913, '05'], [9.41095890410959, '12'], [9.022727272727273, '06'], [8.127272727272727, '00'], [7.985294117647059, '23'], [7.852941176470588, '07'], [7.796296296296297, '03'], [7.170212765957447, '04'], [6.746478873239437, '22'], [5.5777777777777775, '09']]

print("Top 5 Hours for Ask Posts Comments")
for avg, hour in sorted_swap[:5]:
    print("{}: {:.2f} average comments per post".format(dt.datetime.strptime(hour, "%H").strftime("%H:%M"), avg))

Top 5 Hours for Ask Posts Comments
15:00: 38.59 average comments per post
02:00: 23.81 average comments per post
20:00: 21.52 average comments per post
16:00: 16.80 average comments per post
21:00: 16.01 average comments per post

In this project, we looked at both ask and show posts to find which type of post receives the most comments on average, and then at what time a post should be created to receive the most comments. From our analysis, we see that an ask post should be created around 15:00 - 16:00 to maximize the number of comments received. It must be noted that this data excluded posts with 0 comments. So, more precisely: of the posts that received comments, ask posts received more comments on average, and ask posts created between 15:00 and 16:00 received the most comments on average.
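The two-dictionary aggregation above can be condensed with collections.defaultdict, which removes the need for the `if time not in counts_by_hour` branch. This is a sketch, not part of the original notebook; the three-row sample stands in for `result_list`.

```python
import datetime as dt
from collections import defaultdict

# Hypothetical mini-sample in the same shape as result_list: [created_at, n_comments]
sample = [
    ["8/16/2016 09:55", 6],
    ["11/22/2015 13:43", 29],
    ["8/16/2016 09:01", 4],
]

# defaultdict(int) starts every missing key at 0, so we can always use +=
counts_by_hour = defaultdict(int)
comments_by_hour = defaultdict(int)

for created_at, n_comments in sample:
    hour = dt.datetime.strptime(created_at, "%m/%d/%Y %H:%M").strftime("%H")
    counts_by_hour[hour] += 1
    comments_by_hour[hour] += n_comments

avg_by_hour = {h: comments_by_hour[h] / counts_by_hour[h] for h in counts_by_hour}
print(avg_by_hour)  # {'09': 5.0, '13': 29.0}
```

The dict comprehension at the end replaces the separate `avg_by_hour` loop from the notebook.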
https://nbviewer.org/urls/community.dataquest.io/uploads/short-url/2dQlmCSEzWyvr2WmAadM7fRriKT.ipynb
The benefit of using international standards when creating software is that you can blame the stupid design decisions on "an international team of brilliant computer scientists". That way, if someone tries to body-check you on the pathetic implementation, you can defer to the aforementioned governing body. Of course, in doing so, you're calling an international standards body a bunch of crackheads, thus making you look like a crackhead. Who in their right mind would attempt to insult a group of talented individuals from around the globe, brought together for the specific purpose of collaborating on a new programming API? Me. I call it like I see it, and it's crap; specifically, CDATA handling in E4X. For background, CDATA is a way to put in "character data", which is basically text that could have funky characters in it, like HTML. Since XML is made up of HTML-like syntax, you want to make sure that you can put HTML into XML and not have it screw up when parsed. Enter the CDATA tag, a special tag that tells the parser (whoever is converting the XML text into something useful, like Flash or Flex for example) to ignore parsing anything inside of it. Kind of like the "pre" and "code" tags in HTML. Putting these special tags into E4X XML is impossible when combined with binding. Binding is an extremely useful way of creating dynamic XML from variables without having to construct it manually. Since XML is a first-class citizen in AS3, you can basically copy and paste real XML into your ActionScript and set it to a variable. Even cooler, though, is the binding; sort of like Flex's databinding, except it works inside of the XML nodes.
So, if you are building Factory classes, for example, that create XML request packets to send to some web service, you can make a function that takes some parameters to customize the request and returns the XML, like so:

function getLoginRequest ( username : String, password : String ) : XML
{
    var request:XML = <request>
        <login>
            <username>{username}</username>
            <password>{password}</password>
        </login>
    </request>;
    return request;
}

var request:XML = getLoginRequest("Jesse", "moogoo123");

I'm working with a talented PHP dev at work named Nick, and he requires me to send a URL in my request to one of the services. We don't make heavy use of attributes, and you can actually get away with a lot of HTML character data without XML getting mad, even in the old DOM way. The URL, however, kept getting URL encoded. The URL was ALREADY URL encoded because it had some parameters on it that the server would use later. However, it seems E4X does automatic URL encoding (aka encodeURIComponent or its ilk) on the text you throw into nodes. So, I tried just putting a CDATA tag instead:

var theURL:String = "";
<the_url><![CDATA[{theURL}]]></the_url>

...however, bindings don't work in CDATA tags like that. Since the CDATA is doing what it's told, and telling the XML parser in ActionScript to ignore the inner contents, this includes the binding. Shoot! So, I tried creating it as a String:

var s:String = "<![CDATA[" + theURL + "]]>";
<the_url>{s}</the_url>

...but then it URL encodes the URL again!!!

<the_url><![CDATA[]]></the_url>

Son of a... I then tried various other ways of doing the same thing, all to no avail. In DOM, we had XML.ignoreWhite, which basically told the parser to ignore whitespace. There doesn't seem to be the same sort of setting for E4X to turn off automatic encoding. Furthermore, you can treat the CDATA node as just a normal String, unlike the old DOM, which was a tad more explicit, like node.firstChild vs. node.firstChild.firstChild (or was it nodeValue... forget). Anyway, DAMMIT!
Michael Schmalle had a fix. You CAN bind to functions as well. It'll do an automatic invoke (just like a getter, for example), and you can return whatever you want. So, doing this:

function cdata(theURL:String):XML
{
    var x:XML = new XML("<![CDATA[" + theURL + "]]>");
    return x;
}

<the_url>{cdata("")}</the_url>

Flexcoders for the win. I've gotten used to namespaces. They make sense; while verbose code, at least verbose pays off in AS3 with speed at runtime. CDATA handling, however, is an f'ing joke. If it DOES handle it well, then the docs are an f'ing joke. Neither of which is funny. No one cares, though, because I got a fix and Nick could remove the Base64 decoding hack he put in for me as a temporary band-aid.

wait, are you implying that Macrodobe would publish insufficient documentation? Why, I never! Tim June 12th, 2007

Thanks for posting this. Mike Britton June 13th, 2007

Considering the auto-url encoding that goes on when sharing xml data with a php driven backend... how many times is the xml being url encoded/decoded and by whom? I am sure php does it before it gets sent off to Flex. I agree Jesse. Someone else's idea of helpfulness can be a severe drag. Randy Troppmann June 15th, 2007

David Frankson December 18th, 2007

I have just tried your method and I also tried this function cdata(theURL:String):XML; still have a parse error. mete March 2nd, 2009

Thanks Jesse. Helped a lot! Raz March 2nd, 2009

So I haven't needed this until now... but man did it save me some time. I believe that's TWO beers I owe you now. Jassa March 24th, 2009

As usual Jesse, awesome article. Love the truncated expletives. Best of all, thank you for an informative and helpful posting. Thomas Burleson April 24th, 2009
http://jessewarden.com/2007/06/e4x-xml-binding-cdata.html
With some minor adjustments, we can use the C++ string class to accomplish our goal. We will use the C++ member function 'append' from the string class. My solution is fairly clean and simple. Here is the source:

4. Write a program that asks the user to enter his or her first name and then last name, and that then constructs, stores, and displays a third string consisting of the user's last name followed by a comma, a space, and first name. Use string objects and methods from the string header file. A sample run could look like this:

Enter your first name: Flip
Enter your last name: Fleming
Here's the information in a single string: Fleming, Flip

#include <iostream>
#include <string>

using namespace std;

int main()
{
    string firstName;
    string lastName;
    string str;

    // Gather input; could use getline here instead.
    cout << "Enter your first name: ";
    cin >> firstName;
    cin.ignore();
    cout << "Enter your last name: ";
    cin >> lastName;

    // Build "Last, First" using the string member function append.
    str.append(lastName);
    str.append(", ");
    str.append(firstName);

    cout << "Here's the information in a single string: " << str << endl;
    cin.get();
    return 0;
}
https://rundata.wordpress.com/2012/10/18/c-primer-chapter-4-exercise-4/
5 - Building the Player

This section will guide you through creating the player Prefab used in this tutorial from scratch, so we'll cover every step of the creation process. It's always a good approach to try to create a player Prefab that can work without PUN being connected, so that it's easy to quickly test, debug, and make sure everything at least works without any network features. Then, you can slowly and surely build up and modify each feature into a network-compliant character. Typically, user input should only be activated on the instance owned by the player, not on other players' computers. We'll cover this in detail below.

Contents
- The Prefab Basics
- CharacterController
- Animator Setup
- Camera Setup
- Beams Setup
- Health Setup

The Prefab Basics

The first and important rule to know about PUN is that a Prefab that should get instantiated over the network has to be inside a Resources folder. The second important side effect of having Prefabs inside Resources folders is that you need to watch their names. You should not have two Prefabs under your Assets' Resources paths named the same, as Unity will pick the first one it finds. So always make sure that within your Project Assets there are no two Prefabs within a Resources folder path sharing the same name.

- Drag and drop Robot Kyle onto the "Scene Hierarchy".
- Rename the GameObject you've just created in the hierarchy to My Robot Kyle
- Drag and drop My Robot Kyle into /PunBasics_tutorial/Resources/

We have now created a Prefab that is based on the Kyle Robot FBX asset, and we have an instance of it in the hierarchy of your scene Kyle Test. Now we can start working with it.

CharacterController

Let's add a CharacterController Component to the My Kyle Robot instance in the hierarchy.
You could do this directly on the Prefab itself, but we need to tweak it, so this is quicker this way. This Component is a very convenient Standard Asset provided by Unity that lets us produce typical characters faster using the Animator, so let's make use of these great Unity features.

Double click on My Kyle Robot to have the Scene View zoom in. Notice the "Capsule Collider" centered at the feet; we actually need the "Capsule Collider" to match the character properly. In the CharacterController Component, change the Center.y property to 1 (half of its Height property).

Kyle Robot Capsule Collider

Hit "Apply" to commit the change we made. It's a very important step: we've edited an instance of our prefab My Kyle Robot, but we want these changes to apply to every instance, not just this one, so we hit "Apply".

Apply Prefab Changes

Animator Setup

Assigning An Animator Controller

The Kyle Robot FBX asset needs to be controlled by an Animator Graph. We won't cover the creation of this graph in this tutorial, so we provide a controller for it, located in your project assets. A nice feature of the Animator Component is the ability to actually move the character around based on its animation. This feature is called Root Motion, and there is a property Apply Root Motion on the Animator Component that is true by default, so we are good to go. So, in effect, to have the character walking, we just need to set the Speed Animation Parameter to a positive value and it will start walking and moving forward. Let's do this!

Animator Manager Script

Let's create a new script where we are going to control the Character based on user input.

- Save the Script PlayerAnimatorManager

Animator Manager: Speed Control

The first thing we need to code is getting the Animator Component so we can control it.

- Make sure you are editing the script PlayerAnimatorManager
- Create a private variable animator of type Animator
- Store the Animator Component in this variable within the Start() method.
private Animator animator;

// Use this for initialization
void Start()
{
    animator = GetComponent<Animator>();
    if (!animator)
    {
        Debug.LogError("PlayerAnimatorManager is Missing Animator Component", this);
    }
}

Notice that since we require an Animator Component, if we don't get one, we log an error so that it doesn't go unnoticed and gets addressed straight away by the developer. You should always write code as if it's going to be used by someone else :)

Notice also that we've squared both inputs. Why? So that the result is always a positive absolute value, as well as adding some easing. Nice subtle trick right here. You could use Mathf.Abs() too; that would work fine. We also add both inputs to control Speed, so that when pressing just the left or right input, we still gain some speed as we turn. All of this is very specific to our character design, of course; depending on your game logic, you may want the character to turn in place, or have the ability to go backward. The control of Animation Parameters is always very specific to the game.

Test, Test, 1 2 3...

Let's validate what we've done so far. Make sure you have the Kyle Test scene opened. Currently, in this scene, we only have a camera and the Kyle Robot instance; the scene is missing ground for the Robot to stand on, so if you ran the scene now, the Kyle robot would fall. Also, we won't care about lighting or any fanciness in this scene; we want to test and verify that our character and scripts are working properly.

- Add a "Cube" to the scene. Because a cube has a "Box Collider" by default, we are good to use it as-is for the floor.
- Position it at 0,-0.5,0 because the height of the cube is 1; we want the top face of the cube to be the floor.
- Scale the cube to 30,1,30 so that we have room to experiment
- Select the Camera and move it away to get a good overview. One nice trick is to get the view you like in the "Scene View", select the camera, and go to the menu "GameObject/Align With View"; the camera will match the scene view.
- Final step: move My Robot Kyle 0.1 up in y, else the collision is missed on start and the character goes through the floor; always leave some physical space between colliders for the simulation to create the contacts.
- Play the scene and press the 'up arrow' or 'a' key: the character is walking! You can test with all the keys to verify.

This is good, but there is still plenty of work ahead of us; the camera needs to follow, and we can't turn yet... If you want to work on the Camera right now, go to the dedicated section; the rest of this page will finish the Animator controls and implement the rotation.

Animator Manager Script: Direction Control

Controlling the rotation is going to be slightly more complex; we don't want our character to rotate abruptly as we press the left and right keys. We want gentle, smoothed-out rotation. Luckily, an Animation Parameter can be set using some damping.

- Make sure you are editing the Script PlayerAnimatorManager
- Create a public float variable directionDampTime within a new region

The SetFloat call we use for the Direction parameter takes two extra arguments: one the damping time, and one the deltaTime. Damping time makes sense: it's how long it will take to reach the desired value. But deltaTime? It essentially lets you write code that is frame-rate independent; since Update() is dependent on the frame rate, we need to counter this by using deltaTime. Read as much as you can on the topic and what you'll find when searching the web for this. After you have understood the concept, you'll be able to make the most out of many Unity features when it comes to animation and consistent control of values over time.

- Save the Script PlayerAnimatorManager
- Play your Scene, and use all arrows to see how well your character is walking and turning around
- Test the effect of directionDampTime: set it to 1, for example, then 5, and see how long it takes to reach maximum turning capability. You'll see that the turning radius increases with directionDampTime.
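Pulling the speed and direction pieces together, the Update() body described in this section might look like the following. This is a sketch reconstructed from the surrounding prose, not the tutorial's verbatim listing; Speed and Direction are assumed to be the Animation Parameters of the provided controller, and directionDampTime is the public field introduced above.

```csharp
// Sketch of PlayerAnimatorManager.Update() based on the description above.
void Update()
{
    float h = Input.GetAxis("Horizontal");
    float v = Input.GetAxis("Vertical");

    // This character design has no backward walk, so clamp negative input.
    if (v < 0)
    {
        v = 0;
    }

    // Squared inputs: always positive, with some easing near zero.
    animator.SetFloat("Speed", h * h + v * v);

    // Damped turn: directionDampTime smooths the change, and Time.deltaTime
    // keeps the damping frame-rate independent.
    animator.SetFloat("Direction", h, directionDampTime, Time.deltaTime);
}
```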
Animator Manager Script: Jumping

With jumping, we'll need a bit more work because of two factors: one, we don't want the player to jump if not running; and two, we don't want the jump to loop.

- Make sure you are editing the Script PlayerAnimatorManager

Insert this before we catch the user inputs in the Update() method.

- Save the Script PlayerAnimatorManager
- Test: start running and press the 'alt' key or the right mouse button, and Kyle should jump.

OK, so the first thing is to understand how we know whether the animator is running or not. We do this using stateInfo.IsName("Base Layer.Run"). We simply ask if the current active state of the Animator is Run. We must append Base Layer because the Run state is in the Base Layer. If we are in the Run state, then we listen to the Fire2 input and raise the Jump trigger.

If you want to write CameraWork from scratch, please go to the next part and come back here when done.

- Add the component CameraWork to the My Kyle Robot prefab
- Turn on the property Follow on Start, which effectively makes the camera instantly follow the character. We'll turn it off when we start the network implementation.
- Set the property Center Offset to 0,4,0, which makes the camera look higher and thus gives a better perspective of the environment than if the camera was looking straight at the player; we would see too much ground for nothing.
- Play the scene Kyle Test, and move the character around to verify the camera is properly following the character.

A couple of tricks to get this done quickly: don't add a cube directly as a child of the head; instead, create it, move it, and scale it up on its own, and then attach it to the head. That will prevent guessing the proper rotation values needed to have your beam aligned with the eyes. The other important trick is to use only one collider for both beams.
This is for the physics engine to work better; thin colliders are never a good idea and are not reliable, so we'll make a big box collider to be sure we hit targets reliably.

- Open the Kyle test Scene
- Add a Cube to the Scene, name it Beam Left
- Modify it to look like a long beam, positioned properly against the left eye of the My Kyle Robot instance in the hierarchy
- Locate the Head child

Kyle Robot Head Hierarchy

- Add an empty GameObject as a child of the Head GameObject, name it Beams
- Drag and drop Beam Left inside Beams
- Duplicate Beam Left, name it Beam Right
- Position it so that it's aligned with the right eye
- Remove the Box Collider component from Beam Right
- Adjust Beam Left's "Box Collider" center and size to encapsulate both beams
- Turn Beam Left's "Box Collider" IsTrigger property to True; we only want to be informed of beams touching players, not collisions.
- Create a new Material, name it Red Beam, save it in 'DemoAnimator_tutorial/Scenes/'

using System.Collections;

namespace Com.MyCompany.MyGame
{
    /// <summary>
    /// Player manager.
    /// Handles fire Input and Beams.
    /// </summary>
    public class PlayerManager : MonoBehaviour
    {
        // The Beams GameObject, exposed so it can be wired up in the inspector
        public GameObject beams;

        // True when the user is firing
        bool IsFiring;

        void Update()
        {
            // we only process Inputs if we are the local player
            if (photonView.IsMine)
            {
                ProcessInputs();
            }

            // trigger Beams active state
            if (beams != null && IsFiring != beams.activeSelf)
            {
                beams.SetActive(IsFiring);
            }
        }
    }
}

We've also exposed a public property Beams that lets us reference the exact GameObject inside the hierarchy of the My Kyle Robot Prefab. Let's look at how we need to work in order to connect Beams, because this is tricky within prefabs: in the Assets browser, prefabs only expose their first-level children, not sub-children, and our Beams GameObject is indeed buried inside the prefab hierarchy, so we need to do the connection from an instance in a scene and then Apply it back to the prefab itself.
- Open the Kyle test Scene
- Select My Kyle Robot in the scene hierarchy
- Add the PlayerManager Component to My Kyle Robot
- Drag and drop My Kyle Robot/Root/Ribs/Neck/Head/Beams into the PlayerManager Beams property in the inspector
- Apply the changes from the instance back to the Prefab

If you hit play and press the Fire1 input (that is the left mouse button or left Ctrl key by default), the beams will show up, and hide right away when you release.

Health Setup

Let's implement a very simple health system that will decrease when beams hit the player. Since it's not a bullet but a constant stream of energy, we'll need to account for health damage in two ways: when we get hit by a beam, and during the whole time the beam is hitting us.

- Open the PlayerManager Script
- Turn PlayerManager into a MonoBehaviourPunCallbacks to expose the PhotonView component:

using Photon.Pun;

public class PlayerManager : MonoBehaviourPunCallbacks
{

For easy debugging, we made Health a public float so that it's easy to check its value while we are waiting for the UI to be built. OK, this looks all done, right? Well... the health system is not complete without taking into account the game-over state of the player, which occurs when health hits 0. Let's do that now.

Health Checking For Game Over

To keep things simple, when the player's health reaches 0, we simply leave the room, and if you remember, we've already created a method in the GameManager script to leave the room. It would be great if we could reuse this method instead of coding the same feature twice. Duplicated code for the same result is something that you should avoid at all costs. This will also be a good time to introduce a very handy programming concept, the "Singleton". While this topic itself could fill up several tutorials, we'll only do the very minimal implementation of a "Singleton".
- Save the GameManager Script

Notice we've decorated the Instance variable with the static keyword, meaning that this variable is available without having to hold a pointer to an instance of GameManager, so you can simply do GameManager.Instance.xxx() from anywhere in your code. It's very practical indeed! Let's see how that fits our game: we reach the LeaveRoom() public method of the GameManager instance without actually having to get the Component or anything; we just rely on the assumption that a GameManager component is on a GameObject in the current scene.

OK, now we are diving into networking!
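The minimal "Singleton" wiring described here comes down to a few lines. This is a sketch under the assumption that exactly one GameManager exists in the scene; the real GameManager class has more in it.

```csharp
public class GameManager : MonoBehaviourPunCallbacks
{
    // Available everywhere as GameManager.Instance, no component lookup needed.
    public static GameManager Instance;

    void Start()
    {
        // Assumes exactly one GameManager per scene.
        Instance = this;
    }

    public void LeaveRoom()
    {
        PhotonNetwork.LeaveRoom();
    }
}
```

With this in place, the health check can simply call GameManager.Instance.LeaveRoom() when Health reaches 0.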
https://doc.photonengine.com/zh-tw/pun/current/demos-and-tutorials/pun-basics-tutorial/player-prefab
On Mon, 2009-11-30 at 14:27 +0100, Oleg Kalnichevski wrote: > On Sat, 2009-11-28 at 17:01 +0000, Tony Poppleton wrote: > > Hi, > > > > I have run a JProfiler on my application that uses HttpClient to send > > requests every 10 milliseconds. One interesting part of the results is > > that the Log creation is actually consuming about 5% of the time, which > > is significant considering I am trying to squeeze the most performance > > out. For example: > > > > public class ClientParamsStack extends AbstractHttpParams { > > private final Log log = LogFactory.getLog(getClass()); > > > > I have always used static loggers myself, which avoid this problem, so I > > did a tiny bit of research > > () and > > apparently static isn't always the right choice. > > > > > > Is there anything I can do to prevent the log creation from being a > > slowdown, short of checking out the source tree and creating my own > > custom patch? > > > > I haven't investigated fully yet, but ClientParamsStack class seems to > > be the main culprit, so is there any way I can set it to use my own > > custom implementation of this? > > > > Tony, > > If log creation does indeed have such an adverse effect on performance, > I would very much rather prefer to fix the problem in the library > itself. If you are reasonably sure performance can be improved by > eliminating certain log instances, please remove them and submit a patch > for inclusion into the official code base. > > Cheers > > Oleg > Hhhm. I get good 5 to 7% performance improvement by eliminating Log instances in ClientParamsStack and DefaultHttpRequestDirector classes. I never imagined the performance penalty of the Log lookup operation was so significant. Tony, Could you please open an issue in JIRA for this problem? Oleg > > > Many thanks, > > Tony > > > > --------------------------------------------------------------------- > >
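The pattern being discussed (a per-instance LogFactory.getLog() call versus a single static logger) can be illustrated with a stub factory that counts lookups. The class names here are made up for the demo and are not the HttpClient or commons-logging code.

```java
// Stub standing in for org.apache.commons.logging.LogFactory.
class StubLogFactory {
    static int lookups = 0;

    static Object getLog(Class<?> c) {
        lookups++;              // each call is a lookup in the real factory
        return new Object();
    }
}

// Pays the lookup cost on every construction (the pattern being criticized).
class PerInstanceLogger {
    private final Object log = StubLogFactory.getLog(PerInstanceLogger.class);
}

// Pays the lookup cost once, when the class is initialized.
class StaticLoggerHolder {
    private static final Object LOG =
            StubLogFactory.getLog(StaticLoggerHolder.class);
}

public class LoggerCostDemo {
    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            new PerInstanceLogger();
        }
        int perInstanceLookups = StubLogFactory.lookups;

        for (int i = 0; i < 1000; i++) {
            new StaticLoggerHolder();
        }
        int staticLookups = StubLogFactory.lookups - perInstanceLookups;

        System.out.println(perInstanceLookups + " vs " + staticLookups);
        // prints "1000 vs 1"
    }
}
```

For an object like ClientParamsStack that may be constructed on every request, eliminating the per-construction lookup is exactly the kind of small fixed cost that adds up at high request rates.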
http://mail-archives.apache.org/mod_mbox/hc-httpclient-users/200911.mbox/%3C1259601742.7088.27.camel@ubuntu%3E
Just a quick one. I just noticed that when trying to pass an empty string literal "" to a function, it seems to be converted to null. I want to use that argument as a context to execute a callback on, e.g. arg = fn.call(arg, xxx); The only way around it was to pass a new String('') object instead; however, I see on JSHint that you get reprimanded for using the String constructor. Any other way around this? Cheers RLM

Something strange is going on then, for with the following code I see different results in my console, depending on whether it's an empty string or null:

function show(obj) { console.log(obj); }

> show(""); <- undefined
> show(null); null <- undefined

What in your situation causes it to be changed from an empty string to null?

You're right. It seemed strange. A flaw elsewhere in my code, perhaps.

function check(x){ console.log(x); console.log({}.toString.call(x)); };
check("");
--> (an empty string)
--> [object String]

It's in my CSS parser, a call to parentLookUp. Will give it another look.

That would be this one here:

parentLookUp = function (child, fn, obj) {
    if (obj) {
        while (child = child.parent) {
            obj = fn.call(obj, child);
        }
        return obj;
    } else {
        while (child = child.parent) fn(child);
    }
},

What types of arguments are you expecting to use with that? Can you provide some examples that both include and exclude the empty string?

Paul, it needs work. I wanted the function to have some flexibility, so that down the line, if I want to pass an array to it, I can. Needs thinking out. I just knocked up a simplified example of usage:

function func(fn, obj){ return fn.apply(obj); }
function prefix () { return this.replace(/^/, 'Starts here...'); }
console.log(func(prefix, ""));

In parentLookUp, 'Starts here' would be a value supplied by a child property. An example of the code calling the function is:

parentLookUp(cssRule, function (child) { return (child.selector) ?
            this.replace(/^/, child.selector + ' ') : this;
    }, "");

Alternate usage is just a simple lookup so that a function can gain access to those properties, with no object supplied — as in the parseVars function I use to replace variables with their values.

Spotted the flaw: 'if (obj)' will never be true for an empty string. However, with the new String wrapper it is:

    var x = "";
    if (x) console.log('true');
    // (nothing logged: "" is falsy)

    var x = new String('');
    if (x) console.log('true');
    > true

Need to sort out my type checking. Something like this:

    if (obj !== undefined && obj !== null)

Okay, so what are you wanting to check instead? Just that something is passed as the third parameter?

    if (obj !== undefined) { ... }

That will let defined objects be used, even if they are an empty string too.

    if (obj !== undefined)

Yes, that's it. Sometimes you can't see the wood for the trees.
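To round the thread off, here is a runnable sketch of parentLookUp with the guard the thread settles on — obj !== undefined instead of the truthiness test if (obj) — so an explicitly passed empty-string context is kept rather than falling through to the no-context branch. The child/parent objects below are made-up stand-ins for the CSS-rule objects discussed above.

```javascript
// parentLookUp with the corrected guard: only a *missing* third argument
// selects the no-context branch; falsy-but-present values like "" are kept.
function parentLookUp(child, fn, obj) {
  if (obj !== undefined) {
    while ((child = child.parent)) {
      obj = fn.call(obj, child);
    }
    return obj;
  }
  while ((child = child.parent)) {
    fn(child);
  }
}

// A tiny parent chain: leaf -> mid -> root (made-up example data)
const root = { name: "a" };
const mid = { name: "b", parent: root };
const leaf = { name: "leaf", parent: mid };

// Accumulate names onto the (initially empty) string context.
const out = parentLookUp(leaf, function (node) {
  return this + node.name; // "this" is the accumulated context
}, "");

console.log(out); // "ba"
```

Note the callback is a function expression, not an arrow function, because arrow functions ignore the this binding that fn.call supplies.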
https://www.sitepoint.com/community/t/passing-an-empty-string-literal-to-a-function/43599
CC-MAIN-2015-48
refinedweb
466
77.64
Displaying the ResultSet in a table

Can anybody please tell me how to display the ResultSet coming from an SQL query in a table in an Applet?

Related tutorials:
- JTable populate with ResultSet — how to display the data of a ResultSet using JTable. JTable is a component of the Java Swing toolkit; the JTable class... of columns in the ResultSet object: JTable table = new JTable(data, columnNames)
- ResultSet In Java — in this section we will learn about the ResultSet in Java... and the methods that are mostly used to get the values. A ResultSet contains the data of a table after
- JDBC Updateable ResultSet Example — you can update the database table using a result set. To update the table using a result set you need to set the ResultSet... illustrates how to update a table through a ResultSet. UpdateableResultSet.java package
- Displaying files on selection of date — "show the particular txt files of the selected date. I want the Java logic for the same." Here is a Java Swing code that accepts two dates and searches... "Hi, I am developing a GUI"
- JDBC ResultSet Example — in this example, we discuss the ResultSet class...) database query results. Through this example you can see how to use ResultSet... methods.
- Rows Count Example Using JDBC ResultSet — in this tutorial you can count the number of rows in the database table using ResultSet. Through this example... package ResultSet; import java.sql.
- Scrollable ResultSet — "an example.... It will be very helpful for me.... Hi friend"
- multiple resultset in one resultset — how to retrieve multiple resultsets in one resultset in Java?
Please help.
- creation of table using a Java swing — how to create a table dynamically in Java Swing:

    Statement stmt = con.createStatement();
    ResultSet rs = stmt.executeQuery("select image from image where image
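A minimal sketch of the pattern the snippets above circle around: walk the result rows into the Object[][] data and column-name arrays that JTable's constructor takes. Since no live database is available here, the ResultSet loop is shown in a comment and hard-coded rows stand in for it; the table and column names are made up.

```java
import java.util.ArrayList;
import java.util.List;
import javax.swing.JTable;

public class ResultSetToTable {

    // With a live JDBC connection the rows list would be filled like:
    //   while (rs.next()) {
    //       rows.add(new Object[] { rs.getObject(1), rs.getObject(2) });
    //   }
    // Hard-coded rows stand in here so the conversion itself is runnable.
    static Object[][] toTableData(List<Object[]> rows) {
        return rows.toArray(new Object[0][]);
    }

    public static void main(String[] args) {
        String[] columnNames = { "id", "name" };      // made-up columns
        List<Object[]> rows = new ArrayList<>();
        rows.add(new Object[] { 1, "alpha" });
        rows.add(new Object[] { 2, "beta" });

        Object[][] data = toTableData(rows);
        JTable table = new JTable(data, columnNames); // what the view displays
        System.out.println(table.getRowCount() + " rows, "
                + table.getColumnCount() + " columns");
    }
}
```

In an Applet or frame you would then wrap the table in a JScrollPane and add it to the container; the conversion step is the same either way.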
http://www.roseindia.net/tutorialhelp/allcomments/3574
CC-MAIN-2015-11
refinedweb
342
51.34
Gateway side of RC-MAC.

#include <introspected-doxygen.h>

Gateway side of RC-MAC. ns3::UanMacRcGw is accessible through the following paths with Config::Set and Config::Connect:

This MAC protocol assumes a network topology where all traffic is destined for a set of GW nodes which are connected via some out-of-band (RF?) means. UanMacRcGw is the protocol which runs on the gateway nodes.

Definition at line 60 of file uan-mac-rc-gw.h.

Member functions:
- Assign a fixed random variable stream number to the random variables used by this model. Returns the number of streams (possibly zero) that have been assigned. Implements ns3::UanMac. Definition at line 737 of file uan-mac-rc-gw.cc. References NS_LOG_FUNCTION.
- Attach PHY layer to this MAC. Some MACs may be designed to work with multiple PHY layers; others may only work with one. Implements ns3::UanMac. Definition at line 212 of file uan-mac-rc-gw.cc. References ns3::MakeCallback().
- Clears all pointer references. Implements ns3::UanMac. Definition at line 79 of file uan-mac-rc-gw.cc.
- Definition at line 103 of file uan-mac-rc-gw.cc.
- Enqueue packet to be transmitted. Implements ns3::UanMac. Definition at line 199 of file uan-mac-rc-gw.cc. References NS_LOG_WARN.
- Implements ns3::UanMac. Definition at line 187 of file uan-mac-rc-gw.cc.
- Implements ns3::UanMac. Definition at line 225 of file uan-mac-rc-gw.cc.
- Implements ns3::UanMac. Definition at line 193 of file uan-mac-rc-gw.cc.
- Implements ns3::UanMac. Definition at line 206 of file uan-mac-rc-gw.cc.
https://coe.northeastern.edu/research/krclab/crens3-doc/classns3_1_1_uan_mac_rc_gw.html
CC-MAIN-2022-05
refinedweb
259
52.66
Bit of a long story why I'm trying to do this - but something pretty awesome if it works! I've got a Content Part that displays fine from this Driver:

    results.Add(ContentShape("Parts_Foo_Bar",
        () => shapeHelper.Parts_Foo_Bar(
            ContentPart: part,
            ContentItem: part.ContentItem,
            ContentItems: list)));
    ...
    return Combined(results.ToArray());

So ContentPart, ContentItem, ContentItems can all be found from my view by accessing @Model.ContentItem etc. Now say I write the following class in another module:

    public class Shapes : IShapeTableProvider {
        public void Discover(ShapeTableBuilder builder) {
            builder.Describe("Parts_Foo_Bar")
                .OnCreated(OnCreated);
        }

        private void OnCreated(ShapeCreatedContext context) {
            ContentPart part = context.Shape.ContentPart;
            ContentItem item = context.Shape.ContentItem;
            if (item == null || part == null)
                throw new Exception("No item or part");
        }
    }

What I'm finding is that Shape.ContentPart and Shape.ContentItem are null, and I can't find any way to access the Model that will end up in the view. I want to add a wrapper at this stage based on certain properties of the part, but I can't find it. Any clues on this?

I don't understand why you create the class Shapes.cs. And how do you make use of it?

IShapeTableProvider is used in Core.Shapes and other places to describe properties about a whole bunch of the shapes that you normally see on an Orchard page. I want to use it to add a Wrapper around my Parts_Foo_Bar (obviously I have simplified an example from my real application; I don't really have a Parts_Foo_Bar, but I want to be able to do this with any part). But I can't add the Wrapper until I've checked settings from my part. However, I can't *find* the part because both Shape.ContentPart and Shape.ContentItem are null. I'm wondering how I can access the model that I created in my Driver.

This code is going to run for *all* shapes. My guess would be that this does throw, but not for your shape.
You just need to verify that you have the right shape. Can you try using the brand new shape.Metadata.OnDisplay(Action)? It's called just before the HTML gets rendered, and you can even return your own HTML, to do some caching for instance.

bertrand: Surely it will only run for "Parts_Foo_Bar" - isn't that the point of builder.Describe? (I verified this in context.Shape.Metadata.Type.)

I also tried implementing IShapeFactoryEvents (which I think is what you're thinking of) but got the same result with this code (i.e. contentItem == null):

    public void Created(ShapeCreatedContext context) {
        if (context.ShapeType == "Parts_Foo_Bar") {
            ContentItem contentItem = context.Shape.ContentItem;
            if (contentItem != null) {
                if (contentItem.As<FooPart>().Bar) {
                    var shapeMetadata = (ShapeMetadata)context.Shape.Metadata;
                    shapeMetadata.Wrappers.Add("FooBarWrapper");
                }
            }
        }
    }

sebastien: I didn't know about that, will give it a try - but can you explain what the difference is between ShapeMetadata.OnDisplaying, IShapeDisplayEvents.Displaying, and IShapeTableProvider => builder.Describe("...").OnDisplaying(...)? These look like three different ways to hook into the same event. I've already tried it from IShapeDisplayEvents and IShapeTableProvider, still no access to ContentItem. However - I'm pretty sure this is actually the correct result, I'm just trying to access it in the wrong way... Shape Tracing confirms the Shape itself has no ContentItem property; it's on the Model tab instead... but I want to know where I can access the model at a point where I can also add a Wrapper. Thanks for both your help so far!

Ok - in ShapeMetadata.OnDisplaying it works. So... why does it work there and not in the other places?

My bad. Why doesn't it work in the other places? I don't know, too early probably, but it does seem to make the event pretty much useless. Asking around.

The shape Creating/Created events are really to build a named shape before any specific information is available.
You can think of those events as firing right before and right after a parameterless constructor. So the best use for those two events is if you want to change a shape's base class (on creating) or attach shape-specific system-wide behaviors (on created) before any method invocation or property assignment occurs. OnDisplaying, on the other hand, fires after all of the dust is settled and you're just about to render. That moment gives you the greatest possible chance you'll be able to work with the final values that all of the different bits of code have thrown onto the shape, no matter when or how they occurred.

One last observation - if you really want to jump into the shape from the beginning - is that in the OnCreating (or created) events for a shape you can attach an additional behavior. That behavior can then see all of the property assignments and method invocations as they occur - so from that standpoint you can have code that adds or updates a wrapper template when the ContentItem property is assigned. It's very flexible. :)

Thanks... I think I follow it now. It's just sometimes really difficult to see what's going on when all the important bits are dynamic objects that you can't really look into! So would it therefore be possible in one of the creating hooks to push a shape into a different Zone?

Oh yeah. I have a module that does that. Never found the time to finish it but yeah.

I think that's a nice place to do it - I couldn't manage it in a Handler as you suggested in that other thread, so I ended up just creating my own Driver for the part I wanted to move, building custom shapes and hiding the normal ones. But it could be a bit of a pain where shapes have a more complicated Display method. I think an ideal thing would be to just introduce it into Placement.info at some stage, so you can use <Zone Shape_Name="Header"/>. Eventually :)
http://orchard.codeplex.com/discussions/252007
CC-MAIN-2016-40
refinedweb
1,033
66.23
Removing Menu Items and Child Menu Items
Discussion in 'ASP .Net' started by Larry29; replied to by Steven Cheng [MSFT], Mar 1, 2006.

Related threads:
- Menu Control - expanding child items causes browser scroll bars to appear (jojobar, Jul 13, 2006, ASP .Net; 0 replies, 702 views; last: jojobar, Jul 13, 2006)
- How do I: Main thread spawn child threads, which child processes... control those child processes? (Jeff Rodriguez, Dec 5, 2003, C Programming; 23 replies, 1,453 views; last: David Schwartz, Dec 9, 2003)
- Removing a namespace prefix and removing all attributes not in that same prefix (Chris Chiasson, Nov 12, 2006, XML; 6 replies, 782 views; last: Richard Tobin, Nov 14, 2006)
- Programmatically removing submenu child items from menu control (jobs, Oct 27, 2007, ASP .Net; 1 reply, 937 views; last: Larry Bud, Oct 27, 2007)
http://www.thecodingforums.com/threads/removing-menu-items-and-child-menu-items.520962/
CC-MAIN-2016-07
refinedweb
140
72.16
The VBoxManage createvm command creates a new XML virtual machine definition file.

You must specify the name of the VM by using --name name. This name is used by default as the file name of the settings file (which has the .xml extension) and of the machine folder, which is a subfolder of the .config/VirtualBox/Machines folder. Note that the machine folder path name varies based on the OS type and the Oracle VM VirtualBox version. Ensure that the VM name conforms to the host OS's file name requirements. If you later rename the VM, the file and folder names will be updated to match the new name automatically.

The --basefolder path option specifies the machine folder path name. Note that the names of the file and the folder do not change if you rename the VM.

The --group group-ID,... option assigns the VM to the specified groups. Note that group IDs always start with / so that they can be nested. By default, each VM is assigned membership to the / group.

The --ostype ostype option specifies the guest OS to run in the VM. Run the VBoxManage list ostypes command to see the available OS types.

The --uuid uuid option specifies the universal unique identifier (UUID) of the VM. The UUID must be unique within the namespace of the host or of its VM group memberships. By default, the VBoxManage command automatically generates the UUID.

The --default option applies a default hardware configuration for the specified guest OS. By default, the VM is created with minimal hardware.

The --register option registers the VM with your Oracle VM VirtualBox installation. By default, the VBoxManage createvm command creates only the XML configuration for the VM but does not register the VM. If you do not register the VM at creation, you can run the VBoxManage registervm command after you create the VM.
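Putting the options above together, a hedged sketch of a one-step create-and-register invocation. The VM name, OS type, base folder, and group are made-up examples; adjust them to your host, and note the sketch only runs where VirtualBox is actually installed.

```shell
# Hypothetical example of the createvm workflow described above:
# create a VM with default hardware for its OS type and register it.
create_vm() {
  VBoxManage createvm \
    --name "demo-vm" \
    --ostype Ubuntu_64 \
    --basefolder "$HOME/vms" \
    --group "/testing" \
    --default \
    --register
}

# Only attempt it where VirtualBox is present on this host.
if command -v VBoxManage >/dev/null 2>&1; then
  create_vm
else
  echo "VBoxManage not found; skipping"
fi
```

Had --register been omitted, the same VM could be registered later with VBoxManage registervm and the path to the generated .xml settings file.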
https://docs.oracle.com/en/virtualization/virtualbox/6.1/user/vboxmanage-createvm.html
CC-MAIN-2020-29
refinedweb
312
66.33
I have a DataReader that is getting results from my stored procedure. Depending on certain values (such as "ismarried" = true) the stored procedure returns 10 columns, but if "ismarried" = false it returns only 5 columns. In my ASP.NET page my DataReader is expecting 10 columns no matter what, and I wanted to know if there was a way in my ASP.NET C# code to have optional parameters. I do know you can use ISNULL("Column", '') in SQL, but instead of doing that I was hoping there was a way to tell my DataReader that these 5 parameters might not always exist. Thanks

You can tell how many columns the stored procedure returned by using the FieldCount property. If it returns 5 or 10, your code can react accordingly.

Instead of checking the columns returned and then mapping based on the count of fields, a cleaner solution would be to create a class, let's say Person, like so:

    public class Person
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public bool IsMarried { get; set; }
        //etc...
    }

Then, you can use Dapper to return your result:

    var people = cnn.Query<Person>("spName", commandType: CommandType.StoredProcedure).ToList();

Dapper will map the fields to your class properties and ignore any missing fields. Be aware that the property names will need to match the field names from the database. This will cut down on any logic checks and having to map each property by hand to the field returned.
https://dapper-tutorial.net/knowledge-base/20170098/sqldatareader-optional-parameter
CC-MAIN-2021-21
refinedweb
250
73.07
Updated 11/16/09 – I think this is pretty much resolved now – read below:

I have seen this at several customer sites, and even in my own lab. You might find the following alerts (below) stemming from the DNS MP. To start, I would recommend the resolutions in my previous post: Getting lots of Script Failed To Run alerts - WMI Probe Failed Execution - Backward Compatibility

Everything at that post above helps; however, it does not resolve all of the alerts, 100% of the time. After about two weeks on a Windows 2003 DC/DNS server, the problem can re-occur with WMI failures and script errors. Restarting the computer, or restarting WMI, will immediately resolve it. This appears to be an issue with the Windows DNS WMI provider that causes this Generic Failure when trying to access the WMI-based DNS namespace and query it. It appears that there is a TLS slot leak every time the DNS WMI provider unloads, and that the DNS WMI provider will unload after 5 minutes of not being accessed. Those who patch their computers monthly likely won't even see this issue, or will only see it for a short time until the next patch cycle.

To resolve it, I have written a monitor (example and sample MP below) which queries the DNS WMI namespace every 4 minutes, which keeps the provider from unloading. Therefore, the DNS provider stays loaded and never has to unload and leak a TLS slot. This has actually shown to resolve some other issues with scripts and latency, caused by the DNS WMI provider having to load back up after an unload.

The events/alerts you may see to define the error condition:

WMI Probe Module Failed Execution
Log Name: Operations Manager
Source: Health Service Modules
Event Number: 10409
Description: Object enumeration failed
Query: 'Select EventLogLevel from MicrosoftDNS_Server'
HRESULT: 0x80041001
Details: Generic failure
One or more workflows were affected by this.
Workflow name: Microsoft.Windows.DNSServer.2003.Monitor.ServerLoggingLevel
Instance name: dc01.opsmgr.net
Instance ID: {11056C4C-B933-98ED-3DC5-4B9AAE232B23}
Management group: PROD1

WMI Probe Module Failed Execution
Log Name: Operations Manager
Source: Health Service Modules
Event Number: 10409
Description: Object enumeration failed
Query: 'Select Name, Shutdown, Paused from MicrosoftDNS_Zone'
HRESULT: 0x80041001
Details: Generic failure
One or more workflows were affected by this.
Workflow name: Microsoft.Windows.DNSServer.2003.Monitor.ZoneRunning
Instance name: test.opsmgr.net (dc01.opsmgr.net)
Instance ID: {E0A3BD98-04B7-0C44-B26D-F8E6175456D1}
Management group: PROD1

Script or Executable Failed to run
Log Name: Operations Manager
Source: Health Service Modules
Event Number: 21406
Description: The process started at 6:26:59 AM failed to create System.Discovery.Data. Errors found in output: C:\Program Files\System Center Operations Manager 2007\Health Service State\Monitoring Host Temporary Files 10\8675\DNS2003ComponentDiscovery.vbs(123, 9) SWbemServicesEx: Generic failure
Command executed: "C:\WINDOWS\system32\cscript.exe" /nologo "DNS2003ComponentDiscovery.vbs" {C984657D-0255-F11B-2C76-1542793A684D} {11056C4C-B933-98ED-3DC5-4B9AAE232B23} dc01.opsmgr.net true true true "" false 700 1
Working Directory: C:\Program Files\System Center Operations Manager 2007\Health Service State\Monitoring Host Temporary Files 10\8675\
One or more workflows were affected by this.
Workflow name: Microsoft.Windows.DNSServer.2003.Discovery.Components
Instance name: dc01.opsmgr.net
Instance ID: {11056C4C-B933-98ED-3DC5-4B9AAE232B23}
Management group: PROD1

Script or Executable Failed to run
Log Name: Operations Manager
Source: Health Service Modules
Event Number: 21405
Description: The process started at 3:58:21 AM failed to create System.Discovery.Data, no errors detected in the output. The process exited with 0
Command executed: "C:\WINDOWS\system32\cscript.exe" /nologo "DNS2003Discovery.vbs" {C8655A28-E27E-C6ED-B158-8569219A71A6} {89AC2E61-9144-4B94-9028-5A25F547213E} dc01.opsmgr.net false
Working Directory: C:\Program Files\System Center Operations Manager 2007\Health Service State\Monitoring Host Temporary Files 10\8515\
One or more workflows were affected by this.
Workflow name: Microsoft.Windows.DNSServer.2003.ServerDiscovery
Instance name: dc01.opsmgr.net
Instance ID: {89AC2E61-9144-4B94-9028-5A25F547213E}
Management group: PROD1

Script or Executable Failed to run
Event Type: Error
Event Source: Health Service Script
Event Category: None
Event ID: 1152
Date: 5/19/2009
Time: 11:18:48 AM
User: N/A
Computer: DC01
Description: DNS2003Discovery.vbs : The Query 'select * from MicrosoftDNS_Server' did not return any valid instances. Please check to see if this is a valid WMI Query. Generic failure

So... at this point, you have updated Cscript to 5.7 (KB955360) and applied the KB933061 hotfix to stabilize WMI. However, after a period of time these errors start happening again. Since the issue is a problem caused by the Windows DNS WMI provider unloading, we need to keep it loaded. Since I believe it unloads after 5 minutes of inactivity, we need to make sure we query WMI at least every 4 minutes. The simplest, cheapest, and easiest way I know to do that is to create a simple performance monitor that queries the DNS WMI namespace for a value every 4 minutes. I have a complete write-up on how to create this monitor at THIS LINK.

I will start by creating a new Management pack - "Custom – DNS Addendum MP". Next, I will create a new monitor: Unit Monitor, WMI Performance Counters, Static Thresholds, Single Threshold. Give the monitor a name. I used "Custom - DNS Monitor Query to keep namespace loaded". For the monitor target - since this is a problem only on Windows Server 2003, I chose "DNS 2003 Server".
We do not need to do this on Server 2008. For the Parent monitor, I chose performance.

Next, we need to fill in the namespace, query, and frequency. I input "root\MicrosoftDNS" for the namespace, and "Select EventLogLevel from MicrosoftDNS_Server" for the query. Since I want it to run every 4 minutes, that would be 240 seconds.

For the performance mapper section - this is the most confusing - I explain it a bit deeper at THIS LINK. For now, just follow the graphic below.

Next, on the Threshold page... since this monitor is not really supposed to do anything other than query WMI on a schedule, we don't want it to alert. The query we are running for this example will return an integer from 0-10, so I will set this to 99, a number it could never return, so the monitor will never change state. Next, on the Alert Settings, do NOT generate alerts for this monitor. Click Create. That is it. For those who want to test this - I am attaching my sample management pack with only this monitor in it. To use my MP, you will need to have SCOM R2; otherwise you can create your own monitor as above.

Hello Kevin. Many times the solutions you describe work perfectly. However, on many occasions I find that for the DNS MP the updates of WMI and Windows Scripting Host are not sufficient. The DNS class in WMI has to be recompiled as well. Then all errors are gone and everything runs like clockwork again. So on DNS servers I start three actions:
- Updating WMI
- Updating Windows Scripting Host
- Recompiling the DNS class in WMI

Here is the only problem I have with recompiling the MOF: in my testing, recompiling the MOF is only necessary when the DNS WMI namespace is missing or corrupt. If after bouncing the WMI service you still cannot manually query any of the WMI objects, or cannot even connect to the WMI namespace, then I agree - recompile the MOF. However, in my testing, I did all three - updated WMI, recompiled the MOF, and then updated cscript. After 1 month passed, the issue returned.
It seems to take a long amount of uptime for this random error condition to present itself. That is why I am adding the additional WMI buffer space now. This supposedly will address the issue for most people. I will let you know in a month or two. :-)

Oops. So even with recompiling the MOF the issue returns... Good to know about adding additional WMI buffer space. If that solves this problem also in the long term, it is good to know. Thanks again for sharing such good information with the community.

Hi Kevin, I'm afraid it does not help. We recompiled the DNS MOF, installed KB933061, increased the buffer space, and finally rebooted the systems. It was a relief for some time, but the errors reappeared after some time. I'm afraid there is something wrong with either WMI or the DNS MOF, or both?

Yep... totally agree. I am seeing the same now. The current theory is that something is wrong with the WMI provider for DNS... this isn't related to SCOM. The issue is that this DNS WMI provider leaks a TLS slot, and when they are exhausted (takes about 2-3 weeks for me) the problem occurs... When this happens, you can bounce WMI/reboot the computer, and the problem goes away for 2-3 weeks. The provider unloads after 5 minutes of inactivity. If you did something - say, run a timed script that does something VERY lightweight, like run a simple WMI query and nothing else - against the DNS WMI namespace, and run this script every 2 minutes, this would keep the provider from loading/unloading as caused by the SCOM MP... and the TLS slots will not leak because the DNS provider is not unloading. I was also thinking of maybe writing a threshold monitor against a WMI perf object, that won't change/alert... and this might keep the provider loaded and have even less impact. That is a theory; I have not had time to test and validate this.

I found an existing monitor that appears to do the same thing.
Under the class DNS 2003 Server there is a configuration monitor named "DNS 2003 Event Logging Level Monitor". I created an override to change the Interval from 900 to 240. Hope this helps.

Mark - that is a really good idea... as long as that monitor queries the WMI namespace. The only concern I would have is the "expense" of that monitor... if it uses a lot of CPU when it runs it might have a tad more impact on the server... but overall I like it!

Has anyone been able to try modifying the monitor that Mark mentioned in his post (02/19/2010)? I'm stuck with the same issue here, but it only happens on our production DCs and I can't really try this out... Francis

Hi, I am not sure whether this is the best place for my problem. We have a DNS memory leak in a DC (wk3/sp2/x86) that occurs about every two weeks. It started after we deployed SCOM R2 agents. I wonder whether it might be related to this topic. Thanks.

Hi, everyone! I have the same issue - monitoring DNS in SCOM always fails! The query 'Select Name, Shutdown, Paused from MicrosoftDNS_Zone' always returns the 0x80041001 error. I have tried to apply various updates and hotfixes, and to create a custom MP for DNS according to Kevin's post - nothing helps! I have found that the WMI query to the DNS server in SCOM has a bug - the query should be like this: 'Select Name, Paused, Shutdown from MicrosoftDNS_Zone'. This query returns the proper result! Does anyone know how to change this query in SCOM?

If you are getting a generic failure, the problem is the leak in WMI. If you bounce the server, does the problem go away for a little while? Those two queries are identical; changing the order doesn't matter, both are valid.

Well, you are right, Kevin. Failure of the query 'Select Name, Shutdown, Paused from MicrosoftDNS_Zone' depends on the position of the stars... After rebooting the server the problem goes away for a couple of weeks. But after that time it happens again. All recommended updates to WMI and the OS are installed.
Does anyone have an issue like this on Windows 2008 DNS servers? P.S. I will try to ask MS tech support about this problem.

Alex - are you getting this on 2003 servers or 2008 or 2008 R2? There is no TLS leak in the 2008 DNS WMI provider... this specific issue should impact 2003 servers only... unless you are hitting something else. Are your WMIPRVSE processes using a lot of private bytes (look in Task Manager)? Are you sure you set up a rule or used my MP to query the WMI provider every 4 minutes to keep it from unloading? That is the fix... for 2003 servers at least.

Kevin, all my DNS servers are Windows 2003 R2 and I get this error from all these servers. I have 3-6 WMIPRVSE processes with 5-20 MB of memory. I have upgraded SCOM to the R2 version and will try your MP on this version too - on SCOM 2007 it has no effect.

Alex - this is still Server 2003 then, and has the leak in WMI. You MUST use something like my MP to keep the provider from unloading, or you will be affected. This is a textbook example. You can hotfix and patch and tweak to your heart's content - you will not solve the root cause. The root cause is that when the DNS WMI provider unloads after a period of inactivity, it leaks a TLS slot. If you use my example MP, this will query the provider enough to keep it from unloading, and you will work around the issue in the Windows WMI provider.
http://blogs.technet.com/b/kevinholman/archive/2009/06/29/errors-alerts-from-the-dns-mp-script-failures-wmi-probe.aspx
CC-MAIN-2015-06
refinedweb
2,230
72.97
11 September 2012 09:34 [Source: ICIS news]

SINGAPORE (ICIS)--Sinopec and PetroChina are expected to refine a combined total of 29.5m tonnes of crude in September, with daily throughput up by 1.65% from the previous month, sources from both companies said on Tuesday.

Chinese oil and gas company Sinopec's crude throughput target for September is 17.5m-17.6m tonnes, with daily throughput up by 0.4% month on month, according to a company source. Sinopec shut its 13m tonne/year

PetroChina, a listed subsidiary of China National Petroleum Corporation (CNPC), plans to process 11.9m-12m tonnes of crude in September, with daily throughput up by 3.5% from August. PetroChina restarted its
http://www.icis.com/Articles/2012/09/11/9594351/sinopec-petrochina-hike-september-crude-throughput-by.html
CC-MAIN-2014-41
refinedweb
120
70.19
Hi friends, I have some trouble running a 3rd-party program. This 3rd-party program takes one file as input, does some processing on it, and creates an output in a different folder with the same name but a different file extension. I do it manually from the DOS prompt. I automated this by using the following code:

    import os
    import sys

    for filename in os.listdir('d:\\path2\\subpath2'):
        if filename.find('.txt') >= 0:
            f1 = filename.split('.')
            print f1, filename
            x = "d:\\path1\\subpath1\\gpg -r encr -o d:\\dest1\\subdest\\" + f1[0] + ".txt.gpg -e d:\\path2\\subpath2\\" + filename
            os.system(x)

When I execute this command manually in the DOS prompt, I get a (y/n) input, for which I usually give 'y'. But when the Python script executes it, the script stops to ask for the (y/n) input for each file. I want the Python script itself to give 'y' as the default input, without prompting for each file. Is there any way this can be done?
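One way, sketched below: replace os.system with subprocess.run and feed the confirmation through the child's stdin via input='y\n'. The gpg paths and the -r encr recipient are carried over from the script above as-is, and whether gpg reads its prompt from stdin rather than the console is an assumption here - if it does not, gpg's own --yes or --batch options are the more reliable route. The demo at the bottom uses a small stand-in child process so the stdin mechanism itself is runnable anywhere.

```python
import os
import subprocess
import sys

def encrypt_all(src_dir, dest_dir, gpg_exe):
    # Mirrors the original loop, but answers each (y/n) prompt with "y"
    # by piping it to the child's stdin instead of typing it by hand.
    for filename in os.listdir(src_dir):
        if filename.endswith('.txt'):
            base = filename.split('.')[0]
            cmd = [gpg_exe, '-r', 'encr',
                   '-o', os.path.join(dest_dir, base + '.txt.gpg'),
                   '-e', os.path.join(src_dir, filename)]
            # input='y\n' is delivered to the child's stdin, so the
            # script no longer stops at each confirmation prompt.
            subprocess.run(cmd, input='y\n', text=True)

# Stand-in demo (no gpg needed): a child that asks y/n on stdin.
child = "ans = input(); print('confirmed' if ans == 'y' else 'aborted')"
proc = subprocess.run([sys.executable, '-c', child],
                      input='y\n', capture_output=True, text=True)
print(proc.stdout.strip())  # confirmed
```

On Python 2 (which the print statement above suggests the original used), the same idea works with subprocess.Popen and proc.communicate('y\n').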
https://www.daniweb.com/programming/software-development/threads/35430/running-a-3rd-party-program-from-python
CC-MAIN-2018-05
refinedweb
172
67.25
The Income Tax department wants to make it (processing of the ITR form) very fast, maybe in a day or a week. It will be big good news for Income Tax payers when this becomes a reality. Central Board of Direct Taxes (CBDT) Chairman Sushil Chandra has confirmed that Income Tax payers will soon get pre-filled ITR forms that will make the process of filing returns easier.

Pre-filled ITR forms: What are they and what the CBDT chairman has confirmed

– The I-T department is working on pre-filled income tax return (ITR) forms. These would be based on tax deducted at source (TDS) details filed with the Income Tax department by the employer or any other entity, as per a PTI report.
– According to CBDT Chairman Sushil Chandra, "You will be getting a pre-filled return form on which we are working because your TDS is with us. So, we are moving towards that direction."
– "We want to make it (processing of return form) very fast, maybe in a day or a week. That system is also under preparation and it may take a year or so. So that you get a pre-filled form, and you can justify that form is correct. We will accept it," Chandra added.

Chandra says only 0.5 per cent of cases had been taken up for scrutiny, and even these cases have been selected by the computer system. "There is no discretion (in selecting income tax-related cases). Our endeavour is to curtail the discretion of tax officials," Chandra added.
https://www.financialexpress.com/money/income-tax/income-tax-payers-alert-itr-filing-process-set-to-witness-unprecedented-change-what-cbdt-chairman-has-confirmed/1405657/
This is the mail archive of the cygwin mailing list for the Cygwin project.

Ken Brown wrote:
> Test releases of the emacs, emacs-X11, and emacs-el packages (24.0.96-1) are now available. This is a pretest for the upcoming release of emacs-24.1. Emacs users are encouraged to try it and report any problems to the cygwin mailing list.

Ryan Johnson wrote:
> I'm experiencing regular seg faults, often while using gdb but not always (switching between buffers is another big offender). I'm not sure what other information I can provide, other than the EIP=610CF707 reported in the .stackdump file...
>
> Caught one in gdb (no symbols, sadly):
>
> Program received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 8128.0x3d0]
> 0x0000010c in ?? ()
> (gdb) bt
> #0  0x0000010c in ?? ()
> #1  0x0054b0ac in ?? ()
> #2  0x004e4303 in ?? ()
> #3  0x0054afbe in ?? ()
> #4  0x004e4e96 in ?? ()
> #5  0x004e5180 in ?? ()
> #6  0x004dfbec in ?? ()
> #7  0x610070d8 in _cygwin_exit_return () from /usr/bin/cygwin1.dll
> #8  0x00000003 in ?? ()
> #9  0x610050dd in _cygtls::call2(unsigned long (*)(void*, void*), void*, void*) () from /usr/bin/cygwin1.dll
> Backtrace stopped: Not enough registers or memory available to unwind further
>
> HTH... I'm reverting for now (I can re-install if you've got specific ideas to try out).

Ken Brown wrote:
> Thanks for testing. I'll try to make debugging symbols available so that you can get a better backtrace. It might be a few days before I get to it.
>
> I can still make debugging symbols available for the version I built if you'd like, but you'll get a more reliable backtrace from a build without optimization. Would you like to build it yourself (with CFLAGS='-g -O0') and send a backtrace? If so, you can get the source from the list archive. I'm copying Eli Zaretskii, one of the Emacs developers, who might be able to help with debugging if you can get a useful backtrace. Please keep him in the CC if you reply. By the way, you can find some good hints about debugging emacs in etc/DEBUG in the emacs distribution.

Ryan Johnson wrote:
> I've downloaded the sources and will get back to you when I've had a chance to build and play with them.
>
> Figures... after using the home-built version for about 4 hours, I've only had one seg fault, and it was deep in Windows code somewhere (something about acquiring a reader lock on a file, perhaps?); gdb couldn't find any cygwin or emacs code to pin a stacktrace on.
>
> The gdb-mi integration also seems to work reasonably well, with a few exceptions:
>
> 1. The (gdb) prompt basically never displays. I find that I sometimes have to press RET before I see the prompt. I'll try to figure out why that's happening, but at least pressing RET provides a workaround in the meantime.
>
> 2. Breakpoints don't always jump to the source file. I could have sworn this worked before, but the 4h run that didn't crash definitely doesn't. This may have something to do with the fact that I'm loading the target file manually (to avoid the long-standing endless initialization feature/bug). Again, pressing RET seems to avoid the endless initialization bug. (This was fixed once and was a Cygwin bug, so I think it won't be hard for me to resurrect my test case and get it fixed again.)
>
> 3. Breakpoints having "commands" stuck to them do not display their name/args when triggered, nor do some outputs for commands (such as "fr 0") which they issue. This makes it hard to see which breakpoint a given output corresponds to (print still works). The same applies for breakpoints that just stop. The combination of all three makes it really hard to tell when gdb breaks into execution. The only indication is that the status line changes to [breakpoint], or [interrupt] if the target program faults.

Ken Brown wrote:
> Not for me it doesn't. Maybe this fix you mention is patched into your version?
>
> I agree that there are some issues to be worked out, which may well be Cygwin specific. But getting to the bottom of the crashes is a higher priority. If you continue to find that my build crashes for you but your build doesn't, we should try to figure out what the differences are. You could download the source for the Cygwin package and rebuild using my .cygport file, with the line CFLAGS="-g -O0" added. I can send more detailed instructions if you're not familiar with cygport. (For one thing, you'll have to go to the build/src directory in order to run the unstripped binary under gdb.)

Ryan Johnson wrote:
> That worked at first, but after a while it stopped and never came back.
>
> It won't happen this weekend, but I may get back to you on this later.
http://cygwin.com/ml/cygwin/2012-05/msg00050.html
Oh, what a tangled web

I love the IQueryable interface, but it's got a dark, checkered past that most of you might not know about. IQueryable is a great way to expose your API or domain model for querying or provide a specialized query processor that can be used directly by LINQ. It defines the pattern for you to gather up a user's query and present it to your processing engine as a single expression tree that you can either transform or interpret. It's the way LINQ becomes 'integrated' for many LINQ to XXX products. Yet it was not supposed to be that way; with all that ease of use, plugging automatically into LINQ with an abundance of pre-written query operators at your disposal. You were not supposed to use it for your own ends. It was not meant for you at all. It was meant for LINQ to SQL. Period.

The interface, the 'Queryable' query operators and whatnot were all part of the LINQ to SQL product and namespace. The original plan of record was to require all LINQ implementations to define their own query operators that abided by the standard query operator pattern. You'd have to cook up your own clever way to connect calls to your 'Where' method back to some encoding that your engine could understand. It would be daunting work to be sure, but not impossible. After all, you were likely building a query processor anyway, so what's another thousand lines of code?

Of course, that was until that fateful day in December 2005 when the gauntlet was thrown down and the challenge was made; a challenge that had nothing whatsoever to do with IQueryable. It started out as a simple little request by the infamous Don Box. "Why don't you guys have an eval for these expression trees?" he said to me that day in an email. He was working on another project unrelated to LINQ and saw potential use of the LINQ expression trees for his own purpose, as long as there was some way to actually execute or interpret them. Of course, he wanted us to take it on.
Yet our product was already so full of features that we were having a hard time as it was convincing management that even an ultra slimmed-down LINQ to SQL would fit into the Orcas schedule. So I mailed him back with, "Yes, it should be straightforward to convert these trees into IL using the reflection emit API. Why don't you build it?" You see, I challenged him to write the code, not the other way around. I figured that would shut him up. Yet, to my surprise, he actually agreed. He was going to do it over the holiday break, hand it over to me when he was done, and I'd find some way to get it into the product.

As it turns out, I was actually relieved. It wasn't like we had not already thought about it. Most of the design team wanted there to be an eval-like mechanism, but it was not high priority since the primary consumers (LINQ to SQL and other ORMs) were not going to need it. So over the holiday break I actually built up anticipation for it. I was pre-geeking-out. What was he going to build? Did this guy even know how to write code? Would he figure out how to solve the closure mess? My god, what had I started?

As it turns out, Don did not find the time to build anything, and I was somewhat let down. However, I had gotten myself so juiced up about the idea of it working that I didn't care. It just gave me the excuse to do it myself, and I love to write brand-new geek'n-out code. So the next weekend in January I spent all the brownie points I had built up over the break by engaging in 'family time' and plugged myself into my machine for an all-night coding session. I was running high on adrenaline, and the solutions just seemed to come as fast as I could type. By Sunday it was all working beautifully. On Monday I was eager to show it off, and so I did during the design meeting. I showed everyone a mechanism that could turn any LINQ expression tree into a delegate that could be called directly at runtime.
The IL generated was the same as what the compiler would give you, so it performed just as well. Of course, that's when it happened. That's when this seemingly unrelated geek-fest over the expression tree blossomed into something much more. You see, Anders had been thinking about something else over the break. He was looking for a way to solve the polymorphism problem of making queries first-class things within the language, since what we had so far was really just an illusion. Query objects were really just IEnumerables. LINQ to SQL queries were IQueryables, which were IEnumerables by inheritance. The only way someone could write a general piece of code to operate over 'any' query was to specify its type as IEnumerable. Yet the compiler would treat the query differently depending on its static type. LINQ to SQL's IQueryable mechanism wouldn't work if the compiler thought it was IEnumerable; no expression tree would be built, and the query would just run locally and not inside the database server where it belonged.

After seeing the demonstration, everything just clicked. If IEnumerables could be turned into IQueryables such that the operations applied to them were captured as expression trees (as LINQ to SQL was already doing), and if those expression trees could be turned back into executable IL as delegates (which so happened to be just what we needed to feed into the locally executing standard query operators), then we could easily turn IQueryables back into locally executing IEnumerables. The IQueryable interface could become the polymorphic query interface instead of IEnumerable. Queries meant to run against local objects could be manipulated just like their expression-tree-toting brethren. Dynamic mini-languages could be written to generate expression trees and apply query operators to any type of query generically. Life was good. The whole was suddenly greater than the sum of its parts.
It became obvious that we needed the expression compiler as part of the product and that IQueryable should be promoted out of the private domain of LINQ to SQL and into the limelight to become the general definition of a query. It was a done deal. And all because Don wanted us to do Evil Eval.

Comments:

THANK YOU Don and Matt!!! Simply put.... thank you. Glad those Ruby guys don't get to have all the fun :)

Come on Aaron... Everyone knows *I* suggested it in a forum, but called it IDomainnameProvider. And I am a Ruby guy now lol :) Nicolas
http://blogs.msdn.com/mattwar/archive/2007/06/01/iqueryable-s-deep-dark-secret.aspx
The train schedule gods were smiling on me that day, unfortunately. I never had to wait more than 20 minutes for my next train. I had to switch trains three times - at Attnang-Puchheim, Salzburg and Munich. The train between Munich and Frankfurt was a high-speed one that traveled at 250 km/h for much of the way. I got to the airport with plenty of time to spare...damn. Anxiously awaiting the return home to Canada. I had to choke back tears many times that day. I absolutely loved Europe and vowed to return (it won't be long!). The plane ride was monotonously boring as usual, but I did meet a Canadian guy who gave me many ideas on what to do with my life (another story). I was not looking forward to going home, although I was looking forward to seeing my friends again. The plane landed in Toronto, Ontario early, but it took FOREVER for me to get out of the airport. My 2-month trip around Europe was finally over. The travel bug had bitten me hard. I'm already planning my next trip! Here are the final trip statistics.
http://ca.geocities.com/kenlasko/Frankfurt.htm
Having trouble figuring out net/http and API calls.

Hi folks, I need to make a number of calls to an API, some of which are GET requests and some of which are PUT requests. I've tried to make a common function since there's a bunch of security stuff that needs to happen, and I'm not sure if a) I'm actually doing it right in the first place, or b) there's a better/simpler/cleaner/more reliable way to do it instead. I have the following function to handle the contact with the Smart API:

def callSmart(apipath, format)
  pem = File.read("#{Rails.root}/private/websummit.pem")
  uri = URI.parse(apipath)
  http = Net::HTTP.new(uri.host, uri.port)
  http.use_ssl = true
  http.cert = OpenSSL::X509::Certificate.new(pem)
  http.key = OpenSSL::PKey::RSA.new(pem, ENV['KEY_PASSWORD'])
  http.verify_mode = OpenSSL::SSL::VERIFY_NONE

  http.start {
    if format == 'get'
      request = Net::HTTP::Get.new(apipath)
    elsif format == 'put'
      request = Net::HTTP::Put.new(apipath)
    end
    http.request(request) { |res|
      respond_to do |format|
        result = { :message => res.body }
        format.json { render :json => result }
      end
    }
  }
end

I can pass it an appropriate URL such as:

callSmart('

and get the result:

{"message":"{\"connection\":{\"connected\":true}}"}

So far, so good. However, I then tried calling one of the actions on the car, rather than passively requesting data:

callSmart('

This got me the following response:

{"message":"{\"message\":\"Service is not authenticated\"}"}

Given that it's running the same authentication code, do you have any idea why one is working and the other not?

I know it seems simple... but have you verified that you actually have PUT privileges from the provider? I'm dealing with this right now on a project where I have access to everything except for one resource, so I'm in a holding pattern until they fix the account. The error message returned was just null, so I burned a day thinking there was something wrong with the connection method / query. =(

It works on a straight curl request, so I'm figuring we have the right privileges unless curl works differently - I'm not really that familiar with it.

Hmm, well I don't see anything that jumps out that would be wrong with your setup, but I've never really used the Net::HTTP API so I'm not much help there. Have you by chance tried using one of the rest gems just to make sure it's not a setup issue? When I need REST support I drop in and go to town.
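For what it's worth, the GET/PUT branch can be separated from the TLS setup so each piece is testable on its own. A rough sketch (the method and constant names here are illustrative, not from the original thread):

```ruby
require "net/http"
require "uri"

# Map a verb string to the matching Net::HTTP request class.
REQUEST_CLASSES = {
  "get" => Net::HTTP::Get,
  "put" => Net::HTTP::Put,
}.freeze

def build_request(verb, url)
  klass = REQUEST_CLASSES.fetch(verb.downcase) do
    raise ArgumentError, "unsupported verb: #{verb}"
  end
  uri = URI.parse(url)
  # Net::HTTP request objects take the request path, not the full URL.
  klass.new(uri.request_uri)
end
```

A PUT that comes back with "Service is not authenticated" while the GETs succeed often means the write scope needs separate authorization on the provider's side, which matches the suggestion above to verify PUT privileges with them.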
https://gorails.com/forum/having-trouble-figuring-out-net-http-and-api-calls
Handle 404s

Now that we know how to handle the basic routes, let's look at handling 404s with the React Router.

Create a Component

Let's start by creating a component that will handle this for us. Create a new component at src/containers/NotFound.js and add the following.

import React from "react";
import "./NotFound.css";

export default () =>
  <div className="NotFound">
    <h3>Sorry, page not found!</h3>
  </div>;

All this component does is print out a simple message for us. Let's add a couple of styles for it in src/containers/NotFound.css.

.NotFound {
  padding-top: 100px;
  text-align: center;
}

Add a Catch All Route

Now we just need to add this component to our routes to handle our 404s. Find the <Switch> block in src/Routes.js and add it as the last line in that section.

{ /* Finally, catch all unmatched routes */ }
<Route component={NotFound} />

This needs to always be the last line in the <Switch> block. You can think of it as the route that handles requests in case all the other routes before it have failed.

And include the NotFound component in the header by adding the following:

import NotFound from "./containers/NotFound";

And that's it! Now if you were to switch over to your browser and try clicking on the Login or Signup buttons in the Nav, you should see the 404 message that we have.

Next up, we are going to configure our app with the info of our backend resources.
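To see why the catch-all has to come last: like React Router's <Switch>, a route table renders the first route whose path matches, and a route with no path matches everything. Here is a tiny hand-rolled matcher (purely illustrative, not React Router's actual code) that demonstrates the behavior:

```javascript
// A route with no `path` matches any URL, like <Route component={NotFound} />.
const routes = [
  { path: "/", exact: true, name: "Home" },
  { path: "/login", name: "Login" },
  { name: "NotFound" }, // catch-all: must be last
];

function matchRoute(routes, url) {
  // Return the first route that matches, mimicking <Switch>.
  return routes.find((r) => {
    if (r.path === undefined) return true; // pathless route matches all
    if (r.exact) return url === r.path;    // exact match only
    return url.startsWith(r.path);         // prefix match otherwise
  });
}

console.log(matchRoute(routes, "/").name);       // "Home"
console.log(matchRoute(routes, "/login").name);  // "Login"
console.log(matchRoute(routes, "/signup").name); // "NotFound"
```

If the pathless route were listed first, every URL would render NotFound, which is why it must be the last line in the block.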
https://branchv21--serverless-stack.netlify.app/chapters/handle-404s.html
points being displayed in the world

How do I display points in my world?

Hi Friend, please clarify your problem. Thanks.

Related Struts threads listed on this page (titles with brief excerpts where recoverable):

- i am Getting Some errors in Struts - learning Struts basics, trying the site's examples, and getting lots of errors
- errors - "cannot find symbol class string"; how do I rectify it?
- compare RequestDispatcher and sendRedirect methods
- datetimepicker not displayed in struts2 - struts2-dojo-plugin-2.1.6.jar included, using the s and d tag libraries
- Display Errors using Message Resources - using ApplicationResources.properties to display errors for null and invalid values
- How to send message in struts on mobile - sending messages from a JSP in a Struts application
- Is Struts free if used for a commercial purpose?
- How to send HTTP request in java?
- send the mail with attachment problem - message.setContent(multipart) and sending the message; the filename should not be hard-coded
- Struts registration form - fields displayed on a single dynamic page according to the student's selection
- check for errors - how can I check for errors?
- Getting 404 errors - a login page embedded inside home.jsp; the servlet doesn't appear
- request for java source code - graphical password using cued click points with sound signature, in Java and Oracle 9i
- Understanding Struts Controller - the servlet responsible for handling all requests in the Struts framework
- Request for codes - registration form for a web application using JSP/Servlet
- If I open a .class file with Notepad, what is displayed?
- compilation errors - doGet(HttpServletRequest request, HttpServletResponse response) giving "')' expected"
- error log and send Database - validate XML against XSD in Java, log errors and store them in an error table
- How to send NSURLConnection synchronous request on https?
- What is Struts - Struts architecture; user requests are handled by the Struts ActionServlet
- getting errors - sorting positive array elements in ascending order
- java errors - "the file can not be read" when compiling; check the directory
- JSP Request Dispatcher - using the RequestDispatcher class to transfer the current request to another JSP page
- STRUTS 1.2.9 (NetBeans 6.1) - ValidationGroup for <html:errors> and <html:submit> as in .NET
- JSP Forwards a request - <jsp:forward> forwards request information from one resource to another
- Compiler errors in java - "no package exists" when running GenerateRDF from the command prompt
- Retrieve HTTP Request Headers using JSP - an HTTP client supplies headers with a GET or POST request
- Struts - login application example in a Struts web application with source code
- struts - login and registration sample code with MySQL as the database backend
- Struts Frameworks - MVC: Model (data), View (user interface), Controller (user request handling); Struts is useful for maintainable web enterprise applications
- Java Program Errors - "The project was not built since its build path is incomplete. Cannot find the class file" for a jar-extracted project
- url parameters displayed in short sentence - changing how a URL parameter such as id=1 is displayed
- Send Cookies in Servlets - create, read, and send cookies to the client browser
- How to customize property type conversion errors in Spring MVC 3 - a string such as "hello" entered in a numeric "age" input
- request this program - compare three text box values and display the greatest two
- STRUTS2.0 Validation Errors - previous field errors ("username is required", "password is required") persist even after entering values
- Struts Articles - how portlet Struts request processing differs from servlet Struts; the "paradigm mismatch" between Struts and JSF
- How to get the request scope values in Struts?
- php display errors - how to display errors using PHP
- Struts iterator - accessing data from MySQL in Struts 1.3.8; data not appearing in the iterator
- Struts - ActionForm with validate() returning ActionErrors; reset(ActionMapping, HttpServletRequest) setting id, userid, and password
- send answer - online exam: one question per page with previous/submit/next buttons
- Struts Books - Request Dispatcher, tips for managing errors, internationalizing Struts applications
- struts first example - errors in the Struts Blank Application (welcome.title, welcome.heading, index.jsp, struts-config.xml)
- struts validation - followed all the validation rules but still unable to solve the problem
- Need urgent help with C++ errors! - new to C++, using Turbo C++, many errors reported
- Request[/DispatchAction] does not contain handler parameter named 'parameter' - may be caused by whitespace in the label text; struts-config.xml plus three JSP pages
- Struts Interview Questions - can Apache Struts be set up to use multiple configurations; what are the disadvantages of Struts
- struts and hibernate integration - an application with four tables, course approval, and logout
- need to fix errors please help - an InputStreamReader/BufferedReader program with two errors
- Understanding Struts Action Class - ties the HTTP request to the business logic that corresponds to it
- Struts - Jboss - I-Report - generating a database report using Struts, JBoss, and iReport
- How to send request to the visa/master card to get the credit card verified
- best Struts material - links for learning basic Struts
- how to send sms on mobile - using Struts + Spring
- Interceptors in Struts 2 - excludeParams patterns such as dojo\..*, ^struts\..*, ^session\..*, ^request\..*; session created on successful login
- Servlet Response Send Redirect - editing records in a web medical clinic application
- Im not getting validations - DynaValidatorForm with validation.xml and validation-rules.xml; saveMessages(request, errors)
- Struts - how to handle errors, request context, and configuring sendRedirect() and forward in struts-config.xml
http://www.roseindia.net/tutorialhelp/comment/16247
Overview of Hadoop archives Storing a large number of small files in HDFS leads to inefficient utilization of space – the namespace is overutilized while the disk space might be underutilized. Hadoop Archives (HAR) address this limitation by efficiently packing small files into large files without impacting the file access. The Hadoop Distributed File System (HDFS) is designed to store and process large (terabytes) data sets. For example, a large production cluster may have 14 PB of disk space and store 60 million files. However, storing a large number of small files in HDFS is inefficient. A file is generally considered to be "small" when its size is substantially less than the HDFS block size, which is 256 MB by default in HDP. Files and blocks are name objects in HDFS, meaning that they occupy namespace (space on the NameNode). The namespace capacity of the system is therefore limited by the physical memory of the NameNode. When there are many small files stored in the system, these small files occupy a large portion of the namespace. As a consequence, the disk space is underutilized because of the namespace limitation. In one real-world example, a production cluster had 57 million files less than 256 MB in size, with each of these files taking up one block on the NameNode. These small files used up 95% of the namespace but occupied only 30% of the cluster disk space. Hadoop Archives (HAR) can be used to address the namespace limitations associated with storing many small files. HAR packs a number of small files into large files so that the original files can be accessed transparently (without expanding the files). HAR increases the scalability of the system by reducing the namespace usage and decreasing the operation load in the NameNode. This improvement is orthogonal to memory optimization in the NameNode and distributing namespace management across multiple NameNodes. 
Hadoop Archive is also compatible with MapReduce — it allows parallel access to the original files by MapReduce jobs.
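As a sketch of the workflow, a set of small files can be packed and then listed transparently. The `hadoop archive` tool and the `har://` URI scheme are the standard interfaces; the paths used here are hypothetical, and the commands require a running cluster:

```shell
# Pack the small files under /user/alice/logs into a single archive;
# this runs as a MapReduce job.
hadoop archive -archiveName logs.har -p /user/alice logs /user/alice/archives

# The original files remain transparently accessible through the har:// scheme.
hdfs dfs -ls har:///user/alice/archives/logs.har/logs
```

The general form is `hadoop archive -archiveName <name>.har -p <parent> <src>* <dest>`, where source paths are given relative to the parent directory.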
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/data-storage/content/overview_of_hadoop_archives.html
There are many ways to extract stocks information using Python. A simple way to get the current stocks data can be achieved by using Python pandas. The data retrieved, however, are limited. The method I use below is based on downloading the various data .csv files, a service provided by Yahoo Finance. The methods to construct the various URLs to download the .csv information are described in great detail in the Yahoo Finance API.

The current script can only retrieve the most current data statistics for the various stocks. First, it will construct the URL based on the user's stocks input and the parameters required. It then makes use of the PATTERN module to read the URL and download the information to the local drive. Next, it will call the pandas function to read the .csv file and convert it to a data frame for further analysis. Sample output of the script is as shown below.

data_ext = YFinanceDataExtr()
## Specify the stocks to be retrieved. Each url construct max up to 50 stocks.
data_ext.target_stocks = ['S58.SI','S68.SI'] #special character need to be converted
## Get the url str
data_ext.form_url_str()
print data_ext.cur_quotes_full_url
## >>>
## Go to url and download the csv.
## Stored the data as pandas.Dataframe.
data_ext.get_cur_quotes()
print data_ext.cur_quotes_df
## >>>    NAME  SYMBOL  LATEST_PRICE  OPEN  CLOSE      VOL  YEAR_HIGH  YEAR_LOW
## >>> 0  SATS  S58.SI          2.99  3.00   3.00  1815000       3.53      2.93
## >>> 1  SGX   S68.SI          7.18  7.19   7.18  1397000       7.63      6.66

To specify the parameters to be output, they can be changed in the following method of the script. In future, this will be refined to be more user friendly. To download data from the web, the following pattern method is used:

def downloading_csv(self, url_address):
    """ Download the csv information from the url_address given. """
    url = URL(url_address)
    f = open(self.cur_quotes_csvfile, 'wb')  # save the downloaded quotes csv locally
    f.write(url.download())
    f.close()

The full script can be found at GitHub.
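The pipeline described above (construct the URL, download the CSV, load it into pandas) can be sketched roughly as follows. This is a hedged illustration: the endpoint and field-code string mirror the form of the old Yahoo Finance CSV API discussed in the post, the helper names are hypothetical, and a canned CSV string stands in for the downloaded file:

```python
import io

import pandas as pd


def form_url(symbols, field_codes):
    # Hypothetical reconstruction of the quotes.csv URL format described
    # in the Yahoo Finance API notes; '+' joins multiple stock symbols.
    base = "http://download.finance.yahoo.com/d/quotes.csv"
    return "%s?s=%s&f=%s" % (base, "+".join(symbols), field_codes)


def csv_to_df(csv_text, headers):
    # Load the downloaded CSV text into a pandas DataFrame for analysis.
    return pd.read_csv(io.StringIO(csv_text), names=headers)


url = form_url(["S58.SI", "S68.SI"], "nsl1opv")

# Stand-in for the content that would be written to the local .csv file.
sample_csv = ('"SATS","S58.SI",2.99,3.00,3.00,1815000,3.53\n'
              '"SGX","S68.SI",7.18,7.19,7.18,1397000,7.63\n')
df = csv_to_df(sample_csv, ["NAME", "SYMBOL", "LATEST_PRICE",
                            "OPEN", "CLOSE", "VOL", "YEAR_HIGH"])
print(df)
```

In the real script the CSV text would of course come from the downloaded file rather than a literal string.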
https://simply-python.com/2014/08/15/extracting-stocks-info-from-yahoo-finance-using-python/
- NAME
- VERSION
- SYNOPSIS
- DESCRIPTION
- ATTRIBUTES
- SUBROUTINES/METHODS
- DEPENDENCIES
- SEE ALSO
- PLATFORMS
- BUGS AND LIMITATIONS
- AUTHOR
- LICENSE AND COPYRIGHT

NAME

BuzzSaw::DataSource - A Moose role which defines the BuzzSaw data source interface

VERSION

This documentation refers to BuzzSaw::DataSource version 0.12.0

SYNOPSIS

package BuzzSaw::DataSource::Example;
use Moose;
with 'BuzzSaw::DataSource';

sub next_entry {
    my ($self) = @_;
    ....
    return $line;
}

sub reset {
    my ($self) = @_;
    ....
}

DESCRIPTION

This is a Moose role which defines the methods which must be implemented by any BuzzSaw data source class. It also provides a number of common attributes which all data sources will require. A data source is literally what the name implies: the class provides a standard interface to any set of log data. A data source has a parser associated with it which is known to be capable of parsing the particular format of data found within this source. Note that this means that different types of log files (e.g. syslog, postgresql and apache) must be represented by different resources even though they are all sets of files. There is no requirement that the data be stored in files; it would be just as easy to store and retrieve it from a database. As long as the data source returns data in the same way, one complete entry at a time, it will work. A BuzzSaw data source is expected to work like a stream. Each time the next entry is requested the method should automatically move on until all entries in all resources are exhausted. For example, the Files data source automatically moves on from one file to another whenever the end-of-file is reached. The following attributes are common to all classes which implement this interface.

- db

This attribute holds a reference to the BuzzSaw::DB object.
When the DataSource object is created you can pass in a string which is treated as a configuration file name; this is used to create the BuzzSaw::DB object via the new_with_config class method. Alternatively, a hash can be given which is used as the set of parameters with which to create the new BuzzSaw::DB object.

- parser

This attribute holds a reference to an object of a class which implements the BuzzSaw::Parser role. If a string is passed in then it is considered to be a class name in the BuzzSaw::Parser namespace; short names are allowed, e.g. passing in RFC3339 would result in a new BuzzSaw::Parser::RFC3339 object being created.

- readall

This is a boolean value which controls whether or not all files should be read. If it is set to true (i.e. a value of 1 - one) then the code which normally attempts to avoid re-reading previously seen files will not be used. The default value is false (i.e. a value of 0 - zero).

SUBROUTINES/METHODS

Any class which implements this role must provide the following two methods.

- $entry = $source->next_entry

This method returns the next entry from the stream of log entries as a simple string. For example, with the Files data source - which works through all lines in a set of files - this will return the next line in the file. This method should use the BuzzSaw::DB object start_processing and register_log methods to avoid re-reading sources (unless the readall attribute is true). It is also expected to begin and end DB transactions at appropriate times. For example, the Files data source starts a transaction when a file is opened and ends the transaction when the file is closed. This is designed to strike a balance between efficiency and the need to commit regularly to avoid the potential for data loss. Note that this method does NOT return a parsed entry; it returns the simple string which is the next single complete log entry.

When the data source is exhausted it will return the undef value.
- $source->reset

This method must reset the position of all (if any) internal iterators to their initial values. This then leaves the data source back at the original starting position. Note that this does not imply that a second parsing would be identical to the first (e.g. files may have disappeared in the meantime).

The following methods are provided as they are commonly useful to most possible data sources.

- $sum = $source->checksum_file($file)

This returns a string which is the base-64 encoded SHA-256 digest of the contents of the specified file.

- $sum = $source->checksum_data($data)

This returns a string which is the base-64 encoded SHA-256 digest of the specified data.

DEPENDENCIES

This module is powered by Moose; it also requires MooseX::Types, MooseX::Log::Log4perl and MooseX::SimpleConfig. The Digest::SHA module is also required.

SEE ALSO

BuzzSaw, BuzzSaw::DataSource::Files, DataSource::Importer, BuzzSaw::DB, BuzzSaw::Parser.
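Tying the interface together, a consumer of any class implementing this role might look like the following sketch. The class and parser names are taken from the SYNOPSIS above, and only the documented next_entry/reset behaviour is relied on:

```perl
use strict;
use warnings;

use BuzzSaw::DataSource::Example;

# The short parser name expands to BuzzSaw::Parser::RFC3339
my $source = BuzzSaw::DataSource::Example->new( parser => 'RFC3339' );

# next_entry returns one complete raw log entry at a time and
# undef once all entries in all resources are exhausted.
while ( defined( my $entry = $source->next_entry ) ) {
    print $entry, "\n";
}

# Rewind any internal iterators back to the starting position.
$source->reset;
```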
https://metacpan.org/pod/BuzzSaw::DataSource
BindTo Users Guide

Contents
- Fortran to C: Simple example
- Fortran to C: Types
- Fortran to C: Example of derived type with Bind(C)
- Fortran to C: Example of derived type without Bind(C)
- Fortran to C: Arrays
- Fortran to Python: Introduction
- Fortran to Python: Example
- Fortran to Python: BindTo directives
- Fortran to Python: Benchmark test
- Not supported features

In many of today's applications it is common to use two or even more programming languages. The use of a mixture of languages lets programmers exploit the strong sides of each language. No doubt, the strong side of Fortran is the speed of calculation. However, when it comes to building a graphical user interface, preparing data for calculations (pre-processing) or analysing the calculation results (post-processing), other programming languages may add significantly to the end product of your work.

The BindTo tool tries to make communication with Fortran code from the outside easier by automatically generating the required wrapping code. The intrinsic “iso_c_binding” module is used. In addition to exposing Fortran code to C, BindTo can generate Cython (cython.org) files, which make it possible to call Fortran from the Python language. In general, this tool doesn't do anything that you could not do by yourself. It just saves your time. Actually, BindTo is what Fwrap intended to be, but its development was abandoned (maybe it is time for a revival?). Alternatively, Fortran code can be connected to Python by using “f2py” or its extension “f90wrap”.

Fortran to C: Simple example

Let's start from a simple example. Say we have some nice Fortran code and now want to make it accessible from C or another language which can call C. For this purpose the Fortran standard includes the iso_c_binding module. By use of this module, we can make our code callable from C. We just need to write thin wrapper code using iso_c_binding. BindTo does exactly the same: it writes this wrapper code.
In this example we have a simple Fortran subroutine:

! Add two integers in Fortran
subroutine add_it(a,b,c)
  implicit none
  integer, intent(in) :: a
  integer, intent(in) :: b
  integer, intent(out) :: c
  print *, "Hello from Fortran!"
  c = a + b
end subroutine

Steps:

- Create a new Code::Blocks project: File->New->Project, select "Fortran library" and follow the wizard. I created a project called “bind_me” (Fig.1). For the compiler, I selected “GNU Fortran Compiler”.
- The new project contains “main.f90”, but you can rename it to “add.f90” (you can rename the files directly in the Projects tree, if the files are closed). Open that file and change the content to the code from above. Compile the project. The result is “libbind_me.a”.

Fig.1. Project "bind_me" properties

- Now open the BindTo dialog (Fortran->Bind To…) (Fig.2). For this simple example, the default settings should work. For more sophisticated cases you may need to add or change the items in the “Types” table. Information from this table is used in the generated files. The column “Fortran” in the table is the type in your Fortran code. The column “Fortran Bind(c)” is the type used in the generated Fortran wrapping code. And the column “C” is the type in the generated C header file. Run BindTo by pressing the “OK” button. The files “add_bc.f90” and “add_bc.h” should be generated in the “<your-project-path>/bind” folder.

Fig.2. BindTo dialog

- Add the generated “add_bc.f90” file to the “bind_me” project (Project->Add files…). Compile or recompile the project.
- Now is the time to write the C++ code (I will use C++, not C) from which the Fortran subroutine is called. However, first a new build target should be created, for which the GCC compiler should be selected: Project->Properties…, Build targets, Add. Give the name “cmain_debug” to the new target. Select type “Console application”. Check “Pause when execution ends”. Select “Build options…” for this target. Change the target's compiler to “GNU GCC Compiler”. Close the dialogs.
- Create a new file (File->New->Empty file).
Add this file to your project. Assign this file to the “cmain_debug” build target. Also add the automatically generated “add_bc.h” file from the “<your-project-path>/bind” folder to the same “cmain_debug” build target. Add the “libbind_me” and “libgfortran” libraries to the Linker Settings of this target (Fig.3).

Fig.3. Linker Settings

- Write a C++ main function in which the Fortran subroutine is called. It could be like this:

#include <iostream>
#include "bind/add_bc.h"

int main()
{
    int mysum;
    int a=1, b=2;
    add_it(&a, &b, &mysum);
    std::cout << "Result = " << mysum << std::endl;
    return 0;
}

- Make the “cmain_debug” build target active. Compile it and execute the program.

Fortran to C: Types

In this example I will try to explain a bit more about the type conversion between Fortran and C. Actually I am unsure if "conversion" is the right term, because most Fortran types just have corresponding C types. The type information used by the BindTo tool is in the table “Binding types” on the BindTo dialog. For example, the Fortran integer(4) type corresponds to integer(c_int32_t) in the iso_c_binding module, and it corresponds to int32_t in C. If some line in this table is not true for your compiler, you can change it. In some cases you will need to add new types. E.g. if you use real(rp), where rp is a parameter defined somewhere in your program, you will need to add a new line with real(rp) for Fortran, real(8) for Bind(C) (here I assumed that rp=8) and double for C.

There are three cases about which I should speak separately: a) character type; b) logical type; c) derived types.

Character type

A string in the C language is a one-dimensional array of characters terminated by a null character \0. Exactly this is assumed by BindTo, i.e. strings which come from C have to be terminated with \0. On the Fortran side, in the generated wrapping code the string from C is seen as a one-dimensional assumed-size array of character(c_char) type. The size of this array is determined from the position of c_null_char (\0 in Fortran).
The conversion (copy) of the character array to character(len=determined_size) is made and passed to the Fortran code. On return, the opposite operation is performed. By using intent(in) or intent(out) in your Fortran code, one copy operation may be avoided. The wrapping of Fortran character arrays (e.g. character(len=20), dimension(*) :: names ) is not supported.

Logical type

While the iso_c_binding module includes c_bool, it seems that this solution doesn't work for the default logical type. Therefore, in the generated Fortran wrapper code, the conversion from the Fortran logical type to C int (and back) is performed. There is no entry for logical types in the Types table on the BindTo dialog, because this rule is used for the logical<->int conversion implicitly. However, the user may add an entry for some logical type in the table and then this new rule will be used.

Derived types

A derived type defined in Fortran may, in some cases, correspond to a C struct. For this, the user needs to add the Bind(C) attribute to the declaration of such a derived type. If BindTo finds such an attribute, it writes the corresponding struct in the generated *.h file. However, in many cases it is not possible to have a corresponding struct in C for a Fortran derived type (allocatable or pointer components). If BindTo doesn't find the Bind(C) attribute in the type declaration, it takes a C pointer to the derived type variable. This could look something like:

type(mytype), pointer :: fvar
type(c_ptr) :: cp_fvar
allocate(fvar)
cp_fvar = c_loc(fvar)
! do the stuff with fvar variable
....

In this case, when type(c_ptr) is used, it is required to allocate memory for the derived type pointer variable. Therefore, every derived type variable should have a constructor procedure, where memory is allocated, and a destructor procedure, where memory is freed.
If your code has a procedure (subroutine or function) in which the components of the derived type variable are initialized, you can tell BindTo how it is called, and BindTo will use it as a constructor. However, there is one important convention: BindTo requires that the first dummy argument in the constructor subroutine be of the derived type for which this constructor is created. Every Fortran function which returns a derived type variable can be called a “constructor”, because memory has to be allocated for the return variable. It is assumed that the constructor and destructor are in the same module where the derived type is defined. For the destructor, a subroutine with a derived type dummy argument can be used. However, this subroutine should have no other arguments except this one. The destructor should be called from the C code by the user for every derived type variable to free memory, when this variable is not needed anymore. If BindTo can't find the constructor and/or the destructor procedure, then the constructor and/or the destructor is created automatically.

Fortran to C: Example of derived type with Bind(C)

In this example the use of a derived type variable with the Bind(C) attribute is shown. Fortran code:

module person_m
  use iso_c_binding
  type, bind(c) :: person_t
    integer(c_int) :: age
    character(len=20, kind=c_char) :: name
  end type
contains
  subroutine print_person(p)
    type(person_t), intent(in) :: p
    print *, "Person received:"
    print *, " age=", p%age
    print *, " name=", p%name
  end subroutine
end module

BindTo creates two files. The first file with Fortran code:

module person_m_bc
  use :: person_m
  use, intrinsic :: iso_c_binding
  implicit none
contains
  subroutine print_person_bc(p) bind(c,name='print_person')
    type(person_t), intent(in) :: p
    call print_person(p)
  end subroutine
end module

As you can see, the derived type person_t variable comes as a dummy argument to the subroutine print_person_bc.
The second file is C header file: #ifdef __cplusplus extern "C" { #endif #ifndef PERSON_BC_H #define PERSON_BC_H typedef struct { int age; char name[21]; } person_t; // Module 'person_m' procedures void print_person(person_t* p); #endif #ifdef __cplusplus } #endif In the header file, the structure person_t is created. To call Fortran, the following C++ code can be used: #include <string.h> #include "bind/person_bc.h" int main() { // Create structure and call fortran person_t person; person.age = 33; strcpy(person.name,"Spyder-Man "); print_person(&person); return 0; } Fortran to C: Example of derived type without Bind(C) Second example demonstrates the use of a derived type, which doesn’t have Bind(C) attribute. Fortran code: module balls implicit none type balls_t integer :: n real, dimension(:,:), allocatable :: coord end type contains subroutine balls_ctor(self, n) type(balls_t), intent(inout) :: self integer, intent(in) :: n allocate(self%coord(3,n)) self%n = n print *, "Fortran: A variable of type 'balls_t' was created." end subroutine subroutine set_coord(self, bc) type(balls_t), intent(inout) :: self real, dimension(:,:), intent(in) :: bc if (any(shape(self%coord) /= shape(bc))) then stop "Error. sub. set_coord. Array sizes should be the same!" 
end if self%coord = bc end subroutine subroutine get_coord(self, i, xb) type(balls_t), intent(in) :: self integer, intent(in) :: i real, dimension(*), intent(out) :: xb xb(1:3) = self%coord(:,i) end subroutine end module BindTo generates Fortran wrapper code: module balls_bc use :: balls use, intrinsic :: iso_c_binding implicit none contains subroutine balls_ctor_bc(self, n) bind(c,name='balls_ctor') type(c_ptr), intent(out) :: self integer(c_int), intent(in) :: n type(balls_t), pointer :: self_fp allocate(self_fp) self = c_loc(self_fp) call balls_ctor(self_fp, n) end subroutine subroutine set_coord_bc(self, bc, m1, m2) bind(c, name= 'set_coord') type(c_ptr), intent(inout) :: self real(c_float), dimension(m1,m2), intent(in) :: bc type(balls_t), pointer :: self_fp integer(c_int), intent(in) :: m1 integer(c_int), intent(in) :: m2 call c_f_pointer(self, self_fp) call set_coord(self_fp, bc) end subroutine subroutine get_coord_bc(self, i, xb) bind(c,name='get_coord') type(c_ptr), intent(in) :: self integer(c_int), intent(in) :: i real(c_float), dimension(*), intent(out) :: xb type(balls_t), pointer :: self_fp call c_f_pointer(self, self_fp) call get_coord(self_fp, i, xb) end subroutine function balls_t_ctor_bc() bind(c,name='balls_t_ctor') type(c_ptr) :: balls_t_ctor_bc type(balls_t), pointer :: this_fp allocate(this_fp) balls_t_ctor_bc = c_loc(this_fp) end function subroutine balls_t_dtor_bc(this_cp) bind(c,name='balls_t_dtor') type(c_ptr), intent(in) :: this_cp type(balls_t), pointer :: this_fp call c_f_pointer(this_cp, this_fp) deallocate(this_fp) end subroutine end module Here the subroutine balls_ctor was recognized as a constructor and therefore the allocation of self_fp takes place in the subroutine balls_ctor_bc. In addition, a constructor function balls_t_ctor_bc, which takes no arguments, was created also. Freeing of memory is performed in the destructor subroutine balls_t_dtor_bc. Do not forget to call it for every derived type variable from your C/C++ code. 
BindTo generated the following header file:

#ifdef __cplusplus
extern "C" {
#endif
#ifndef BALLS_BC_H
#define BALLS_BC_H
// Module 'balls' procedures
void balls_ctor(void** self, int* n);
void set_coord(void** self, float* bc, int* m1, int* m2);
void get_coord(void** self, int* i, float* xb);
void* balls_t_ctor();
void balls_t_dtor(void** this_cp);
#endif
#ifdef __cplusplus
}
#endif

Below is an example of C++ code in which Fortran is called:

#include <iostream>
#include "bind/balls_bc.h"

int main()
{
    void* balls; // Fortran derived type
    int n = 2;
    int dim = 3;
    balls_ctor(&balls, &n);
    float bcoord[] = {1.,2.,3.,4.,5.,6.};
    set_coord(&balls, bcoord, &dim, &n);
    int i = 2;
    float xp[3];
    get_coord(&balls, &i, xp);
    for (int j=0; j<3; j++) {
        std::cout<<"xp[j]="<<xp[j]<<"\n";
    }
    balls_t_dtor(&balls);
    return 0;
}

Fortran to C: Arrays

BindTo recognizes explicit-shape arrays (var(n)), assumed-size arrays (var(n,*)) and assumed-shape arrays (var(:,:)). To call assumed-shape arrays from C, such an array is wrapped with an explicit-shape array. For example, if in Fortran an array is declared with integer :: ivar(:), then in the generated wrapper code we will have integer :: ivar(m1), and m1 will be added to the list of dummy arguments of the procedure. An example of passing an assumed-shape array can be found in the previous section in the subroutine set_coord.

What if a Fortran procedure has several assumed-shape arrays as dummy arguments? In such a case a separate argument will be created for every array dimension. But what if several arrays have the same dimension? For such a case a special directive can be used. If a line in the Fortran code starts with the !BindTo sentinel (case doesn't matter), this line is interpreted by the BindTo tool. In the case of assumed-shape arrays, it may be useful to explicitly define the dimensions of the arrays. For example:

subroutine calculate(a, b, c)
  real, dimension(:,:) :: a, b, c
  !bindto dimension(m,m) :: a, b, c
....
end subroutine

The generated Fortran wrapper code:

subroutine calculate_bc(a, b, c, m) bind(c,name='calculate')
  real(c_float), dimension(m,m) :: a
  real(c_float), dimension(m,m) :: b
  real(c_float), dimension(m,m) :: c
  integer(c_int), intent(in) :: m
  call calculate(a, b, c)
end subroutine

Generated C header:

void calculate(float* a, float* b, float* c, int* m);

Here you see that the !BindTo sentinel was recognized and instead of 6 different variables for array sizes just one, m, was added to the list of the arguments. Read more about BindTo directives in the section "Fortran to Python: BindTo directives".

Fortran to Python: Introduction

Python, an interpreted language, is attractive for its flexibility, simplicity etc. One of the attractive features of Python for number crunching folk is its convenient support of arrays using the NumPy library. It is possible to operate on NumPy arrays in a similar fashion as in Fortran. However, a pure Python program may quickly come to a point where the speed of calculations forces one to use a compiled language. At this point, delegating the number crunching part to Fortran can really be useful.

The BindTo tool can generate not only the code required for Fortran to be called from C, but also Cython files which enable Fortran to be called from Python. From the Cython documentation: “Cython is a programming language that makes writing C extensions for the Python language as easy as Python itself.” Don't be afraid, you don't need to learn yet another programming language. To enable Python to call your Fortran code, you just need to install Cython (how to do it for your system, you can find on cython.org) and compile the BindTo-generated files into a Python extension module. That's all.

Few rules: Fortran arrays are wrapped with NumPy arrays. BindTo assumes that NumPy multidimensional arrays are C-order (row-major order). Therefore, an array arr(m,n) in Fortran should be created in Python with e.g. arr=numpy.zeros([n,m]).
It is assumed that arrays are contiguous and no check for it is performed.

Fortran to Python: Example

Here, the same example from the section “Fortran to C: Simple example” is used to show how Fortran code can be called from Python. Open the “add.f90” file, then go to the BindTo dialog (Fortran->BindTo…), select the tab “Python” and enable “Generate Cython files”. After pressing “OK”, the files “add.pyx”, “add_f.pxd” and “setup_add.py” are additionally generated in the “bind” directory.

File “add.pyx”:

#!python
#cython: boundscheck=False, wraparound=False
import numpy as np
cimport numpy as np
cimport add_f

def add_it(int a, int b):
    cdef int c
    add_f.add_it(&a, &b, &c)
    return c

Here you see the declaration of a Python function def add_it(int a, int b) which returns the variable c. In this function the Fortran subroutine is called. Because in the Fortran code the variables a and b have intent(in) attributes, these variables do not go to the return list. The variable c has intent(out), therefore it is returned with the return statement. If a variable has no intent attribute, intent(in) is assumed.

File “setup_add.py”:

# Run this file using:
# python setup_add.py build_ext --inplace
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize
import numpy

extensions = [
    Extension('add', ['add.pyx'],
        runtime_library_dirs=['./'],
        library_dirs=['../bin/Debug'],
        include_dirs=[numpy.get_include()],
        libraries=['bind_me', 'gfortran'],
    ),
]

setup(
    ext_modules = cythonize(extensions),
)

The “setup_*.py” file is used for the compilation of the Python extension. You may need to adjust this file to include additional libraries and library search paths. The file can be invoked with python setup_add.py build_ext --inplace in the shell. If you just do it now, you may get an error during the compilation. To avoid it, it is required to make one adjustment. In the example, we compiled “add.f90” and “add_bc.f90” into the static library “libbind_me.a”. It is OK.
But now this library should be included in another Python extension shared library. Therefore GCC requires our library to be recompiled with the “-fPIC” option added. Go to the “Project->Build options…” dialog and add “-fPIC” to the compiler options (see screenshot on Fig.***). Alternatively, the C::B project with Fortran files can be compiled into a shared library (*.so or *.dll), if you prefer.

Fig.4. Compiler options

Now the Python extension library can be compiled with python setup_add.py build_ext --inplace. As a result of the compilation, “add.so” or “add.pyd” is created. Test it with Python (Fig.5).

Fig.5. Python calling Fortran

Fortran to Python: BindTo directives

Fortran comment lines which start with the “!BindTo” sentinel (case doesn't matter) are interpreted by the BindTo tool. Here you can add additional information about procedure dummy arguments. “!BindTo” directives should come after the declaration of the procedure, somewhere near the declaration of the dummy arguments.

Syntax:

!bindto <attributes> :: <variables>

Supported attributes are:
- intent(<intent_list>)
- dimension(<array_spec>)

<intent_list> is a comma separated list of keys:

- in
intent(in) means the same as in standard Fortran, i.e. the argument is for input only. Therefore a dummy argument with this attribute is not placed in the Python function return list.

- out
An argument with the intent(out) attribute is considered a return variable. It is removed from the Python function argument list and added to the function return list.

- inout
A dummy argument with the intent(inout) attribute is considered input-output.

- hide
A dummy argument with the intent(hide) attribute is removed from the list of arguments. Normally, this attribute should be combined with an expression for initialization. Such an expression can be any valid Python statement with an assignment. If the argument is an array, then the Python statement should create a NumPy array too (see the example below).
- copy
This attribute is placed to make a copy of the original argument and in this way to preserve the original values. For scalar arguments this attribute is useless, because scalar arguments are copied automatically anyway.

Only one intent is allowed in one !BindTo directive; however, copy and another attribute can be combined into one, e.g. intent(inout, copy). BindTo assumes that NumPy arrays are contiguous and are C-order (row-major order).

dimension(<array_spec>) is useful for adding additional information about the dimensions of an array in the case of assumed-shape or assumed-size arrays.

An example:

subroutine work_power(ain, aout, ainout, ano)
  implicit none
  integer, dimension(:) :: ain
  integer, dimension(:), intent(out) :: aout
  integer, dimension(:), intent(inout) :: ainout
  integer, dimension(:) :: ano
  !bindto intent(in, copy), dimension(n) :: ain
  !bindto dimension(n) :: aout, ano
  !bindto intent(hide) :: ainout=np.arange(1,n+1,dtype=np.intc)
  ....
end subroutine

Fortran to Python: Benchmark test

In general, Python is slower than Fortran and, when connecting Fortran to Python, some overhead should be expected. However, how big is this overhead? In this section a small benchmark test is performed to give a rough impression of what to expect. Here is the short Fortran code which is used in the test:

subroutine sum_arr(a, b, c, m)
  real, dimension(m), intent(in) :: a, b
  real, dimension(m), intent(out) :: c
  integer, intent(in) :: m
  c = a + b
end subroutine

Using BindTo, Cython files are generated and a Python extension module “test.so” is produced from the Cython files and the Fortran static library with the test code. GCC 4.8 is used. The Fortran code is compiled with the -O3 compiler option, while the Cython code is compiled with the default options using the python setup_test.py build_ext --inplace command line. For comparison purposes, a Fortran program is compiled too.
The program code:

program test
  implicit none
  real, dimension(2) :: a, b, c
  integer :: i
  integer :: niterations = 1
  do i = 1, niterations
    a = i
    b = i
    call sum_arr(a, b, c, 2)
  end do
end program

At first, niterations=1 is set:

$ time ./test
real 0m0.001s
user 0m0.000s
sys 0m0.001s

The corresponding Python code:

import test_me
import numpy as np

niterations = 1
a = np.empty([2],np.float32)
b = np.empty([2],np.float32)
for i in range(niterations):
    a[:] = i
    b[:] = i
    c = test_me.sum_arr(a,b,2)

The command line time python test_it.py produces:

real 0m0.047s
user 0m0.039s
sys 0m0.008s

This test shows the overhead of starting the Python interpreter. In the second test case, niterations=100000 is set. With pure Fortran code the output is:

real 0m0.002s
user 0m0.002s
sys 0m0.002s

Python-Fortran version:

real 0m0.304s
user 0m0.292s
sys 0m0.012s

In the performed test, the useful calculations are very short. Almost all of the measured time is spent on the Python side, not in the Fortran. And (0.304s-0.047s)/100000 is the cost of calling Fortran once. If your Fortran calculations take e.g. 10s over 100k calls, then with the Python-Fortran code you may expect 10.3s. It is up to you if you want to pay this 0.3s price.

Not supported features

- Generally, everything is unsupported except what is described as supported. Just a few Fortran features which come to my mind:
- Fixed form source code (???)
- Type bound procedures (yet?)
- Procedure pointer in an argument list (how to deal with it?)
- Common blocks (how to deal with it?)
- Derived types with Bind(C) attribute in binding to Python (C wrapping is supported)
- Module level variables
- ...
http://cbfortran.sourceforge.net/bindto/
You have to install and configure a RAS server on your W2003 box to include DHCP for RAS clients. If the remote office has its own DC server [DNS etc] you ought to make it a site within your domain namespace and use Active Directory Sites and Services to create a site-link for AD replication. Gives ya some DC redundancy for the network and allows the sites to share resources with each other [trusts between the 2].

Well, you didn't mention SBS, just Windows 2003 server. Small Business Server makes a BIG difference. Now that you have, you still need to install Routing and Remote Access Server (RRAS) for remote clients to be able to log in to the SBS server. Also you CAN have additional DCs with SBS server. I have 2 with my SBS 2000. You can't create trusts between domains with SBS, so making the remote site a site won't work. BUT you can have the remote site seen as not remote by SBS. Just don't make the site a site. SBS doesn't care if the link between DCs is WAN or LAN as long as the DCs are within the same domain. Still, bottom line is you HAVE to have RRAS installed and configured with the default 5 VPN PPTP and L2TP ports AND your firewall has to allow PPTP/L2TP port 1723 passthrough for remote VPN into your network. There's a bunch of articles on Microsoft's help and support site for remote access to SBS via VPN.

Oh, by the way, you running standard or platinum edition [ISA server]???

Here's a link to the Microsoft Help and Support article dealing with VPN Remote Access for SBS;en-us;324262&Product=sbs

Here's a link for RRAS DHCP;en-us;232703&Product=sbs

This last link is for SBS 2000 but it applies as a basis for RRAS in SBS2003;en-us;320697&Product=sbs

Just to give you an idea that you need RRAS installed to allow VPN Remote Access.

Ah, oops, damnit. Have to get new bifocals. What Linksys VPN router you got? If you have the Linksys BEFSX41 VPN Endpoint, that's a whole nother ball of wax. The Endpoint routers establish and maintain the VPN tunnel.
thing is you need 2 of em to create the tunnel. THEN there is the exchange of data within the tunnel between the remote clients and the SBS server. It works better for remote access if you didn't use the VPN tunnels on the router itself and use RRAS on the SBS box and have to router foward the port 1723 traffic to the SBS box. Client to VPN to Server This conversation is currently closed to new comments.
http://www.techrepublic.com/forums/discussions/client-to-vpn-to-server/
Hi,

Below is a description of how one can use zim to implement Getting Things Done. This is a direct result of the thread about the Getting Things Gnome application earlier. After that discussion I actually went to get a copy of the GTD book and did some thinking about how to structure my task lists. ( Not that this actually improved the schedule for releasing the python port of zim ... but I'm making good progress now and will make an announcement of current status soonish ... but I digress )

This way of implementing GTD leaves a lot of freedom to the user, maybe too much freedom. If you like a task manager app to enforce some discipline this may not be for you. Still it may be useful. Please feed back any comments to improve the content (I know the style is still lacking - of course help rewriting is also welcome). My intention is to include this in the manual as an example of how you can use zim. All the way at the bottom I give some ideas for improving zim to allow better task management in the notebook. But keep in mind that zim is not intended to be a full-fledged task manager.

Regards,

Jaap

====== GTD ======

The GTD methodology basically calls for maintaining lists of all loose ends that need to be taken care of. The idea is that when all things that need to be done are recorded on a list, it will give you peace because you do not have to keep them in your mind all the time. However it is essential that you can access the lists and sort through them, so you always know what the next thing is that you can do given the time and tools available at a certain moment. For those not familiar with the book, either read it or check any of the numerous websites discussing it. I would like to include the general flowchart from the book here, but this is of course copyrighted. But go check google image search for any of the numerous online copies.

==== How I implement GTD in Zim ====

First create a new notebook to be used specifically as a task tracker. Create namespaces for the various categories. I use "Projects", "SomeDay" and "Archive" for current, incubating and dormant projects. There are two special pages: one called "INBOX" which is a generic dump for incoming stuff, and one called "Chores" (which is in the Projects namespace); this is a generic list of tasks that do not belong to any particular project. I also have several pages living in the top level of the notebook with various lists. These do not contain tasks. For example, there is a list there with books I have on loan or have loaned out to other people, and there is a list of birthday present ideas. If you have many of these lists, consider putting them in a "Lists" namespace. Important is that a list does not contain tasks.

Now for more complex sets of tasks, or projects, each has its own page below the "Projects" namespace. It can have any number of children with information that relates to this particular project and can have tasks all over the place. Some items start out as a project from the start, others first live as a bunch of related tasks on the "Chores" page until they take up too much room and get moved out to their own page.

To define individual tasks I use checkboxes. This forces the main description to be a single line, which is good to make sure each task clearly states a physical action. Of course just below the checkbox there can be a whole paragraph or even many sub-pages with all the details. If the description sounds more like a topic than like an action, most likely it should be divided into smaller items that are actions. These task line items can have tags like "@work", "@home" etc. which will allow you to filter them more easily in the task overview. Also you can use the tag "@waiting" for tasks that you need to check on but are now waiting on someone else to take action. Also task line items can have a due date, which will show up in the task list. But I do not use this - timing changes all the time anyway.
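The conventions above look roughly like this on a hypothetical project page (the project name and tasks are invented, and the exact checkbox/due-date markup may vary between zim versions):

```
====== Paint the shed ======

[ ] Buy primer and brushes @errands
[ ] Scrape the west wall @home [d: 2009-07-15]
	Check for rot first - details on the sub-page.
[ ] Hear back from Bob about helping @waiting
```

Each checkbox line is a single physical action; the indented line underneath carries the supporting detail without turning into a task itself.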
Priority can still be assigned using "!", but don't over-use it; prios are shifting all the time anyway as well. Only use it for things that need to be done ASAP to feel comfortable again.

Now to get an overview of all the tasks that can be done, I open the TODOList plugin and check the "include all open checkboxes" option. This will show a flat list of the tasks defined throughout the notebook, sorted by priority. One can filter on keywords or tags.

Projects that are still under incubation - so I collect ideas, but there is no action yet - live under the "SomeDay" namespace. These do not contain any tasks; as soon as they do, they should move to the "Projects" namespace. Projects that are finished, abandoned or on hold should go under the "Archive" namespace. These should not contain any open tasks either. If there are any open tasks when I move a project there, I check them off with the [x] checkbox to show they will not be done.

As an extension I also use the Calendar plugin to have a journal page for each day with notes from meetings etc. Action items from meetings may live there, but this usage is a bit at odds with the use of the INBOX page. At least I can reference discussion notes of a certain date from a project page etc.

=== Summary ===

• Each action belongs to an open project - "Chores" is the collection bucket for small tasks
• Open projects go in the "Projects" namespace
• Open projects should have a clearly defined goal which can be evaluated and stamped "finished" at a certain point in time
• Otherwise they go in either "SomeDay" or "Archive"
• Each action should have a checkbox - possible follow-up actions can have normal bullets if they are not actionable yet
• Tags on actions are used to generate lists
• Some tickler lists can have their own pages, like "Loans"

=== Possible improvements for the TODOList plugin ===

1. Rename to TaskList plugin
2. Add a side pane in the TODO list showing tags -> patch committed, available from launchpad
3a. Distinguish a tree view, showing the hierarchy of checkbox lists and pages, and a list view only showing outer branches which represent actionable items
3b. Configure a special tag for items that are waiting (like @waiting) and use this to filter actionable items
4. Make items directly editable from the TODOList dialog

_______________________________________________
Mailing list: Post to : zim-wiki@lists.launchpad.net
https://www.mail-archive.com/zim-wiki@lists.launchpad.net/msg00320.html
In MVC 3 you get a lot of stuff for free. Convention determines a lot of the basic stuff. This is very awesome and helps alleviate some of the more mundane and boring work involved with setting up an ASP.NET site. However, with magic comes limitations. In my particular case I need to have an action with an arbitrary number of parameters without the need to have an overload in my controller for each parameter. Let's look at what I'm talking about. My example is a bit of a stretch, but should help explain what I'm doing. If I have a controller named ContactsController with an action named Find which takes one parameter which is a name, then I could call the corresponding URL and this would return the contact with the name Jon. However, if I want an action named CreateGroup which will take in a list of names which will then create a group including all of the people whose names were passed in, I'd need to pass an arbitrary number of names in the URL. What I will be addressing in this blog is how to do this without the need for an overload of CreateGroup for 2, 3, 4, etc. names as parameters.

In the Global.asax file we will need to map the correct routes. If we use the MVC 3 Internet Template in VS2010 then we'll have the following default route:

routes.MapRoute(
    "Default", // Route name
    "{controller}/{action}/{id}", // URL with parameters
    new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults
);

This default route handles the Find action just fine, however it will not work with CreateGroup. We need a way to specify an unlimited number of parameters. In order to accomplish this we need the following route:

//NOTE: This prevents creating routes after this route that have a static number of parameters
routes.MapRoute(
    "DynamicActionRoute",
    "{controller}/{action}/{*rawParams}"
);

What we did here is say that for any given Controller and any given Action, use the no-parameter Action regardless of how many parameters there are.
So in our controller we will now use:

public class ContactsController : Controller
{
    public ActionResult Find(string id)
    {
        return View();
    }

    public ActionResult CreateGroup(string rawParams)
    {
        var names = rawParams.Split('/');
        //TODO: process names
        return View();
    }
}

As long as the name of the parameter in the route matches the name of the parameter in the Action, we will receive our string of names. After we parse it out we'll be good to go!

Some Words of Caution

When doing this you cannot have an overloaded Action named CreateGroup. If CreateGroup is overloaded you will get the following error of ambiguity:

The current request for action 'CreateGroup' on controller type 'ContactsController' is ambiguous between the following action methods: System.Web.Mvc.ActionResult CreateGroup(System.String) on type Mvc3RazorPoC.Controllers.ContactsController System.Web.Mvc.ActionResult CreateGroup() on type Mvc3RazorPoC.Controllers.ContactsController

This makes sense in our situation, because we always want to handle all parameters in this one Action. Also, this should be the last route in the Global.asax file. It will prevent overloaded routes from being fired correctly. If you have an Action that takes in 2 parameters and the route is placed after our dynamic route, the dynamic route will kick in and the parameters will not be filled correctly even though the Action will be called correctly. If the 2 parameter route is above the dynamic route, all will be well and work as expected.

I have found that it is far more manageable in large MVC projects to specify the controller for special routing rules like these. It makes it more manageable because it won't break other routes with specific behaviors, and then this only needs to be placed after the default route and after any other routes for that controller. We have also found creating a static method called "createRoutes" in each of our controllers to be helpful.
These methods take in the routeCollection and apply all the controller-specific routes needed. We then call these methods from Global.asax to bind the routes. This provides the added advantage of having the routes visible to you while in the controller class you are working on, and not having to switch between files.
http://blogs.interknowlogy.com/2011/10/28/dynamic-actions-in-mvc-3/
Nov 30, 2007 — A question on the ruby-core list yielded a discussion about duck typing in Ruby:

According to irb,
>> StringIO.is_a?(IO)
=> false
This seems illogical to me. Is this intentional? If so, why?

The discussion turned to interfaces as a way of documenting which methods an object supports - and to the problems that come with them, such as getting a java.lang.UnsupportedOperationException when working with, say, Java's Collection API.

In Scala, a structural type can express such a requirement without naming an interface:

def setElementText(element : {def setText(text : String)}, text : String) = {
  element.setText(text.trim()
    .replaceAll("\n","")
    .replaceAll("\t",""))
}

This function requires the argument element to have a setText(text : String) method, but does not care about the particular type of the object.

In Erlang, a module uses -behaviour(gen_server). to state that it implements the Behaviour gen_server - which means that it exports a certain set of functions. This is supported by the Erlang compiler in that it complains if the module doesn't export all of the Behaviour's required functions.

Readers commented:

I don't know about Ruby... You could do the same in Ruby, although the problem still remains: it has to be called once before the method is attached. ...or did other people also think that was "Duct Taping", at first glance? :-D Of course, the real question you should ask yourself is why you want additional checking of the protocol of your objects.

We have had the same problem with duck typing and implicit assumptions that can be made based on particular implementation details in Python and Zope. Python is tackling the problem by adding abstract base classes to Python 3. Zope tackled the problem by using metaprogramming to supply their own interface implementation.

What bothers me about protocols is that intent seems to be left aside. For example, in the following, what is intended by the call to draw() by DrawThing?
With interfaces, at least you can see from the inheritance what is intended if you implement only one of the interfaces that specifies a draw method.

public interface Graphics { void draw(); }
public interface DeckOfCards { void draw(); }
public final class DrawThing implements DeckOfCards, Graphics {
    public void draw() { }
}

Ahh, the old "it looks like a duck, sounds like a duck - it must be a duck". Except it's not.

These ideas sound a lot like what we are trying to accomplish with Qi4j in Java: by focusing on interfaces, and allowing an object to easily implement as many interfaces as are necessary without having to use class inheritance or manual delegation to implement it, it becomes a lot easier to write and (re)use code.

First of all, let me say, the original article was great, and I really enjoyed it. Second, I'd like to weigh in on this conversation regarding if and when you should program to an implementation. It seems like you can divide up programming into three sorts of "layers". I don't mean layers of an application, but layers of code. You have the "system" layer, which is virtually defined by its need to program to implementations. Nowadays we have runtimes like Java and the CLI to do that for us, so we rarely need to worry about this any more. A piece of "system" code serves no purpose other than to provide functionality to the higher levels. Then you have "libraries", which mix interface and implementation access. These are somewhat meant for higher-order code to consume, but also need to be very effective dealing with the specifics of the system layer. Finally you have the "application" layer. This layer should be 100% interface driven. I can't see any reason why implementation details should leak into this layer; they should instead be in the library layer if your system layer isn't providing them.
The concern was expressed that a Ruby class that implements part of its API dynamically via ActiveRecord::Base overrides respond_to?() to return true for all the attribute getters and setters inferred from the underlying table, even though these methods are not directly declared.

I found traits to be a valuable concept in this matter. Good paper.

"Please don't post messages like this. My whole office is staring at me because I burst out laughing and disturbed
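The Python 3 abstract-base-class approach mentioned in the comments can be sketched with stdlib machinery. The Duck/Mallard/Brick names below are invented for illustration, while `abc.ABCMeta` and `__subclasshook__` are real parts of the `abc` module:

```python
from abc import ABCMeta

class Duck(metaclass=ABCMeta):
    """Protocol-like ABC: anything with a callable quack() counts as a Duck."""
    @classmethod
    def __subclasshook__(cls, C):
        # Structural check: accept any class that defines quack().
        return callable(getattr(C, "quack", None)) or NotImplemented

class Mallard:               # note: does NOT inherit from Duck
    def quack(self):
        return "quack"

class Brick:
    pass

print(isinstance(Mallard(), Duck))  # True - structural match
print(isinstance(Brick(), Duck))    # False
```

Mallard never declares any relationship to Duck, yet isinstance reports True because the hook checks structure rather than ancestry - duck typing, but with an explicit, checkable protocol object to hang intent on.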
http://www.infoq.com/news/2007/11/protocols-for-ducktyping
There are many ways to unit test your Scala and Play code. At the time of this writing (Scala 2.11.6) I prefer using Scala Test + Play. It's pretty easy to use and seems to let me do all I need.

Setup Dependency

Open build.sbt and add these dependencies.

libraryDependencies ++= Seq(
  jdbc,
  cache,
  ws,
  specs2 % Test,
  //Scala Test + Play
  "org.scalatest" %% "scalatest" % "2.2.1" % "test",
  "org.scalatestplus" %% "play" % "1.4.0-M3" % "test"
)

Write a Test Suite

Create a class that extends org.scalatestplus.play.PlaySpec.

import org.scalatestplus.play.PlaySpec

class MyTestSpec extends PlaySpec {
  "Application" should {
    "Addition test" in {
      (2 + 2) mustBe 4
    }
    "Multiply test" in {
      (2 * 4) mustBe 8
    }
  }
}

Running a Test Suite

To run all tests using sbt, run:

sbt test

In IntelliJ right click the test suite class – MyTestSpec – and select Run MyTestSpec.

Override Application Settings

If your Play application uses settings from application.conf then you probably wish to override some of the settings just during unit testing. This can be done fairly easily. Let's say that our application.conf has this:

money-server-host="10.100.22.152"
money-server-port=3412

During unit testing we would like money-server-host to point to localhost. To do this we have to set up a play.api.test.FakeApplication.

import org.scalatestplus.play.{OneAppPerSuite, PlaySpec}
import play.api.Play._
import play.api.test.FakeApplication

class MyTestSpec extends PlaySpec with OneAppPerSuite {
  //Override test specific settings here.
  implicit override lazy val app: FakeApplication = FakeApplication(
    additionalConfiguration = Map(
      "money-server-host" -> "localhost"
    )
  )

  "Application" should {
    "Settings test" in {
      current.configuration.getString("money-server-host").get mustBe "localhost"
    }
  }
}
https://mobiarch.wordpress.com/2016/01/19/title/
New Language Features

Auto-Implemented Properties

1. Auto-implemented properties make property declarations more concise when no additional logic is required in the property accessors.
2. When you declare such a property, the compiler creates a private, anonymous backing field that can only be accessed through the property's get and set accessors.
3. Auto-implemented properties must declare both a get and a set accessor.
4. To create a read-only auto-implemented property, give it a private set accessor.
5. They don't change performance: the compiler generates the same code it would generate for a normal class that doesn't use auto-implemented properties.
6. Prefer auto-implemented properties to the property pattern used in earlier versions; they remove extra lines of code and keep things simpler.

Object Initializers

1. Allow you to assign values to any accessible fields or properties of an object at creation time.
2. Initialize objects in a declarative manner without having to explicitly invoke a constructor.

Collection Initializers

1. Allow creating a collection and initializing it with a series of objects in a single statement.
2. Let you initialize a collection class consisting of various elements.
3. Element initializers can be a simple value, an expression, or an object initializer.

Extension Methods

1. Extension methods enable you to "add" methods to existing types without creating a new derived type, recompiling, or otherwise modifying the original type.
2. Extension methods are a special kind of static method, but they are called as if they were instance methods on the extended type.
3. Extension methods are defined as static methods but are called by using instance method syntax.
4. Their first parameter specifies which type the method operates on, and the parameter is preceded by the "this" modifier.
5. Extension methods are only in scope when you explicitly import the namespace into your source code with a using directive.
6. Extension methods can only be declared in static, non-generic and non-nested classes.
7. The first parameter of an extension method can only have the this modifier (no other modifiers are allowed) and the parameter type cannot be a pointer type.
8. Extension methods have lower precedence than regular instance methods on a type.

Lambda Expressions

1. A lambda expression is an anonymous function that can contain expressions and statements, and can be used to create delegates or expression tree types.
2. All lambda expressions use the lambda operator =>, which is read as "goes to". E.g. x => x * x is read as "x goes to x times x".
3. The left side of the lambda operator specifies the input parameters (if any) and the right side holds the expression or statement block.

The "var" Keyword

1. One of the new features of C# 3.0 is the var keyword, which can be used to implicitly declare a variable's type.
2. An implicitly typed local variable is strongly typed just as if you had declared the type yourself.
3. The var keyword instructs the compiler to infer the type of the variable from the expression on the right side of the initialization statement.
4. The inferred type may be a built-in type, an anonymous type, a user-defined type, or a type defined in the .NET Framework class library.
5. The following two declarations of i are functionally equivalent:
   var i = 10; // implicitly typed
   int i = 10; // explicitly typed
6. var can also be useful with query expressions in which the exact constructed type of the query variable is difficult to determine. This can occur with grouping and ordering operations.
7. The var keyword can also be useful when the specific type of the variable is tedious to type on the keyboard, or is obvious, or does not add to the readability of the code.

Anonymous Types

1. Anonymous types provide a convenient way to encapsulate a set of read-only properties into a single object without having to first explicitly define a type.
2. The type name is generated by the compiler and is not available at the source code level.
3. The type of the properties is inferred by the compiler. The following example shows an anonymous type being initialized with two properties called Amount and Message:
   var v = new { Amount = 108, Message = "Hello" };
4. Anonymous types are typically used in the select clause of a query expression to return a subset of the properties from each object in the source sequence.
5. Anonymous types are created by using the new operator with an object initializer.
6. Anonymous types are class types that consist of one or more public read-only properties.
7. An anonymous type cannot be cast to any interface or type except for object.
https://www.scribd.com/document/19135500/New-Language-Features
import "github.com/faiface/pixel/text"

Package text implements efficient text drawing for the Pixel library.

ASCII is a set of all ASCII runes. These runes are codepoints from 32 to 127 inclusive.

func RangeTable(table *unicode.RangeTable) []rune

RangeTable takes a *unicode.RangeTable and generates a set of runes contained within that RangeTable.

Atlas is a set of pre-drawn glyphs of a fixed set of runes. This allows for efficient text drawing.

Atlas7x13 is an Atlas using basicfont.Face7x13 with the ASCII rune set.

NewAtlas creates a new Atlas containing glyphs of the union of the given sets of runes (plus unicode.ReplacementChar) from the given font face. Creating an Atlas is rather expensive, do not create a new Atlas each frame. Do not destroy or close the font.Face after creating the Atlas. Atlas still uses it.

Ascent returns the distance from the top of the line to the baseline.

Contains reports whether r is contained within the Atlas.

Descent returns the distance from the baseline to the bottom of the line.

func (a *Atlas) DrawRune(prevR, r rune, dot pixel.Vec) (rect, frame, bounds pixel.Rect, newDot pixel.Vec)

DrawRune returns parameters necessary for drawing a rune glyph. Rect is a rectangle where the glyph should be positioned. Frame is the glyph frame inside the Atlas's Picture. NewDot is the new position of the dot.

Glyph returns the description of r within the Atlas.

Kern returns the kerning distance between runes r0 and r1. Positive distance means that the glyphs should be further apart.

LineHeight returns the recommended vertical distance between two lines of text.

Picture returns the underlying Picture containing an arrangement of all the glyphs contained within the Atlas.

Glyph describes one glyph in an Atlas.

type Text struct {
	// Orig specifies the text origin, usually the top-left dot position. Dot is always aligned
	// to Orig when writing newlines.
	Orig pixel.Vec

	// Dot is the position where the next character will be written. Dot is automatically moved
	// when writing to a Text object, but you can also manipulate it manually
	Dot pixel.Vec

	// Color is the color of the text that is to be written. Defaults to white.
	Color color.Color

	// LineHeight is the vertical distance between two lines of text.
	//
	// Example:
	//   txt.LineHeight = 1.5 * txt.Atlas().LineHeight()
	LineHeight float64

	// TabWidth is the horizontal tab width. Tab characters will align to the multiples of this
	// width.
	//
	// Example:
	//   txt.TabWidth = 8 * txt.Atlas().Glyph(' ').Advance
	TabWidth float64
	// contains filtered or unexported fields
}

Text allows for efficient and convenient text drawing. To create a Text object, use the New constructor:

txt := text.New(pixel.ZV, text.NewAtlas(face, text.ASCII))

As suggested by the constructor, a Text object is always associated with one font face and a fixed set of runes. For example, the Text we created above can draw text using the font face contained in the face variable and is capable of drawing ASCII characters. Here we create a Text object which can draw ASCII and Katakana characters:

txt := text.New(pixel.ZV, text.NewAtlas(face, text.ASCII, text.RangeTable(unicode.Katakana)))

Similarly to IMDraw, Text functions as a buffer. It implements the io.Writer interface, so writing text to it is really simple:

fmt.Fprint(txt, "Hello, world!")

Newlines, tabs and carriage returns are supported. Finally, if we want the written text to show up on some other Target, we can draw it:

txt.Draw(target)

Text exports two important fields: Orig and Dot. Dot is the position where the next character will be written. Dot is automatically moved when writing to a Text object, but you can also manipulate it manually. Orig specifies the text origin, usually the top-left dot position. Dot is always aligned to Orig when writing newlines. The Clear method resets the Dot to Orig.

New creates a new Text capable of drawing runes contained in the provided Atlas. Orig and Dot will be initially set to orig.
Here we create a Text capable of drawing ASCII characters using the Go Regular font.

ttf, err := truetype.Parse(goregular.TTF)
if err != nil {
	panic(err)
}
face := truetype.NewFace(ttf, &truetype.Options{
	Size: 14,
})
txt := text.New(orig, text.NewAtlas(face, text.ASCII))

Atlas returns the underlying Text's Atlas containing all of the pre-drawn glyphs. The Atlas is also useful for getting values such as the recommended line height.

Bounds returns the bounding box of the text currently written to the Text excluding whitespace. If the Text is empty, a zero rectangle is returned.

BoundsOf returns the bounding box of s if it was to be written to the Text right now.

Clear removes all written text from the Text. The Dot field is reset to Orig.

Draw draws all text written to the Text to the provided Target. The text is transformed by the provided Matrix. This method is equivalent to calling DrawColorMask with nil color mask. If there's a lot of text written to the Text, changing a matrix or a color mask often might hurt performance. Consider using your Target's SetMatrix or SetColorMask methods if available.

DrawColorMask draws all text written to the Text to the provided Target. The text is transformed by the provided Matrix and masked by the provided color mask. If there's a lot of text written to the Text, changing a matrix or a color mask often might hurt performance. Consider using your Target's SetMatrix or SetColorMask methods if available.

Write writes a slice of bytes to the Text. This method never fails, always returns len(p), nil.

WriteByte writes a byte to the Text. This method never fails, always returns nil. Writing a multi-byte rune byte-by-byte is perfectly supported.

WriteRune writes a rune to the Text. This method never fails, always returns utf8.RuneLen(r), nil.

WriteString writes a string to the Text. This method never fails, always returns len(s), nil.
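The io.Writer integration described above is ordinary Go plumbing; here is a stdlib-only sketch of the same pattern (the lineBuffer type is invented for illustration and is not part of the pixel library):

```go
package main

import (
	"fmt"
	"strings"
)

// lineBuffer mimics how text.Text collects written bytes:
// it implements io.Writer, so the whole fmt.Fprint family works on it.
type lineBuffer struct {
	sb strings.Builder
}

// Write satisfies io.Writer; like Text.Write, it never fails.
func (b *lineBuffer) Write(p []byte) (int, error) {
	return b.sb.Write(p)
}

func (b *lineBuffer) String() string { return b.sb.String() }

func render() string {
	var txt lineBuffer
	fmt.Fprintf(&txt, "Hello, %s!", "world")
	return txt.String()
}

func main() {
	fmt.Println(render())
}
```

Because Text exposes the same interface, anything that knows how to write to an io.Writer — fmt, templates, encoders — can write into a Text without knowing about the pixel library at all.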
https://godoc.org/github.com/faiface/pixel/text
Photos module

Can someone demonstrate all the methods and functions of the photos module? I'm having trouble using create_image_asset(). I want to take a photo and save it to a particular album. After that I want the photo sent via email and then deleted from the album.

@ccc when I take a pic with capture_image(), how do I name it so I can pass it to create_image_asset() and then send it via a MIME email?

@resserone13 create a photo in camera roll via

import photos
import os
pil_image = photos.capture_image() # returns a PIL Image
path = 'new.jpg'
pil_image.save(path, format='JPEG')
photos.create_image_asset(path)
os.remove(path)

@resserone13 to send a captured photo via email, you don't need to save it first in the camera roll. Not tested:

import photos
pil_image = photos.capture_image() # returns a PIL Image

import smtplib # for sending an email
from email.mime.image import MIMEImage
from email.mime.multipart import MIMEMultipart
import io

# Create the container email message.
me = 'your email'
msg = MIMEMultipart()
msg['Subject'] = 'your subject'
msg['From'] = me
msg['To'] = me
msg.preamble = 'your text'

# attach photo to mail
buf = io.BytesIO()
pil_image.save(buf, format='JPEG')
photo_data = buf.getvalue()
img = MIMEImage(photo_data)
msg.attach(img)

# Send the email via our own SMTP server.
s = smtplib.SMTP('your smtp server')
s.sendmail(me, me, msg.as_string())
s.quit()

@cvp, Caution about leaving io.BytesIO() open after you are done with them. In a memory constrained environment like Pythonista and with large objects like images, running out of RAM can be a problem.

# attach photo to mail
buf = io.BytesIO()
pil_image.save(buf, format='JPEG')
photo_data = buf.getvalue()

# might be safer rewritten:
with io.BytesIO() as buf:
    pil_image.save(buf, format='JPEG')
    photo_data = buf.getvalue()

@ccc As usual, you're right and as usual, I forgot that... Thanks

@ccc @cvp thanks for @resserone13 and for me

@cvp @ccc I'm still fitting pieces of the code you suggested into my existing code.
Having a bit of trouble but going to try to work on it more. I have heard that the with keyword will open and close things. Thanks @cvp @ccc.

Thanks. I'm able to take a pic and send it via email. I used ...

#converts image to bytes and creates mime image for mime email.
with io.BytesIO() as buf:
    req_photo.save(buf, format='JPEG')
    sent_img = buf.getvalue()
    img_sent = MIMEImage(sent_img)

# And..
main_email.attach(img_sent)

I was using ... and I'm wondering if it's better to send the pic via email with:

attachment = open('/var/mobile/Media/DCIM/', 'rb',) as f:
#attachment = open('/var/mobile/Media/DCIM/', 'rb',)
#Adds attachment and encoding for attachment.
#part = MIMEBase(sent_img, 'png')
#part = MIMEBase('application', 'octet-stream')
#part.set.payload(attachment.read())
#encoders.encode_base64(part)
#part.add_header('Content-disposition', 'attachment: file' + sent_img)

@resserone13 Oups 😬 no idea
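For reference, the attachment flow discussed in this thread can be exercised entirely offline with the stdlib. The JPEG bytes below are fake placeholder data standing in for what photos.capture_image() plus PIL would produce, and no mail is actually sent:

```python
import io
from email.mime.image import MIMEImage
from email.mime.multipart import MIMEMultipart

# Placeholder bytes standing in for pil_image.save(buf, format='JPEG');
# a real JPEG would come from the camera inside Pythonista.
fake_jpeg = b"\xff\xd8\xff\xe0\x00\x10JFIF\x00" + b"\x00" * 8

with io.BytesIO() as buf:
    buf.write(fake_jpeg)          # where PIL would write the JPEG
    photo_data = buf.getvalue()   # copy the bytes out before the buffer closes

msg = MIMEMultipart()
msg["Subject"] = "test photo"
# Passing _subtype explicitly avoids relying on image-type sniffing.
msg.attach(MIMEImage(photo_data, _subtype="jpeg"))

part = msg.get_payload()[0]
print(part.get_content_type())                      # image/jpeg
print(part.get_payload(decode=True) == fake_jpeg)   # True
```

The last line shows the base64 round-trip is lossless, so whatever bytes you capture are exactly what the recipient's mail client decodes.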
https://forum.omz-software.com/topic/5975/photos-module/2
The urllib2 module defines the following functions:

urlopen(url[, data][, timeout])

Open the URL url, which can be either a string or a Request object. The optional timeout parameter specifies a timeout in seconds for blocking operations like the connection attempt; this works for HTTP, HTTPS, FTP and FTPS connections. This function returns a file-like object with two additional methods.

Changed in version 2.6: timeout was added.

Beginning in Python 2.3, a BaseHandler subclass may also change its handler_order member variable to modify its position in the handlers list.

The following exceptions are raised as appropriate.

OpenerDirector instances have the following methods. For add_handler(handler), handler should be an instance of BaseHandler. The following methods are searched, and added to the possible chains (note that HTTP errors are a special case).

The ProxyHandler will have a method protocol_open for every protocol which has a proxy in the proxies dictionary given in the constructor. The method will modify requests to go through the proxy, by calling request.set_proxy(), and call the next handler in the chain to actually execute the protocol.

>>> req = urllib2.Request(url='',
...                       data='This data is passed to stdin of the CGI')
>>> f = urllib2.urlopen(req)
>>> print f.read()

proxy_handler = urllib2.ProxyHandler({'http': ''})
proxy_auth_handler = urllib2.ProxyBasicAuthHandler()
proxy_auth_handler.add_password('realm', 'host', 'username', 'password')
opener = urllib2.build_opener(proxy_handler, proxy_auth_handler)
# This time, rather than install the OpenerDirector, we use it directly:
opener.open('')

Adding HTTP headers: use the headers argument to the Request constructor, or add them afterwards with Request.add_header().
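This page documents Python 2; in Python 3 the module was split into urllib.request and urllib.error. A small offline sketch of building a Request with custom headers — no network access is performed, and the URL is a placeholder:

```python
from urllib.request import Request

# Construct a request without opening it; urlopen(req) would actually send it.
req = Request(
    "http://www.example.com/",              # placeholder URL
    data=b"This data would be POSTed",
    headers={"User-Agent": "my-script/0.1"},
)

# Header keys are normalized via str.capitalize(), hence "User-agent".
print(req.get_method())                      # POST, because data is set
print(req.get_header("User-agent"))          # my-script/0.1
```

The same Request/urlopen split exists in urllib2; only the module names changed between Python 2 and 3.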
https://docs.python.org/2.6/library/urllib2.html
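In Python 3 the urllib2 module was split into urllib.request and urllib.error; the same handler/opener pattern, sketched with the Python 3 names (the proxy address, URL, and header values here are placeholders, not from the docs above):

```python
import urllib.request

# Build an opener whose handler chain routes HTTP through a proxy
# (the Python 2 equivalents are urllib2.ProxyHandler / urllib2.build_opener).
proxy_handler = urllib.request.ProxyHandler({'http': 'http://proxy.example.com:8080'})
opener = urllib.request.build_opener(proxy_handler)

# Adding HTTP headers: pass them to the Request constructor or call add_header().
req = urllib.request.Request('http://www.example.com/',
                             headers={'User-Agent': 'demo/1.0'})
req.add_header('Accept', 'text/plain')

# opener.open(req) would actually perform the request; here we only build it.
print(req.get_header('User-agent'))
```

Note that Request normalizes header names by capitalizing them, which is why the lookup key is 'User-agent'.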
Error: SCENARIO #1: Namespaces don't get displayed in the WSDL created by a WS Provider. The WS Provider is created using an XML Schema which was generated from an XML file (the 'Convert to XSD/DTD' option is checked in the XML Schema). This schema also cannot be uploaded in Data Mapper; it gives the error message: error: 'arr:string' is not a valid NCName. SCENARIO #2: If the WS Provider is created using an XML Schema generated from the same XML file, but with the 'Convert to XSD/DTD' option unchecked, it throws a NullPointerException with the message 'Error in updating web service publishing parameters'. Cause: This is a known issue in Adeptia: if the XML contains a namespace, the XSD generated from it will be invalid. Recommendation: Use an XSD, not a sample XML file, when creating an XML Schema, in order to avoid these namespace issues.
https://support.adeptia.com/hc/en-us/articles/207877653-Namespaces-don-t-appear-in-WSDL
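For context on the error message: per the XML Namespaces specification, an NCName is an XML name containing no colon, so a namespace-prefixed token like arr:string can never appear where an NCName is required. A rough illustrative check (a simplified, ASCII-only approximation of the real production, for demonstration only):

```python
import re

# Simplified NCName check: a name with no colon. The real XML Namespaces
# production also permits many non-ASCII letters; this sketch ignores that.
_NCNAME = re.compile(r'^[A-Za-z_][A-Za-z0-9_.\-]*$')

def is_ncname(s):
    return bool(_NCNAME.match(s))

print(is_ncname('string'))       # True: a plain name is a valid NCName
print(is_ncname('arr:string'))   # False: the colon makes it invalid
```

This is exactly why an XSD generated with the prefix left embedded in a name fails validation.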
In chapter 1 we stressed that computer science deals with imperative (``how to'') knowledge, whereas mathematics deals with declarative (``what is'') knowledge. We saw one such example in section 3.3.5, where the objects of computation were arithmetic constraints. In a constraint system the direction and the order of computation are not so well specified; in carrying out a computation the system must therefore provide more detailed ``how to'' knowledge than would be the case with an ordinary arithmetic computation. This does not mean, however, that the user is released altogether from the responsibility of providing imperative knowledge. There are many constraint networks that implement the same set of constraints, and the user must choose from the set of mathematically equivalent networks a suitable network to specify a particular computation. The nondeterministic program evaluator of section 4.3 also moves away from the view that programming is about constructing algorithms for computing unidirectional functions. In a nondeterministic language, expressions can have more than one value, and, as a result, the computation is dealing with relations rather than with single-valued functions. Logic programming extends this idea by combining a relational vision of programming with a powerful kind of symbolic pattern matching called unification. This approach, when it works, can be a very powerful way to write programs. Part of the power comes from the fact that a single ``what is'' fact can be used to solve a number of different problems that would have different ``how to'' components. As an example, consider the append operation, which takes two lists as arguments and combines their elements to form a single list. In a procedural language such as Lisp, we could define append in terms of the basic list constructor cons, as we did in section 2.2.1: (define (append x y) (if (null?
x) y (cons (car x) (append (cdr x) y)))) This procedure can be regarded as a translation into Lisp of the following two rules, the first of which covers the case where the first list is empty and the second of which handles the case of a nonempty list, which is a cons of two parts: Using the append procedure, we can answer questions such as Find the append of (a b) and (c d). But the same two rules are also sufficient for answering the following sorts of questions, which the procedure can't answer: Find a list y that appends with (a b) to produce (a b c d). Find all x and y that append to form (a b c d). In a logic programming language, the programmer writes an append ``procedure'' by stating the two rules about append given above. ``How to'' knowledge is provided automatically by the interpreter to allow this single pair of rules to be used to answer all three types of questions about append. Contemporary logic programming languages (including the one we implement here) have substantial deficiencies, in that their general ``how to'' methods can lead them into spurious infinite loops or other undesirable behavior. Logic programming is an active field of research in computer science. Earlier in this chapter we explored the technology of implementing interpreters and described the elements that are essential to an interpreter for a Lisp-like language (indeed, to an interpreter for any conventional language). Now we will apply these ideas to discuss an interpreter for a logic programming language. We call this language the query language, because it is very useful for retrieving information from data bases by formulating queries, or questions, expressed in the language.
Even though the query language is very different from Lisp, we will find it convenient to describe the language in terms of the same general framework we have been using all along: as a collection of primitive elements, together with means of combination that enable us to combine simple elements to create more complex elements and means of abstraction that enable us to regard complex elements as single conceptual units. An interpreter for a logic programming language is considerably more complex than an interpreter for a language like Lisp. Nevertheless, we will see that our query-language interpreter contains many of the same elements found in the interpreter of section 4.1. In particular, there will be an ``eval'' part that classifies expressions according to type and an ``apply'' part that implements the language's abstraction mechanism (procedures in the case of Lisp, and rules in the case of logic programming). Also, a central role is played in the implementation by a frame data structure, which determines the correspondence between symbols and their associated values. One additional interesting aspect of our query-language implementation is that we make substantial use of streams, which were introduced in chapter 3. Logic programming excels in providing interfaces to data bases for information retrieval. The query language we shall implement in this chapter is designed to be used in this way. In order to illustrate what the query system does, we will show how it can be used to manage the data base of personnel records for Microshaft, a thriving high-technology company in the Boston area. The language provides pattern-directed access to personnel information and can also take advantage of general rules in order to make logical deductions. Formulate compound queries that retrieve the following information: a. the names of all people who are supervised by Ben Bitdiddle, together with their addresses; b.
all people whose salary is less than Ben Bitdiddle's, together with their salary and Ben Bitdiddle's salary; c. all people who are supervised by someone who is not in the computer division, together with the supervisor's name and job. In addition to primitive queries and compound queries, the query language provides means for abstracting queries. These are given by rules. The rule (rule (lives-near ?person-1 ?person-2) (and (address ?person-1 (?town . ?rest-1)) (address ?person-2 (?town . ?rest-2)) (not (same ?person-1 ?person-2)))) specifies that two people live near each other if they live in the same town. The final not clause prevents the rule from saying that all people live near themselves. The same relation is defined by a very simple rule: (rule (same ?x ?x)) The following rule declares that a person is a ``wheel'' in an organization if he supervises someone who is in turn a supervisor: (rule (wheel ?person) (and (supervisor ?middle-manager ?person) (supervisor ?x ?middle-manager))) The general form of a rule is (rule <conclusion> <body>) where <conclusion> is a pattern and <body> is any query. The query (lives-near ?x (Bitdiddle Ben)) results in (lives-near (Reasoner Louis) (Bitdiddle Ben)) (lives-near (Aull DeWitt) (Bitdiddle Ben)) To find all computer programmers who live near Ben Bitdiddle, we can ask (and (job ?x (computer programmer)) (lives-near ?x (Bitdiddle Ben))) As in the case of compound procedures, rules can be used as parts of other rules (as we saw with the lives-near rule above) or even be defined recursively. For instance, the rule (rule (outranked-by ?staff-person ?boss) (or (supervisor ?staff-person ?boss) (and (supervisor ?staff-person ?middle-manager) (outranked-by ?middle-manager ?boss)))) says that a staff person is outranked by a boss in the organization if the boss is the person's supervisor or (recursively) if the person's supervisor is outranked by the boss. Exercise 4.57.
Define a rule that says that person 1 can replace person 2 if either person 1 does the same job as person 2 or someone who does person 1's job can also do person 2's job, and if person 1 and person 2 are not the same person. Using your rule, give queries that find the following: a.. Ben Bitdiddle has missed one meeting too many. Fearing that his habit of forgetting meetings could cost him his job, Ben decides to do something about it. He adds all the weekly meetings of the firm to the Microshaft data base by asserting the following: (meeting accounting (Monday 9am)) (meeting administration (Monday 10am)) (meeting computer (Wednesday 3pm)) (meeting administration (Friday 1pm)) Each of the above assertions is for a meeting of an entire division. Ben also adds an entry for the company-wide meeting that spans all the divisions. All of the company's employees attend this meeting. (meeting whole-company (Wednesday 4pm)) a. On Friday morning, Ben wants to query the data base for all the meetings that occur that day. What query should he use? b. Alyssa P. Hacker is unimpressed. She thinks it would be much more useful to be able to ask for her meetings by specifying her name. So she designs a rule that says that a person's meetings include all whole-company meetings plus all meetings of that person's division. Fill in the body of Alyssa's rule. (rule (meeting-time ?person ?day-and-time) <rule-body>) c. Alyssa arrives at work on Wednesday morning and wonders what meetings she has to attend that day. Having defined the above rule, what query should she make to find this out? Exercise 4.60. By giving the query (lives-near ?person (Hacker Alyssa P)) Alyssa P. Hacker is able to find people who live near her, with whom she can ride to work. 
On the other hand, when she tries to find all pairs of people who live near each other by querying (lives-near ?person-1 ?person-2) she notices that each pair of people who live near each other is listed twice; for example, (lives-near (Hacker Alyssa P) (Fect Cy D)) (lives-near (Fect Cy D) (Hacker Alyssa P)) Why does this happen? Is there a way to find a list of people who live near each other, in which each pair appears only once? Explain. Consider the append example described at the beginning of section 4.4. As we said, append can be characterized by the following two rules: To express this in our query language, we define two rules for a relation (append-to-form x y z) which we can interpret to mean ``x and y append to form z'': (rule (append-to-form () ?y ?y)) (rule (append-to-form (?u . ?v) ?y (?u . ?z)) (append-to-form ?v ?y ?z)) Given these rules, we can ask: ;;; Query input: (append-to-form (a b) (c d) ?z) ;;; Query results: (append-to-form (a b) (c d) (a b c d)) What is more striking, we can use the same rules to ask the question ``Which list, when appended to (a b), yields (a b c d)?'' This is done as follows: ;;; Query input: (append-to-form (a b) ?y (a b c d)) ;;; Query results: (append-to-form (a b) (c d) (a b c d)) We can also ask for all pairs of lists that append to form (a b c d): ;;; Query input: (append-to-form ?x ?y (a b c d)) ;;; Query results: (append-to-form () (a b c d) (a b c d)) (append-to-form (a) (b c d) (a b c d)) (append-to-form (a b) (c d) (a b c d)) (append-to-form (a b c) (d) (a b c d)) (append-to-form (a b c d) () (a b c d)) Although the system works impressively in the append case, its general methods may break down in more complex cases, as we will see in section 4.4.3. The following data base (see Genesis 4) traces the genealogy of the descendants of Ada back to Adam, by way of Cain. (See exercise 4.69 for some rules to deduce more complicated relationships.) In section 4.4.4 we will present an implementation of the query interpreter as a collection of procedures. In this section we give an overview that explains the general structure of the system independent of low-level implementation details.
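The relational character of append-to-form — one definition that runs forward, runs backward, and enumerates all splits — can be imitated in Python by generating solutions instead of computing a single value. This generator is my illustrative sketch, not the query-language mechanism itself; None stands in for an unknown argument:

```python
def append_to_form(x, y, z):
    """Yield (x, y, z) triples consistent with the given arguments.

    Each argument is either a tuple (known) or None (unknown), mirroring
    the three ways the append-to-form rules can be queried.
    """
    if x is not None and y is not None:
        yield (x, y, x + y)                      # forward: compute z
    elif z is not None:
        for i in range(len(z) + 1):
            cx, cy = z[:i], z[i:]                # backward: try every split of z
            if x is not None and x != cx:
                continue
            if y is not None and y != cy:
                continue
            yield (cx, cy, z)
    # (all three unknown would be an infinite enumeration; omitted here)

print(list(append_to_form(('a', 'b'), ('c', 'd'), None)))
print(list(append_to_form(('a', 'b'), None, ('a', 'b', 'c', 'd'))))
print(len(list(append_to_form(None, None, ('a', 'b', 'c', 'd')))))
```

The third query yields the same five splits of (a b c d) that the query system lists above.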
After describing the implementation of the interpreter, we will be in a position to understand some of its limitations and some of the subtle ways in which the query language's logical operations differ from the operations of mathematical logic. It should be apparent that the query evaluator must perform some kind of search in order to match queries against facts and rules in the data base. One way to do this would be to implement the query system as a nondeterministic program, using the amb evaluator of section 4.3 (see exercise 4.78). Another possibility is to manage the search with the aid of streams. Our implementation follows this second approach. The query system is organized around two central operations called pattern matching and unification. We first describe pattern matching and explain how this operation, together with the organization of information in terms of streams of frames, enables us to implement both simple and compound queries. We next discuss unification, a generalization of pattern matching needed to implement rules. Finally, we show how the entire query interpreter fits together through a procedure that classifies expressions in a manner analogous to the way eval classifies expressions for the interpreter described in section 4.1. A pattern matcher is a program that tests whether some datum fits a specified pattern. For example, the data list ((a b) c (a b)) matches the pattern (?x c ?x) with the pattern variable ?x bound to (a b). The same data list matches the pattern (?x ?y ?z) with ?x and ?z both bound to (a b) and ?y bound to c. It also matches the pattern ((?x ?y) c (?x ?y)) with ?x bound to a and ?y bound to b. However, it does not match the pattern (?x a ?y), since that pattern specifies a list whose second element is the symbol a. The pattern matcher used by the query system takes as inputs a pattern, a datum, and a frame that specifies bindings for various pattern variables. 
It checks whether the datum matches the pattern in a way that is consistent with the bindings already in the frame. If so, it returns the given frame augmented by any bindings that may have been determined by the match. Otherwise, it indicates that the match has failed. For example, using the pattern (?x ?y ?x) to match (a b a) given an empty frame will return a frame specifying that ?x is bound to a and ?y is bound to b. Trying the match with the same pattern, the same datum, and a frame specifying that ?y is bound to a will fail. Trying the match with the same pattern, the same datum, and a frame in which ?y is bound to b and ?x is unbound will return the given frame augmented by a binding of ?x to a. The pattern matcher is all the mechanism that is needed to process simple queries that don't involve rules. For instance, to process the query (job ?x (computer programmer)) we scan through all assertions in the data base and select those that match the pattern with respect to an initially empty frame. For each match we find, we use the frame returned by the match to instantiate the pattern with a value for ?x. The testing of patterns against frames is organized through the use of streams. Given a single frame, the matching process runs through the data-base entries one by one. For each data-base entry, the matcher generates either a special symbol indicating that the match has failed or an extension to the frame. The results for all the data-base entries are collected into a stream, which is passed through a filter to weed out the failures. The result is a stream of all the frames that extend the given frame via a match to some assertion in the data base. In our system, a query takes an input stream of frames and performs the above matching operation for every frame in the stream, as indicated in figure 4.4.
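The matcher just described can be sketched in Python, with patterns as nested lists, variables as strings beginning with ?, and frames as dicts (an illustrative sketch, not the book's Scheme code):

```python
FAILED = 'failed'

def is_var(x):
    return isinstance(x, str) and x.startswith('?')

def pattern_match(pat, dat, frame):
    # Check dat against pat, consistently with bindings already in frame.
    if frame == FAILED:
        return FAILED
    if is_var(pat):
        return extend_if_consistent(pat, dat, frame)
    if pat == dat:
        return frame
    if isinstance(pat, list) and isinstance(dat, list) and len(pat) == len(dat):
        for p, d in zip(pat, dat):
            frame = pattern_match(p, d, frame)
        return frame
    return FAILED

def extend_if_consistent(var, dat, frame):
    if var in frame:
        # An existing binding must itself match the datum.
        return pattern_match(frame[var], dat, frame)
    new = dict(frame)
    new[var] = dat
    return new
```

Matching (?x ?y ?x) against (a b a) in an empty frame binds ?x to a and ?y to b, and the same match fails in a frame where ?y is already bound to a, just as in the examples above.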
That is, for each frame in the input stream, the query generates a new stream consisting of all extensions to that frame by matches to assertions in the data base. All these streams are then combined to form one huge stream, which contains all possible extensions of every frame in the input stream. This stream is the output of the query. To answer a simple query, we use the query with an input stream consisting of a single empty frame. The resulting output stream contains all extensions to the empty frame (that is, all answers to our query). This stream of frames is then used to generate a stream of copies of the original query pattern with the variables instantiated by the values in each frame, and this is the stream that is finally printed. The real elegance of the stream-of-frames implementation is evident when we deal with compound queries. The processing of compound queries makes use of the ability of our matcher to demand that a match be consistent with a specified frame. For example, to handle the and of two queries, such as (and (can-do-job ?x (computer programmer trainee)) (job ?person ?x)) (informally, ``Find all people who can do the job of a computer programmer trainee''), we first find all entries that match the pattern (can-do-job ?x (computer programmer trainee)) This produces a stream of frames, each of which contains a binding for ?x. Then for each frame in the stream we find all entries that match (job ?person ?x) in a way that is consistent with the given binding for ?x. Each such match will produce a frame containing bindings for ?x and ?person. The and of two queries can be viewed as a series combination of the two component queries, as shown in figure 4.5. The frames that pass through the first query filter are filtered and further extended by the second query. Figure 4.6 shows the analogous method for computing the or of two queries as a parallel combination of the two component queries. 
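The series/parallel wiring of figures 4.5 and 4.6 can be sketched with Python generators, treating a query as a function from a stream of frames to a stream of frames. The names conjoin and disjoin echo the book; the toy bind_from query is my invention for illustration:

```python
from itertools import chain

def conjoin(q1, q2):
    # 'and' = series combination: q2 extends the frames q1 produced.
    return lambda frames: q2(q1(frames))

def disjoin(q1, q2):
    # 'or' = parallel combination: run both on the same input, merge outputs.
    def run(frames):
        frames = list(frames)
        return chain(q1(iter(frames)), q2(iter(frames)))
    return run

def bind_from(var, values):
    # Toy 'simple query': extend each frame with a consistent binding for var.
    def run(frames):
        for f in frames:
            for v in values:
                if var in f:
                    if f[var] == v:
                        yield f        # existing binding must agree
                else:
                    yield {**f, var: v}
    return run
```

Feeding a single empty frame into a conjoin of two bind_from queries keeps only the bindings consistent with both, exactly the series behavior described above.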
The input stream of frames is extended separately by each query. The two resulting streams are then merged to produce the final output stream. Even from this high-level description, it is apparent that the processing of compound queries can be slow. For example, since a query may produce more than one output frame for each input frame, and each query in an and gets its input frames from the previous query, an and query could, in the worst case, have to perform a number of matches that is exponential in the number of queries (see exercise 4.76). Though systems for handling only simple queries are quite practical, dealing with complex queries is extremely difficult. From the stream-of-frames viewpoint, the not of some query acts as a filter that removes all frames for which the query can be satisfied. For instance, given the pattern (not (job ?x (computer programmer))) we attempt, for each frame in the input stream, to produce extension frames that satisfy (job ?x (computer programmer)). We remove from the input stream all frames for which such extensions exist. The result is a stream consisting of only those frames in which the binding for ?x does not satisfy (job ?x (computer programmer)). For example, in processing the query (and (supervisor ?x ?y) (not (job ?x (computer programmer)))) the first clause will generate frames with bindings for ?x and ?y. The not clause will then filter these by removing all frames in which the binding for ?x satisfies the restriction that ?x is a computer programmer. In order to handle rules in the query language, we must be able to find the rules whose conclusions match a given query pattern. Rule conclusions are like assertions except that they can contain variables, so we will need a generalization of pattern matching -- called unification -- in which both the ``pattern'' and the ``datum'' may contain variables.
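The filtering behavior of not described above can be sketched as a higher-order function on frame streams. This is an illustrative sketch; the inner "query" here is just a predicate-based filter standing in for a real database match:

```python
def negate(query):
    # 'not' acts as a filter: keep only frames the query cannot satisfy.
    def run(frames):
        for frame in frames:
            if not list(query([frame])):
                yield frame
    return run

# Toy query: passes a frame through only when its ?x binding is even.
def x_is_even(frames):
    return [f for f in frames if f['?x'] % 2 == 0]

kept = list(negate(x_is_even)([{'?x': 1}, {'?x': 2}, {'?x': 3}]))
print(kept)   # only the frames whose ?x fails the inner query survive
```

Note that, as in the query system, this only filters frames that already carry bindings; it never generates new ones, which is the root of the not quirk discussed later.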
A unifier takes two patterns, each containing constants and variables, and determines whether it is possible to assign values to the variables that will make the two patterns equal. If so, it returns a frame containing these bindings. For example, unifying (?x a ?y) and (?y ?z a) will specify a frame in which ?x, ?y, and ?z must all be bound to a. On the other hand, unifying (?x ?y a) and (?x b ?y) will fail, because there is no value for ?y that can make the two patterns equal. (For the second elements of the patterns to be equal, ?y would have to be b; however, for the third elements to be equal, ?y would have to be a.) The unifier used in the query system, like the pattern matcher, takes a frame as input and performs unifications that are consistent with this frame. The unification algorithm is the most technically difficult part of the query system. With complex patterns, performing unification may seem to require deduction. To unify (?x ?x) and ((a ?y c) (a b ?z)), for example, the algorithm must infer that ?x should be (a b c), ?y should be b, and ?z should be c. We may think of this process as solving a set of equations among the pattern components. In general, these are simultaneous equations, which may require substantial manipulation to solve. For example, unifying (?x ?x) and ((a ?y c) (a b ?z)) may be thought of as specifying the simultaneous equations ?x = (a ?y c) ?x = (a b ?z) These equations imply that (a ?y c) = (a b ?z) which in turn implies that a = a, ?y = b, c = ?z, and hence that ?x = (a b c) In a successful pattern match, all pattern variables become bound, and the values to which they are bound contain only constants. This is also true of all the examples of unification we have seen so far. In general, however, a successful unification may not completely determine the variable values; some variables may remain unbound and others may be bound to values that contain variables. Consider the unification of (?x a) and ((b ?y) ?z).
We can deduce that ?x = (b ?y) and a = ?z, but we cannot further solve for ?x or ?y. The unification doesn't fail, since it is certainly possible to make the two patterns equal by assigning values to ?x and ?y. Since this match in no way restricts the values ?y can take on, no binding for ?y is put into the result frame. The match does, however, restrict the value of ?x. Whatever value ?y has, ?x must be (b ?y). A binding of ?x to the pattern (b ?y) is thus put into the frame. If a value for ?y is later determined and added to the frame (by a pattern match or unification that is required to be consistent with this frame), the previously bound ?x will refer to this value. Unification is the key to the component of the query system that makes inferences from rules. To see how this is accomplished, consider processing a query that involves applying a rule, such as (lives-near ?x (Hacker Alyssa P)) To process this query, we first use the ordinary pattern-match procedure described above to see if there are any assertions in the data base that match this pattern. (There will not be any in this case, since our data base includes no direct assertions about who lives near whom.) The next step is to attempt to unify the query pattern with the conclusion of each rule. We find that the pattern unifies with the conclusion of the rule (rule (lives-near ?person-1 ?person-2) (and (address ?person-1 (?town . ?rest-1)) (address ?person-2 (?town . ?rest-2)) (not (same ?person-1 ?person-2)))) resulting in a frame specifying that ?person-2 is bound to (Hacker Alyssa P) and that ?x should be bound to (have the same value as) ?person-1. Now, relative to this frame, we evaluate the compound query given by the body of the rule. Successful matches will extend this frame by providing a binding for ?person-1, and consequently a value for ?x, which we can use to instantiate the original query pattern.
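A Python sketch of the unifier (variables as ?-strings, frames as dicts, my own rendering rather than the book's Scheme). The depends_on check prevents binding a variable to a pattern that contains that same variable, which would have no finite solution:

```python
def is_var(x):
    return isinstance(x, str) and x.startswith('?')

def unify(p1, p2, frame):
    # Returns an extended frame (dict) or None on failure.
    if frame is None:
        return None
    if p1 == p2:
        return frame
    if is_var(p1):
        return extend_if_possible(p1, p2, frame)
    if is_var(p2):
        return extend_if_possible(p2, p1, frame)
    if isinstance(p1, list) and isinstance(p2, list) and len(p1) == len(p2):
        for a, b in zip(p1, p2):
            frame = unify(a, b, frame)
            if frame is None:
                return None
        return frame
    return None

def extend_if_possible(var, val, frame):
    if var in frame:
        return unify(frame[var], val, frame)
    if is_var(val) and val in frame:
        return unify(var, frame[val], frame)
    if depends_on(val, var, frame):
        return None                      # occurs-style check: ?y = (a . ?y) has no finite solution
    new = dict(frame)
    new[var] = val
    return new

def depends_on(exp, var, frame):
    # Does exp contain var, chasing bindings through the frame?
    if is_var(exp):
        if exp == var:
            return True
        return exp in frame and depends_on(frame[exp], var, frame)
    if isinstance(exp, list):
        return any(depends_on(e, var, frame) for e in exp)
    return False
```

Running the text's examples: unifying (?x ?y a) with (?x b ?y) fails, while unifying (?x ?x) with ((a ?y c) (a b ?z)) binds ?y to b and ?z to c, with ?x bound to a pattern that mentions ?y, just as described above.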
In general, the query evaluator uses the following method to apply a rule when trying to establish a query pattern in a frame that specifies bindings for some of the pattern variables: unify the query with the conclusion of the rule to form, if successful, an extension of the original frame; then, relative to the extended frame, evaluate the query formed by the body of the rule. Notice how similar this is to the method for applying a procedure in the eval/apply evaluator for Lisp: bind the procedure's parameters to its arguments to form a frame that extends the original procedure environment; then, relative to the extended environment, evaluate the expression formed by the body of the procedure. The similarity between the two evaluators should come as no surprise. Just as procedure definitions are the means of abstraction in Lisp, rule definitions are the means of abstraction in the query language. In each case, we unwind the abstraction by creating appropriate bindings and evaluating the rule or procedure body relative to these. We saw earlier in this section how to evaluate simple queries in the absence of rules. Now that we have seen how to apply rules, we can describe how to evaluate simple queries by using both rules and assertions. Given the query pattern and a stream of frames, we produce, for each frame in the input stream, two streams: a stream of extended frames obtained by matching the pattern against all assertions in the data base (using the pattern matcher), and a stream of extended frames obtained by applying all possible rules (using the unifier). Appending these two streams produces a stream that consists of all the ways that the given pattern can be satisfied consistent with the original frame. These streams (one for each frame in the input stream) are now all combined to form one large stream, which therefore consists of all the ways that any of the frames in the original input stream can be extended to produce a match with the given pattern. Despite the complexity of the underlying matching operations, the system is organized much like an evaluator for any language. The procedure that coordinates the matching operations is called qeval, and it plays a role analogous to that of the eval procedure for Lisp. Qeval takes as inputs a query and a stream of frames. Its output is a stream of frames, corresponding to successful matches to the query pattern, that extend some frame in the input stream, as indicated in figure 4.4. Like eval, qeval classifies the different types of expressions (queries) and dispatches to an appropriate procedure for each.
There is a procedure for each special form (and, or, not, and lisp-value) and one for simple queries. The driver loop, which is analogous to the driver-loop procedure for the other evaluators in this chapter, reads queries from the terminal. For each query, it calls qeval with the query and a stream that consists of a single empty frame. This will produce the stream of all possible matches (all possible extensions to the empty frame). For each frame in the resulting stream, it instantiates the original query using the values of the variables found in the frame. This stream of instantiated queries is then printed. The driver also checks for the special command assert!, which signals that the input is not a query but rather an assertion or rule to be added to the data base. For instance, (assert! (job (Bitdiddle Ben) (computer wizard))) (assert! (rule (wheel ?person) (and (supervisor ?middle-manager ?person) (supervisor ?x ?middle-manager)))) The means of combination used in the query language may at first seem identical to the operations and, or, and not of mathematical logic, and the application of query-language rules is in fact accomplished through a legitimate method of inference. This identification of the query language with mathematical logic is not really valid, though: whether the system finds answers can depend on seemingly minor changes to the rules (see exercise 4.64) or on low-level details concerning the order in which the system processes queries. Another quirk in the query system concerns not: because not acts as a filter on already-bound frames rather than as logical negation, reordering the clauses of a query against the data base of section 4.4.1 can change its result; see exercise 4.77. The examples that follow use the personnel data base of section 4.4.1. The driver loop for the query system repeatedly reads input expressions. If the expression is a rule or assertion to be added to the data base, then the information is added. Otherwise the expression is assumed to be a query. The driver passes this query to the evaluator qeval together with an initial frame stream consisting of a single empty frame. The result of the evaluation is a stream of frames generated by satisfying the query with variable values found in the data base.
These frames are used to form a new stream consisting of copies of the original query in which the variables are instantiated with values supplied by the stream of frames, and this final stream is printed at the terminal: (define input-prompt ";;; Query input:") (define output-prompt ";;; Query results:") (define (query-driver-loop) (prompt-for-input input-prompt) (let ((q (query-syntax-process (read)))) (cond ((assertion-to-be-added? q) (add-rule-or-assertion! (add-assertion-body q)) (newline) (display "Assertion added to data base.") (query-driver-loop)) (else (newline) (display output-prompt) (display-stream (stream-map (lambda (frame) (instantiate q frame (lambda (v f) (contract-question-mark v)))) (qeval q (singleton-stream '())))) (query-driver-loop))))) Here, as in the other evaluators in this chapter, we use an abstract syntax for the expressions of the query language. The implementation of the expression syntax, including the predicate assertion-to-be-added? and the selector add-assertion-body, is given in section 4.4.4.7. Add-rule-or-assertion! is defined in section 4.4.4.5. Before doing any processing on an input expression, the driver loop transforms it syntactically into a form that makes the processing more efficient. This involves changing the representation of pattern variables. When the query is instantiated, any variables that remain unbound are transformed back to the input representation before being printed. These transformations are performed by the two procedures query-syntax-process and contract-question-mark (section 4.4.4.7). To instantiate an expression, we copy it, replacing any variables in the expression by their values in a given frame. The values are themselves instantiated, since they could contain variables (for example, if ?x in exp is bound to ?y as the result of unification and ?y is in turn bound to 5). The action to take if a variable cannot be instantiated is given by a procedural argument to instantiate. 
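The instantiation behavior just described can be sketched in Python (frames as dicts, variables as ?-strings; a hedged sketch of mine, not the book's code): copy the expression, chase bindings through the frame, and delegate unbound variables to a handler.

```python
def is_var(x):
    return isinstance(x, str) and x.startswith('?')

def instantiate(exp, frame, unbound_var_handler):
    # Copy exp, replacing variables by their (recursively instantiated)
    # values; values may themselves contain variables, hence the recursion.
    def copy(e):
        if is_var(e):
            if e in frame:
                return copy(frame[e])
            return unbound_var_handler(e, frame)
        if isinstance(e, list):
            return [copy(x) for x in e]
        return e
    return copy(exp)
```

For instance, if ?x is bound to ?y and ?y to 5, instantiating (job ?x) produces (job 5), matching the chained-binding example in the text.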
(define (instantiate exp frame unbound-var-handler) (define (copy exp) (cond ((var? exp) (let ((binding (binding-in-frame exp frame))) (if binding (copy (binding-value binding)) (unbound-var-handler exp frame)))) ((pair? exp) (cons (copy (car exp)) (copy (cdr exp)))) (else exp))) (copy exp)) The procedures that manipulate bindings are defined in section 4.4.4.8. The syntax procedures of section 4.4.4.7 implement the abstract syntax of the special forms. The output streams for the various disjuncts of the or are computed separately and merged using the interleave-delayed procedure from section 4.4.4.6. (See exercises 4.71 and 4.72.) Not is handled by the method outlined in section 4.4.2. Thus, as pattern-match recursively compares cars and cdrs of a data list and a pattern that had a dot, it eventually matches the variable after the dot (which is a cdr of the pattern) against a sublist of the data list, binding the variable to that list. For example, matching the pattern (computer . ?type) against (computer programmer trainee) will match ?type against the list (programmer trainee). One important problem in designing logic programming languages is that of arranging things so that as few irrelevant data-base entries as possible will be examined in checking a given pattern. In our system, in addition to storing all assertions in one big stream, we store all assertions whose cars are constant symbols in separate streams, in a table indexed by the symbol. To fetch an assertion that may match a pattern, we first check to see if the car of the pattern is a constant symbol. If so, we return (to be tested using the matcher) all the stored assertions that have the same car. If the pattern's car is not a constant symbol, we return all the stored assertions. Cleverer methods could also take advantage of information in the frame, or try also to optimize the case where the car of the pattern is not a constant symbol.
We avoid building our criteria for indexing (using the car, handling only the case of constant symbols) into the program; instead we call on predicates and selectors that embody our criteria.

(define THE-ASSERTIONS the-empty-stream)

(define (fetch-assertions pattern frame)
  (if (use-index? pattern)
      (get-indexed-assertions pattern)
      (get-all-assertions)))

(define (get-all-assertions) THE-ASSERTIONS)

(define (get-indexed-assertions pattern)
  (get-stream (index-key-of pattern) 'assertion-stream))

Get-stream looks up a stream in the table and returns an empty stream if nothing is stored there.

(define (get-stream key1 key2)
  (let ((s (get key1 key2)))
    (if s s the-empty-stream)))

Rules are stored similarly, using the car of the rule conclusion. Rule conclusions are arbitrary patterns, however, so they differ from assertions in that they can contain variables. A pattern whose car is a constant symbol can match rules whose conclusions start with a variable as well as rules whose conclusions have the same car. Thus, when fetching rules that might match a pattern whose car is a constant symbol we fetch all rules whose conclusions start with a variable as well as those whose conclusions have the same car as the pattern. For this purpose we store all rules whose conclusions start with a variable in a separate stream in our table, indexed by the symbol ?.

(define THE-RULES the-empty-stream)

(define (fetch-rules pattern frame)
  (if (use-index? pattern)
      (get-indexed-rules pattern)
      (get-all-rules)))

(define (get-all-rules) THE-RULES)

(define (get-indexed-rules pattern)
  (stream-append
   (get-stream (index-key-of pattern) 'rule-stream)
   (get-stream '? 'rule-stream)))

Add-rule-or-assertion! is used by query-driver-loop to add assertions and rules to the data base. Each item is stored in the index, if appropriate, and in a stream of all assertions or rules in the data base.

(define (add-rule-or-assertion! assertion)
  (if (rule? assertion)
      (add-rule! assertion)
      (add-assertion! assertion)))

(define (add-assertion! assertion)
  (store-assertion-in-index assertion)
  (let ((old-assertions THE-ASSERTIONS))
    (set! THE-ASSERTIONS
          (cons-stream assertion old-assertions))
    'ok))

(define (add-rule! rule)
  (store-rule-in-index rule)
  (let ((old-rules THE-RULES))
    (set! THE-RULES (cons-stream rule old-rules))
    'ok))

To actually store an assertion or a rule, we check to see if it can be indexed. If so, we store it in the appropriate stream.

(define (store-assertion-in-index assertion)
  (if (indexable? assertion)
      (let ((key (index-key-of assertion)))
        (let ((current-assertion-stream
               (get-stream key 'assertion-stream)))
          (put key
               'assertion-stream
               (cons-stream assertion
                            current-assertion-stream))))))

(define (store-rule-in-index rule)
  (let ((pattern (conclusion rule)))
    (if (indexable? pattern)
        (let ((key (index-key-of pattern)))
          (let ((current-rule-stream
                 (get-stream key 'rule-stream)))
            (put key
                 'rule-stream
                 (cons-stream rule
                              current-rule-stream)))))))

The following procedures define how the data-base index is used. A pattern (an assertion or a rule conclusion) will be stored in the table if it starts with a variable or a constant symbol.

(define (indexable? pat)
  (or (constant-symbol? (car pat))
      (var? (car pat))))

The key under which a pattern is stored in the table is either ? (if it starts with a variable) or the constant symbol with which it starts.

(define (index-key-of pat)
  (let ((key (car pat)))
    (if (var? key) '? key)))

The index will be used to retrieve items that might match a pattern if the pattern starts with a constant symbol.

(define (use-index? pat)
  (constant-symbol? (car pat)))

Variables are expressions tagged with the symbol ?, and constant symbols are just the symbols.

(define (var? exp) (tagged-list? exp '?))

(define (constant-symbol? exp) (symbol? exp))

Unique variables are constructed during rule application (in section 4.4.4.4).
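The indexing strategy above (one master collection plus a per-symbol table, consulted only when the pattern's car is a constant symbol) can be modeled in Python with a dict of lists. All names here are illustrative; streams are replaced by plain lists for brevity.

```python
# Sketch of the assertion index: every assertion lives in one master list,
# and assertions starting with a constant symbol are also stored
# in a dict keyed by that symbol.

all_assertions = []
index = {}  # symbol -> list of assertions whose car is that symbol

def indexable(assertion):
    # A car that is a plain string (not a '?var') plays the role
    # of a constant symbol in this sketch.
    return isinstance(assertion[0], str) and not assertion[0].startswith("?")

def add_assertion(assertion):
    all_assertions.append(assertion)
    if indexable(assertion):
        index.setdefault(assertion[0], []).append(assertion)

def fetch_assertions(pattern):
    # Use the index only when the pattern's car is a constant symbol;
    # otherwise every assertion is a candidate for the matcher.
    if indexable(pattern):
        return index.get(pattern[0], [])
    return all_assertions

add_assertion(["job", "Hacker", "computer"])
add_assertion(["salary", "Hacker", 40000])
print(fetch_assertions(["job", "?who", "?what"]))  # only the 'job' assertion
print(fetch_assertions(["?rel", "Hacker", "?x"]))  # all assertions
```

As in the text, a pattern starting with a variable falls back to scanning everything, which is exactly the case the "cleverer methods" remark invites you to improve.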
Thus, in assigning credit for the development of logic programming, the French can point to Prolog's genesis at the University of Marseille, while the British can highlight the work at the University of Edinburgh. According to people at MIT, logic programming was developed by these groups in an attempt to figure out what Hewitt was talking about in his brilliant but impenetrable Ph.D. thesis.
https://mitpress.mit.edu/sicp/full-text/book/book-Z-H-29.html
Groovy Sort List

I am posting a simple example on how to sort a list in groovy because the examples google knows about aren't what I was looking for. With some deep digging I was able to find a clue that eventually solved my problem. It's real easy to sort a list of numbers

assert [1,2,3,4] == [3,4,2,1].sort()

Or even strings

assert ['Chad','James','Travis'] == ['James','Travis','Chad'].sort()

But this was my example

class Person {
    String id
    String name
}

def list = [new Person(id: '1', name: 'James'),
            new Person(id: '2', name: 'Travis'),
            new Person(id: '3', name: 'Chad')]

list.sort() returns James, Travis, Chad

The solution is ridiculously simple (not that I thought the previous sort would work; I have to be realistic; groovy can't do everything for me). list.sort{it.name} will produce an order of Chad, James, Travis. In the previous example note the use of the sort closure sort {} versus the sort() method. Now I am not sure, off the top of my head and without a Groovy book handy, the simplest way to sort case insensitive.

assert ['a1','A1'] == ['A1','a1'].sort{fancy closure}

38 comments:

You sir, are my friggin' hero. I've been messing with trying to create my own tag, etc for this, and you have the answer! Thank you 10 times over!

Your post really helped me, James. I found myself wanting case-insensitive sorting, too, so here's what worked for me (I tweaked your test data to make it more obvious):

assert ['a1', 'A2', 'a3'] == ['A2', 'a3', 'a1'].sort { a, b -> a.compareToIgnoreCase b }

Very helpful indeed - thanks for posting!

To no surprise, your post showed up in my google search for groovy list sorting ... thanks again buddy - Sturtz

Thank you very much :-)

GREAT! Thank you!

Wouldn't a better solution be to implement Comparable and override compareTo method?

thx bro :) I looked for this for 2 days!! Thanks

Good post indeed, but in most cases I think the Anonymous poster from March 1, 2009 9:44 PM had the better solution as it's more reusable.
"Wouldn't a better solution be to implement Comparable and override compareTo method?"

This is my solution:

class Person {
    String id
    String name
}

def list = [new Person(id: '1', name: 'James'),
            new Person(id: '2', name: 'Travis'),
            new Person(id: '3', name: 'Chad'),
            new Person(id: '4', name: 'angel')]

println list.name

// If you want to sort case sensitive, uncomment the sort with the spaceship operator <=> and comment the sort with compareToIgnoreCase
// list.sort { p1, p2 -> p1.name <=> p2.name }
list.sort { p1, p2 -> p1.name.compareToIgnoreCase(p2.name) }

println list.name

thanks for posting! It helped me just now and I bet it has helped far more than those who took the time to comment.

Yeah but, what if you wanted to reverse the sort order? Much appreciated dude!!

To reverse sort order just add a dash before it. I just tried this (but for numeric values) and it worked. list.sort{-it.name} //Henrik Sjostrand

Nice, just what I was looking for. Thanks James.

To sort a list of map entries, use either: list.sort{it.key} or list.sort{it.value}

Thanks for the great job. Can you give us an example of how we apply this on GSPs?

Great Job there. Can you also show us how we can apply this to our table listing on GSPs?

Fantastic, many thanks. /Mattias

This post helped me greatly. Thanks.

For reverse order I found that you need to add .reverse() to ones sort closure. e.g. myList.sort{ it.date}.reverse()

Is there any way to specify the order like in mysql we can specify "order by field(id, 2,3,1,4)". Can we specify such order on a property of domain class in grails criteria ?

To sort case sensitive: assert ["b","A","C","d"].sort() == ["A","C","b","d"]

To sort case insensitive: assert ["b","A","C","d"].sort{it.toLowerCase()} == ["A","b","C","d"]

thanks a lot!

Thank you, it is still helping us.

Amazing... I find more and more answers to my questions on your blog. Big fan already. Thanks a bunch.

Thank you very much. This really helped a lot.
Anon in Feb asked about sorting with multiple db fields. I just did this, here's my solution. Elegance upgrades welcome!

things.sort {thing1, thing2-> (thing1.field1<=>thing2.field1)?:(thing1.field2<=>thing2.field2)}

Love the terminology ... "Elvis and the spaceships"

Note, I actually wanted to reverse order the second field, so I negated the second sort:

things.sort {thing1, thing2-> (thing1.field1<=>thing2.field1)?:-(thing1.field2<=>thing2.field2)}

This is for two db fields ... multiple? does the structure hold up? For you to try if you want.

this blog even helped me...thanks a ton...

How to sort this array value? data = [[1,23],[1,13],[2,543],[2,573]] I want to sort on the basis of the second value of the array, like [[1,13],[1,23],[2,543],[2,573]]

Thanks to Sonu [[1,23],[1,13],[2,543],[2,573]].sort {a, b -> a[1].compareTo(b[1])}

Incredibly helpful. Thank you so much.

If you want case insensitive searching but also protection from nulls, try: .sort {it.name?.toLowerCase()}

This continues to be very valuable information. Thanks for posting it.

Five years later, James's post is still the best thing on the web for Groovy newcomers, like me. After I used this to get my sorting done, I tried it again, but made the very mistake that the original post had warned about--using parentheses instead of curly braces (sorry, James, I need to read more slowly). Watch out for this mistake -- it produces mysterious error messages, such as "Cannot get property 'name' on null object" or "No such property: it for class". This experience prodded me into figuring out what the curly-brace syntax was actually doing (see ) which in turn suggests what I believe is the simplest solution to case-insensitive sorting (at least it worked for me): list.sort {it.name.toLowerCase()}

Oops! I now see that Anonymous beat me to that case-insensitive solution by about five months -- and with more robust code too. Oops again.
The documentation link I posted should have been to the overloading of sort() that uses a closure: WOW, that was simple, thanks!
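For readers comparing languages, the key-closure idiom the post and its comments converge on maps directly onto Python's key= argument. This analogue is not part of the original post; the Person class is just a stand-in for the Groovy one, and it mirrors both the case-sensitive and case-insensitive variants.

```python
# Python analogue of Groovy's list.sort { it.name }:
# sorted() with a key function, including the case-insensitive
# variant suggested in the comments.

class Person:
    def __init__(self, id, name):
        self.id = id
        self.name = name

people = [Person("1", "James"), Person("2", "Travis"),
          Person("3", "Chad"), Person("4", "angel")]

by_name = sorted(people, key=lambda p: p.name)      # case-sensitive
ci = sorted(people, key=lambda p: p.name.lower())   # case-insensitive
print([p.name for p in by_name])  # ['Chad', 'James', 'Travis', 'angel']
print([p.name for p in ci])       # ['angel', 'Chad', 'James', 'Travis']
```

The case-sensitive ordering puts "angel" last for the same reason Groovy's spaceship operator would: uppercase letters sort before lowercase ones.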
http://jlorenzen.blogspot.com/2008/05/groovy-sort-list.html?showComment=1281356961997
Excel files, more commonly known as spreadsheets, are used to store, manipulate, analyze and retrieve statistical data. Spreadsheets work as a down-featured version of database management systems. Common uses of spreadsheets include daily price charts and sales data, student result calculation, employee salary management and similar systems. In short, almost all statistical data can be stored, manipulated and presented in the form of graphs, charts, and tables via spreadsheets. The best thing about spreadsheets is that they come with built-in mathematical formulas to perform common calculations such as finding the average of all the values in a column, calculating the factorial of values in a particular row or column, finding percentages, etc. Apart from built-in functions, Excel spreadsheets also allow users to define custom formulas. For instance, you can take a square of each term in a column, subtract the original term from the squared value and then add them via some custom formula. The possibilities are virtually limitless. Though Excel can be used as a standalone application, sometimes we need to work with Excel documents via application code. Simple tasks include reading from and writing to Excel files. In this article, we are going to explain how to read a Microsoft Excel spreadsheet file and how to write data to an Excel file. We shall explain how to interact with Excel spreadsheets via Microsoft .NET programs. For this tutorial, we shall use the Bytescout Spreadsheet SDK. This software development kit comes with built-in functions that can be used to read and write data from Excel files. To download the Bytescout SDK, go to this link and download the SDK. If you are using a Windows computer, your downloaded SDK should be located in C:\Program Files\Bytescout Spreadsheet SDK. From there you can select the .NET version of your choice for the DLL and import it in the .NET program.

Creating an Excel Spreadsheet via Bytescout SDK
Creating an Excel spreadsheet via Bytescout SDK is pretty simple. In the following code snippet, we shall create a C# console application in Visual Studio. Once the project is created, right-click the name of the project and click Add Reference. Here you should choose the dynamic link library named Bytescout.Spreadsheet.dll. The following console application is created with .NET Framework version 4.0, therefore the Bytescout library added in this case will be that in the 4.0 folder. Now, to create a spreadsheet, take a look at the following code. The explanation for the code is given after that.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Bytescout.Spreadsheet;
using System.IO;
using System.Diagnostics;

namespace ByteScoutApplication
{
    class Program
    {
        static void Main(string[] args)
        {
            Spreadsheet exceldoc = new Spreadsheet();
            Worksheet excelSheet = exceldoc.Workbook.Worksheets.Add("ExcelSampleSheet");

            excelSheet.Cell("A1").Value = "Formula in Textual Form";
            excelSheet.Columns[0].Width = 200;
            excelSheet.Cell("B1").Value = "Formula1 (calculated Value)";
            excelSheet.Columns[1].Width = 200;
            excelSheet.Cell("C1").Value = "Formula2 (calculated Value)";
            excelSheet.Columns[2].Width = 200;

            excelSheet.Cell("A2").Value = "50-2-10";
            excelSheet.Cell("B2").Value = "=50*2+10";
            excelSheet.Cell("C2").Value = "=50/2-10";

            if (File.Exists("Demo.xls"))
            {
                File.Delete("Demo.xls");
            }

            exceldoc.SaveAs("Demo.xls");
            exceldoc.Close();
            Process.Start("Demo.xls");
        }
    }
}

Let's explain the above code line by line. The Spreadsheet class is used to create a Spreadsheet object, which is basically the parent class for all the sheets in the Excel document. Here the reference to the Spreadsheet object is stored in the "exceldoc" variable. Next, we use the exceldoc.Workbook.Worksheets.Add("ExcelSampleSheet") method to create a worksheet inside exceldoc. The above function returns a handler for the newly created sheet. The handler name in our case is "excelSheet".
Now, to access a cell within an Excel sheet, we simply call the Cell function on the excelSheet variable and pass it the name of the cell. For instance, we use excelSheet.Cell("A1").Value in order to set the value of the cell. In similar ways, we set values for the B1 and C1 cells. Similarly, we set the width of any column inside the spreadsheet via the excelSheet.Columns[1].Width property. In the above code, we set the width of the A, B and C columns to 200. Now again we access the A2 cell and add some text to it. In the B2 and C2 cells, we specify that the contents should be the result of 50*2+10 and 50/2-10. Finally, we check if "Demo.xls" exists; if it does exist we delete the previous file and save our new file with the name "Demo.xls". In the end, we open the newly created file. So if you did everything correctly and you run the above code, an Excel file shall be opened with a sheet named "ExcelSampleSheet". It will have three columns and two rows filled. And you shall see the result of 50*2+10 and 50/2-10 in the second and third columns of the second row respectively. This is how you basically create an Excel file and add some values to it using the Bytescout SDK.

Reading an Excel Sheet via Bytescout SDK

The process of reading an Excel sheet via Bytescout SDK is simple. You first have to import the corresponding Bytescout.Spreadsheet.dll into your program. The following explains how to read each value in the Demo.xls that we created in the last code sample.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Bytescout.Spreadsheet;
using System.IO;
using System.Diagnostics;

namespace ByteScoutApplication
{
    class Program
    {
        static void Main(string[] args)
        {
            Spreadsheet excelDoc = new Spreadsheet();
            excelDoc.LoadFromFile("demo.xls");
            Worksheet excelSheet = excelDoc.Workbook.Worksheets.ByName("ExcelSampleSheet");

            for (int i = 0; i < 2; i++)
            {
                for (int j = 0; j < 3; j++)
                {
                    Cell currentCell = excelSheet.Cell(i, j);
                    Console.WriteLine(currentCell.Value);
                }
            }

            excelDoc.Close();
            Console.ReadKey();
        }
    }
}

The above code is very straightforward. Here we simply use the LoadFromFile function of the Spreadsheet object to load the "demo.xls" file into the excelDoc object. Next, we obtained the handler for the spreadsheet whose cells we want to access, which is "ExcelSampleSheet" in our case. Finally, we use two for loops to loop over each and every cell in the spreadsheet. The outer loop iterates over each row while the inner loop iterates over each column; we use the Cell function of the spreadsheet to access each cell. It takes two parameters, the row and the column. In the console output, we displayed the value of each cell in the demo.xls file. From the above two examples, it is clear that the Bytescout Spreadsheet SDK is extremely handy when it comes to reading and writing Excel files. Apart from the Spreadsheet SDK, Bytescout provides a variety of developer tools that are used to perform different functionalities. A list of such tools is available at the following link .
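As a language-neutral footnote to the two samples above, the write-save-reopen-iterate flow is not specific to the Bytescout SDK. Here is the same pattern sketched with Python's standard csv module; CSV is not Excel and has no formula engine, so the arithmetic is done in Python before writing, and this is purely an illustration of the row/column iteration pattern.

```python
import csv
import io

# Write a small "sheet": a header row plus one row of values,
# mirroring the A1..C2 layout used in the C# example.
rows = [
    ["Formula in Textual Form", "Formula1 (calculated Value)", "Formula2 (calculated Value)"],
    ["50-2-10", 50 * 2 + 10, 50 / 2 - 10],
]

buf = io.StringIO()          # stands in for the saved file on disk
csv.writer(buf).writerows(rows)

# "Reopen" and walk every cell, as the nested loops do in the C# reader.
buf.seek(0)
for i, row in enumerate(csv.reader(buf)):
    for j, cell in enumerate(row):
        print(f"({i},{j}) = {cell}")
```

The nested loops play the same role as the two for loops in the C# reader: the outer one walks rows, the inner one walks columns.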
https://bytescout.com/blog/2016/12/advanced-action-formulas-functions-reading-writing-excel-files.html
How to send notifications to your Web App using Python

by Lucas Hild

Native apps have become hugely popular recently, mostly because of features such as working offline, transitions, easy distributability and of course push notifications. But unfortunately, you need a good knowledge of languages like Java or Swift to create a valuable native app.

Progressive Web Apps

Progressive Web Apps (PWAs) are JavaScript applications that run in the browser. They make the effort to bring some of the native app features to the web. PWAs are easy to develop if you have a fundamental knowledge of HTML, CSS, and in particular JavaScript. Moreover, if your service is already accessible for desktop devices on a website, it is easier to add the functionalities of a Web App, instead of developing a native mobile app.

Notifications

Notifications keep users informed about new messages, tell them about a new blog post, and so on. Many native apps send push notifications to the user. But this is also possible using PWAs and the Notifications API.

OneSignal

In this tutorial, we will be using OneSignal to send notifications to our web app. OneSignal is a powerful tool that provides a simple interface to push notifications. They also provide a Rest API, which we will be using to send notifications.

Setup OneSignal

To send push notifications, you need to set up OneSignal first. Therefore you need an account on OneSignal. Head over to their website and press "Log in" in the upper right corner. Next you will have to create an app. Give it a name and choose "Setup platform". Here you select "All Browsers". After that, you choose "custom code" as the integration. Then you have to provide some information about your website. In the settings area of your app, there is a tab called "Keys & IDs". Copy both keys for later. Important: Do not share your REST API Key. Keep it private! That's it for setting up OneSignal. That was easy!
Setup our website

In the next part, we will be adding the notification functionality to our website. The website will have to wait for notifications sent by OneSignal and display them to the user. To let the browser know that you are creating a Progressive Web App, we will add a file called manifest.json to the root of our project.

{
  "name": "My Application",
  "short_name": "Application",
  "start_url": ".",
  "display": "standalone",
  "background_color": "#fff",
  "description": "We send notifications to you",
  "gcm_sender_id": "482941778795",
  "gcm_sender_id_comment": "Do not change the GCM Sender ID"
}

The first six key-value-pairs describe the appearance of the application. The gcm_sender_id is important for sending notifications. If you want to know more about manifest.json, you can have a look in the Mozilla Documentation. Your browser doesn't automatically look for the manifest. You have to put the path to it in every HTML document in the <head> tag.

<head>
  ...
  <link rel="manifest" href="manifest.json">
  ...
</head>

Additionally, we need some JavaScript code to connect our website to OneSignal. You can put the code for that in a script tag in the <head> part. Don't forget to replace my-app-id with your own OneSignal app id.

<head>
  <script src="" async=""></script>
  <script>
    var OneSignal = window.OneSignal || [];
    OneSignal.push(function () {
      OneSignal.init({
        appId: "my-app-id",
        autoRegister: false,
        notifyButton: {
          enable: true,
        },
      });
    });
  </script>
</head>

When you want to prompt the user to subscribe to your notifications, you execute this piece of code.

OneSignal.push(function () {
  OneSignal.showHttpPrompt();
});

Moreover, you need a service worker, which listens in the background for notifications. Therefore, you need two files in the root directory of your project.

OneSignalSDKUpdaterWorker.js

importScripts('');

OneSignalSDKWorker.js

importScripts('');

Access the API using Python

OneSignal has an easy-to-use Rest API.
The endpoints are documented in the OneSignal Developer Documentation. To access it, we need to send HTTP requests. Therefore, we will use a library called requests. To install it, you can use pip, the package manager of Python.

pip install requests

This is the API endpoint we need to send a notification:. The HTTP protocol has several methods. In this case, we want to make a POST request. To do so, we need to import requests and execute a function.

import requests

requests.post("")

OneSignal wants to verify that only you can send notifications to your website. So you have to add an HTTP header with your Rest API Key from OneSignal.

requests.post(
    "",
    headers={"Authorization": "Basic my-rest-api-key"})

Remember to replace my-rest-api-key with your Rest API Key. Moreover, you need some basic information about your notification.

data = {
    "app_id": "my-app-id",
    "included_segments": ["All"],
    "contents": {"en": "Hello"}
}

requests.post(
    "",
    headers={"Authorization": "Basic my-rest-api-key"},
    json=data)

Replace my-app-id with your own app id. Next you choose who will receive your notifications. Example values are "All", "Active Users", "Inactive Users". But you can also create your own segments. And for the last one, you add the content of the message in English. If you need another language, you can add it here too. That's it! If you subscribed to the notifications, you should get a push notification.

Send notifications using an API Wrapper

Because my code became kind of messy with many different notifications, I created an API wrapper for OneSignal.

API Wrapper

But what is an API wrapper? An API wrapper makes it easier for you to access an API. You can say that it is an API for an API. You call the API wrapper instead of the API directly. You can install the wrapper called OneSignal-Notifications from pip.

pip install onesignal-notifications

Now you can import it and set up your client.
from onesignal import OneSignal, SegmentNotification

client = OneSignal("MY_APP_ID", "MY_REST_API_KEY")

To send a Notification, you have to initialize the class SegmentNotification and use the method send.

notification_to_all_users = SegmentNotification(
    {
        "en": "Hello from OneSignal-Notifications"
    },
    included_segments=SegmentNotification.ALL)

client.send(notification_to_all_users)

Maybe this looks kind of unnecessary to you, because it takes even more lines of code. But if you have several notifications, it makes the process much easier and your code more beautiful. For example, if you want to send a notification which is based on some conditions, the API wrapper has a custom class for that.

from onesignal import OneSignal, FilterNotification, Filter

client = OneSignal("MY_APP_ID", "MY_REST_API_KEY")

filter_notification = FilterNotification(
    {
        "en": "Hello from OneSignal-Notifications"
    },
    filters=[
        Filter.Tag("my_key", "<", "5"),
        "AND",
        Filter.AppVersion(">", "5"),
        "OR",
        Filter.LastSession(">", "1"),
    ])

There are many custom parameters you can provide to adapt your notification. For example, you can add buttons to the notification. A list of all parameters can be found here.

from onesignal import OneSignal, FilterNotification, Filter

client = OneSignal("MY_APP_ID", "MY_REST_API_KEY")

filter_notification = SegmentNotification(
    {
        "en": "Hello from OneSignal-Notifications"
    },
    web_buttons=[
        {
            "id": "like-button",
            "text": "Like",
            "icon": "",
            "url": ""
        }
    ],
    included_segments=SegmentNotification.ALL)

If you want to find out more about OneSignal-Notifications, you can have a look in the GitHub Repository or in the docs.
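To see why such a wrapper helps, here is a toy sketch of what one does internally. It only assembles the payload and headers that the raw requests calls built by hand earlier; it is illustrative, not the actual OneSignal-Notifications source, and it deliberately stops short of performing the HTTP request.

```python
# Toy API-wrapper sketch: it centralizes the app id, the auth header,
# and the payload shape, so call sites only state what varies.
# (Illustrative only; not the real OneSignal-Notifications code.)

class SegmentNotification:
    ALL = ["All"]

    def __init__(self, contents, included_segments):
        self.contents = contents
        self.included_segments = included_segments

    def payload(self, app_id):
        # Same dict the hand-written requests.post(..., json=data) call used.
        return {
            "app_id": app_id,
            "included_segments": self.included_segments,
            "contents": self.contents,
        }

class OneSignalClient:
    def __init__(self, app_id, rest_api_key):
        self.app_id = app_id
        self.headers = {"Authorization": "Basic " + rest_api_key}

    def send(self, notification):
        body = notification.payload(self.app_id)
        # A real client would now POST `body` with `self.headers`
        # to the notifications endpoint via requests.post(...).
        return body

client = OneSignalClient("my-app-id", "my-rest-api-key")
n = SegmentNotification({"en": "Hello"}, SegmentNotification.ALL)
print(client.send(n)["app_id"])  # my-app-id
```

The point of the design is that the credentials and payload boilerplate live in one place, so each call site shrinks to the two lines that actually differ between notifications.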
https://www.freecodecamp.org/news/how-to-send-notifications-to-your-web-app-using-python-ba490b893292/
POE::Filter::XML::Node - An enhanced XML::LibXML::Element subclass.

use 5.010;

stream_start() called without arguments returns a bool on whether or not the node in question is the top level document tag. In an xml stream such as XMPP this is the <stream:stream> tag. Called with a single argument (a bool) sets whether this tag should be considered a stream starter.

This method is significant because it determines the behavior of the toString() method. If stream_start() returns bool true, the tag will not be terminated. (ie. <iq to='test' from='test'> instead of <iq to='test' from='test'/>)

stream_end() called without arguments returns a bool on whether or not the node in question is the closing document tag in a stream. In an xml stream such as XMPP, this is the </stream:stream>. Called with a single argument (a bool) sets whether this tag should be considered a stream ender.

This method is significant because it determines the behavior of the toString() method. If stream_end() returns bool true, then any data or attributes or children of the node are ignored and an ending tag is constructed. (ie. </iq> instead of <iq to='test' from='test'><child/></iq>)

This method returns all of the attribute nodes on the Element (filtering out namespace declarations).

getChildrenHash() returns a hash reference to all the children of that node. Each key in the hash will be the node name, and each value will be an array reference with all of the children with that name. Each child will be blessed into POE::Filter::XML::Node.

This is a convenience method that basically does: (getChildrenByTagName($name))[0] The returned object will be a POE::Filter::XML::Node object.

The default XML::LibXML::Element constructor is overridden to provide some extra functionality with regards to attributes. If the $array_ref argument is defined, it will be passed to setAttributes(). Returns a newly constructed POE::Filter::XML::Node.
This method overrides the base cloneNode() to propagate the stream_[start|end] bits on the node being cloned. The $deep argument is passed unchanged to the base class. This returns a POE::Filter::XML::Node object.

Depending on the arguments provided, this method either 1) instantiates a new Node and appends it to the subject or 2) appends the provided Node object. An array reference of attributes may also be provided in either case, and if defined, will be passed to setAttributes().

toString() was overridden to provide special stringification semantics for when stream_start and stream_end are boolean true.

Use this exported function to get PFX::Nodes from XML::LibXML::Elements. This is useful for inherited methods that by default return Elements instead of Nodes.

This Node module is 100% incompatible with previous versions. Do NOT assume this will upgrade cleanly. When using XML::LibXML::Element methods, the objects returned will NOT be blessed into POE::Filter::XML::Node objects unless those methods are explicitly overridden in this module. Use POE::Filter::XML::Node::ordain to overcome this.

Copyright (c) 2003 - 2009 Nicholas Perez. Released and distributed under the GPL.
http://search.cpan.org/~nperez/POE-Filter-XML-0.38/lib/POE/Filter/XML/Node.pm
gwyutils — Various utility functions #include <libgwyddion/gwyddion.h> Various utility functions: creating GLib lists from hash tables ( gwy_hash_table_to_list_cb()), portably finding Gwyddion application directories ( gwy_find_self_dir()), string functions ( gwy_strreplace()), path manipulation ( gwy_canonicalize_path()). gboolean (*GwySetFractionFunc) ( gdouble fraction); Type of function for reporting progress of a long computation. Usually you want to use gwy_app_wait_set_fraction(). TRUE if the computation should continue; FALSE if it should be cancelled. gboolean (*GwySetMessageFunc) ( const gchar *message); Type of function for reporting what a long computation is doing now. Usually you want to use gwy_app_wait_set_message(). TRUE if the computation should continue; FALSE if it should be cancelled. void gwy_hash_table_to_slist_cb ( gpointer unused_key, gpointer value, gpointer user_data); GHashTable to GSList convertor. Usable in g_hash_table_foreach(), pass a pointer to a GSList* as user data to it. void gwy_hash_table_to_list_cb ( gpointer unused_key, gpointer value, gpointer user_data); GHashTable to GList convertor. Usable in g_hash_table_foreach(), pass a pointer to a GList* as user data to it. gchar * gwy_strkill ( gchar *s, const gchar *killchars); Removes characters in killchars from string s , modifying it in place. Use gwy_strkill(g_strdup( s ), killchars ) to get a modified copy. s itself, the return value is to allow function call nesting. gchar * gwy_strreplace ( const gchar *haystack, const gchar *needle, const gchar *replacement, gsize maxrepl); Replaces occurences of string needle in haystack with replacement . See gwy_gstring_replace() for a function which does in-place replacement on a GString. A newly allocated string. gint gwy_strdiffpos ( const gchar *s1, const gchar *s2); Finds the position where two strings differ. The last position where the strings do not differ yet.
Particularly, -1 is returned if either string is NULL, zero-length, or they differ in the very first character. gboolean gwy_strisident ( const gchar *s, const gchar *more, const gchar *startmore); Checks whether a string is a valid identifier. A valid identifier must start with an alphabetic character or a character from startmore , and it must continue with alphanumeric characters or characters from more . Note underscore is not allowed by default, you have to pass it in more and/or startmore . TRUE if s is a valid identifier, FALSE otherwise. gboolean gwy_ascii_strcase_equal ( gconstpointer v1, gconstpointer v2); Compares two strings for equality, ignoring case. The case folding is performed only on ASCII characters. This function is intended to be passed to g_hash_table_new() as key_equal_func , namely in conjunction with the gwy_ascii_strcase_hash() hashing function. TRUE if the two string keys match, ignoring case. guint gwy_ascii_strcase_hash ( gconstpointer v); Converts a string to a hash value, ignoring case. The case folding is performed only on ASCII characters. This function is intended to be passed to g_hash_table_new() as hash_func , namely in conjunction with the gwy_ascii_strcase_equal() comparison function. The hash value corresponding to the key v . guint gwy_stramong ( const gchar *str, ...); Checks whether a string is equal to any from a given list. Zero if str does not equal any string from the list, nonzero otherwise. More precisely, the position + 1 of the first string that str equals is returned in the latter case. gpointer gwy_memmem ( gconstpointer haystack, gsize haystack_len, gconstpointer needle, gsize needle_len); Find a block of memory in another block of memory. This function is very similar to strstr(), except that it works with arbitrary memory blocks instead of NUL-terminated strings. If needle_len is zero, haystack is always returned. On GNU systems with glibc at least 2.1 this is just a trivial memmem() wrapper.
On other systems it emulates memmem() behaviour. Pointer to the first byte of memory block in haystack that matches needle ; NULL if no such block exists. gboolean gwy_file_get_contents ( const gchar *filename, guchar **buffer, gsize *size, GError **error); Reads or mmaps file filename into memory. The buffer must be treated as read-only and must be freed with gwy_file_abandon_contents(). It is NOT guaranteed to be NUL-terminated, use size to find its end. Whether it succeeded. In case of failure buffer and size are reset too. gboolean gwy_file_abandon_contents ( guchar *buffer, gsize size, GError **error); Frees or unmmaps memory allocated by gwy_file_get_contents(). Whether it succeeded. Since 2.22 it always returns TRUE. gchar * gwy_find_self_dir ( const gchar *dirname); Finds a system Gwyddion directory. On Unix, a compiled-in path is returned, unless it's overridden with environment variables (see the gwyddion manual page). On Win32, the directory where the libgwyddion DLL from which this function was called resides is taken as the base and the location of other Gwyddion directories is calculated from it. The returned value is not actually tested for existence, it's up to the caller. To obtain the Gwyddion user directory see gwy_get_user_dir(). The path as a newly allocated string. const gchar * gwy_get_user_dir ( void); Returns the directory where Gwyddion user settings and data should be stored. On Unix this is usually a dot-directory in user's home directory. On modern Win32 the returned directory resides in user's Documents and Settings. On silly platforms or silly occasions, silly locations (namely a temporary directory) can be returned as fallback. To obtain a Gwyddion system directory see gwy_find_self_dir(). The directory as a constant string that should not be freed. const gchar * gwy_get_home_dir ( void); Returns home directory, or temporary directory as a fallback. Under normal circumstances the same string as g_get_home_dir() would return is returned.
But on MS Windows, something like "C:\Windows\Temp" can be returned too, as it is as good as anything else (we can write there).

Returns: Something usable as a user home directory. It may be silly, but never NULL or empty.

gchar * gwy_canonicalize_path(const gchar *path);

Canonicalizes a filesystem path. Particularly, it makes the path absolute, resolves `..` and `.`, and fixes slash sequences to single slashes. On Win32 it also converts all backslashes to slashes along the way. Note this function does NOT resolve symlinks; use g_file_read_link() for that.

Returns: The canonical path, as a newly created string.

gboolean gwy_filename_ignore(const gchar *filename_sys);

Checks whether a file should be ignored. This function checks for common file names indicating files that should normally be ignored. Currently this means backup files (ending with ~ or .bak) and Unix hidden files (starting with a dot).

Returns: TRUE to ignore this file, FALSE otherwise.

gchar * gwy_sgettext(const gchar *msgid);

Translates a message id containing a disambiguating prefix ending with '|'.

Returns: The translated message, or msgid itself with all text up to the last '|' removed if there is no translation.

gchar * gwy_str_next_line(gchar **buffer);

Extracts the next line from a character buffer, modifying it in place. buffer is updated to point after the end of the line and the "\n" (or "\r" or "\r\n") is replaced with "\0", if present. The final line may or may not be terminated with an EOL marker; its contents are returned in either case. Note, however, that the empty string "" is not interpreted as an empty unterminated line. Instead, NULL is immediately returned. The typical usage of gwy_str_next_line() is:

Returns: The start of the line. NULL if the buffer is empty or NULL. The return value is not a new string; the normal return value is the previous value of buffer.

guint gwy_str_fixed_font_width(const gchar *str);

Measures the width of a UTF-8 encoded string in a fixed-width font.
This corresponds to the width of the string displayed on a text terminal, for instance. Zero-width and double-width characters are taken into account. It is not guaranteed that all terminals display the string with the calculated width.

Returns: String width in fixed font, in character cells.

guint gwy_gstring_replace(GString *str, const gchar *old, const gchar *replacement, gint count);

Replaces non-overlapping occurrences of one string with another in a GString. Passing NULL or the empty string for replacement will cause the occurrences of old to be removed. Passing NULL or the empty string for old means a match occurs at every position in the string, including after the last character, so replacement will be inserted at every position in this case. See gwy_strreplace() for a function which creates a new plain C string with substring replacement.

Returns: The number of replacements made. A non-zero value means the string has been modified; no-op replacements do not count.

void gwy_gstring_to_native_eol(GString *str);

Converts "\n" in a string to operating system native line terminators. Text files are most easily written by opening them in the text mode. This function can be useful for writing text files using functions such as g_file_set_contents() that do not permit the conversion to happen automatically. It is a no-op on all POSIX systems, including OS X, so at present it actually performs any conversion at all only on MS Windows.

void gwy_memcpy_byte_swap(const guint8 *source, guint8 *dest, gsize item_size, gsize nitems, gsize byteswap);

Copies a block of memory, swapping bytes along the way. The bits in byteswap correspond to groups of bytes to swap: if the j-th bit is set, adjacent groups of 2^j bytes are swapped. For example, the value 3 swaps both adjacent bytes and adjacent byte pairs, reversing the byte order within each four-byte item. When byteswap is zero, this function reduces to plain memcpy().
void gwy_convert_raw_data(gconstpointer data, gsize nitems, gssize stride, GwyRawDataType datatype, GwyByteOrder byteorder, gdouble *target, gdouble scale, gdouble offset);

Converts a block of raw data items to doubles. Note that conversion from 64-bit integral types may lose information, as they have more bits than the mantissa of doubles. All other conversions should be precise.

guint gwy_raw_data_size(GwyRawDataType datatype);

Reports the size of a single raw data item.

Returns: The size of a single raw data item of type datatype.

gboolean gwy_assign_string(gchar **target, const gchar *newvalue);

Assigns a string, checking for equality and handling NULLs. This function simplifies handling of string value setters. The new value is duplicated and the old string is freed in a safe manner (it is possible to pass a pointer somewhere within the old value as the new value, for instance). Either of the old and new values can be NULL. If both values are equal (including both unset), the function returns FALSE.

Returns: TRUE if the target string has changed.

void gwy_object_set_or_reset(gpointer object, GType type, ...);

Sets object properties, resetting other properties to defaults. All explicitly specified properties are set. In addition, all unspecified settable properties of type type (or all unspecified properties if type is 0) are reset to defaults. Settable means the property is writable and not construction-only. The order in which properties are set is undefined beside keeping the relative order of explicitly specified properties; therefore this function is not generally usable for objects with interdependent properties. Unlike g_object_set(), it does not set properties that already have the requested value; as a consequence, notifications are emitted only for properties which actually change.
gboolean gwy_set_member_object(gpointer instance, gpointer member_object, GType expected_type, gpointer member_field, ...);

Replaces a member object of another object, handling signal connection and disconnection. If member_object is not NULL a reference is taken, sinking any floating objects (and conversely, the reference to the previous member object is released). The purpose is to simplify bookkeeping in classes that have settable member objects and (usually but not necessarily) need to connect to some signals of these member objects. Since this function both connects and disconnects signals, it must always be called with the same set of signals, including callbacks and flags, for a specific member object. Consider the example of a GwyFoo class owning a GwyGradient member object, assuming the usual conventions: the gradient setter then usually only calls set_gradient(), and disposing of the member object again only calls set_gradient(), but with a NULL gradient.

Returns: TRUE if member_field was changed. FALSE means the new member is identical to the current one and the function reduced to a no-op (or that an assertion failed).

FILE * gwy_fopen(const gchar *filename, const gchar *mode);

A wrapper for the stdio fopen() function. The fopen() function opens a file and associates a new stream with it. Because file descriptors are specific to the C library on Windows, and a file descriptor is part of the FILE struct, the FILE* returned by this function makes sense only to functions in the same C library. Thus if the GLib-using code uses a different C library than GLib does, the FILE* returned by this function cannot be passed to C library functions like fprintf() or fread(). See your C library manual for more details about fopen().

Returns: A FILE* if the file was successfully opened, or NULL if an error occurred.
gint gwy_fprintf(FILE *file, gchar const *format, ...);

An implementation of the standard fprintf() function which supports positional parameters, as specified in the Single Unix Specification.

Returns: The number of bytes printed.

GwyRawDataType: Types of raw data. Multibyte types usually need to be complemented with GwyByteOrder to get a full type specification.

GwyByteOrder: Type of byte order.
http://gwyddion.net/documentation/libgwyddion/libgwyddion-gwyutils.php
In this post, I will walk through how to play background audio in Mango, i.e. Windows Phone 7.1. This post is fully based on the Beta version. I will show how to play local media.

Step 1

First, let us create a Windows Phone Application. To create it, open Visual Studio and select Windows Phone Application from the installed templates. Since playing background audio is a feature of version 7.1, do not forget to select the Target Version as Windows Phone 7.1.

Step 2

The next step is to add a project for the Audio Playback Agent. So right-click the solution, add a new project, and from the installed templates select the Windows Phone Audio Playback Agent project type. After adding this project, you would have two projects in Solution Explorer: one Audio Playback Agent and another Phone Application.

Step 3

Since both projects have been created, now add a few music files to play. To do this, right-click on the AudioPlaybackAgent project and add a new folder. Give this folder a name of your choice; in this case, I am naming the newly created folder Music. Right-click on the Music folder, select Add Existing Items, and add music files to this folder. After adding the music files, select all of them, right-click, and open Properties. In the Properties window you need to change the Copy to Output Directory property to Copy if newer.

Step 4

Let us add the required functionality, or modify the default functionality, to perform various operations on the audio files.

Returning the list of songs

First you need to return the list of songs. To do that, add a GetSongs() function to the AudioPlayer.cs class. The function returns a List of tracks. To track the record number, add a class-level global variable.

Playing a song

Now we need to add a function to play a song. Create a function called Play.

- As an input parameter, pass a BackgroundAudioPlayer object.
- Create a Track to play. As parameters you need to pass the source of the track, its title, artist and album name.
- After creating the Track, call the Play() method on the BackgroundAudioPlayer object.

Playing the next song

To play the next song, we need to take the current track number and increase it by 1. Once it is equal to the total number of songs in the list, reinitialize it to 0.

Playing the previous song

To play the previous song, we need to take the current track number and decrease it by 1. Once it is less than 0, reinitialize it to the index of the last song in the list.

Handling user actions

We need to handle user actions like Play, Stop, and Pause. For that you need to modify the overridden function OnUserAction; add the switch case below in the OnUserAction method.

Handling play state changes

To handle play state changes, add the switch case below in the overridden OnPlayStateChanged() method.

Finally, after adding all the required functions and modifications, the AudioPlayer.cs class would look like below.

Step 5

Add a reference to the AudioPlaybackAgent1 project in the Phone Application project. For that, right-click on the Phone Application project and select Add Reference. In the dialog box, choose the Projects tab and select AudioPlaybackAgent1.

Step 6

We have made all the modifications and added the code to the AudioPlaybackAgent1 project; now you need to create a UI in the Phone Application project.

Create UI

- Add three buttons for Play, Previous and Next.
- Add a TextBlock to display the track.

<TextBlock x:
</StackPanel>
<Grid x:
<StackPanel Orientation="Horizontal" Width="420" Margin="18,40,18,0" VerticalAlignment="Top">
<Button Content="prev" x:
<Button Content="play" x:
<Button Content="next" x:
</StackPanel>
<TextBlock x:
</Grid>
</Grid>
</phone:PhoneApplicationPage>

After modifying the above code, the user interface would look like this. We need to handle the click events. First, add the namespace below. The click events for the three buttons are fairly straightforward.
We need only to call SkipNext(), as below:

protected override void OnNavigatedTo(NavigationEventArgs e)
{
    if (PlayState.Playing == BackgroundAudioPlayer.Instance.PlayerState)
    {
        playButton.Content = "pause";
        txtCurrentTrack.Text = BackgroundAudioPlayer.Instance.Track.Title + " by " + BackgroundAudioPlayer.Instance.Track.Artist;
    }
    else
    {
        playButton.Content = "play";
        txtCurrentTrack.Text = "";
    }
}

Step 7

BackgroundAudioPlayer can only play a song from a remote URL or a file in Isolated Storage. For that you need to add a function in App.xaml.cs.

Adding a music file to Isolated Storage

In the above function, the points to be noted are:

- The file names are exactly the same as you added in the previous steps.
- The folder name is exactly the same.

-/" + _fileName;
StreamResourceInfo resource = Application.GetResourceStream(new Uri(_filePath, UriKind.Relative));
using (IsolatedStorageFileStream file = storage.CreateFile(_fileName))
{
    int chunkSize = 4096;
    byte[] bytes = new byte[chunkSize];
    int byteCount;
    while ((byteCount = resource.Stream.Read(bytes, 0, chunkSize)) > 0)
    {
        file.Write(bytes, 0, byteCount);
    }
}

Step 8

Now you have created a background music player. Press F5 to run the application. I hope this post was useful. Thanks for reading.

Sir, I have one doubt regarding the OS 7.1 development framework. I am using the VS 2010 Ultimate version, but the released OS 7.1 toolkit for developing (Mango) apps is for VS 2010 SP1? Is it so?

Does it support only the MP3 format?
http://debugmode.net/2011/05/29/walkthrough-on-play-background-audio-in-windows-7-1-mango-phone/
Hello, I am a newbie to Struts. I have a form class, viz. HierAddForm; this form has a member variable called company which is an ArrayList. This company array is nothing but an array of objects of class AddDTO. The class AddDTO looks as shown below:

public class AddDTO {
    private String city;
    private String name;
    private String zip;
    ..... Corresponding setter and getter methods go here...
}

How do I use the logic:iterate tag to display City, Name and Zip in my JSP page? Thanks in advance, JP

Struts - logic:iterate tag (3 messages)

Threaded Messages (3)
- Struts - logic:iterate tag by Madhu Raj Soudathikar on September 16 2004 05:32 EDT
- Struts - logic:iterate tag by Ameeta Dabke on September 27 2004 03:39 EDT
- but if i want to populate text boxes by Gurpreet Kohli on March 08 2005 08:11 EST

Struts - logic:iterate tag [Go to top]

Hi Jayu - Posted by: Madhu Raj Soudathikar - Posted on: September 16 2004 05:32 EDT - in response to Jay Pawar

Hope that this works:

<logic:iterate
<bean:write
<bean:write
<bean:write
</logic:iterate>

Struts - logic:iterate tag [Go to top]

This was very good info, stated in simple words & understood. - Posted by: Ameeta Dabke - Posted on: September 27 2004 03:39 EDT - in response to Madhu Raj Soudathikar

Thanks!

but if i want to populate text boxes [Go to top]

Hi Madhu - Posted by: Gurpreet Kohli - Posted on: March 08 2005 08:11 EST - in response to Madhu Raj Soudathikar

If I have an array object and I want to fill this array from the user (suppose there are 5 rows in a table with similar types of inputs from the user), how do I do that? I hope you got what I want to say... I got stuck and need help. Thanks
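The tag attributes in Madhu's reply above were lost in formatting; a typical logic:iterate usage for this question might look like the following (the id value and the form-bean name "hierAddForm" are assumptions, not from the original post):

```jsp
<logic:iterate id="addDTO" name="hierAddForm" property="company">
    City: <bean:write name="addDTO" property="city"/>
    Name: <bean:write name="addDTO" property="name"/>
    Zip:  <bean:write name="addDTO" property="zip"/>
</logic:iterate>
```

Each pass through the loop exposes one AddDTO element of the company list under the chosen id, and bean:write calls its getters.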
http://www.theserverside.com/discussions/thread.tss?thread_id=28361
Automate the boring stuff with Python

Practical Python programming for non-engineers

Photo by Jen Wike Huger, CC BY-SA; Original photo by Torkild Retvedt

These problems are too specific for commercial software to solve, but with some programming knowledge, users can create their own solutions. Learning to code can turn users into power users.

Dealing with files

For example, say you have a folder full of hundreds of files. Each one is named something like Apr2015.csv, Mar2015.csv, Feb2015.csv, and so on, going all the way back to 1980. You have to sort these files by year. But the automatic sorts available to you won’t work; you can’t sort them alphabetically. You could rename each file so that the year comes first and replace all the months with numbers so that an automatic sort would work, but renaming hundreds of files would be brain-meltingly boring and also take hours. Here’s a Python program that took me about 15 minutes to write that does the job instead:

import os, shutil

monthMapping = {'Jan': '1', 'Feb': '2', 'Mar': '3', 'Apr': '4',
                'May': '5', 'Jun': '6', 'Jul': '7', 'Aug': '8',
                'Sep': '9', 'Oct': '10', 'Nov': '11', 'Dec': '12'}

for filename in os.listdir():
    monthPart = filename[:3]
    yearPart = filename[3:7]
    newFilename = yearPart + '_' + monthMapping[monthPart] + '.csv'
    print('Renaming ' + filename + ' to ' + newFilename)
    #shutil.move(filename, newFilename)

Python is an ideal language for beginners because of its simple syntax. It’s not a series of cryptic 1’s and 0’s; you’ll be able to follow along without any programming experience. Let’s go through this program step by step.

First, Python’s os and shutil modules have functions that can do the filesystem work we need. We don’t have to write that code ourselves; we just import those modules on the first line. Next, a variable named monthMapping contains a dictionary that maps the month abbreviation to the month number.
If 'Apr' is the month abbreviation, monthMapping['Apr'] will give us the month number. The for loop runs the code on each file in the current directory, or folder. The os.listdir() function returns the list of files. The first three letters of the filename will be stored in a variable named monthPart. This just makes the code more readable. Similarly, the year in the filename is stored in a variable named yearPart. The newFilename variable will be created from yearPart, an underscore, the month number (as returned from monthMapping[monthPart]), and the .csv file extension.

It’s helpful to display output on the screen as the program runs, so the next line prints the new filename. The final line calls the shutil module’s move() function. Normally, this function moves a file to a different folder with a different name, but by using the same folder it just renames each file. The # at the start of the line means that the entire line is a comment that is ignored by Python. This lets you run the program without it renaming the files, so you can check that the printed output looks correct. When you’re ready to actually rename the files, you can remove the # and run the program again.

Computer time is cheap / software developer time is expensive

This program takes less than a second to rename hundreds of files. But even if you have to process gigabytes of data, you don’t need to be able to write "elegant" code. If your code takes 10 hours to run instead of 2 hours because you aren’t an algorithms expert, that’s still a lot faster than finding a software developer, explaining your requirements to them, negotiating a contract, and then verifying their work. And it will certainly be faster than processing all this data by hand. In short, don’t worry about your program’s efficiency: computer processing time is cheap; it’s developer time that’s expensive.
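The renaming logic explained above is easy to pull out into a small helper function so it can be checked on sample names before touching any real files (a sketch using the same monthMapping table as the program):

```python
monthMapping = {'Jan': '1', 'Feb': '2', 'Mar': '3', 'Apr': '4',
                'May': '5', 'Jun': '6', 'Jul': '7', 'Aug': '8',
                'Sep': '9', 'Oct': '10', 'Nov': '11', 'Dec': '12'}

def new_filename(filename):
    # 'Apr2015.csv' -> '2015_4.csv': year first, then the month number.
    monthPart = filename[:3]
    yearPart = filename[3:7]
    return yearPart + '_' + monthMapping[monthPart] + '.csv'

print(new_filename('Apr2015.csv'))  # 2015_4.csv
```

Running the helper over the os.listdir() output and only then calling shutil.move() keeps the dry-run behaviour of the original program.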
More Python

My new book, Automate the Boring Stuff with Python, from No Starch Press, is released under a Creative Commons license and teaches beginning programmers how to write Python code to take care of boring tasks. It skips the abstract computer science approach and focuses on practical application. You can read the complete book online. Ebook and print editions are available from Amazon, nostarch.com, and in bookstores.

Many programming tutorials use examples like calculating Fibonacci numbers or solving the "8 Queens" chess problem. Automate the Boring Stuff with Python teaches you code to solve real-world problems. The first part of the book is a general Python tutorial. The second part of the book covers things like reading PDF, Word, Excel, and CSV files. You’ll learn how to scrape data off of web sites. You’ll be able to launch programs according to a schedule and send out automatic notifications by email or text message. If you need to save yourself from tedious clicking and typing, you’ll learn how to write programs that control the keyboard and mouse for you.

2 Comments

Great article showing how a simple little bit of Python code can help you solve a brain-numbing problem. Only one thing I would change is to add a check to only rename CSV files. Sorry, but the formatting did not want to work:

import os, shutil

monthMapping = {'Jan': '1', 'Feb': '2', 'Mar': '3', 'Apr': '4',
                'May': '5', 'Jun': '6', 'Jul': '7', 'Aug': '8',
                'Sep': '9', 'Oct': '10', 'Nov': '11', 'Dec': '12'}

for filename in os.listdir("./"):
    if filename.endswith(".csv"):
        monthPart = filename[:3]
        yearPart = filename[3:7]
        newFilename = yearPart + '_' + monthMapping[monthPart] + '.csv'
        print('Renaming ' + filename + ' to ' + newFilename)
        shutil.move(filename, newFilename)

As "software developer time is expensive" indeed, why number the array's months in the first place rather than use their index?
https://opensource.com/life/15/5/practical-python-programming-non-engineers
Determine what projects are blocking you from porting to Python 3

Project description

This script takes in a set of dependencies and then figures out which of them are holding you up from porting to Python 3.

Command-line/Web Usage

You can specify your dependencies in multiple ways:

caniusepython3 -r requirements.txt test-requirement.txt
caniusepython3 -m PKG-INFO
caniusepython3 -p numpy scipy ipython

# If your project's setup.py uses setuptools
# (note that setup_requires can't be checked) ...
python setup.py caniusepython3

The output of the script will tell you how many (implicit) dependencies you need to transition to Python 3 in order to allow you to make the same transition. It will also list which projects have no dependencies blocking their transition, so you can ask them to start a port to Python 3. If you prefer a web interface, you can use the one by Jannis Leidel.

Integrating With Your Tests

If you want to check for Python 3 availability as part of your tests, you can use caniusepython3.check():

def check(requirements_paths=[], metadata=[], projects=[]):
    """Return True if all of the specified dependencies have been ported to Python 3.

    The 'requirements_paths' argument takes a sequence of file paths to
    requirements files. The 'metadata' argument takes a sequence of strings
    representing metadata. The 'projects' argument takes a sequence of
    project names. Any project that is not listed on PyPI will be
    considered ported.
    """

You can then integrate it into your tests like so:

import unittest
import caniusepython3

class DependenciesOnPython3(unittest.TestCase):
    def test_dependencies(self):
        # Will begin to fail when dependencies are no longer blocking you
        # from using Python 3.
        self.assertFalse(caniusepython3.check(projects=['ipython']))

For the change log, how to tell if a project has been ported, as well as help on how to port a project, please see the project website.
https://pypi.org/project/caniusepython3/2.0.3/
org.h2.jdbc.JdbcSQLException: Table "USER" not found; SQL statement

Satinder Singh Jul 1, 2017 11:04 AM

I am trying to build a simple application, and through a Servlet I have managed to establish a connection to a MySQL database. I have configured the DataSource in WildFly Server (ver 10 Final). In the Servlet, however:

PASS - DataSource access via JNDI.
PASS - Connection retrieval from the DataSource.
FAIL - SQL statement when executed with a simple query (select * from test.user): I get an SQLException, "org.h2.jdbc.JdbcSQLException: Schema TEST not found".
FAIL - When the SQL statement is provided without the schema prefix "test" (i.e. select * from user), I get an SQLException, "org.h2.jdbc.JdbcSQLException: Table USER not found".

This is plain old JDBC work that I have done for donkey's years, but I have never come across such a basic error. I have even shifted from Eclipse to NetBeans, in vain. The only "suspicion" I have is that the error log talks of an "h2" database and nowhere mentions "MySQL". Please help.

1.
Re: org.h2.jdbc.JdbcSQLException: Table "USER" not found; SQL statement
Wolfgang Mayer Jul 3, 2017 5:30 AM (in response to Satinder Singh)

Did you link your datasource to the MySQL driver, like:

<datasource jta="true" jndi-name="...">
    <connection-url>jdbc:mysql://{hostname}:3306/test</connection-url>
    <driver-class>com.mysql.jdbc.Driver</driver-class>
    <driver>mysql</driver>
    ...
</datasource>

2. Re: org.h2.jdbc.JdbcSQLException: Table "USER" not found; SQL statement
Satinder Singh Jul 9, 2017 4:29 PM (in response to Wolfgang Mayer)

Hi, thanks for the reply. Apologies I could not respond earlier; I have been without any Internet. Yes, I have tried your prescribed solution, but unfortunately I keep getting the same result.

3. Re: org.h2.jdbc.JdbcSQLException: Table "USER" not found; SQL statement
Yoshimasa Tanabe Jul 9, 2017 6:46 PM (in response to Satinder Singh)

Please share the standalone*.xml you use and how you get the DataSource. I suspect your app looks up the predefined ExampleDS.

4. Re: org.h2.jdbc.JdbcSQLException: Table "USER" not found; SQL statement
Satinder Singh Jul 10, 2017 5:31 AM (in response to Yoshimasa Tanabe)

I have the following setup in standalone.xml:

<datasource jta="true" jndi-name="java:/MySqlDS">
    <connection-url>jdbc:mysql://localhost:3306/test?useSSL=false</connection-url>
    <driver-class>com.mysql.jdbc.Driver</driver-class>
    <driver>mysql</driver>
    <pool>
        <min-pool-size>2</min-pool-size>
        <max-pool-size>5</max-pool-size>
    </pool>
    <security>
        <user-name>root</user-name>
        <password>root</password>
    </security>
</datasource>

As per the WildFly documentation, I have placed the driver (mysql-connector-java-5.1.42-bin.jar) at the following location along with module.xml:

/Users/satindersingh/servers/wildfly1010/modules/system/layers/base/com/mysql/main

The contents of module.xml are:

<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.1" name="com.mysql">
    <resources>
        <resource-root path="mysql-connector-java-5.1.42-bin.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>

The DataSource lookup in the Servlet is:

@WebServlet(name = "NewServlet", urlPatterns = {"/NewServlet"})
public class NewServlet extends HttpServlet {
    private static final Logger LOGGER = Logger.getLogger("NewServlet");

    @Resource(name = "java:/MySQLDS")
    DataSource ds;

And the calling code within the same Servlet is:

out.println("<h1>Servlet NewServlet at " + request.getContextPath() + "</h1>");

works well and displays "Servlet NewServlet at /TestDS" in the browser.

out.println("<h2>DataSource found: " + ds.toString() + "</h2>");

works well and displays "DataSource found: org.jboss.as.connector.subsystems.datasources.WildFlyDataSource@1e910448" in the browser.

try {
    con = ds.getConnection();

works well, and

if (con != null) {
    out.println("<h2>Connection found: " + con.toString() + "</h2>");

displays "Connection found: org.jboss.jca.adapters.jdbc.jdk7.WrappedConnectionJDK7@4f4dd345".

out.println("<h2>Auto Commit is : " + con.getAutoCommit() + "</h2>");

displays "Auto Commit is : true".

String catalog = con.getCatalog();
if (catalog != null) {
    out.println("<h2>Catalog name is : " + catalog + "</h2>");
}

displays "Catalog name is : TEST".

String query = "select * from TEST.USER";
Statement stm = con.createStatement();

THE ERROR IS HERE:

09:24:21,696 ERROR [stderr] (default task-1) org.h2.jdbc.JdbcSQLException: Schema "TEST" not found; SQL statement:
09:24:21,705 ERROR [stderr] (default task-1) select * from TEST.USER [90079-173]

5. Re: org.h2.jdbc.JdbcSQLException: Table "USER" not found; SQL statement
Wolfgang Mayer Jul 10, 2017 7:40 AM (in response to Satinder Singh)

Try to use @Resource(mappedName = "java:/MySQLDS") instead.

6. Re: org.h2.jdbc.JdbcSQLException: Table "USER" not found; SQL statement
Satinder Singh Jul 10, 2017 7:50 AM (in response to Wolfgang Mayer)

@Wolfgang Thanks for your help, but it has not worked. I have just tried. Regards

7. Re: org.h2.jdbc.JdbcSQLException: Table "USER" not found; SQL statement
Wolfgang Mayer Jul 10, 2017 8:07 AM (in response to Satinder Singh)

Is the message the same? It looks like WildFly is using the default data source (ExampleDS) if mappedName is not specified.

8. Re: org.h2.jdbc.JdbcSQLException: Table "USER" not found; SQL statement
Satinder Singh Jul 10, 2017 9:32 AM (in response to Wolfgang Mayer)

Yes, the message is indeed the same. Unfortunately, I can't check which DataSource is being used. I was hoping that the DataSource.toString() method would reveal the name of the database being used, but that method does not return any plain-English value; it is just gobbledygook:

org.jboss.as.connector.subsystems.datasources.WildFlyDataSource@7fb14721

Unfortunately, even the Connection.toString() method is not meaningful:

org.jboss.jca.adapters.jdbc.jdk7.WrappedConnectionJDK7@73e13d23

But the method Connection.getCatalog() gives the expected value of the schema name, i.e. "TEST". The following statements behave nicely as well:

Connection conn = ds.getConnection();
Statement stm = conn.createStatement();

And finally the error is thrown when I perform ResultSet rs = stm.executeQuery("select * from TEST.USER"). The exception is as per my original posting:

14:18:00,503 ERROR [stderr] (default task-1) org.h2.jdbc.JdbcSQLException: Schema "TEST" not found; SQL statement:
14:18:00,512 ERROR [stderr] (default task-1) select * from TEST.USER [90079-173]

I shall keep trying, but any clue or help is greatly appreciated. Thanks

9. Re: org.h2.jdbc.JdbcSQLException: Table "USER" not found; SQL statement
Wolfgang Mayer Jul 10, 2017 10:35 AM (in response to Satinder Singh)

That is really strange. But you could try to figure out details regarding your connection via con.getMetaData();

10. Re: org.h2.jdbc.JdbcSQLException: Table "USER" not found; SQL statement
jaikiran pai Jul 11, 2017 12:40 AM (in response to Satinder Singh)

I think it all comes down to using the wrong datasource. In your standalone.xml the JNDI name for the datasource is java:/MySqlDS:

<datasource jta="true" jndi-name="java:/MySqlDS"

So you should be using:

@Resource(lookup = "java:/MySqlDS")

to inject this datasource. The case of the string matters. In one of the replies, Wolfgang Mayer suggested this change, but there was an oversight with the case of the string, which is why I think it didn't work (it's a different issue that it didn't throw an exception in that case). Do this specific change and get the latest logs (if it still fails).

11. Re: org.h2.jdbc.JdbcSQLException: Table "USER" not found; SQL statement
Satinder Singh Jul 11, 2017 5:33 AM (in response to jaikiran pai)

Yes, it works indeed. It was just a matter of getting the name of the resource in the correct case. Thanks a lot, everyone. I appreciate all your help.
https://developer.jboss.org/thread/275392
How to use OpenCV inside QtCreator?

Hi guys. I've been trying to make the OpenCV library work with QtCreator for the last couple of days, but I can't make it work. I've looked at a bunch of guides around the web and tried each of them several times, but still nothing. Some guides use older versions of OpenCV, so I tried using those versions, but it didn't work. All guides pretty much follow the steps reported here:

The steps are basically to use CMake to build the OpenCV library (adding the option WITH_QT while doing it), install it, and then link the library to the Qt project. Although the part with CMake works a little differently for me (it gives me some errors saying that it can't find some folders related to Qt, even though it points exactly to where the directories are, which is odd), it still works and I manage to build the library and get the DLL files. But when I try creating a simple project in QtCreator, nothing seems to work. As of right now, I'm using QtCreator 3.0.1 based on Qt 5.2.1 and OpenCV 2.4.9. Basically the code is simply this:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

int main() {
    cv::namedWindow("My Image");
    return 1;
}

So nothing really, it just uses a random OpenCV function. The .pro file is like this:

QT += core
QT -= gui

TARGET = OpenCV
CONFIG += console
CONFIG -= app_bundle

TEMPLATE = app

SOURCES += main.cpp

INCLUDEPATH += D:\Development\opencv-mingw\install\include

LIBS += D:\Development\opencv-mingw\install\x64\mingw\bin\libopencv_core249.dll \
    D:\Development\opencv-mingw\install\x64\mingw\bin\libopencv_highgui249.dll \
    D:\Development\opencv-mingw\install\x64\mingw\bin\libopencv_imgproc249.dll \
    D:\Development\opencv-mingw\install\x64\mingw\bin\libopencv_features2d249.dll \
    D:\Development\opencv-mingw\install\x64\mingw\bin\libopencv_calib3d249.dll

The paths are alright, and the code compiles. But it crashes at startup without giving any errors.
If I try using the debugger, it does the same, but it also gives me an error: "An exception was triggered: Exception at 0x77018f05, code: 0xc000007b, flags=0x0. During startup program exited with code 0xc000007b." So probably the problem is that it can't find the dlls. But I checked and re-checked: they ARE there, and I added the paths to the system PATH variable, as you can see in this screenshot:

So, I'm out of ideas. Can you guys help me link the libraries? Or maybe there is another problem?

- SGaist (Lifetime Qt Champion):
Hi,
Aren't you providing the path to a 64-bit version of OpenCV while using a 32-bit Qt? On a side note, you shouldn't link to .dll files but to .lib files.

(another user):
Hey, even I tried to follow the instructions from the same link you attached. In CMake I am getting errors after I select the C and C++ compilers and hit Finish: "Error in configuration process, project files may be invalid". Do you know what to do?

(original poster):
Damn, you're right. How did I not see that? I built the libraries again using the right MinGW compiler and now it seems to be working! Thanks a lot! :D About the .lib files, there are none in the library. Or at least none I could find, only dll.

From what I've seen it's kind of normal for CMake to throw some errors here and there. If you try hitting the Configure button again, what happens?

- SGaist (Lifetime Qt Champion):
You're welcome! Late-night debugging can have that kind of effect ;) Since you have it working now, please mark the thread as solved using the "Topic Tool" button so that other forum users may know a solution has been found :)

@Beriol it's the same error again and again. Can you write in brief what all you have done? It will be much more useful for me and for others who are trying it for the first time.

I followed exactly the guide I posted in the first post. For me the hard part was actually linking the libraries, not building them. Are you sure you selected the right compilers?
The path should be something like:

    C:\YourPathToQt\Qt\Tools\mingw48_32\bin\gcc.exe  <-- for C
    C:\YourPathToQt\Qt\Tools\mingw48_32\bin\g++.exe  <-- for C++

In my case, inside the "Tools" folder there is also a folder called MinGW which also contains a bin with compilers, but do NOT use that (which was my mistake). Besides this, did you add the path to CMake to your system PATH variable and restart the computer?

@Beriol I downloaded and extracted OpenCV, then installed Qt. I added the directories C:\Qt\5.5\mingw492_32\bin and C:\opencv\build\x86\vc12\bin to the PATH in the environment variables. Then I downloaded and installed CMake and added its path, C:\CMake\bin. I created an empty folder, opencv-mingw, on the C drive and rebooted the laptop.

In CMake I selected the source code directory as C:\Qt\Tools\mingw492_32\bin (also tried C:\Qt\5.5\mingw492_32\bin), set the build directory to C:/opencv-mingw, and hit the Configure button. I selected "MinGW Makefiles" from the dropdown menu. Then it asked to specify native compilers: I specified C:/Qt/Tools/mingw491_32/bin/gcc.exe for C and C:/Qt/Tools/mingw491_32/bin/g++.exe for C++ and hit Finish. IT GAVE ME THE ERROR: "Error in configuration process, project files may be invalid".

The "source code directory" inside CMake should point to the folder that contains the source code for OpenCV, not a folder that contains compiler executables like the one you're using! So you should put the path to the folder where you extracted the OpenCV stuff, something like "C:/opencv/sources". That's why you're getting the error at the start: there are no source files to configure :)

@Beriol Oh yes, thanks for that. It was my silly mistake... It is working and I have generated the build files in CMake. But in the command prompt, after entering mingw32-make, I was left with errors at 31%. I referred to the comments in the link and unchecked WITH_IPP and regenerated in CMake, but this time I was getting errors at 37% in the command prompt.
    makefile:148: recipe for target 'all' failed
    mingw32-make: *** [all] Error 2

Do you know anything about this?

It happened to me too, but I don't remember exactly what fixed that problem. Try this: in CMake tick the "Advanced" option (it's on the right, after the search bar) and search for CMAKE_MAKE_PROGRAM; make sure that it points to the mingw32-make.exe inside your Qt folder, "C:\Qt\Tools\mingw492_32\bin\mingw32-make.exe". I remember that in my case it wasn't set right and I used to get errors because of that. Besides this I'm not sure what else you could do... Did you check "WITH_QT"? Also, try unchecking "WITH_CUDA". I remember reading somewhere that it might give problems, but I actually don't remember if it gave me any while building.
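Summing up what worked in this thread (a sketch, not a verified recipe: it assumes OpenCV 2.4.9 was rebuilt with the same 32-bit MinGW that ships with Qt and installed under the poster's D:\Development\opencv-mingw\install prefix; the exact lib subfolder below is my guess based on the poster's tree and may be x86 rather than x64 after a 32-bit rebuild), a cleaner .pro links through -L/-l flags instead of naming the DLL files directly:

```
# Hypothetical .pro fragment -- adjust the prefix to your own install
INCLUDEPATH += D:/Development/opencv-mingw/install/include

# -L adds a library search path; -l picks out libopencv_<module>249
LIBS += -L"D:/Development/opencv-mingw/install/x64/mingw/lib" \
        -lopencv_core249 \
        -lopencv_highgui249 \
        -lopencv_imgproc249
```

The matching bin directory (with the DLLs) still has to be on PATH at run time, and a 0xc000007b crash at startup is the classic symptom of mixing 32-bit and 64-bit binaries, which is exactly what SGaist spotted.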
https://forum.qt.io/topic/65235/how-to-use-opencv-inside-qtcreator
Differentiate "trending because whale" from "trending because a lot of views" (2020-11-27 18:41:33)
From EF on Discord: Idea from ... That way, people will understand why low-view content is up there sometimes (okay, a lot). Source project: lbryio/lbry-desktop

TV Trending Missing (2020-12-08 20:16:49)
Not sure if this is a bug or I messed something up, but if it makes any difference, on the W32 version it works fine; on my Linux VM the console displays this: File "/usr/lib/python2.6/...

Trending shows very old posts (2020-12-08 18:05:08)
Since the last update the /trending category will show obsolete posts which are several days old. These posts seem to remain there until the second payout happens (4 weeks), which is ...

Trending repos are not showing (2021-01-09 23:31:10)
Is it maybe an HTML parsing issue? I heard that GitHub changed some HTML tags, so it might have messed it up. Source project: ThirtyDegreesRay/OpenHub

Back-stack not properly working when visiting Trending (2021-01-12 19:47:41)
I think it would be more intuitive if you got sent back to the Trending app to browse for more projects instead of having to click on the Trending tab again to bring it up. Source project: ...

producthunt-trending-extension: a browser extension for viewing Product Hunt trending products on the new tab (2021-02-04 15:56:08)
What is the Product Hunt Trending Tab extension? "Product Hunt Trending Tab" replaces the browser's new-tab screen with Product Hunt trending products, so hunters, makers, and followers can keep up with trending products every day. It loads trending products periodically in the background, so every time you open a new tab you don't need to ...

SWERC13 Trending Topic (2014-08-25 01:21:15)
Brute force with a map. Imagine you are in the hiring process for a company whose principal activity is the analysis of information in the ... One of the tests consists in writing a program for maintaining up

Constraints:
• All words are composed only of lowercase letters and have length at most 20.
• The maximum number of different words that can appear is 20000.
• The maximum number of words per day is 20000.
• Words of length less than four characters are considered of no interest.
• The number of days will be at most 1000.
• 1 ≤ N ≤.

SAMPLE INPUT

    <text>
    imagine you are in the hiring process of a company whose main business
    is analyzing the information that appears in the web
    </text>
    <text>
    a simple test consists in writing a program for maintaining up to date
    a set of trending topics
    </text>
    <text>
    you will be hired depending on the efficiency of your solution
    </text>
    <top 5 />
    <text>
    they provide you with a file containing the text corresponding to a
    highly active blog
    </text>
    <text>
    the text is organized daily and you have to provide the sorted list of
    the n most frequent words during last week when asked
    </text>
    <text>
    each input file contains one test case the text corresponding to a day
    is delimited by tag text
    </text>
    <text>
    the query of top n words can appear between texts corresponding to two
    different days
    </text>
    <top 3 />
    <text>
    blah blah blah blah blah blah blah blah blah please please please
    </text>
    <top 3 />

SAMPLE OUTPUT

    <top 5>
    analyzing 1
    appears 1
    business 1
    company 1
    consists 1
    date 1
    depending 1
    efficiency 1
    hired 1
    hiring 1
    imagine 1
    information 1
    main 1
    maintaining 1
    process 1
    program 1
    simple 1
    solution 1
    test 1
    that 1
    topics 1
    trending 1
    whose 1
    will 1
    writing 1
    your 1
    </top>
    <top 3>
    text 4
    corresponding 3
    file 2
    provide 2
    test 2
    words 2
    </top>
    <top 3>
    blah 9
    text 4
    corresponding 3
    please 3
    </top>

    #include <iostream>
    #include <cstdio>
    #include <cstring>
    #include <algorithm>
    #include <string>
    #include <map>
    #include <vector>
    using namespace std;

    typedef pair<int,int> pII;

    map<string,int> Hash;      // word -> id
    vector<int> dy[11];        // ids of the words seen, per circular-day slot
    string rHash[20200];       // id -> word
    int day_sum[11][20200];    // cumulative count of each word, per circular-day slot
    char cache[30];
    int now=9, pre=0, id=1;
    int arr[20020], na;
    string rss[20020];
    bool vis[20020];

    void DEBUG(int x)
    {
        int sz=dy[x].size();
        for(int i=0;i<sz;i++)
        {
            cout<<"ID: "<<dy[x][i]<<" : "<<rHash[dy[x][i]]<<endl;
            cout<<"sum: "<<day_sum[x][dy[x][i]]<<endl;
        }
    }

    struct RSP
    {
        int times;
        string word;
    } rsp[20020];

    bool cmpRSP(RSP a, RSP b)
    {
        if(a.times!=b.times) return a.times>b.times;
        else return a.word<b.word;
    }

    void get_top(int now, int k)
    {
        int sz=dy[now].size();
        na=0;
        int _7dayago=(now+3)%10;   // 7 days back in the size-10 ring
        memset(vis,false,sizeof(vis));
        for(int i=0;i<sz;i++)
        {
            if(vis[dy[now][i]]==false)
            {
                arr[na++]=day_sum[now][dy[now][i]]-day_sum[_7dayago][dy[now][i]];
                vis[dy[now][i]]=true;
            }
        }
        sort(arr,arr+na);
        int sig=arr[max(0,na-k)];  // k-th largest frequency is the cutoff
        int rn=0;
        memset(vis,false,sizeof(vis));
        for(int i=0;i<sz;i++)
        {
            int times=day_sum[now][dy[now][i]]-day_sum[_7dayago][dy[now][i]];
            if(times>=sig && vis[dy[now][i]]==false)
            {
                rsp[rn++]=(RSP){times,rHash[dy[now][i]]};
                vis[dy[now][i]]=true;
            }
        }
        sort(rsp,rsp+rn,cmpRSP);
        printf("<top %d>\n",k);
        for(int i=0;i<rn;i++)
        {
            cout<<rsp[i].word<<" "<<rsp[i].times<<endl;
        }
        printf("</top>\n");
    }

    int main()
    {
        while(scanf("%s",cache)!=EOF)
        {
            if(strcmp(cache,"<text>")==0)
            {
                // new day: carry yesterday's cumulative state forward
                pre=now;
                now=(now+1)%10;
                dy[now]=dy[pre];
                memcpy(day_sum[now],day_sum[pre],sizeof(day_sum[0]));
                while(scanf("%s",cache))
                {
                    if(cache[0]=='<') break;       // closing tag ends the day
                    if(strlen(cache)<4) continue;  // short words are of no interest
                    string word=cache;
                    if(Hash[word]==0)
                    {
                        rHash[id]=word;
                        Hash[word]=id++;
                    }
                    int ID=Hash[word];
                    if(day_sum[pre][ID]==0) dy[now].push_back(ID);
                    day_sum[now][ID]++;
                }
            }
            else if(strcmp(cache,"<top")==0)
            {
                int top;
                scanf("%d",&top);
                scanf("%s",cache);  // consume the trailing "/>"
                get_top(now,top);
            }
        }
        return 0;
    }

Add Loading to Est Account & Steem Trending Charts (2020-12-28 19:21:54)
Fixes #929. Est. account value doesn't show up. Changes: add loading to Est Account Value; add loading to Steem trending charts. Source project: busyorg/busy

Trending themes showing themes with single-digit usage (2020-12-30 04:40:50)
This is what I see ...
There should be a minimum threshold of usage to appear on that list, to avoid random new stuff showing up. Source project: mozilla/addons-server

Trakt Not Working In Movies/Trending (2020-12-09 04:00:08)
When you select Trakt an error message pops up. Source project: nixgates/plugin.video.seren

Add a show Trakt Trending/Search/Sidepanel issues (2020-11-23 08:05:28)
So DTV returns to the library list view, but leaves the preview sidepanel up with a "loading" header. IMO, it should either: a. stay on the filtered list page and show the clicked poster ...

feature request: llen trending / monitoring for redis keys (2020-12-27 18:03:39)
...'d be really useful to have llen trending data on a per-key basis, especially when there are multiple keys in play. Thanks in advance. Source project: MeetMe/newrelic-plugin-agent...

Trakt TV Most Trending TV-Shows in series overview (2020-11-23 08:00:01)
...'ve just hacked up the first commit to implement trending shows from Trakt.TV into the favorites dialog, which is very awesome if I may say so myself.

Formly form mangled when viewed from a favourites or trending page (2020-11-23 08:05:48)
What build of DuckieTV are you using (Standalone / Chrome Extension (New Tab / ... Start up DuckieTV and use fastsearch or the calendar to pull up a series and access the seriesSettings ...

Left panel (News, Feed, ...) hides when scrolling with Trending Topics expanded (2020-12-27 14:30:03)
... at Trending Topics; scroll down; News, Feed is going up; click "View less" and "View more" again and the left panel is immediately moved. Browser: Chrome ...

opt-in: heuristic research, trending techniques, community rulesets (2020-12-02 05:00:50)
trending and declining techniques; sites with most tracking; real-world data to see if fingerprinting techniques are actually used; 3) heuristic analysis for developers; 4) additional data could be ...
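Stepping back to the SWERC "Trending Topic" solution above: its central trick is a circular buffer of cumulative word counts, so the count over the last seven days is today's cumulative total minus the value stored seven slots back. A minimal sketch of that trick in isolation (the names here are mine, not the contest solution's):

```cpp
#include <cassert>

// Ring of cumulative counts for one word. The ring size only has to
// exceed the 7-day window; the contest code uses 10 as well.
const int W = 10;
int cum[W];  // cum[d % W] == total occurrences after day d

// Record c occurrences on day d (days processed in order, d >= 1).
void add_day(int d, int c) {
    cum[d % W] = cum[(d - 1 + W) % W] + c;  // carry yesterday's total forward
}

// Occurrences during the 7-day window ending at day d.
int last_week(int d) {
    return cum[d % W] - cum[(d - 7 + W) % W];
}
```

Because only the current day is ever queried, the slot seven positions back still holds the cumulative total from seven days ago even after the ring wraps; this is exactly the `(now+3)%10` computation in `get_top`.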
Adds support for showing the metadata and trending Artists to a Gene VC (2020-12-28 00:25:08)
How to get set up with this PR? To run on your computer:

    git fetch origin pull/327/head:orta-327-checkout
    git checkout orta-327-checkout
    yarn install
    cd example; pod install; cd .....

adding a second show from active search results causes display of trending results instead (2020-11-24 10:55:45)
It doesn't seem like it should; I have to keep re-looking up shows and jumping back and forth... make it stop please. When I'm browsing search results it should stay there until I navigate away intentionally ...

Swift vs. Objective-C: the trending up-and-comer vs. the dinosaur (2020-08-01 21:50:15)
by Colin Smith

A short history of Swift

I remember how pivotal it was when Swift was introduced at Apple's 2014 WWDC (Worldwide Developers Conference). It was the talk of the town, and all the devs I worked with couldn't wait to try it out. The iOS community was buzzing and there was a lot of excitement around the new language.

It was developed to carry on some concepts we saw in Objective-C, such as extensible programming. But it pushed towards a different approach to coding with its protocol-oriented design, and increased safety with static typing.

It was a huge hit and saw its growth skyrocket in the years after its introduction. It was the most loved programming language in 2015, the second most loved in 2016, and the 11th most popular programming language in 2017, beating out Objective-C, and it also beat out Objective-C in 2018.
Swift is also a bet by Apple on winning over novices to become iOS developers. The hope is that new developers will learn the language and use it to build iOS apps, which then grows the App Store ecosystem. Since Swift is optimized to work with iOS apps, this ensures the apps being written are of high quality.

Swift's popularity only continues to increase, especially for smaller apps and start-ups. The gap between Swift and Objective-C will only continue to grow. The future is bright for this young language.

A short history of Objective-C

Objective-C is an object-oriented programming language that is a superset of C, as the name of the language might reveal. This means that any valid C program will compile with an Objective-C compiler. It derives all its non-object-oriented syntax from C and its object-oriented syntax from Smalltalk. It was developed in 1984, so it has had time to mature as a language and is much more stable than Swift.

Most people know Objective-C as the language that is used to develop apps for the iPhone, but the history goes much deeper than that. I'd recommend reading this article for a more in-depth look.

The strengths of Swift

Swift has grown tremendously in popularity for a few key reasons. First off, there are a lot of great development tools Apple has provided to work in conjunction with Swift.
One of my personal favorites is the Playground, which is only compatible with Swift. Apple introduced Playgrounds in 2016. They were introduced as a way to learn how to code, but I loved them for a different reason.

Mobile development has always had more roadblocks than web development. You need a simulator, you usually need a proprietary Integrated Development Environment (IDE), and you need to set up a whole project just to test some small prototype. In Apple's case, you also need a developer account. The nice thing about Playgrounds is that you get around some of this. You do need Xcode or the Playgrounds app, but that is all, and you can get started with coding and compiling your code right away.

Yet another huge advantage of Swift is the fact that it is open source. If you have ever wondered how a programming language works under the hood, then you can go see for yourself! This is a great way to understand the programming language you work with daily on a deeper level.

An honorable mention goes to a nice utility only available to Swift, the Swift Package Manager. The Swift Package Manager is simply a dependency manager that is integrated with the Swift build system. It isn't a game changer by any means, since CocoaPods and Carthage were doing this job a long time ago, but it's another solution available if needed.
A lot of evidence here supports the fact that Apple is doing a lot to make Swift more desirable as the programming language of choice for iOS developers. They are creating nice utilities and auxiliaries to entice people to start using the language. This shows that Apple is pushing for Swift in full force.

Language features

Let's get into some of the details of the language itself. Swift is safer due to its static typing and its use of optionals. In Swift, if your code requires a string, Swift's features will guarantee that your code gets a string and not another type, such as an int. This of course depends on you using the language as intended and not force-unwrapping everything.

Another great feature of Swift is its syntax, especially compared to Objective-C. The best word to describe the syntax would be "succinct". There is no need for semicolons, calls to self, or parentheses around if statements. It feels like you are skipping a lot of things that you don't really need anyway. It can make the process of typing a lot of code "flow" better.

Some people say this leads to development velocity improvements, but I wouldn't exactly say that myself. The continual need to unwrap objects to comply with Swift's type safety offsets the development gains that come with the succinctness.
Swift also has a lot of great control flow options with guard, if-let, advanced switch statements, repeat-while and defer. I like all the different options because they let people control the flow of their code in a way that makes sense to them. A lot of people hate defer but love guard, and vice versa. It doesn't really matter what you like or dislike; the options are there and you can code in the way that feels best to you.

I can't forget all the functional programming features such as filter, map and reduce. These are great for handling collections and come in handy quite often.

The weaknesses

Swift is a young language, and with that comes some shifting. The migrations between versions are simply a pain. At a small company, the migration tool provided by Apple can be helpful and cover most cases. It becomes less helpful the more code you have. It's even worse if your codebase contains both Objective-C and Swift code that interoperate.

At my last company, the migration effort took a dedicated group a whole weekend to do. They had to do it on the weekend so that they wouldn't run into merge conflicts from other devs pushing code. This was incredibly painful for everyone involved.

A reason for these migrations is the fact that Swift isn't ABI stable. That means newer versions of Swift cannot work with older versions of Swift. That also means that the language cannot be packaged with the OS.
This is a big deal for companies with large apps that actively combat app size, because Swift is being bundled with the app and increasing its size.

Another issue is that Swift does not play well with Xcode. Xcode feels very choppy when working with Swift, and autocomplete simply doesn't work sometimes. This is strange given how hard Apple is pushing Swift. You would think that they would want to make the experience of using Swift with Xcode a delight.

Swift also has problems with string handling; see the code example above. It is clunky as hell. In your day-to-day, this isn't too bad. Where it comes into play the most is during interviews. Unfortunately for Swift devs, interviewers love asking questions that involve string manipulation. This is compounded by the fact that the way strings are handled has changed between versions of Swift.

The strengths of Objective-C

Objective-C is a highly dynamic, object-oriented language. It is dynamic to the point that you can swap out method invocations at runtime using techniques like swizzling. It is able to do these kinds of things due to its message-sending paradigm. This lets objects send messages to other objects at run time to determine the invocation of the method being called.

In practical terms, what does this mean? Well, one big advantage is adaptability at runtime. This means being able to access private APIs or do things like mock objects at runtime, which can be especially useful for unit testing. Libraries like OCMock make this easier and allow very sophisticated test setups. Having good unit tests will make your application more stable and reliable.
Speaking of stability, Objective-C has been around for a long time, which makes it a very stable language. With Swift, you will run into some surprising bugs that destabilize your app. In the example linked above, the crash was caused by the actual language the app was written in, not by any bug in the code the developer wrote. That can be frustrating.

One final point, which matters more for some companies, is compatibility with C and C++ libraries. Since Objective-C is a superset of C, it is easy to use C and C++ code from Objective-C. You can even use Objective-C++ if you like. This is important if you depend on third-party C and C++ libraries.

The weaknesses

The first major complaint I hear about Objective-C is the syntax. I started my career with Objective-C, so I have no problem with it. It is verbose, and with the square brackets it is a bit unconventional. But opinions about syntax are just that: opinions. I figured I would list it, because it's the first thing that comes up when you mention Objective-C.

One thing I do agree with, though, is that block syntax is frustrating. There is even a website dedicated to decoding the mysteries of blocks in Objective-C. I actually use this website pretty often as a reference.

The biggest problem facing Objective-C right now is that one day Apple may drop Objective-C support from Cocoa and the other common libraries used for creating iOS apps. Since Objective-C is mainly used to create iOS apps, that would be the death knell for the language. It also means newcomers to the iOS community are afraid to commit to learning Objective-C now, since it may no longer be used in the future.

Let's get back to the language itself. The dynamic nature of the language makes some problems hard to debug. The ability to send messages to nil without crashing, and the lack of strict typing, are examples of what leads to these hard-to-debug issues.

Objective-C doesn't hold your hand when it comes to these things. While it's nice that the app doesn't crash when a message is sent to nil, it can leave your app in a weird state, and debugging problems like these is very difficult. Swift's strict typing and its unwrapping of optionals prevent these things at compile time.

Should I learn Swift or Objective-C?

For most people, the answer will be Swift. Apple is clearly pushing Swift as the language of choice for its iOS development community. With the introduction of ABI stability and Swift being packaged with the OS itself, Swift will only continue to become more performant.

If you want a job as an iOS developer, Swift is the language you will want to learn. Most start-up to mid-level companies write their iOS apps entirely in Swift. That means if you learn Swift, you can apply and interview for more jobs.

Even at larger companies where Objective-C is still used heavily, interviews can still be done in Swift. So you can learn Objective-C once you join the company and not worry about burdening yourself with more things to learn before the interview.

You will want to learn Objective-C if you are already working at a start-up or mid-level company and want to jump to a larger company. Skills with Objective-C will give you specialized knowledge and an edge over other interview candidates.
Liked what you read? Take a look at some of my other articles: Tips for your first tech interview. Starting a tech career from nothing. Should you get a computer science degree?

Translated from:

Laravel: Carbon-based trending from ratings (2015-08-03 17:37:32)
I want to find out if a product is trending on my website (trending up or down) based on the period that is passed in. I.e. if week is passed in, it compares the current week up to the day to the ...

Plugin API Support (2021-01-11 12:11:04)
This project is awesome!... So in the above example, we could see the code coverage percentage trending up or down over time. Source project: es-analysis/plato

Self-Reward Attack: Up to 1% liquid Steem per week with 5 forged accounts (2020-12-08 18:09:33)
The risk of getting exposed would increase significantly. Positive side-effect: more diversity of authors on Trending. Positive side-effect: higher distribution of SBD and SP.

YouTube daily trending video dataset (2020-10-08 17:25:58)
Original: This dataset includes several months (and counting) of data on daily trending YouTube videos. Data is included for the US, GB, DE, CA, and FR regions (USA, Great Britain, Germany, Canada, and France, respectively), with up to 200 listed trending videos per day. EDIT: Now includes data from the RU, MX, KR, JP and IN regions (Russia, Mexico, South Korea, Japan and India, respectively) over the same time period. Each region's data is in a separate file. Data includes the video title, channel title, publish time, tags, views, likes and dislikes, description, and comment count. The data also includes a category_id field, which varies between regions.
To retrieve the categories for a specific video, find it in the associated JSON. One such file is included for each of the five regions in the dataset.

You can download the dataset from the official site; I have also shared a copy on Baidu Netdisk. Follow my official account and reply "2020100802" for the download link.

pongo2 and CSS code in a single-line HTML file will cause an error (2020-11-29 10:22:29)
padding-left:14px}#rr-search-tn .trending-now .down{background-position:0 -2657px}#rr-search-tn .trending-now .up{background-position:0 -1305px}#rr-search-tn .trending-now .down img,.trending-now .up ...

ERROR message when running gettrending.py (2020-12-31 11:39:06)
I was able to get everything working up to this point. When I run gettrending.py the browser window opens and begins to load the page, then I get this error message. Traceback (most recent call ...

Rewrite of variable_decay.py for speed improvements (2021-01-12 05:02:54)
It ended up being a total rewrite. Instead of using list operations with O(N) searches to handle the set of "spikes" to be applied to trending scores in the future, I'm now using an in-...

[English learning] [Level 08] U05 Better option L4 Being social (2019-12-24 09:46:08)

Word preparation

post: to publish something such as a message or picture on a website or using social media
  Teresa just posted pictures of her new dog on Facebook.
  She posts something new just about every hour.

trending: very popular
  I like to surf the Internet for what's trending.
  All the trending information is usually on the front page.
social network site: a website for sharing information and communicating
  Twitter is a social networking site that many celebrities use.
  I try not to spend too much time on social networking sites.

social media: forms of communication that let people share information using the Internet
  Information spreads faster because of social media.
  Everyone has been talking about these events through social media.

cater to sb.: to give someone what he or she wants
  This exercise program mostly caters to young people.
  These parks cater to young children and their mothers.

sign up: to register
  You need to sign up before you can download anything.
  I don't want to sign up if it will take longer than twenty minutes.

Grammar: ways to say "a variety of"

various (= a variety of) usually emphasizes the number of kinds; different is the common word, stressing the distinction or essential difference between things; diverse is stronger, indicating completely different natures and emphasizing a marked difference.
- The jacket is available in various colors.
- Our two sons are very different from each other.
- Although these states and their people are diverse, they share the common goal of economic development.

Improve resumption and navigation to desired content access modes (2020-11-28 12:25:09)
And come up with a better name for that ^. Some people might want a quick link to "new for channels I follow" or "trending for everyone" or "trending for ~porn~ science...

Send SYNC commands to power supplies on a power-on (2020-12-26 18:21:34)
Need this to line up the PSs' timers when providing trending data. The requirements doc also mentions an enhancement would be to send this every hour as well. Can look into this.

Voting on comments (2020-12-25 23:44:11)
...t raising the comment in the trending comment page? When I voted on a comment before, the comment moved up immediately; did that change? Source project: steemit/condenser
https://www.csdn.net/tags/MtjaMg4sNDI5MDQtYmxvZwO0O0OO0O0O.html