[ aws . cloudsearchdomain ]
Retrieves a list of documents that match the specified search criteria. How you specify the search criteria depends on which query parser you use. Amazon CloudSearch supports four query parsers: simple, structured, lucene, and dismax.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
search [--cursor <value>] [--expr <value>] [--facet <value>] [--filter-query <value>] [--highlight <value>] [--partial | --no-partial] [--query-options <value>] [--query-parser <value>] [--return <value>] [--size <value>] [--sort <value>] [--start <value>] [--stats <value>] --search-query <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--cursor (string)
Retrieves a cursor value you can use to page through large result sets. Use the size parameter to control the number of hits to include in each response. You can specify either the cursor or start parameter in a request; they are mutually exclusive. To get the first cursor, set the cursor value to initial . In subsequent requests, specify the cursor value returned in the hits section of the response.
For more information, see Paginating Results in the Amazon CloudSearch Developer Guide .
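The cursor-based paging described above can be sketched as a small loop. This is an illustrative sketch, not the AWS SDK: the fetch callable stands in for the actual cloudsearchdomain search call, and only the "initial" cursor value and the hits/cursor response fields come from the documentation.

```python
# Sketch of cursor-based paging. 'fetch' is a stand-in for the real
# search call; it must return a response shaped like the Output section
# below ({"hits": {"hit": [...], "cursor": "..."}}).
def page_through(fetch, size=100):
    cursor = "initial"          # first request uses the special value "initial"
    while True:
        resp = fetch(cursor=cursor, size=size)
        hits = resp["hits"]["hit"]
        if not hits:            # an empty page means we are done
            break
        for h in hits:
            yield h
        cursor = resp["hits"]["cursor"]   # next cursor comes from the response
```

Each subsequent request passes the cursor value returned in the hits section of the previous response, as the parameter description states.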
--expr (string)
Defines one or more numeric expressions that can be used to sort results or specify search or filter criteria. You can also specify expressions as return fields.
You specify the expressions in JSON using the form {"EXPRESSIONNAME":"EXPRESSION"} . You can define and use multiple expressions in a search request. For example:
{"expression1":"_score*rating", "expression2":"(1/rank)*year"}
For information about the variables, operators, and functions you can use in expressions, see Writing Expressions in the Amazon CloudSearch Developer Guide .
--facet (string)
Specifies one or more fields for which to get facet information, and options that control how the facet information is returned. Each specified field must be facet-enabled in the domain configuration. The fields and options are specified in JSON using the form {"FIELD":{"OPTION":VALUE,"OPTION":"STRING"},"FIELD":{"OPTION":VALUE,"OPTION":"STRING"}} .
To count particular buckets of values, use the buckets option. For example, the following request uses the buckets option to calculate and return facet counts by decade.
{"year":{"buckets":["[1970,1979]","[1980,1989]","[1990,1999]","[2000,2009]","[2010,}"]}}
To sort facets by facet count, use the count option. For example, the following request sets the sort option to count to sort the facet values by facet count, with the facet values that have the most matching documents listed first. Setting the size option to 3 returns only the top three facet values.
{"year":{"sort":"count","size":3}}
To sort the facets by value, use the bucket option. For example, the following request sets the sort option to bucket to sort the facet values numerically by year, with earliest year listed first.
{"year":{"sort":"bucket"}}
For more information, see Getting and Using Facet Information in the Amazon CloudSearch Developer Guide .
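Because the facet options are passed as a JSON string, it is worth building and validating them programmatically rather than hand-quoting them. A small sketch in Python, using the sort/size example from above:

```python
import json

# The facet options from the example above, built as a Python dict and
# serialized to the JSON string the --facet parameter expects.
facet_options = {"year": {"sort": "count", "size": 3}}
facet_param = json.dumps(facet_options)

# Round-trip to confirm the string is well-formed JSON before sending it.
assert json.loads(facet_param) == facet_options
print(facet_param)
```

The resulting string can be passed directly as the value of --facet.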
--filter-query (string)
Specifies a structured query that filters the results of a search without affecting how the results are scored and sorted. You use filterQuery in conjunction with the query parameter to filter the documents that match the constraints specified in the query parameter. Specifying a filter controls only which matching documents are included in the results, it has no effect on how they are scored and sorted. The filterQuery parameter supports the full structured query syntax.
For more information about using filters, see Filtering Matching Documents in the Amazon CloudSearch Developer Guide .
--highlight (string)
Retrieves highlights for matches in the specified text or text-array fields. Each specified field must be highlight enabled in the domain configuration. The fields and options are specified in JSON using the form {"FIELD":{"OPTION":VALUE,"OPTION":"STRING"},"FIELD":{"OPTION":VALUE,"OPTION":"STRING"}} .
- pre_tag : specifies the string to prepend to an occurrence of a search term. The default for HTML highlights is <em> . The default for text highlights is * .
- post_tag : specifies the string to append to an occurrence of a search term. The default for HTML highlights is </em> . The default for text highlights is * .
If no highlight options are specified for a field, the returned field text is treated as HTML and the first match is highlighted with emphasis tags: <em>search-term</em> .
For example, the following request retrieves highlights for the actors and title fields.
{ "actors": {}, "title": {"format": "text","max_phrases": 2,"pre_tag": "**","post_tag": "**"} }
--partial | --no-partial (boolean)
Enables partial results to be returned if one or more index partitions are unavailable. When your search index is partitioned across multiple search instances, by default Amazon CloudSearch only returns results if every partition can be queried. This means that the failure of a single search instance can result in 5xx (internal server) errors. When you enable partial results, Amazon CloudSearch returns whatever results are available and includes the percentage of documents searched in the search results (percent-searched). This enables you to more gracefully degrade your users' search experience. For example, rather than displaying no results, you could display the partial results and a message indicating that the results might be incomplete due to a temporary system outage.
--query-options (string)
Configures options for the query parser specified in the queryParser parameter. You specify the options in JSON using the following form {"OPTION1":"VALUE1","OPTION2":"VALUE2" ... "OPTIONN":"VALUEN"} .
The options you can configure vary according to which parser you use:
- defaultOperator : The default operator used to combine individual terms in the search string. For example: defaultOperator: "or" .
--query-parser (string)
Specifies which query parser to use to process the request. If queryParser is not specified, Amazon CloudSearch uses the simple query parser.
Possible values:
- simple
- structured
- lucene
- dismax
--return (string)
Specifies the field and expression values to include in the response. Multiple fields or expressions are specified as a comma-separated list. By default, a search response includes all return enabled fields (_all_fields ). To return only the document IDs for the matching documents, specify _no_fields . To retrieve the relevance score calculated for each document, specify _score .
--size (long)
Specifies the maximum number of search hits to include in the response.
--sort (string)
Specifies the fields or custom expressions to use to sort the search results. Multiple fields or expressions are specified as a comma-separated list. You must specify the sort direction (asc or desc ) for each field; for example, year desc,title asc . To use a field to sort results, the field must be sort-enabled in the domain configuration. Array type fields cannot be used for sorting. If no sort parameter is specified, results are sorted by their default relevance scores in descending order: _score desc . You can also sort by document ID (_id asc ) and version (_version desc ).
For more information, see Sorting Results in the Amazon CloudSearch Developer Guide .
--start (long)
Specifies the offset of the first search hit you want to return. Note that the result set is zero-based; the first result is at index 0. You can specify either the start or cursor parameter in a request; they are mutually exclusive.
For more information, see Paginating Results in the Amazon CloudSearch Developer Guide .
--stats (string)
Specifies one or more fields for which to get statistics information. Each specified field must be facet-enabled in the domain configuration. The fields are specified in JSON using the form:{"FIELD-A":{},"FIELD-B":{}}
There are currently no options supported for statistics.
--search-query (string)
Specifies the search criteria for the request. How you specify the search criteria depends on the query parser used for the request and the parser options specified in the queryOptions parameter. By default, the simple query parser is used to process requests. To use the structured , lucene , or dismax query parser, you must also specify the queryParser parameter.
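Putting several of the parameters above together, a complete invocation might be assembled as in the following sketch. The endpoint URL, domain, and field names here are hypothetical placeholders, not values from this documentation; the structured query syntax follows the form shown in the CloudSearch developer guide.

```python
import shlex

# Hypothetical endpoint for an example search domain.
endpoint = "https://search-mydomain.us-east-1.cloudsearch.amazonaws.com"

# Assemble the CLI arguments as a list, then quote them for the shell.
cmd = [
    "aws", "cloudsearchdomain", "search",
    "--endpoint-url", endpoint,
    "--query-parser", "structured",
    "--search-query", "(and title:'star' (range field=year [1980,2000]))",
    "--return", "_score,title",
    "--size", "10",
]
print(" ".join(shlex.quote(part) for part in cmd))
```

Quoting each argument with shlex.quote avoids shell-escaping mistakes in the JSON and structured-query strings.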
For more information about specifying search criteria, see Searching Your Data in the Amazon CloudSearch Developer Guide .
Output
status -> (structure)
The status information returned for the search request.
timems -> (long)How long it took to process the request, in milliseconds.
rid -> (string)The encrypted resource ID for the request.
hits -> (structure)
The documents that match the search criteria.
found -> (long)The total number of documents that match the search request.
start -> (long)The index of the first matching document.
cursor -> (string)A cursor that can be used to retrieve the next set of matching documents when you want to page through a large result set.
hit -> (list)
A document that matches the search request.
(structure)
Information about a document that matches the search request.
id -> (string)The document ID of a document that matches the search request.
fields -> (map)
The fields returned from a document that matches the search request.
key -> (string)
value -> (list)(string)
exprs -> (map)
The expressions returned from a document that matches the search request.
key -> (string)
value -> (string)
highlights -> (map)
The highlights returned from a document that matches the search request.
key -> (string)
value -> (string)
facets -> (map)
The requested facet information.
key -> (string)
value -> (structure)
A container for the calculated facet values and counts.
buckets -> (list)
A list of the calculated facet values and counts.
(structure)
A container for facet information.
value -> (string)The facet value being counted.
count -> (long)The number of hits that contain the facet value in the specified facet field.
stats -> (map)
The requested field statistics information.
key -> (string)
value -> (structure)
The statistics for a field calculated in the request.
min -> (string)
The minimum value found in the specified field in the result set.
If the field is numeric (int , int-array , double , or double-array ), min is the string representation of a double-precision 64-bit floating point value. If the field is date or date-array , min is the string representation of a date with the format specified in IETF RFC3339 : yyyy-mm-ddTHH:mm:ss.SSSZ.
max -> (string)
The maximum value found in the specified field in the result set.
If the field is numeric (int , int-array , double , or double-array ), max is the string representation of a double-precision 64-bit floating point value. If the field is date or date-array , max is the string representation of a date with the format specified in IETF RFC3339 : yyyy-mm-ddTHH:mm:ss.SSSZ.
count -> (long)The number of documents that contain a value in the specified field in the result set.
missing -> (long)The number of documents that do not contain a value in the specified field in the result set.
sum -> (double)The sum of the field values across the documents in the result set. null for date fields.
sumOfSquares -> (double)The sum of all field values in the result set squared.
mean -> (string)
The average of the values found in the specified field in the result set.
If the field is numeric (int , int-array , double , or double-array ), mean is the string representation of a double-precision 64-bit floating point value. If the field is date or date-array , mean is the string representation of a date with the format specified in IETF RFC3339 : yyyy-mm-ddTHH:mm:ss.SSSZ.
stddev -> (double)The standard deviation of the values in the specified field in the result set.
Can the Garbage Collection be forced explicitly?
No, the Garbage Collection cannot be forced explicitly. We may request the JVM for garbage collection by calling the System.gc() method, but this does not guarantee that the JVM will perform the garbage collection.
Signature of finalize() method
protected void finalize() { //finalize-code }
finalize() method is defined in the java.lang.Object class, therefore it is available to all classes.
finalize() method is declared as protected inside the Object class.
finalize() method gets called only once by GC threads.
The gc() method is used to request garbage collection explicitly. However, gc() does not guarantee that the JVM will perform the garbage collection; it only requests it. This method is present in both the System and Runtime classes.
public class Test {
    public static void main(String[] args) {
        Test t = new Test();
        t = null;
        System.gc();
    }

    public void finalize() {
        System.out.println("Garbage Collected");
    }
}
Output :
Garbage Collected
Asked by:
Cannot download updates in SCCM after SP1 Updrade
Hi all,
I have upgraded my SCCM 2012 RTM to SP1 last week and since then I cannot download updates.
It gives the error message "Error: Failed to download content id 16779439. Error: Access is denied
I have tried reinstalling WSUS onto the server which seems to be working fine, but I still cannot download the updates in SCCM.
I am a full administrator of SCCM and a domain admin on the Server itself.
Any help would be appreciated.
Thanks
Richard
Question
All replies
Hi,
It sounds like a problem with Share/file permissions in the folder that you are trying to download the updates to, I would start with checking share permissons/file permissions to the share you have selected as Software Update Package Source.
You can check PatchDownloader.log for more details.
Regards,
Jörgen
-- My System Center blog ccmexec.com -- Twitter @ccmexec
I'm not sure what permissions I should be looking for. Everyone has read access to the folder; there is a Network Service account with full access, and other than that the WSUS Administrators and Administrators groups have full access also.
Here is a sample of the logs:
Trying to connect to the root\SMS namespace on the AS024.smd.swanseamet.ac.uk machine. $$<Software Updates Patch Downloader><02-13-2013 11:04:48.236+00><thread=4648 (0x1228)>
Connected to \\AS024.smd.swanseamet.ac.uk\root\SMS $$<Software Updates Patch Downloader><02-13-2013 11:04:48.237+00><thread=4648 (0x1228)>
Trying to connect to the \\AS024.smd.swanseamet.ac.uk\root\sms\site_MP1 namespace on the AS024.smd.swanseamet.ac.uk machine. $$<Software Updates Patch Downloader><02-13-2013 11:04:48.240+00><thread=4648 (0x1228)>
Connected to \\AS024.smd.swanseamet.ac.uk\root\sms\site_MP1 $$<Software Updates Patch Downloader><02-13-2013 11:04:48.241+00><thread=4648 (0x1228)>
Download destination = \\as024\UpdateServicesPackages\6357efc3-8d00-4981-95ba-367b12372437.1\accessrtmuisp1-en-us.cab . $$<Software Updates Patch Downloader><02-13-2013 11:04:48.451+00><thread=4648 (0x1228)>
Contentsource = . $$<Software Updates Patch Downloader><02-13-2013 11:04:48.451+00><thread=4648 (0x1228)>
Downloading content for ContentID = 16779439, FileName = accessrtmuisp1-en-us.cab. $$<Software Updates Patch Downloader><02-13-2013 11:04:48.459+00><thread=4648 (0x1228)>
Failed to create directory \\as024\UpdateServicesPackages\6357efc3-8d00-4981-95ba-367b12372437.1\, error 5 $$<Software Updates Patch Downloader><02-13-2013 11:04:48.465+00><thread=4596 (0x11F4)>
ERROR: DownloadContentFiles() failed with hr=0x80070005 $$<Software Updates Patch Downloader><02-13-2013 11:04:48.466+00><thread=4648 (0x1228)>
I hope this is of some use.
Thanks
Richard
First check the "UpdateServicesPackages" share permissions and ensure that the SYSTEM account has access. For testing purposes, you could allow Everyone full control on the share - but only for testing purposes. You are definitely seeing error 5 here, which is access denied to that path.
Secondly, might I recommend creating subfolders within that share for your Software Update Packages, rather than pointing directly at the share. For instance, if you are having a "Windows 7 Updates" package, then specify the path to be "\\as024\UpdateServicesPackages\Windows 7 Updates" and this will store those specific updates within it.
Andy
My Personal Blog:
- We are having this same issue with a new install of SCCM SP1 CU2. All shares are set up correctly. We found that when we ran the console as an administrator, everything worked.
Kristopher Turner | Not the brightest bulb but by far not the dimmest bulb.
This is happening in our Lab as well once we installed CU2. Both are Site servers running on Windows 2008 R2. As soon as you start the Console in Administrator mode everything is working. It was working fine before CU2.
Kristopher Turner | Not the brightest bulb but by far not the dimmest bulb.
I know this is old, however, if you are selecting a deployment package and getting the error, try creating a new deployment package pointing at a new folder instead of reusing the old one, for example c:\share\source\windowsupdates\2014, and see if that works. I was using windowsupdates and I would receive the same error. When I added the 2014 folder it worked like a charm.
Regards,
Betty
Hello,
Please try the following: in the WSUS installation directory, right-click the UpdateServicesPackages folder, choose Sharing, then Advanced Sharing, then Permissions, and add the member/account from which you want to download the patches.
Hope this will work...
Thanks - AnimeshShankar.
- Edited by Animesh Shankar Tuesday, February 25, 2014 6:04 AM
NAME
VCL - Varnish Configuration Language
DESCRIPTION
The VCL language is a small domain-specific language designed to be used to define request handling and document caching policies for Varnish Cache.
VCL supports both regular expression and ACL matching using the ~ and the !~ operators. Basic strings are enclosed in " ... ", and may not contain newlines.
Subroutines may terminate by calling return() with one of the following keywords (which keywords are valid depends on the subroutine):
o deliver
o error
o fetch
o hash
o hit_for_pass
o lookup
o ok
o pass
o pipe
o restart
Directors
The family of random directors
The random director uses a random number to seed the backend selection.
The client director
The client director picks a backend based on the client's identity. You can set the VCL variable client.identity to identify the client by picking up the value of a session cookie or similar.
The hash director
Backend probes
Backends can be probed to see whether they should be considered healthy or not. The return status can also be checked by using req.backend.healthy. Probes take the following parameters:
.url
Specify a URL to request from the backend. Defaults to "/".
.request
Specify a full HTTP request using multiple strings. .request will have \r\n automatically inserted after every string. If specified, .request will take precedence over .url.
.window
How many of the latest polls we examine to determine backend health. Defaults to 8.
.threshold
How many of the polls in .window must have succeeded for us to consider the backend healthy. Defaults to 3.
.initial
How many of the probes are considered good when Varnish starts. Defaults to the same amount as the threshold.
.expected_response
The expected backend HTTP response code. Defaults to 200.
.interval
Defines how often the probe should check the backend. Default is every 5 seconds.
.timeout
How fast each probe times out. Default is 2 seconds.
ACLs
An ACL declaration creates and initializes a named access control list which can later be used to match client addresses:
acl local {
    "localhost";         // myself
    "192.0.2.0"/24;      // and everyone on the local network
    !
"192.0.2.23"; // except for the dialin router } If an ACL entry specifies a host name which Varnish is unable to resolve, it will match any address it is com- pared) { return (pipe); } Regular Expressions In Varnish 2.1.0 Varnish switched to using (req.http.host ~ "(?i)example.com$") { ... } Functions The following built-in functions are available: hash_data(str) Adds a string to the hash input. In default.vcl hash_data() is called on the host and URL of the request.. ban(ban expression) ban_url(regex) Bans all objects in cache whose URLs match regex. Subroutines A subroutine is used to group code for legibility or reusability:: sub pipe_if_local { if (client.ip ~ local) { return (pipe); } } Subroutines in VCL do not take arguments, nor do they return values. To call a subroutine, use the call keyword followed by the subroutine's name: call pipe_if_local;_init Called when VCL is loaded, before any requests pass through it. Typically used to initialize VMODs. return() values: ok Normal return, VCL continues loading. sub- mitted over the same client connection are handled normally. The vcl_recv subroutine may terminate with calling return() with one of the following keywords: error code [reason] Return the specified error code to the client and abandon the request. pass Proceed with pass mode. restart Restart the transaction. Increases the restart counter. If the number of restarts is higher than max_restarts varnish emits a guru meditation error. vcl_hash You may call hash_data() on the data you would like to add to the hash. The vcl_hash subroutine may terminate with calling return() with one of the following keywords: hash Proceed. vcl_hit Called after a cache lookup if the requested document was found in the cache. The vcl_hit subroutine may terminate with calling return() with one of the following keywords: deliver Deliver the cached object to the client. Control will eventually pass to vcl_deliver. 
error code [reason]
Return the specified error code to the client and abandon the request.
pass
Switch to pass mode. Control will eventually pass to vcl_pass.
restart
Restart the transaction. Increases the restart counter. If the number of restarts is higher than max_restarts varnish emits a guru meditation error.
vcl_fetch
Called after a document has been successfully retrieved from the backend. The vcl_fetch subroutine may terminate with calling return() with one of the following keywords:
deliver
Possibly insert the object into the cache, then deliver it to the client. Control will eventually pass to vcl_deliver.
error code [reason]
Return the specified error code to the client and abandon the request.
hit_for_pass
Pass in fetch. This will create a hit_for_pass object. Note that the TTL for the hit_for_pass object will be set to the current value of beresp.ttl. Control will be handed to vcl_deliver on the current request, but subsequent requests will go directly to vcl_pass based on the hit_for_pass object.
restart
Restart the transaction. Increases the restart counter. If the number of restarts is higher than max_restarts varnish emits a guru meditation error.
vcl_deliver
Called before a cached object is delivered to the client.
The vcl_deliver subroutine may terminate with one of the following keywords:
deliver
Deliver the object to the client.
error code [reason]
Return the specified error code to the client and abandon the request.
restart
Restart the transaction. Increases the restart counter. If the number of restarts is higher than max_restarts varnish emits a guru meditation error.
vcl_error
Called when we hit an error, either explicitly or implicitly due to backend or internal errors.
The vcl_error subroutine may terminate by calling return with one of the following keywords:
deliver
Deliver the error object to the client.
restart
Restart the transaction. Increases the restart counter. If the number of restarts is higher than max_restarts varnish emits a guru meditation error.
vcl_fini
Called when VCL is discarded only after all requests have exited the VCL.
Typically used to clean up VMODs.
return() values:
ok
Normal return, VCL will be discarded.
Variables
The following variables are available in requests:
client.identity
Identification of the client, used to load balance in the client director.
req.backend.healthy
Whether the backend is healthy or not. Requires an active probe to be set on the backend.
req.http.header
The corresponding HTTP header.
req.hash_always_miss
Force a cache miss for this request. If set to true Varnish will disregard any existing objects and always (re)fetch from the backend.
req.hash_ignore_busy
Ignore any busy object during cache lookup. You would want to do this if you have two servers looking up content from each other to avoid potential deadlocks.
req.can_gzip
Does the client accept the gzip transfer encoding.
req.restarts
A count of how many times this request has been restarted.
req.esi
Boolean. Set to false to disable ESI processing regardless of any value in beresp.do_esi. Defaults to true. This variable is subject to change in future versions; you should avoid using it.
req.esi_level
A count of how many levels of ESI requests we're currently at.
req.grace
Set to a period to enable grace.
req.xid
Unique ID of this request.
beresp.do_esi
Boolean. Set it to true to parse the object for ESI directives. Will only be honored if req.esi is true.
beresp.do_gzip
Boolean. Gzip the object before storing it. Defaults to false.
beresp.do_gunzip
Boolean. Unzip the object before storing it in the cache. Defaults to false.
beresp.proto
The HTTP protocol version used the backend replied with.
beresp.status
The HTTP status code returned by the server.
beresp.response
The HTTP status message returned by the server.
beresp.ttl
The object's remaining time to live, in seconds. beresp.ttl is writable.
beresp.grace
Set to a period to enable grace.
beresp.saintmode
Set to a period to enable saint mode.
beresp.backend.name
Name of the backend this response was fetched from.
beresp.backend.ip
IP of the backend this response was fetched from.
beresp.backend.port
Port of the backend this response was fetched from.
beresp.storage
Set to force Varnish to save this object to a particular storage backend.
After the object is entered into the cache, the following (mostly read-only) variables are available when the object has been located in cache, typically in vcl_hit and vcl_deliver.
obj.proto
The HTTP protocol version used when the object was retrieved.
obj.status
The HTTP status code returned by the server.
obj.response
The HTTP status message returned by the server.
obj.ttl
The object's remaining time to live, in seconds. obj.ttl is writable.
obj.lastuse
The approximate time elapsed since the object was last requested, in seconds.
obj.hits
The approximate number of times the object has been delivered. A value of 0 indicates a cache miss.
obj.grace
The object's grace period in seconds. obj.grace is writable.
obj.http.header
The corresponding HTTP header.
Grace and saint mode
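The grace and saint mode variables described above (req.grace, beresp.grace, beresp.saintmode) are normally set together in vcl_recv and vcl_fetch. A minimal sketch follows; the periods and the 500-status trigger are illustrative choices, not values recommended by this manual:

```vcl
sub vcl_recv {
    # Accept serving objects that are up to one hour past their TTL.
    set req.grace = 1h;
}

sub vcl_fetch {
    # Keep objects one hour beyond their TTL so stale copies can be
    # served while a fresh one is fetched.
    set beresp.grace = 1h;
    # Saint mode: if the backend errors out, avoid asking it for this
    # object again for the next 10 seconds and retry.
    if (beresp.status == 500) {
        set beresp.saintmode = 10s;
        return (restart);
    }
}
```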
EXAMPLES
The following code is the equivalent of the default configuration with the backend address set to "backend.example.com" and no backend port specified:: backend default { .host = "backend.example.com"; .port = "http"; } /*- * Copyright (c) 2006 Verdens Gang AS * Copyright (c) 2006-2011 Varnish Software AS * All rights reserved. * * Author: Poul-Henning Kamp <phk@phk.freebsd default VCL code. * * NB! You do NOT need to copy & paste all of these functions into your * own vcl code, if you do not provide a definition of one of these * functions, the compiler will automatically fall back to the default * code from this file. * * This code will be prefixed with a backend declaration built from the * -b argument. */); } sub vcl_pipe { # Note that only the first request to the backend will have # X-Forwarded-For set. If you use X-Forwarded-For and want to # have it set for all requests, make sure to have: # set bereq.http.connection = "close"; # here. It is not set by default as it might break some broken web # applications, like IIS with NTLM authentication. return (pipe); } sub vcl_pass { return (pass); } sub vcl_hash { hash_data(req.url); if (req.http.host) { hash_data(req.http.host); } else { hash_data(server.ip); } return (hash); } sub vcl_hit { return (deliver); } sub vcl_miss { return (fetch); } sub vcl_fetch { if (beresp.ttl <= 0s || beresp.http.Set-Cookie || beresp.http.Vary == "*") { /* * Mark as "Hit-For-Pass" for the next 2 minutes */ set beresp.ttl = 120 s; return (hit_for_pass); } return (deliver); } sub vcl_deliver { return (deliver); } sub vcl_error { set obj.http.Content-Type = "text/html; charset=utf-8"; set obj.http.Retry-After = "5";> <hr> <p>Varnish cache server</p> </body> </html> "}; return (deliver); } sub vcl_init { return (ok); } sub vcl_fini { return (ok); }."; } }
SEE ALSO
o varnishd(1) o vmod_std(7)
HISTORY
VCL was developed by Poul-Henning Kamp in cooperation with Verdens Gang AS, Redpill Linpro and Varnish Software. This manual page was written by Dag-Erling Smorgrav and later edited by Poul-Henning Kamp and Per Buer.
This document is licensed under the same license as Varnish itself. See LICENSE for details. o Copyright (c) 2006 Verdens Gang AS o Copyright (c) 2006-2011 Varnish Software AS
AUTHOR
Dag-Erling Smorgrav, Poul-Henning Kamp, Kristian Lyngstol, Per Buer | http://manpages.ubuntu.com/manpages/precise/en/man7/vcl.7.html | CC-MAIN-2015-48 | refinedweb | 1,794 | 60.41 |
Question:
Disclaimer: I'm working on Euler Problem 9.
I'm adding up some pretty large numbers, all the primes from 1 to 2 000 000.
Summing those primes takes forever. I'm using the haskell built in function 'sum'.
as in:
sum listOfPrimes
Are there any other faster options?
--My prime generator was the slow link in my code.
Solution:1
It sounds like your problem is not summing the numbers, but generating them. What is your implementation of listOfPrimes?
This paper may be of interest:
Solution:2
I'm hoping you are using ghc -O2 and not ghci, right? Your problem will be in the generation, not the summation.
One faster way is to use stream fusion-based sequences, which optimize better. With regular lists:
-- fuse:
main = print (sum (takeWhile (<= 2000000) primes))
We get,
$ ghc -O2 --make A.hs
$ time ./A
142913828922
./A 9.99s user 0.17s system 99% cpu 10.166 total
Switching to streams, so sum . takeWhile fuses:
import qualified Data.List.Stream as S

main = print (S.sum (S.takeWhile (<= 2000000) primes))
Saves some small time,
$ time ./A
142913828922
./A 9.60s user 0.13s system 99% cpu 9.795 total
But your problem will be prime generation, as we can see if we discard the summation altogether, replacing sum with last:
$ time ./A
1999993
./A 9.65s user 0.12s system 99% cpu 9.768 total
So find a better prime generator. :-)
Finally, there's a library on Hackage for fast prime generators:
Using it, our time becomes:
$ cabal install primes
$ cabal install stream-fusion
$ cat A.hs
import qualified Data.List.Stream as S
import Data.Numbers.Primes

main = print . S.sum . S.takeWhile (<= 2000000) $ primes
$ ghc -O2 -fvia-C -optc-O3 A.hs --make
$ time ./A
142913828922
./A 0.62s user 0.07s system 99% cpu 0.694 total
Solution:3
The slow part of your function is certainly the generation of the primes, not the sum function. A nice way to generate primes would be:
isprime :: (Integral i) => i -> Bool
isprime n = isprime_ n primes
  where isprime_ n (p:ps)
          | p*p > n        = True
          | n `mod` p == 0 = False
          | otherwise      = isprime_ n ps

primes :: (Integral i) => [i]
primes = 2 : filter isprime [3,5..]
I think it's very readable, though maybe a bit surprising that it works at all because it uses recursion and laziness of the
primes list. It's also rather fast, though one could do further optimizations at the expense of readability.
Solution:4
I wrote a "Sieve of Eratosthenes" here:
Using this, it takes about 25s to
print . sum $ takeWhile (<= 20000000) on my desktop. Could it be better? Sure, J takes under 1 second to run
+/p:i.p:^:_1]20000000 12272577818052
but it has a pretty optimized prime number generator.
Today, JavaScript is a universal web language: one language supported by every browser that does not need any special installation. No web development today is possible without using JavaScript, and JavaScript has spread beyond client-side development to server-side development with platforms like Node.js. But ask any developer who is learning JavaScript or coming from an object-oriented background, and they will roll their eyes when asked how easy it is to program in JavaScript. One of the most powerful features of JavaScript is its dynamic typing, wherein we can assign anything to any variable, but this very feature becomes a roadblock in large-scale JavaScript applications if we are not careful. JavaScript also does not have good IntelliSense support, and you will very rarely find errors at compile time, especially type errors.
As stated on the official TypeScript website, “TypeScript is a typed superset of JavaScript that compiles to plain JavaScript”. TypeScript is not a replacement for JavaScript, and it does not add any new runtime features to JavaScript. TypeScript provides developers with features like type safety, compile time type checking and object oriented constructs on top of JavaScript; basically, it lets developers think in object oriented terms when writing JavaScript. The most amazing thing about TypeScript is that it compiles to JavaScript, so we don’t need any new VM to run it: the generated JavaScript can be referenced in a web page or used on the server side, as in Node.js.
TypeScript is an open-source language developed by Microsoft, but it is in no way tied to the Microsoft platform and can be used in any environment where there is a need to write JavaScript. Microsoft does provide great support for TypeScript in Visual Studio, and we will be using Visual Studio for our examples, but we can also use the TypeScript compiler from the command prompt to compile TypeScript into JavaScript (we will see a brief example of that as well).
TypeScript is included in Visual Studio 2013 Update 2 by default and can be installed for Visual Studio 2012 from the installer provided on the TypeScript website. The current version at the time of writing this article is 1.0. TypeScript also provides support for other editors like Sublime Text, Emacs and Vim, and has a package which can be used with Node.js. All these installers can be found on the TypeScript website.
Once TypeScript is installed for Visual Studio, it adds new Project and Item templates. This enables us to create a new project for TypeScript as well as add TypeScript files to existing applications.
TypeScript file template can be created in a similar way as other files are, by selecting “Add New Item” and then selecting a TypeScript file.
All TypeScript files have *.ts extension. Visual Studio plugin for TypeScript automatically creates and updates a corresponding JavaScript file with the same name whenever we modify the TypeScript file and save. This generated JavaScript file can then be used as a normal file and included in any web page.
When we create a TypeScript project, a default “app.ts” file gets created within the project with a default code implementation. Visual Studio also provides a side by side view of the TypeScript file with the corresponding JavaScript file, and every time we save the TypeScript file, we can see the change in the JavaScript file as well. TypeScript achieves this feature by creating a corresponding “*.js.map” file. Visual Studio uses this map file to map the TypeScript code to the generated JavaScript code. Browsers like Chrome also use these map files to help us debug TypeScript files directly instead of debugging the JavaScript file.
In case you see that compiling TypeScript does not automatically create or update the JavaScript file, check the options under the Tools menu as shown below. This problem commonly occurred in TypeScript versions before 1.0.
We can also generate the JavaScript file from the command line using the “tsc.exe” command and passing the <<filename>>.ts file; tsc.exe is available in “Microsoft SDK/TypeScript”.
TypeScript provides developers with object oriented concepts and compile time type checking on top of JavaScript, which helps in writing more structured, maintainable and robust code. TypeScript introduces a few of the standard object oriented terms like Classes, Interfaces, Modules and Variables, which in the end get converted into various different forms of JavaScript. The code structure for a typical TypeScript file is shown below.
A Module is like a namespace in the .NET world and can contain classes and interfaces. Modules do not have any features of their own; they just provide a container which can be used to structure code in a logical form. Think of a module simply as a container for a business/logical entity.
Interfaces are exactly like interfaces in .NET, providing a contract for classes to implement. TypeScript provides compile time error checking for the classes implementing these interfaces: if all the methods have not been implemented properly (including method signatures), TypeScript flags those at design time as well as compile time. An interesting thing about interfaces is that they do not exist in JavaScript, and hence when we compile a TypeScript file into JavaScript, interfaces are omitted.
The concept of Classes is again very similar to the .NET/Java world. Classes contain variables, properties and methods which form one logical entity. TypeScript also allows us to set the scope of variables and functions with keywords like “private” and “public”, though that scope does not have any effect on the generated JavaScript.
Functions are methods where the logic is implemented. TypeScript provides compile time support to make sure anyone calling a function conforms to the declared input argument and return value types.
Variables are the fields defined inside a class or a function. TypeScript allows us to define a variable using the keyword “var” and assign a data type to it. Once a data type is assigned, any further usage of the variable has to be with the same data type, else TypeScript will generate errors at design and compile time. TypeScript is also smart enough to infer the type of a variable when it is declared and initialized, and then treat it as that type. In cases where TypeScript is not able to infer the type, it will assign that variable the type “any”.
TypeScript provides some primitive types (shown below) as well as a dynamic type, “any”. The “any” type is like the “dynamic” keyword in C#, wherein we can assign any type of value to the variable. TypeScript will not flag any type errors for a variable of type “any”.
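To make the behaviour of “any” concrete, here is a small sketch (the variable names are made up for illustration) contrasting an “any” variable with a typed one:

```typescript
// "any" accepts values of every type; TypeScript performs no type checking on it.
var anything: any = 30;        // starts out holding a number
anything = 'now a string';     // re-assigning a string is allowed
anything = { ok: true };       // and so is an object

// A typed variable, by contrast, rejects other types at compile time:
var count: number = 30;
// count = 'thirty';           // error TS2322: 'string' is not assignable to 'number'
```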
In TypeScript, we define a variable with a type by just appending the variable name with colon followed by the type name as shown in the below example.
var num: number = 30; //variable num is of type number
Below is the list of primitive types available in TypeScript:
number: all numeric values, including integer and floating point values (there are no separate “float” or “double” types)
boolean: true or false
string: text values
null: the null value
undefined: the undefined value
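As a quick sketch, one variable of each primitive type could be declared like this (the values are arbitrary):

```typescript
var total: number = 3.14;                 // one number type covers integers and floats
var isDone: boolean = false;
var greeting: string = 'Hello TypeScript';
var missing = null;                       // holds the null value
var notSet;                               // undefined until a value is assigned
```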
Array Type: TypeScript also allows developers to create array objects similar to that in .NET by just adding square brackets as shown in the below example:
TypeScript:
var array: string[] = ['test', 'dummy'];
var first: string = array[0];
JavaScript:
var array = ['test', 'dummy'];
var first = array[0];
This allows us to create a complex object in TypeScript with primitive types. In TypeScript, array is accessed with zero based index.
TypeScript also allows us to create complex variables as shown in the below example:
TypeScript:
var name = { firstName: 'Homer', lastName: 'Simpson' };
name.firstName = 2; //This gives compile time error
JavaScript:
var name = { firstName: 'Homer', lastName: 'Simpson' };
name.firstName = 2; //No Error in JavaScript
Also, in the above case, notice that we have not defined the type of the name variable, but TypeScript is smart enough to infer that name is a complex object with “firstName” and “lastName” string variables, and if we try to assign anything other than a string to either of these variables, TypeScript will show a design time error with a red line below the name variable.
As we saw above, TypeScript provides type inference: even if we don’t define a variable with a type, TypeScript infers the type from the value with which we initialized the variable. If we neither initialize the variable nor define a type when declaring it, TypeScript assigns the “any” type to the variable. But, as JavaScript does not distinguish between any of these types, all the variables will look the same in JavaScript.
TypeScript:
var dummy; //any type
var num = 10; //number
var str = 'Hello TypeScript'; //string
var bln = true; //boolean
var stringArray = ['Homer', 'Simpson']; //string[]
JavaScript:
var dummy;
var num = 10;
var str = 'Hello TypeScript';
var bln = true;
var stringArray = ['Homer', 'Simpson'];
Types are only valid in TypeScript and play no role in the generated JavaScript. Types are just used by TypeScript for compile time checking and enable developers to make sure correct values are passed to variables.
Type checking is also available in functions. We can define types for the input parameters, but if a type is not mentioned, TypeScript takes it as “any”. For the return type, if we don’t define the type, TypeScript will infer it from the use of the variables involved.
TypeScript:
var addFunction = function (n1: number, n2: number, n3: number) {
var sum = n1 + n2 + n3;
return sum;
};
var str1: string = addFunction(10, 20, 30); //Gives compile time error as
//return type of a function is number and is being assigned to a string
var sum: number = addFunction(10, 20, 30); // This works
var result = addFunction(10, 20, 30); // This also works
JavaScript:
var addFunction = function (n1, n2, n3) {
var sum = n1 + n2 + n3;
return sum;
};
var str1 = addFunction(10, 20, 30);
var sum = addFunction(10, 20, 30);
var result = addFunction(10, 20, 30);
In the above table, we see that TypeScript uses type inference to determine that “addFunction” has a “number” return type, based on the input parameter types. When we try to assign the result of the function to a string variable, TypeScript gives a design and compile error. This does not happen in JavaScript, which will run without complaint. Also, if we had not defined types for the variables “n1”, “n2” and “n3”, TypeScript would have assigned them the “any” type and inferred the return type as “any”. We can also explicitly define the return type by suffixing a colon and the type after the parameter list, for example:
var addFunction = function (n1: number, n2: number, n3: number) : number {
var sum = n1 + n2 + n3;
return sum;
};
TypeScript also allows us to declare a parameter in a function as optional, so that anyone calling that function may or may not pass a value for it. To make a parameter optional, we need to add “?” to the parameter name. Again, optional parameters don’t exist in JavaScript, and hence they will not be handled there.
TypeScript:
var addFunction = function (n1: number, n2: number, n3?: number) : number {
var sum = n1 + n2 + n3;
return sum;
};
var sum: number = addFunction(10, 20);
JavaScript:
var addFunction = function (n1, n2, n3) {
var sum = n1 + n2 + n3;
return sum;
};
var sum = addFunction(10, 20);
An optional parameter has to be the last parameter in the list, and there cannot be a required parameter after an optional one, similar to the C# convention. We can also use the optional concept on variables/fields defined in classes, as shown in the next chapter.
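Since an omitted optional parameter arrives as undefined inside the function, the body usually guards before using it. A minimal sketch (the fallback to 0 is just an illustrative choice):

```typescript
var addFunction = function (n1: number, n2: number, n3?: number): number {
    if (n3 === undefined) {
        n3 = 0; // caller omitted the third argument; fall back to a neutral value
    }
    return n1 + n2 + n3;
};

var twoArgs = addFunction(10, 20);       // 30
var threeArgs = addFunction(10, 20, 5);  // 35
```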
TypeScript classes are declared using the “class” keyword. The “class” and “constructor” keywords are only available in TypeScript; once the code is converted to JavaScript, there is no way to distinguish a class from an ordinary constructor function, and both can be called. TypeScript defines a constructor using the keyword “constructor”.
TypeScript:
class Student {
private firstName: string;
private lastName: string;
yearOfBirth: number; //Public scope by default
schoolName: string;
city: string;
//Constructor
constructor(firstName: string, lastName: string, schoolName: string,
city: string, yearOfBirth: number) {
this.firstName = firstName;
this.lastName = lastName;
this.yearOfBirth = yearOfBirth;
this.city = city;
this.schoolName = schoolName;
}
//Function
age() {
return 2014 - this.yearOfBirth;
}
//Function
printStudentFullName(): void {
alert(this.lastName + ',' + this.firstName);
}
}
JavaScript:
var Student = (function () {
//Constructor
function Student(firstName, lastName, schoolName, city, yearOfBirth) {
this.firstName = firstName;
this.lastName = lastName;
this.yearOfBirth = yearOfBirth;
this.city = city;
this.schoolName = schoolName;
}
//Function
Student.prototype.age = function () {
return 2014 - this.yearOfBirth;
};
//Function
Student.prototype.printStudentFullName = function () {
alert(this.lastName + ',' + this.firstName);
};
return Student;
})();
In the above constructor defined in TypeScript, we have a few input variables which are then mapped to local variables inside the class. We can modify this constructor to get implicit variable declaration and mapping by defining the scope along with the variable name in the constructor definition, as shown below:
constructor(private firstName: string, private lastName: string, public schoolName: string,
public yearOfBirth: number) {}
In the above case, variables now are defined and declared inside the constructor argument only. The scope mentioned along with the argument becomes the scope of the variable for that class.
To consume the class, the behavior is similar to C#: we use the “new” keyword to initialize the class object, pass any parameters the constructor requires, and then call the functions or access the public variables of the class.
var student = new Student('Tom', 'Hanks', 'World Acting School',1950);
var age = student.age();
student.printStudentFullName();
var schoolName = student.schoolName;
In TypeScript, we can also define any parameter with a default value, and then when calling that function, we are not required to pass the value of those parameters. If we don’t pass the value, then the function takes the default value assigned, else the value passed in the function call is taken as shown in the example below:
constructor(private firstName: string, private lastName: string, public schoolName: string,
public yearOfBirth: number = 1990){}
In the above example, we have assigned an optional value to “yearOfBirth” field and if in case the calling function does not pass any value for this field, constructor will initialize this with 1990.
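A sketch of how the default value behaves at the call site. The class below is a trimmed, hypothetical variant of the Student class used in this chapter, with the alert-based output removed so the values can be inspected directly:

```typescript
class Student {
    constructor(private firstName: string, private lastName: string,
                public schoolName: string, public yearOfBirth: number = 1990) { }
    age(): number {
        return 2014 - this.yearOfBirth;
    }
}

var defaulted = new Student('Tom', 'Hanks', 'World Acting School');       // yearOfBirth falls back to 1990
var explicit = new Student('Tom', 'Hanks', 'World Acting School', 1950);  // default overridden
```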
The case of optional parameters is similar: we can make a parameter optional by adding a “?” after the parameter name. In this case, when the function is called, we don’t need to pass a value for that parameter.
Subject(subjectList?: string[]) {
if (subjectList == null) {
alert('Oh, You have not subscribed to any course');
}
}
Here, if we call the Subject method without passing any parameter value, TypeScript will not show any error.
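The same idea, sketched as a runnable fragment that returns a message instead of calling alert (the class and method names here are illustrative):

```typescript
class Enrollment {
    subject(subjectList?: string[]): string {
        if (subjectList == null) { // == catches both null and undefined
            return 'Oh, You have not subscribed to any course';
        }
        return 'Subscribed to ' + subjectList.length + ' course(s)';
    }
}

var enrollment = new Enrollment();
var noCourses = enrollment.subject();                     // parameter omitted entirely
var twoCourses = enrollment.subject(['Maths', 'Physics']);
```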
TypeScript offers support for interfaces, which are used as contracts for classes, similar to C#. To declare an interface, we use the keyword “interface” followed by the interface name. The important thing to know about interfaces is that when the code is compiled to JavaScript, interface code is ignored and no corresponding JavaScript is generated.
interface IStudent {
yearOfBirth: number;
age : () => number;
}
Classes implement interfaces using the keyword “implements” followed by the interface name. As in C#, classes can implement multiple interfaces, and TypeScript does a design time check to make sure that the class implements all the methods of each interface.
class Student implements IStudent
Here, the Student class will now have to implement the “age” method and define the property “yearOfBirth”, else TypeScript will show a design time error mentioning which property/method has not been implemented in the class.
Having classes and interfaces means TypeScript also supports inheritance, which is a very powerful feature and aligns client side code with the way we write C# code. Using inheritance, we can extend classes, implement and extend interfaces, and write code which very closely resembles OOP. In TypeScript, when we extend a base class in a child class, we use the keyword “super” to call the constructor of the base class or even the public methods of the base class.
To extend a class in TypeScript, we use the “extends” keyword after the class name, followed by the class we want to extend. We can also make interfaces inherit from other interfaces.
//Interface
interface IStudent {
yearOfBirth: number;
age : () => number;
}
//Base Class
class College {
constructor(public name: string, public city: string) {
}
public Address(streetName: string) {
return ('College Name:' + this.name + ' City: ' + this.city + ' Street Name: ' + streetName);
}
}
//Child Class implements IStudent and inherits from College
class Student extends College implements IStudent {
firstName: string;
lastName: string;
yearOfBirth: number;
//private _college: College;
//Constructor
constructor(firstName: string, lastName: string, name: string, city: string, yearOfBirth: number) {
super(name, city);
this.firstName = firstName;
this.lastName = lastName;
this.yearOfBirth = yearOfBirth;
}
age () {
return 2014 - this.yearOfBirth;
}
CollegeDetails() {
var y = super.Address('Maple Street');
alert(y);
}
printDetails(): void {
alert(this.firstName + ' ' + this.lastName + ' College is: ' + this.name);
}
}
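Consuming the derived class could look like the sketch below. It is a self-contained variant of the classes above, with alert replaced by returned strings so the values are easy to inspect:

```typescript
interface IStudent {
    yearOfBirth: number;
    age: () => number;
}

class College {
    constructor(public name: string, public city: string) { }
    public address(streetName: string): string {
        return 'College Name:' + this.name + ' City: ' + this.city +
            ' Street Name: ' + streetName;
    }
}

class Student extends College implements IStudent {
    constructor(public firstName: string, public lastName: string,
                name: string, city: string, public yearOfBirth: number) {
        super(name, city); // initialize the base class first
    }
    age(): number {
        return 2014 - this.yearOfBirth;
    }
    collegeDetails(): string {
        return super.address('Maple Street'); // call a base-class method
    }
}

var student = new Student('Tom', 'Hanks', 'World Acting College', 'My City', 1950);
var details = student.collegeDetails();
```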
Modules in TypeScript serve a similar purpose to namespaces in C#: they allow us to group together logically related code. Modules help us follow the “separation of concerns” concept in client side code, wherein each module can have a specific role. Modules also provide flexibility by allowing us to import other modules and export features outside a module.
Everything inside a module is scoped to that module; hence, the classes and interfaces placed inside a module cannot be accessed outside until we explicitly expose them with the keyword “export”.
Modules are declared using “module” keyword. We can nest one module inside another module which can help us provide better code maintainability.
module Movie {
class Comedy {
constructor(public actorName: string) { }
getAllMovies() {
return (["Alien", "Paul Bart", "Home Alone"]);
}
getMoviesbyActor() {
if (this.actorName === 'Seth Rogen') {
return (["Observe and Report"]);
}
else {
return (["Home Alone"]);
}
}
}
}
Here, we have declared a module which has a class defined in it. The interesting thing to note is that the class “Comedy” is not visible outside the module “Movie” but can be accessed inside the “Movie” module, as it is in that scope. This helps in separating code and provides the relevant scope as per business needs.
To be able to access classes, interfaces or variables outside a module, we need to mark those with keyword “export”. Once a class inside a module is marked with “export” keyword, all the public variables, functions are also accessible outside the module.
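A small sketch of the export rule, reusing the Movie module idea from above (the Drama class is added here purely to illustrate a non-exported member):

```typescript
module Movie {
    // Exported: reachable outside the module as Movie.Comedy
    export class Comedy {
        getAllMovies(): string[] {
            return ['Alien', 'Paul Blart', 'Home Alone'];
        }
    }

    // Not exported: only code inside the Movie module can use this class
    class Drama {
    }
}

var comedy = new Movie.Comedy();
var movies = comedy.getAllMovies();
// var drama = new Movie.Drama(); // error: 'Drama' is not accessible outside Movie
```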
Hence, modules help us minimize the scope of classes and provide a more robust platform to manage client side code. In any application, irrespective of its size, we will want to access modules inside other modules, get references, call the methods inside classes and so forth. To achieve this, we need to create dependencies between modules and make sure that the scripts are loaded on a page based on those dependencies: a module which is not dependent on any other module should be loaded first, and so on.
In TypeScript, we have two ways to add dependencies between modules as explained in the following sections.
TypeScript provides a system (this mechanism is also available in Visual Studio for JavaScript) by which we can make modules available inside other modules and even throughout the program. This can be achieved using reference comments on top of the module in which we want to reference other module. The dependencies for any module are defined on top of that module in the form of comments.
/// <reference path="Sample.ts" />
var college = new Sample.College('My College', 'My City');
Above, we see that on adding a reference to the Sample module (which is inside Sample.ts), we are able to access the College class, provided the College class is marked with the “export” keyword. Reference comments can be added by just dragging and dropping a file; Visual Studio will by default add these comments at the top of the file.
The reference comment informs the compiler that the Sample module will be available before this module loads, and hence the compiler allows classes inside the Sample module to be accessed here. By adding a “reference” comment, we also get auto-completion and design time checking for those classes. We can add as many “reference” comments as we need to declare dependencies for a module.
With this approach, we must be aware of the dependency graph for each module, and hence it is suitable for small or medium size applications. We have to make sure that all the dependencies are loaded before the specified module for it to work, and this can become a major headache for large applications with numerous modules and their respective files.
AMD (Asynchronous Module Definition) allows modules to be loaded asynchronously on an as-needed basis. RequireJS is one such library which provides the mechanism to load modules on demand. We just need to reference dependent modules using the keyword “import”, and RequireJS takes care of loading them at runtime. AMD manages the dependencies for each module and takes away the complexity of making sure all the dependent modules are loaded on the page before a specific module. RequireJS works out who is dependent on whom and then loads the modules in that sequence.
This is a very useful technique for large scale web applications where we have a lot of TypeScript/JavaScript files and it is a headache to maintain the dependency graph.
RequireJS handles module loading using configuration style programming. Rather than loading all the scripts in a specific order, with RequireJS we just reference a startup/bootstrap file in our HTML page. RequireJS reads that and loads the startup file, which in turn requires other modules; as and when other modules are needed, RequireJS loads the dependencies on demand.
<!-- Below line initalizes requirejs and mentions the startup file. In this case main-->
<script data-main="main" src="require.js"></script>
Then in main.ts/main.js, we define the start method, in this case run(), which is responsible for loading the start page, in this case a DataService.
require.config({
baseUrl : "."
});
require(["bootstrapper"], (bootstrap) => {
bootstrap.run();
});
import ds = require("DataService");
export function run() {
    var service = new ds.DataService();
    alert(service.getMessage());
}
export interface IDataService {
msg: string;
getMessage(): string;
};
export class DataService implements IDataService {
msg = 'Data from API Call';
getMessage() { return this.msg; }
}
In summary, managing many TypeScript/JavaScript files is an important task which needs to be planned beforehand. For small scale applications, managing dependencies is not a major concern, as we can just use “reference” style comments and make sure we load scripts in the defined order. But if we know that the application is going to grow to a large number of files, then we should plan to have dependencies managed properly.
In the above chapters, we have seen how TypeScript wraps the ugliness of JavaScript in object oriented goodness; now it’s time to extend this goodness and add a little sweetener to it.
In this day and age, it is quite common to use external libraries for client side development; to name a few famous ones: jQuery, Knockout, Toastr and, most famous of them all, Angular. TypeScript has a role to play with these libraries as well: it allows us to reference these libraries in our TypeScript code using “ambient declarations”.
Ambient declaration is a way by which TypeScript provides all its features like autocompletion, type checking, design and compile time type safety for external libraries. TypeScript comes preloaded with definitions of DOM (document object model) and JavaScript APIs for example:
window.onload = function () {
    var t: HTMLElement = document.getElementById('id');
};
Here, we see that in TypeScript, if we type in “document” we get IntelliSense for all the methods available on “document”. This helps us write native JavaScript in TypeScript and make sure that we are following the correct signature for each method call.
TypeScript also has support for other popular libraries through their respective definition files. These files have the extension “*.d.ts” and are used by TypeScript to add design time support and compile time checking. These files contain only the definitions of all the functions supported by the library and do not have any implementation. All the libraries are available at TypeScript Definitions. To access these libraries, we just need to include their corresponding “*.d.ts” files using “reference comments”. Once we have included such a file, we will have access to the library’s classes and functions; for example, if we include the jquery.d.ts file, we will have access to the “$” function and TypeScript will also provide IntelliSense for all the jQuery functions.
/// <reference path="typings/jquery.d.ts" />
document.title = 'Hello TypeScript';
$(document).ready(function() {
var v;
});
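For a library that has no ready-made definition file, we can write a small ambient declaration ourselves. The snippet below is a hypothetical example (the simpleLogger name and its API are invented for illustration) of what such a *.d.ts file might contain; being declarations only, it emits no JavaScript and nothing executes at runtime:

```typescript
// Hypothetical logger.d.ts for a small script loaded via a <script> tag.
// "declare" tells TypeScript the variable will exist at runtime.
declare var simpleLogger: {
    log(msg: string): void;
};

// With this declaration referenced, TypeScript can type-check calls:
// simpleLogger.log('started');  // OK
// simpleLogger.log(42);         // error: number is not assignable to string
```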
Similarly, we can use definition files of Angular to write client code in TypeScript, sample shown below:
/// <reference path="../scripts/typings/angularjs/angular.d.ts">
/// <reference path="../scripts/typings/angularjs/angular-route.d.ts">
export class DataService {
private movies: string[];
private moviesApiPath: string;
private categoriesApiPath: string;
private httpService: ng.IHttpService; //Note the type ng.IHttpService
private qService: ng.IQService; //Note the angular promises
    getAllMovies(fetchFromService?: boolean): ng.IPromise<any> {
        var self = this;
        // getMoviesFromService() is assumed to be defined elsewhere in the class
        if (fetchFromService) {
            return self.getMoviesFromService();
        }
        if (self.movies !== undefined) {
            return self.qService.when(self.movies);
        }
        return self.getMoviesFromService();
    }
For each of the features in TypeScript, we have been relating to corresponding features in OOP languages such as C# and Java, but we need to keep in mind that TypeScript is not an OOP language. In fact, TypeScript is just a tool on top of JavaScript which provides a more robust code structure and type safety. TypeScript enhances the productivity of developers writing JavaScript code; it does not in itself provide any specific functionality or features the way libraries such as jQuery do.
Most of the features which we use in TypeScript get removed from the compiled JavaScript file, and we are left with nothing more than pure JavaScript code. TypeScript is open source and is in no way tied to Microsoft or .NET technologies. Developers writing JavaScript can use TypeScript with various IDEs apart from Visual Studio, like Sublime Text and Vim. There are various forums/blogs available on TypeScript, and the development community has started to use TypeScript with their respective libraries. ReSharper is also providing support for TypeScript in its version 8 release, which gives developers an easier path to refactor, create functions and use the other features Visual Studio users are so accustomed to.
Reference Websites for TypeScript
Reference Websites for RequireJS
Reference Website for TypeScript with Angular
Reference Website for Resharper
Contact Me: You can contact on twitter @ohri_s. | https://www.codeproject.com/Articles/802722/TypeScript-The-Basics | CC-MAIN-2018-30 | refinedweb | 4,342 | 60.24 |
Metafields Editor by Webify Technology
Store Metafield with your products, variants, collections.
Lowcostvitamins.net
Does not work with some themes. Spent months trying to get the code to work on a designed theme. Still no luck.
Developer reply
Using metafields is a two-part process. First, use our app to create the metafields with the necessary namespace, key, and value. Second, modify your theme to read the newly created metafields and display them in the right places. This will involve modifying your Shopify theme. Depending on which theme you are using, the theme author might already have added support for metafields, so you do not need to make additional changes. If not, you would need to work with a theme developer to make the necessary changes. For example, for product-page-related metafields, you would need to modify product-related Liquid files such as product.liquid. Our app does not make any theme changes; those changes are made outside of the app. I hope this clears up how metafields should be used.
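For illustration only, reading a product metafield in product.liquid typically looks something like the snippet below. The namespace `specs` and key `material` here are hypothetical placeholders; substitute whatever namespace and key you created in the app.

```liquid
{% comment %}
  Hypothetical example: show a metafield created with namespace "specs"
  and key "material" on the product page.
{% endcomment %}
{% if product.metafields.specs.material %}
  <p class="product-material">
    Material: {{ product.metafields.specs.material }}
  </p>
{% endif %}
```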
Simple C++ Project Setup With CMake
As a Java developer, I wanted to dabble with a C++ project, something I’ve not done for a while. I was looking into how to structure my project with a view to getting started quickly, but ensuring that if I wanted to expand the project, I wouldn’t have to restructure it in order to keep things organised. This post talks about that project and the source is available from my blog post code repository.
Coming from a Java background, we are pretty lucky that Java has had multiple build tools that can help manage Java projects: organising class and test sources, handling dependencies, running tests, and so on. It was not always this way; originally, there were IDE-specific builders that put the source in their own structures and knew how to build them. This was problematic if you wanted to build outside of the IDE, and wouldn't work in today's world of continuous integration. Ant came along and let us write build scripts that could take source code of any structure and produce class, source, and javadoc jar files, and IDEs could take advantage of such scripts.
With Maven, and later Gradle, we were able to create projects with a predefined structure that could be compiled easily, just by adding a class or test class file in a specific directory. IDEs were able to provide plugins to support these frameworks, creating a fairly rich environment where it is easy to get started with Java.
With C++ (and of course plain old C), we don't have the same solutions. Historically, tools like Make have provided Ant-like build systems where you can put your source code where you like and write a script to find and build it. C++ has many more options with regard to compilation, for things like build types (Release, Debug, etc.), linking dynamic and static libs, and the target platform, to name a few. Some of this complexity has been wrapped up in a tool called CMake, which is a higher-level abstraction of the build process. Technically, it's not a build system: it creates a configuration from which the project can be built.
So I wanted to come up with a basic template for a C++ project that can be built with CMake, and used as the basis for other projects. I wanted to be able to compile and run from the command line and also be able to run from an IDE such as Eclipse and/or KDevelop. I also wanted it to be structured such that it wasn’t overly verbose for small projects but flexible enough that I can expand it later on and refactor as needed. Longer term I want to add testing in, but we’ll start without for now.
I went for a container project which contains sub projects for the main application, and a static linked library. The library could always be spun out as a dynamic library and moved into its own project.
Sample Project
Our project will have the following directory structure:
sampleProject
|-- calcApp
|-- calcLib
CMake works by having CMakeLists.txt files in each folder of the project and sub-projects. In the top-level folder we put a CMakeLists.txt file that is for the project, and it includes the two subdirectories below it. CMake will drill down, find the CMakeLists.txt file in the subdirectories, and build the elements in those folders.
cmake_minimum_required(VERSION 3.5)
project(someProject)

add_subdirectory(calcLib)
add_subdirectory(calcApp)
The first line defines the minimum version of CMake required to build this project. The project() definition defines the project name. We then add the two subdirectories for our sub-projects.
calcLib
Our next step is to implement calcLib and set up the CMake files for it. We will have a simple Calc class that just has a couple of methods.
C++ often has header files and source files separately; the header defines the interface to a method or class while the source implements it. Our header will go in the include directory of the lib, in a subdirectory so we can make include file names more unique. The following is the content of calcLib/include/calc/Calc.h:
#ifndef CALCLIB_INCLUDE_CALC_CALC_H_
#define CALCLIB_INCLUDE_CALC_CALC_H_

class Calc {
public:
    int add(int a, int b);
    int sub(int a, int b);
};

#endif /* CALCLIB_INCLUDE_CALC_CALC_H_ */
We can create an implementation of this in the calcLib/src/Calc.cpp file:
#include "calc/Calc.h" int Calc::add(int a, int b) { return a+b; } int Calc::sub(int a, int b) { return a-b; }
We include our header file and then proceed to complete the implementations of the interface defined in the header file. Note that we prefix our header file with calc, from the calc subdirectory in our include directory.
Our last piece for the lib is calcLib/CMakeLists.txt:
add_library(calcLib src/Calc.cpp)

message("CalcLib current source dir = ${CMAKE_CURRENT_SOURCE_DIR}")

target_include_directories(calcLib
    PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include)
Since C++ lets us build executables or libraries, we need to tell CMake what we want our target to be. add_library is used to define a library target, give it a name, and define the source files for it. In this case, we are creating a library called calcLib and adding our src/Calc.cpp source file.
The message command is used to log messages back to the user. CMake internal or user-defined variables can be used inside the string to log them as part of the message. This isn't needed; I just included it to show you how to create output.
target_include_directories is used to define the include directory for this lib. Note that we use the target name calcLib to attach the include directory to the target. Because of the nature of C++ with regard to libraries and linking, there are a few options on this command to define the visibility of the includes. We have defined ours as PUBLIC for now.
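As a sketch of how those visibility options differ (the target name someLib below is hypothetical and not part of this project): PRIVATE include directories are used only when compiling the target itself, INTERFACE ones only by targets that link against it, and PUBLIC is the combination of the two.

```cmake
# Hypothetical illustration of include-directory visibility:
# PRIVATE   - used when compiling someLib itself, not by its consumers
# INTERFACE - used by targets that link someLib, not by someLib itself
# PUBLIC    - both of the above
target_include_directories(someLib
    PRIVATE   ${CMAKE_CURRENT_SOURCE_DIR}/src
    PUBLIC    ${CMAKE_CURRENT_SOURCE_DIR}/include)
```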
calcApp
Now that we have our library completed, we can work on our main app in the calcApp directory. This is a simple main.cpp file, which prints a message and invokes our calculator to show it is all hooked up.

calcApp/main.cpp:
#include <iostream>

#include "calc/Calc.h"

int main() {
    Calc calc;
    std::cout << "Hello World 12+7 = " << calc.add(12, 7) << "\n";
}
Pretty simple stuff. We also need to add a CMakeLists.txt file, which will define the executable and pull in our library.

calcApp/CMakeLists.txt:
add_executable(calcApp main.cpp)
target_link_libraries(calcApp calcLib)
add_executable is like add_library: it lets us define an executable we want to create, give it a target name, and specify the source files to be added to it.
target_link_libraries lets us link our library into the executable when it gets built. In this case we are doing a static link and the library will be pulled into the executable.
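As an aside, if we later wanted to spin calcLib out as a dynamic library, as mentioned earlier, the CMake side of that change would be small. A hedged sketch (on Windows you would additionally need to deal with symbol export, for example via CMake's WINDOWS_EXPORT_ALL_SYMBOLS target property):

```cmake
# Sketch: build calcLib as a shared (dynamic) library instead of a static one.
add_library(calcLib SHARED src/Calc.cpp)
```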
Building It All
That should be everything, and we should be able to build and run our app. You can build the project by opening a command line, going to the project root folder and typing:
mkdir build
cd build
cmake ..
make
That should build the project for you, and you can run the executable from calcApp/calcApp inside the build directory.
That about wraps it up; you can get the source code from my blog post code git repository under cmake-basic.
There are a lot of knobs to play with with regard to building C++ projects and how they are structured. With Java, while we have the option of adding jars to executables at runtime through the classpath, it's often the case that we just build a jar containing all the code we need. Similarly, we could bind Java libraries at run time, but often choose not to. With C++, what we have here is a basic outline of a project that should cover simple cases.
Next I’m going to be looking at incorporating a test framework to give us a real base project to work with. | https://www.andygibson.net/blog/programming/from-java-to-c-project-setup/ | CC-MAIN-2021-31 | refinedweb | 1,343 | 69.62 |
This is the second of the two chapters in this book that cover the elements of interacting directly with the user, that is, displaying information on the screen and accepting user input via the mouse or keyboard. In Chapter 9 we focused on Windows Forms, where we learnt how to display a dialog box or SDI or MDI window, and how to place various controls on it such as buttons, text boxes, and list boxes. In that chapter, the emphasis was very much on using the familiar predefined controls at a high level and relying on the fact that these controls are able to take full responsibility for getting themselves drawn on the display device. Basically, all you need to do is set the controls' properties and add event handlers for those user input events that are relevant to your application. The standard controls are powerful, and you can achieve a very sophisticated user interface entirely by using them. Indeed, they are by themselves quite adequate for the complete user interface for many applications, most notably dialog-type applications and those with explorer-style user interfaces.
However, there are situations in which simply using controls doesn't give you the flexibility you need in your user interface. For example, you may want to draw text in a given font at a precise position in a window, or you may want to display images without using a picture box control, or draw simple shapes or other graphics. A good example is the Word for Windows program that I am using to write this chapter. At the top of the screen are various menus and toolbars that I can use to access different features of Word. Some of these menus and buttons bring up dialog boxes or even property sheets. That part of the user interface is what we covered in Chapter 9. However, the main part of the screen in Word for Windows is very different. It's an SDI window, which displays a representation of the document. It has text carefully laid out in the right place and displayed with a variety of sizes and fonts. Any diagrams in the document must be displayed, and if you're looking at the document in Print Layout view, the borders of the actual pages need to be drawn in too. None of this can be done with the controls from Chapter 9. To display that kind of output, Word for Windows must take direct responsibility for telling the operating system precisely what needs to be displayed where in its SDI window. How to do this kind of thing is the subject matter for this chapter.
We're going to show you how to draw a variety of items, including:

- simple shapes, such as rectangles and ellipses
- images based on bitmap and other image files
- text, in whatever font and position you choose
In all cases, the items can be drawn wherever you like within the area of the screen occupied by your application, and your code directly controls the drawing: for example, when and how to update the items, what font to display text in, and so on.
In the process, we'll also need to use a variety of helper objects, including pens (used to define the characteristics of lines), brushes (used to define how areas are filled in: for example, what color the area is and whether it is solid, hatched, or filled according to some other pattern), and fonts (used to define the shape of characters of text). We'll also go into some detail on how devices interpret and display different colors.
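As a quick, hedged taster of these helper objects (this fragment is illustrative only and is not taken from the chapter's samples; the font name is just an example of one likely to be installed):

```csharp
// Illustrative only: creating the helper objects used throughout this chapter.
using System.Drawing;

class HelperObjectsSketch
{
    static void Demo()
    {
        Pen bluePen = new Pen(Color.Blue, 3);        // draws 3-pixel-wide blue lines
        Brush redBrush = new SolidBrush(Color.Red);  // fills areas with solid red
        Font textFont = new Font("Arial", 12);       // 12-point Arial for drawing text

        // These objects wrap unmanaged GDI resources, so dispose of them
        // once you have finished drawing with them.
        bluePen.Dispose();
        redBrush.Dispose();
        textFont.Dispose();
    }
}
```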
The code needed to actually draw to the screen is often quite simple, and it relies on a technology called GDI+. GDI+ consists of the set of .NET base classes that are available for the purpose of carrying out custom drawing on the screen. These classes are able to arrange for the appropriate instructions to be sent to the graphics device drivers to ensure the correct output is placed on the monitor screen (or printed to a hard copy). Just as for the rest of the .NET base classes, the GDI+ classes are based on a very intuitive and easy to use object model.
Although the GDI+ object model is conceptually fairly simple, we still need a good understanding of the underlying principles behind how Windows arranges for items to be drawn on the screen in order to draw effectively and efficiently using GDI+.
This chapter is broadly divided into two main sections. In the first two-thirds of the chapter we will explore the concepts behind GDI+ and examine how drawing takes place, which means that this part of the chapter will be quite theoretical, with the emphasis on understanding the concepts. There will be quite a few samples, almost all of them very small applications that display specific hard-coded items (mostly simple shapes such as rectangles and ellipses). Then for the last third of the chapter we change tack and concentrate on working through a much longer sample, called CapsEditor, which displays the contents of a text file and allows the user to make some modifications to the displayed data. The purpose of this sample is to show how the principles of drawing should be put into practice in a real application. The actual drawing itself usually requires little code: the GDI+ classes work at quite a high level, so in most cases only a couple of lines of code are required to draw a single item (for example, an image or a piece of text). However, a well-designed application that uses GDI+ will need to do a lot of additional work behind the scenes; that is, it must ensure that the drawing takes place efficiently and that the screen is updated when required, without any unnecessary drawing taking place. (This is important because most drawing work carries a very big performance hit for applications.) The CapsEditor sample shows how you'll typically need to do much of this background management.
The GDI+ base class library is huge, and we will scarcely scratch the surface of its features in this chapter. That's a deliberate decision, because trying to cover more than a tiny fraction of the classes, methods and properties available would have effectively turned this chapter into a reference guide that simply listed classes and so on. We believe it's more important to understand the fundamental principles involved in drawing; then you will be in a good position to explore the classes available yourself. (Full lists of all the classes and methods available in GDI+ are of course available in the MSDN documentation.) Developers coming from a VB background, in particular, are likely to find the concepts involved in drawing quite unfamiliar, since VB's focus lies so strongly in controls that handle their own painting. Those coming from a C++/MFC background are likely to be in more comfortable territory since MFC does require developers to take control of more of the drawing process, using GDI+'s predecessor, GDI. However, even if you have a good background in GDI, you'll find a lot of the material is new. GDI+ does actually sit as a wrapper around GDI, but nevertheless GDI+ has an object model which hides many of the workings of GDI very effectively. In particular, GDI+ replaces GDI's largely stateful model in which items were selected into a device context with a more stateless one, in which each drawing operation takes place independently. A Graphics object (representing the device context) is the only object that persists between drawing operations.
By the way, in this chapter we'll use the terms drawing and painting interchangeably to describe the process of displaying some item on the screen or other display device.
Before we get started we will quickly list the main namespaces you'll find in the GDI+ base classes. They are:

- System.Drawing - basic drawing functionality, including the Graphics class, pens, brushes, and simple shapes
- System.Drawing.Drawing2D - more advanced two-dimensional drawing functionality
- System.Drawing.Imaging - functionality for manipulating bitmaps and other images
- System.Drawing.Printing - functionality for sending output to a printer rather than the screen
- System.Drawing.Text - more advanced manipulation of fonts and font families

Almost all the classes, structs, and so on that we use in this chapter will be taken from the System.Drawing namespace.
In this section, we'll examine the basic principles that we need to understand in order to start drawing to the screen. We'll start by giving an overview of GDI, the underlying technology on which GDI+ is based, and see how it and GDI+ are related. Then we'll move on to a couple of simple samples.
In general, one of the strengths of Windows (and indeed of modern operating systems in general) lies in their ability to abstract the details of particular devices away from the developer. For example, you don't need to understand anything about your hard drive device driver in order to programmatically read and write files to disk; you simply call the appropriate methods in the relevant .NET classes (or in pre-.NET days, the equivalent Windows API functions). This principle is also very true when it comes to drawing. When the computer draws anything to the screen, it does so by sending instructions to the video card telling it what to draw and where. The trouble is that there are many hundreds of different video cards on the market, many of them made by different manufacturers, and most of which have different instruction sets and capabilities. The way you tell one video card to draw, for example, a simple line or a character string may involve different instructions from the way you would tell a different video card to draw exactly the same thing. If you had to take that into account and write specific code for each video driver in an application that drew something to the screen, writing the application would be an almost impossible task. That is why the Windows Graphical Device Interface (GDI) has been around since the earliest versions of Windows.
GDI hides the differences between the different video cards, so that you simply call the Windows API function to do the specific task, and internally the GDI figures out how to get your particular video card to do whatever it is you want drawn. However, GDI also does something else. You see, most computers have more than one device that output can be sent to. These days you will typically have a monitor, which you access through the video card, and you will also have a printer. Some machines may have more than one video card installed, or you may have more than one printer. GDI achieves the remarkable feat of making your printer seem the same as your screen as far as your application is concerned. If you want to print something instead of displaying it, you simply inform the system that the device the output is being sent to is the printer, and then call the same API functions in exactly the same way. That's the whole purpose of GDI: to abstract the features of the hardware into a relatively high-level API.
Although GDI exposes a relatively high-level API to developers, it is still an API that is based on the old Windows API, with C-style functions, and so is not as simple to use as it could be. GDI+ to a large extent sits as a layer between GDI and your application, providing a more intuitive, inheritance-based object model. Although GDI+ is basically a wrapper around GDI, Microsoft have been able through GDI+ to provide new features and claim to have made some performance improvements.
In GDI, the way that you identify which device you want your output to go to is through an object known as the device context (DC). The device context stores information about a particular device and is able to translate calls to the GDI API functions into whatever instructions need to be sent to that device. You can also query the device context to find out what the capabilities of the corresponding device are (for example, whether a printer can print in color or only in black and white), so you can adjust your output accordingly. If you ask the device to do something it's not capable of, the device context will normally detect this and take appropriate action (which, depending on the situation, might mean throwing an error or modifying the request to get the closest match to what the device is actually capable of).
However, the device context doesn't only deal with the hardware device. It acts as a bridge to Windows, and is therefore able to take account of any requirements or restrictions placed on the drawing by Windows. For example, if Windows knows that only a portion of your application's window needs to be redrawn (perhaps because you've minimized another window that had been hiding part of your application), the device context can trap and nullify attempts to draw outside that area. Due to the
device context's relationship with Windows, working through the device context can simplify your code in other ways. For example, hardware devices need to be told where to draw objects, and they usually want coordinates relative to the top left corner of the screen (or output device). Usually, however, your application will be thinking of drawing something at a certain position within the client area of its own window. (The client area of a window is the part of the window that's normally used for drawing, which normally means the window with the borders excluded; on many applications the client area will be the area that has a white background.) However, since the window might be positioned anywhere on the screen, and a user might move it at any time, translating between the two coordinate systems is potentially a difficult task. The device context, though, always knows where your window is and is able to perform this translation automatically. This means that you can just ask the device context to get an item drawn at a certain position within your window, without needing to worry about where on the screen your application's window is currently located.
As you can see, the device context is a very powerful object and you won't be surprised to learn that under GDI all drawing had to be done through a device context. You even sometimes use the device context for operations that don't involve drawing to the screen or to any hardware device. For example, if you have an image such as a bitmap to which you are making some modifications (perhaps resizing it), it's more efficient to do so via a device context because the device context may be able to take advantage of certain hardware features of your machine in order to carry out such operations more quickly. Although modifying images is beyond the scope of this chapter, we'll note that device contexts can be used to prepare images in memory very efficiently, before the final result is sent to the screen.
With GDI+, the device context is still there, although it's now been given a more friendly name. It is wrapped up in the .NET base class, Graphics. You'll find that, as we work through the chapter, most drawing is done by calling methods on an instance of Graphics. In fact, since the System.Drawing.Graphics class is the class that is responsible for actually handling most drawing operations, very little gets done in GDI+ that doesn't involve a Graphics instance somewhere. Understanding how to manipulate this object is the key to understanding how to draw to display devices with GDI+.
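As a small, hedged sketch of what manipulating a Graphics instance looks like (DpiX, DpiY, and VisibleClipBounds are real members of System.Drawing.Graphics, but this fragment is for illustration and is not one of the chapter's samples):

```csharp
// Illustrative sketch: obtaining a Graphics object for a form and
// querying the device context it wraps.
using System.Drawing;
using System.Windows.Forms;

public class SketchForm : Form
{
    public void ReportDeviceInfo()
    {
        using (Graphics dc = this.CreateGraphics())
        {
            float dpiX = dc.DpiX;                    // horizontal resolution, dots per inch
            float dpiY = dc.DpiY;                    // vertical resolution, dots per inch
            RectangleF clip = dc.VisibleClipBounds;  // region we are allowed to draw to

            MessageBox.Show("Resolution: " + dpiX + "x" + dpiY +
                            " dpi, visible clip bounds: " + clip);
        }
    }
}
```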
We're going to start off with a short sample to illustrate drawing to an application's main window. The samples in this chapter are all created in Visual Studio.NET as C# Windows applications. Recall that for this type of project the code wizard gives us a class called Form1, derived from System.Windows.Form, which represents the application's main window. Unless otherwise stated, in all samples, new or modified code means code that we've added to this class.
In .NET usage, when we are talking about applications that display various controls, the terminology form has largely replaced window to represent the rectangular object that occupies an area of the screen on behalf of an application. In this chapter, we've tended to stick to the term window, since in the context of manually drawing items it's rather more meaningful. We write Windows (capital W) when we are referring to the operating system, and windows (small w) to refer to windows on the screen. We'll also talk about the Form when we're referring to the .NET class used to instantiate the form/window.
The first sample will simply create a form and draw to it as the application starts up. I should say at the start that this is not actually the best way to draw to the screen: we'll quickly find that this sample has a problem, in that it is unable to redraw anything when it needs to after starting up. However, the sample will illustrate quite a few points about drawing without our having to do very much work.
For this sample, we start Visual Studio.NET, create a Windows application, and modify the code in the InitializeComponent() method as follows:
private void InitializeComponent()
{
   this.components = new System.ComponentModel.Container();
   this.Size = new System.Drawing.Size(300,300);
   this.Text = "Display At Startup";
   this.BackColor = Color.White;
and we add the following code to the Form1 constructor:

public Form1()
{
   InitializeComponent();

   Graphics dc = this.CreateGraphics();
   this.Show();
   Pen bluePen = new Pen(Color.Blue, 3);
   dc.DrawRectangle(bluePen, 0, 0, 50, 50);
   Pen redPen = new Pen(Color.Red, 2);
   dc.DrawEllipse(redPen, 0, 50, 80, 50);
}
Those are the only changes we make. This sample is the DisplayAtStartup sample from the code download.
We set the background color of the form to white, so it looks like a 'proper' window that we're going to display graphics in! We've put this line in the InitializeComponent() method so that Visual Studio.NET recognizes the line and is able to alter the design view appearance of the form. Alternatively, we could have used the design view to set the background color, which would have resulted in the same statement appearing in InitializeComponent(). Recall that this method is the one used by Visual Studio.NET to establish the appearance of the form. If we don't set the background color explicitly, it will remain as the default color for dialog boxes: whatever color is specified in your Windows settings.
Next, we create a Graphics object using the Form's CreateGraphics() method. This Graphics object contains the Windows device context we need to draw with. The device context created is associated with the display device, and also with this window. Notice that we've used the variable name dc for the Graphics object instance, reflecting the fact that it really represents a device context behind the scenes.
We then call the Show() method to display the window. This is really a fudge to force the window to display immediately, because we can't actually do any drawing until the window has been displayed: there's nothing to draw onto.
Finally, we display a rectangle at coordinates (0,0) and with width and height 50, and an ellipse with coordinates (0, 50), width 80, and height 50. Note that coordinates (x, y) means x pixels to the right and y pixels down from the top left corner of the client area of the window, and these are the coordinates of the top left corner of the shape being displayed.
The notation (x,y) is standard mathematical notation and is very convenient for describing coordinates. The overloads that we are using of the DrawRectangle() and DrawEllipse() methods each take five parameters. The first parameter of each is an instance of the class System.Drawing.Pen. A Pen is one of a number of supporting objects to help with drawing: it contains information about how lines are to be drawn. Our first pen says that lines should be blue and have a width of 3 pixels; the second says that lines should be red and have a width of 2 pixels. The final four parameters are coordinates. For the rectangle, they represent the (x,y) coordinates of the top left hand corner of the rectangle, and its width and height, all expressed in terms of numbers of pixels. For the ellipse these numbers represent the same thing, except that we are talking about a hypothetical rectangle that the ellipse just fits into, rather than the ellipse itself.
We'll go into more detail about these new structs and the methods of the Graphics object later in the chapter. For now, we'll just worry about getting something drawn!
Running this code gives this result:
I know, the book's printed in greyscale. As with all the screenshots in this chapter, you'll just have to take my word for it that the colors are correct. Or you can always try running the samples yourself!
This screenshot demonstrates a couple of points. First, you can see clearly what is meant by the client area of the window. It's the white area: the area that has been affected by our setting the BackColor property. And notice that the rectangle nestles up in the corner of this area, as you'd expect when we specified coordinates of (0,0) for it. Second, notice how the top of the ellipse overlaps the rectangle slightly, which you wouldn't expect from the coordinates we gave in the code. That results from where Windows places the lines that border the rectangle and ellipse. By default, Windows will try to centre
the line on where the border of the shape is. That's not always possible to do exactly, because the line has to be drawn on pixels (obviously), but the border of each shape theoretically lies between two pixels. The result is that lines that are 1 pixel thick will get drawn just inside the top and left sides of a shape, but just outside the bottom and right sides, which means that shapes that are, strictly speaking, next to each other will have their borders overlap by one pixel. We've specified wider lines; therefore the overlap is greater. It is possible to change the default behaviour by setting the Pen.Alignment property, as detailed in the MSDN documentation, but for our purposes the default behaviour is adequate.
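As a hedged illustration of that Pen.Alignment setting (this snippet is not part of the sample; the PenAlignment enumeration lives in System.Drawing.Drawing2D, and note that GDI+ honours only some of its values for some shapes):

```csharp
// Illustrative sketch: ask GDI+ to draw the pen's width just inside the
// shape's border, rather than centred on it.
using System.Drawing;
using System.Drawing.Drawing2D;

Pen bluePen = new Pen(Color.Blue, 3);
bluePen.Alignment = PenAlignment.Inset;      // line drawn inside the border
// dc.DrawRectangle(bluePen, 0, 0, 50, 50);  // same drawing call as before
```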
The screenshot also looks like our code has worked fine. Seems like drawing couldn't be simpler! Unfortunately, if you actually run the sample you'll notice the form behaves a bit strangely. It's fine if you just leave it there, and it's fine if you drag it around the screen with the mouse. Try minimizing it then restoring it however and our carefully drawn shapes just vanish! The same thing happens if you drag another window across the sample. Even more interestingly, if you drag another window across it so that it only obscures a portion of our shapes, then drag the other window away again, you'll find the temporarily obscured portion has disappeared and you're left with half an ellipse or half a rectangle!
So what's going on? Well, the problem arises because if a window or part of a window gets hidden for any reason (for example, it is minimized or hidden by another window), Windows usually immediately discards all the information concerning exactly what was being displayed there. It has to; otherwise the memory usage for storing screen data would be astronomical. Think about it. A typical computer might be running with the video card set to display 1024 x 768 pixels, perhaps with 24-bit color mode. We'll cover what 24-bit color means later in the chapter, but for now I'll say that it implies that each pixel on the screen occupies 3 bytes. That means 2.25MB to display the screen. However, it's not uncommon for a user to sit there working with 10 or 20 minimized windows in the taskbar. Let's do a worst-case scenario: 20 windows, each one of which would occupy the whole screen if it wasn't minimized. If Windows actually stored the visual information those windows contained, ready for when the user restored them, you'd be talking about 45MB! These days, a good graphics card might have 64MB of memory and be able to cope with that, but it's only a couple of years ago that 4MB was considered generous in a graphics card, and the excess would need to be stored in the computer's main memory. A lot of people still have old machines (I still use a spare computer that has a 2MB graphics card). Clearly it wouldn't be practical for Windows to manage its user interface like that.
The moment any part of a window gets hidden, those pixels get lost. Windows just makes a note that the window (or some portion of it) is hidden, and when it detects that the area is no longer hidden, it asks the application that owns the window to redraw its contents. There are a couple of exceptions to this rule, generally for cases in which a small portion of a window is hidden very temporarily (a good example is when you select an item from the main menu and that menu item drops down, temporarily obscuring part of the window below). In general, however, you can expect that if part of your window gets hidden, your application will need to redraw it later.
That's a problem for our sample application. We placed our drawing code in the Form1 constructor, which is called just once when the application starts up, and you can't call the constructor again to redraw the shapes when required later on.
In Chapter 9, when we covered controls, we didn't need to know about any of that. This is because the standard controls are pretty sophisticated and they are able to redraw themselves correctly whenever Windows asks them to. That's one reason why when programming controls you don't need to worry about the actual drawing process at all. If we are taking responsibility for drawing to the screen in our application then we also need to make sure our application will respond correctly whenever Windows asks it to redraw all or part of its window. In the next section, we will modify our sample to do just that.
If the above explanation has made you worried that drawing your own user interface is going to be terribly complicated, don't worry. It isn't. I went into a lot of detail about the process because it's important to understand the issues you will face, but getting your application to redraw itself when necessary is actually quite easy.
Windows notifies an application that some repainting needs to be done by raising a Paint event. Interestingly, the Form class has already implemented a handler for this event, so you don't need to add one yourself. You can feed into this architecture by using the fact that the Form's handler for the Paint event will, at some point in its processing, call a virtual method, OnPaint(), passing it a single PaintEventArgs parameter. This means that all we need to do is override OnPaint() to perform our painting. We'll create a new sample, called DrawShapes, to do this. As before, DrawShapes is a Visual Studio .NET-generated Windows application, and we add the following code to the Form1 class:
protected override void OnPaint( PaintEventArgs e )
{
   Graphics dc = e.Graphics;
   Pen BluePen = new Pen(Color.Blue, 3);
   dc.DrawRectangle(BluePen, 0, 0, 50, 50);
   Pen RedPen = new Pen(Color.Red, 2);
   dc.DrawEllipse(RedPen, 0, 50, 80, 60);
   base.OnPaint( e );
}
Notice that OnPaint() is declared as protected. OnPaint() is normally called internally within the class, so there's no reason for any code outside the class to know about its existence.
PaintEventArgs is a class derived from the EventArgs class normally used to pass in information about events. PaintEventArgs has two additional properties, of which the more important is a Graphics instance, already primed and optimized to paint the required portion of the window. This means that you don't have to call CreateGraphics() to get a device context in the OnPaint() method; you've already been provided with one. We'll look at the other additional property soon; it contains more detailed information about which area of the window actually needs repainting.
In our implementation of OnPaint(), we first get a reference to the Graphics object from PaintEventArgs, then we draw our shapes exactly as we did before. At the end we call the base class's OnPaint() method. This step is important. We've overridden OnPaint() to do our own painting, but it's possible that Windows may have some additional work of its own to do in the painting process; any such work will be dealt with in an OnPaint() method in one of the .NET base classes.
For this sample, you'll find that removing the call to base.OnPaint() doesn't seem to have any effect, but don't ever be tempted to leave this call out. You might be stopping Windows from doing its work properly, and the results could be unpredictable.
OnPaint() will also be called when the application first starts up and our window is displayed for the first time, so there is no need to duplicate the drawing code in the constructor, though we still need to set the background color there along with any other properties of the form. Again, we can do this either by adding the command explicitly or by setting the color in the Visual Studio .NET properties window:
private void InitializeComponent()
{
   this.components = new System.ComponentModel.Container();
   this.Size = new System.Drawing.Size(300,300);
   this.Text = "Draw Shapes";
   this.BackColor = Color.White;
}
Running this code initially gives the same results as our previous sample, except that now our application behaves itself properly when you minimize it or hide parts of the window.
Our DrawShapes sample from the last section illustrates the main principles involved in drawing to a window, but it's not very efficient. The reason is that it attempts to draw everything in the window, irrespective of how much actually needs to be drawn. Consider the situation shown in this figure. I ran the DrawShapes sample, but while it was on the screen I opened another window and moved it over the DrawShapes form, so it hid part of it. The other window here happens to be the Windows 2000 Task Manager, but it doesn't matter what the other window is; the principle is the same:
So far so good. But what will happen when I move the overlapping window (in this case the Task Manager) so that the DrawShapes window is fully visible again? Windows will, as usual, send a Paint event to the form, asking it to repaint itself. The rectangle and ellipse both lie in the top left corner of the client area and so were visible all the time; therefore, there's actually nothing that needs to be done in this case apart from repainting the white background area. However, Windows doesn't know that. As far as Windows is concerned, part of the window needs to be redrawn, and that means it needs to raise the Paint event, resulting in our OnPaint() implementation being called. OnPaint() will then unnecessarily attempt to redraw the rectangle and ellipse.
In fact, in this case the shapes will not actually get repainted anyway. The reason has to do with the device context. Remember I said that the device context inside the Graphics object passed to OnPaint() will have been optimized by Windows for the particular task at hand? What this means is that Windows has pre-initialized the device context with information concerning what area actually needs repainting: the rectangle that was covered by the Task Manager window in the screenshot above. In the days of GDI, the region that is marked for repainting used to be known as the invalidated region, but with GDI+ the terminology has largely changed to clipping region. The device context knows what
this region is; it will therefore intercept any attempts to draw outside this region and not pass the relevant drawing commands on to the graphics card. That sounds good, but there's still a potential performance hit here. We don't know how much processing the device context had to do before it figured out that the drawing was outside the invalidated region. In some cases it might be quite a lot, since calculating which pixels need to be changed to what color can be very processor-intensive (although a good graphics card will provide hardware acceleration to help with some of this). A rectangle is quite easy. An ellipse is harder, because the position of the curve needs to be calculated. Displaying text takes a lot of work: the information in the font needs to be processed to figure out the shape of each letter, and each letter is composed of a number of lines and curves that need to be drawn individually. If, like most common fonts, it's a variable-width font, that is, each letter doesn't take up a fixed size but takes up however much space it needs, then you can't even work out how much space the text will occupy without doing quite a few calculations first.
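As an aside, GDI+ does expose this text-measuring work through the Graphics.MeasureString() method. The following sketch is not part of the samples in this chapter; the font name and string are arbitrary choices for illustration. A throwaway bitmap is used here simply to obtain a Graphics object without any window being involved:

```csharp
using System;
using System.Drawing;

class MeasureDemo
{
    static void Main()
    {
        // A 1x1 bitmap gives us a Graphics object to measure against
        using (Bitmap bmp = new Bitmap(1, 1))
        using (Graphics g = Graphics.FromImage(bmp))
        using (Font font = new Font("Arial", 10))
        {
            // MeasureString must process the font's glyph data to work out
            // how much space the string occupies - exactly the kind of
            // calculation described above
            SizeF size = g.MeasureString("Hello, GDI+", font);
            Console.WriteLine("Width: {0}, Height: {1}", size.Width, size.Height);
        }
    }
}
```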
The bottom line is that asking the Graphics instance to do some drawing outside the invalidated region is almost certainly wasting processor time and slowing your application down. In a well-architected application, your code will actively help the device context out by carrying out a few simple checks to see if the usual drawing work is actually needed before it calls the relevant Graphics instance methods. In this section we're going to code up a new sample, DrawShapesWithClipping, by modifying the DrawShapes sample to do just that. In our OnPaint() code, we'll do a simple test to see whether the invalidated region intersects the area we need to draw in, and only call the drawing methods if it does.
First, we need to obtain the details of the clipping region. This is where the extra property on PaintEventArgs comes in. The property is called ClipRectangle, and it contains the coordinates of the region to be repainted, wrapped up in an instance of a struct, System.Drawing.Rectangle. Rectangle is quite a simple struct; it contains four properties of interest: Top, Bottom, Left, and Right. These respectively contain the vertical coordinates of the top and bottom of the rectangle, and the horizontal coordinates of the left and right edges.
Next, we need to decide what test we'll use to determine whether drawing should take place. We'll go for a simple test here. Notice that in our drawing, the rectangle and ellipse are both entirely contained within the rectangle that stretches from point (0,0) to point (80,130) of the client area; actually, point (82,132), to be on the safe side, since we know that the lines may stray a pixel or so outside this area. So we'll check whether the top left corner of the clipping region is inside this rectangle. If it is, we'll go ahead and draw. If it isn't, we won't bother.
The code to do this looks like this:
protected override void OnPaint( PaintEventArgs e )
{
   Graphics dc = e.Graphics;
   if (e.ClipRectangle.Top < 132 && e.ClipRectangle.Left < 82)
   {
      Pen BluePen = new Pen(Color.Blue, 3);
      dc.DrawRectangle(BluePen, 0, 0, 50, 50);
      Pen RedPen = new Pen(Color.Red, 2);
      dc.DrawEllipse(RedPen, 0, 50, 80, 60);
   }
   base.OnPaint(e);
}
Note that what gets displayed is exactly the same as before, but performance is now improved by the early detection of some cases in which nothing needs to be drawn. Notice also that we've chosen a fairly crude test of whether to proceed with the drawing. A more refined test might check separately whether the rectangle needs to be drawn, or whether the ellipse needs to be redrawn, or both. There's a balance here. You can make your tests in OnPaint() more sophisticated, and as you do, you'll improve performance; but you'll also make your own OnPaint() code more complex and create more work for yourself. How far you go is up to you. It's almost always worth putting in some test, however, simply because you have the benefit of understanding the broad picture of what it is you are drawing (for example, in our sample we have the advance knowledge that nothing we draw will ever go outside the rectangle (0,0) to (82,132)). The Graphics instance doesn't have that understanding; it blindly follows drawing commands. That extra knowledge means you may be able to code up more useful or efficient tests than the Graphics instance could possibly perform.
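To illustrate what a more refined version might look like, the following sketch checks each shape separately using the Rectangle.IntersectsWith() method. This is just one way of doing it, not part of the DrawShapesWithClipping sample itself; the bounding rectangles are padded by a couple of pixels to allow for the pen widths:

```csharp
protected override void OnPaint( PaintEventArgs e )
{
   Graphics dc = e.Graphics;

   // Bounding boxes for each shape, padded for the pen thickness
   Rectangle rectangleBounds = new Rectangle(0, 0, 52, 52);
   Rectangle ellipseBounds = new Rectangle(0, 48, 82, 64);

   // Only draw a shape if its bounds overlap the clipping region
   if (e.ClipRectangle.IntersectsWith(rectangleBounds))
   {
      Pen bluePen = new Pen(Color.Blue, 3);
      dc.DrawRectangle(bluePen, 0, 0, 50, 50);
   }
   if (e.ClipRectangle.IntersectsWith(ellipseBounds))
   {
      Pen redPen = new Pen(Color.Red, 2);
      dc.DrawEllipse(redPen, 0, 50, 80, 60);
   }
   base.OnPaint(e);
}
```

IntersectsWith() returns true if the two rectangles overlap at all, which is a more accurate test than only examining the top left corner of the clipping region.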
In our last example, we encountered the struct Rectangle, which is used to represent the coordinates of a rectangle. GDI+ actually uses several similar structures to represent coordinates or areas, and we're at a convenient point in the chapter to go over the main ones. We'll look at the following structs, which are all defined in the System.Drawing namespace: Point and PointF, Size and SizeF, and Rectangle and RectangleF.
Note that many of these structs have a number of other properties, methods, or operator overloads not listed here. In this section we'll just discuss the most important ones.
We'll look at Point first. Point is conceptually the simplest of these structs. Mathematically, it's completely equivalent to a 2D vector. It contains two public integer properties, which represent how far you move horizontally and vertically from a particular location (perhaps on the screen). In other words, look at this diagram:
In order to get from point A to point B, you move 20 units across and 10 units down, marked as x and y on the diagram, since this is how these distances are commonly referred to. We could create a Point struct that represents that as follows:
Point AB = new Point(20, 10);
Console.WriteLine("Moved {0} across, {1} down", AB.X, AB.Y);

X and Y are read-write properties, which means you can also set the values in a Point like this:

Point AB = new Point();
AB.X = 20;
AB.Y = 10;
Console.WriteLine("Moved {0} across, {1} down", AB.X, AB.Y);
Note that although conventionally, horizontal and vertical coordinates are referred to as x and y coordinates (lowercase), the corresponding Point properties are X and Y (uppercase) because the usual convention in C# is for public properties to have names that start with an uppercase letter.
PointF is essentially identical to Point, except that X and Y are of type float instead of int. PointF is used when the coordinates are not necessarily integer values. Casts have been defined for these structs so that you can implicitly convert from Point to PointF, and explicitly from PointF to Point. The latter is explicit because of the risk of rounding errors:
PointF ABFloat = new PointF(20.5F, 10.9F);
Point AB = (Point)ABFloat;
PointF ABFloat2 = AB;
One last point about the coordinates. In this discussion of Point and PointF, I've deliberately been a bit vague about the units. Am I talking 20 pixels across, 10 pixels down, or do I mean 20 inches or 20 miles? The answer is that how you interpret the coordinates is up to you.
By default, GDI+ will interpret units as pixels along the screen (or printer, or whatever the graphics device is), so that's how the Graphics object methods will view any coordinates that they get passed as parameters. For example, the point new Point(20,10) represents 20 pixels across the screen and 10 pixels down. Usually these pixels will be measured from the top left corner of the client area of the window, as has been the case in our examples up to now. However, that won't always be the case; for example, on some occasions you may wish to draw relative to the top left corner of the whole window (including its border), or even to the top left corner of the screen. In most cases, however, unless the documentation tells you otherwise, you can assume you're talking pixels relative to the top left corner of the client area.
We'll have more to say on this subject later on, after we've examined scrolling, when we mention the three different coordinate systems in use, world, page, and device coordinates.
Like Point and PointF, sizes come in two varieties. The Size struct is for when you are using ints; SizeF is available if you need to use floats. Otherwise Size and SizeF are identical. We'll focus on the Size struct here.
In many ways the Size struct is identical to the Point struct. It has two integer properties that represent a distance horizontally and a distance vertically; the main difference is that instead of X and Y, these properties are named Width and Height. We can represent our earlier diagram by:
Size AB = new Size(20, 10);
Console.WriteLine("Moved {0} across, {1} down", AB.Width, AB.Height);
Although, strictly speaking, a Size mathematically represents exactly the same thing as a Point, conceptually it is intended to be used in a slightly different way. A Point is used when we are talking about where something is, and a Size is used when we are talking about how big it is.
As an example, think about the rectangle we drew earlier, with top left coordinate (0,0) and size (50,50):
Graphics dc = e.Graphics;
Pen BluePen = new Pen(Color.Blue, 3);
dc.DrawRectangle(BluePen, 0, 0, 50, 50);
The size of this rectangle is (50,50) and might be represented by a Size instance. The bottom right corner is also at (50,50), but that would be represented by a Point instance. To see the difference, suppose we drew the rectangle in a different location, so its top left coordinate was at (10,10):
dc.DrawRectangle(BluePen, 10,10,50,50);
Now the bottom right corner is at coordinate (60,60), but the size is unchanged; it's still (50,50).
The addition operator has been overloaded for points and sizes, so that it is possible to add a size to a point giving another point:
static void Main(string[] args)
{
   Point TopLeft = new Point(10,10);
   Size RectangleSize = new Size(50,50);
   Point BottomRight = TopLeft + RectangleSize;
   Console.WriteLine("TopLeft = " + TopLeft);
   Console.WriteLine("BottomRight = " + BottomRight);
   Console.WriteLine("Size = " + RectangleSize);
}
This code, running as a simple console application, produces this output:

TopLeft = {X=10,Y=10}
BottomRight = {X=60,Y=60}
Size = {Width=50, Height=50}
Notice that this output also shows how the ToString() method of Point and Size has been overridden to display the value in {X,Y} format.
Similarly, it is also possible to subtract a Size from a Point to give a Point, and you can add two Sizes together, giving another Size. It is not possible, however, to add a Point to another Point. Microsoft decided that adding Points doesn't conceptually make sense, so they chose not to supply any overload of the + operator that would have allowed it.
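These overloads can be confirmed with a few lines of console code; the particular values here are just for illustration:

```csharp
using System;
using System.Drawing;

class SizeArithmetic
{
    static void Main()
    {
        Point bottomRight = new Point(60, 60);
        Size rectangleSize = new Size(50, 50);

        // Point - Size gives a Point: back to the top left corner
        Point topLeft = bottomRight - rectangleSize;

        // Size + Size gives another Size
        Size doubled = rectangleSize + rectangleSize;

        Console.WriteLine(topLeft);   // the point (10,10)
        Console.WriteLine(doubled);   // width 100, height 100

        // Point + Point, by contrast, will not compile:
        // Point illegal = topLeft + bottomRight;   // compile-time error
    }
}
```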
You can also explicitly cast a Point to a Size and vice versa:
Point TopLeft = new Point(10,10);
Size S1 = (Size)TopLeft;
Point P1 = (Point)S1;
With this cast S1.Width is assigned the value of TopLeft.X, and S1.Height is assigned the value of TopLeft.Y. Hence, S1 contains (10,10). P1 will end up storing the same values as TopLeft.
These structs represent a rectangular region (usually of the screen). Just as with Point and Size, we'll consider only the Rectangle struct here. RectangleF is basically identical, except that those of its properties that represent dimensions all use float, whereas those of Rectangle use int.
A Rectangle can be thought of as composed of a Point, representing the top left corner of the rectangle, and a Size, representing how large it is. One of its constructors actually takes a Point and a Size as its parameters. We can see this by rewriting our earlier code to draw a rectangle:
Graphics dc = e.Graphics;
Pen BluePen = new Pen(Color.Blue, 3);
Point TopLeft = new Point(0,0);
Size HowBig = new Size(50,50);
Rectangle RectangleArea = new Rectangle(TopLeft, HowBig);
dc.DrawRectangle(BluePen, RectangleArea);
This code also uses an alternative overload of Graphics.DrawRectangle(), which takes a Pen and a Rectangle struct as its parameters.
You can also construct a Rectangle by supplying the top left horizontal coordinate, top left vertical coordinate, width, and height separately, in that order, as individual numbers:
Rectangle RectangleArea = new Rectangle(0, 0, 50, 50);
Rectangle makes quite a few read-write properties available to set or extract its dimensions in different combinations, among them X, Y, Width, Height, Location, and Size.
Note that these properties are not all independent; for example, setting Width will also affect the value of Right.
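A short sketch shows how these properties interact; the numbers are arbitrary:

```csharp
using System;
using System.Drawing;

class RectangleProperties
{
    static void Main()
    {
        Rectangle r = new Rectangle(10, 10, 50, 50);
        Console.WriteLine(r.Right);    // 60: Left (10) + Width (50)

        // Setting Width moves the right edge; the left edge stays put
        r.Width = 80;
        Console.WriteLine(r.Right);    // 90

        // Location and Size give the Point/Size view of the same data
        Console.WriteLine(r.Location);
        Console.WriteLine(r.Size);
    }
}
```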
We'll mention the existence of the System.Drawing.Region class here, though we don't have space to go into details in this book. Region represents an area of the screen that has some complex shape. For example, the shaded area in the diagram could be represented by a Region:
As you can imagine, the process of initializing a Region instance is itself quite complex. Broadly speaking, you can do it by indicating either the component simple shapes that make up the region or the path you take as you trace around the edge of the region. If you do need to start working with areas like this, it's worth looking up the Region class.
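To give a flavor, here is a sketch of how a Region might be built up from simple shapes. The particular shapes are invented for illustration:

```csharp
using System.Drawing;

class RegionDemo
{
    static void Main()
    {
        // A Region built from two overlapping rectangles, with a smaller
        // rectangle punched out of the middle
        Rectangle first = new Rectangle(0, 0, 100, 60);
        Rectangle second = new Rectangle(50, 30, 100, 60);
        Rectangle hole = new Rectangle(60, 35, 20, 20);

        Region region = new Region(first);
        region.Union(second);     // add the second rectangle to the region
        region.Exclude(hole);     // remove the hole from the region

        // region could now be passed to methods such as Graphics.FillRegion()
    }
}
```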
We're just about ready to do some more advanced drawing now. First, however, I want to say a few things about debugging. If you have a go at setting breakpoints in the samples in this chapter, you will quickly notice that debugging drawing routines isn't quite as simple as debugging other parts of your program. This is because the very act of entering and leaving the debugger often causes Paint messages to be sent to your application. The result can be that setting a breakpoint in your OnPaint override simply causes your application to keep painting itself over and over again, so it's unable to do anything else.
A typical scenario is this. You want to find out why your application is displaying something incorrectly, so you set a breakpoint in OnPaint. As expected, the application hits the breakpoint and the debugger comes in, at which point your developer environment MDI window comes to the foreground. If you're anything like me, you probably have the developer environment set to full-screen display so you can more easily view all the debugging information, which means it always completely hides the application you are debugging.
Moving on, you examine the values of some variables and hopefully find out something useful. Then you hit F5 to tell the application to continue, so that you can go on to see what happens when the application displays something else, after it's done some processing. Unfortunately, the first thing that happens is that the application comes to the foreground, and Windows efficiently detects that the form is visible again and promptly sends it a Paint event. This means, of course, that your breakpoint gets hit again straight away. If that's what you want, fine, but more commonly what you really want is to hit the breakpoint later, when the application is drawing something more interesting, perhaps after you've selected some menu option to read in a file or in some other way changed what is displayed. It looks like you're stuck. Either you don't have a breakpoint in OnPaint at all, or your application can never get beyond the point where it's displaying its initial startup window.
There are a couple of ways around this problem.
If you have a big enough screen, the easiest way is simply to keep your developer environment window restored rather than maximized, and keep it well away from your application window, so your application never gets hidden in the first place. Unfortunately, in most cases that is not a practical solution, because it would make your developer environment window too small. The other technique is to make your application's form into a topmost window by setting this.TopMost = true; in the Form1 constructor.
This means your application can never be hidden by other windows (except other topmost windows). It always remains above other windows even when another application has the focus. This is how the task manager behaves.
Even with this technique you have to be careful, because you can never quite be certain when Windows might decide for some reason to raise a Paint event. If you really want to trap some problem that occurs in OnPaint in some specific circumstance (for example, the application draws something after you select a certain menu option, and something goes wrong at that point), then the best way to do this is to place some dummy code in OnPaint that tests some condition, which will only be true in the specified circumstances, and then place the breakpoint inside the if block, like this:
protected override void OnPaint( PaintEventArgs e )
{
   // Condition() evaluates to true when we want to break
   if ( Condition() == true )
   {
      int ii = 0;   // <-- SET BREAKPOINT HERE!!!
   }
   // rest of the painting code
}
This is a quick-and-easy way of putting in a conditional break point.
Our earlier DrawShapes sample worked very well, because everything we needed to draw fitted into the initial window size. In this section we're going to look at what we need to do if that's not the case.
We shall expand our DrawShapes sample to demonstrate scrolling. To make things a bit more realistic, we'll start by creating a sample, ScrollShapes, in which we make the rectangle and ellipse a bit bigger. Also, while we're at it, we'll demonstrate how to use the Point, Size, and Rectangle structs by using them to define the drawing areas. With these changes, the relevant part of the Form1 class looks like this:
// member fields
private Point rectangleTopLeft = new Point(0, 0);
private Size rectangleSize = new Size(200, 200);
private Point ellipseTopLeft = new Point(50, 200);
private Size ellipseSize = new Size(200, 150);
private Pen bluePen = new Pen(Color.Blue, 3);
private Pen redPen = new Pen(Color.Red, 2);

private void InitializeComponent()
{
   this.components = new System.ComponentModel.Container();
   this.Size = new System.Drawing.Size(300,300);
   this.Text = "Scroll Shapes";
   this.BackColor = Color.White;
}
#endregion

protected override void OnPaint( PaintEventArgs e )
{
   Graphics dc = e.Graphics;
   if (e.ClipRectangle.Top < 350 && e.ClipRectangle.Left < 250)
   {
      Rectangle rectangleArea = new Rectangle(rectangleTopLeft, rectangleSize);
      Rectangle ellipseArea = new Rectangle(ellipseTopLeft, ellipseSize);
      dc.DrawRectangle(bluePen, rectangleArea);
      dc.DrawEllipse(redPen, ellipseArea);
   }
   base.OnPaint(e);
}
Notice that we've also turned the Pen objects into member fields; this is more efficient than creating a new Pen every time we need to draw anything, as we have been doing up to now.
The result of running this sample looks like this:
We can see a problem instantly. The shapes don't fit in our 300x300 pixel drawing area.
Normally, if a document is too large to display, an application will add scroll bars to let you scroll the window and look at a chosen part of it at a time. This is another area in which, with the kind of user interface that we were dealing with in Chapter 9, we'd let the .NET runtime and the base classes handle everything. If your form has various controls attached to it, then the Form instance will normally know where these controls are, and it will therefore know if its window becomes so small that scroll bars become necessary. The Form instance will also automatically add the scroll bars for you; not only that, but it's also able to correctly draw whichever portion of the screen you've scrolled to. In that case there is nothing you need to do explicitly in your code. In this chapter, however, we're taking responsibility for drawing to the screen; therefore, we're going to have to help the Form instance out when it comes to scrolling.
In the last paragraph we said, 'if a document is too large to display'. This probably made you think in terms of something like a Word or Excel document. With drawing applications, however, it's better to think of the document as whatever data the application is manipulating that it needs to draw. For our current example, the rectangle and ellipse between them constitute the document.
Getting the scrollbars added is actually very easy. The Form can still handle all that for us; the reason it hasn't in the above ScrollShapes sample is that it doesn't know the scrollbars are needed, because it doesn't know how big an area we will want to draw in. How big an area is that? More accurately, what we need to figure out is the size of a rectangle that stretches from the top left corner of the document (or, equivalently, the top left corner of the client area before we've done any scrolling) and is just big enough to contain the entire document. In this chapter, we'll refer to this area as the document area. Looking at the diagram of the 'document', we can see that for this example the document area is (250, 350) pixels.
It is quite easy to tell the form how big the document is. We use the relevant property, Form.AutoScrollMinSize. We can therefore write this:
private void InitializeComponent()
{
   this.components = new System.ComponentModel.Container();
   this.Size = new System.Drawing.Size(300,300);
   this.Text = "Scroll Shapes";
   this.BackColor = Color.White;
   this.AutoScrollMinSize = new Size(250, 350);
}
Notice that here we've set AutoScrollMinSize in the InitializeComponent() method. That's a good place in this particular application, because we know that is how big the document area will always be. Our 'document' never changes size while this particular application is running. Bear in mind, however, that if your application does things like display the contents of files, or anything else for which the document area might change, you will need to set this property at other times as well.
Setting AutoScrollMinSize is a start, but it's not yet quite enough. To see why, let's look at what ScrollShapes looks like now. Initially we get a screen that correctly displays the shapes:
Notice that not only has the form correctly added the scrollbars, but it's even correctly sized them to indicate what proportion of the document is currently displayed. You can try resizing the window while the sample is running; you'll find the scroll bars respond correctly, and even disappear if we make the window big enough that they are no longer needed.
Now look what happens, however, if we actually use one of the scroll bars and scroll down a bit:
Clearly something has gone wrong!
What's gone wrong is that we haven't taken into account the position of the scrollbars in the code in our OnPaint() override. We can see this very clearly if we force the window to completely repaint itself by minimizing and restoring it. The result looks like this:
The shapes have been painted just as before, with the top left corner of the rectangle nestled into the top left corner of the client area, just as if we hadn't moved the scroll bars at all.
Before we go over how to correct this problem, we'll take a closer look at precisely what is happening in these screenshots. Doing so is quite instructive, both because it'll help us understand exactly how drawing is done in the presence of scroll bars, and because it'll be quite good practice. If you start using GDI+, I promise you that sooner or later you'll find yourself presented with a strange drawing like one of the ones above, and have to try to figure out what has gone wrong.
We'll look at the last screenshot first, since that one is easier to deal with. The ScrollShapes sample has just been restored, so the entire window has just been repainted. Looking back at our code, it instructs the graphics instance to draw a rectangle with top left coordinates (0,0), relative to the top left corner of the client area of the window, and that is what has been drawn. The problem is that the graphics instance by default interprets coordinates as relative to the client window; it doesn't know anything about the scroll bars. Our code does not yet attempt to adjust the coordinates for the scrollbar positions. The same goes for the ellipse.
Now we can tackle the earlier screenshot, from immediately after we'd scrolled down. We notice that here the top two-thirds or so of the window look fine. That's because this part was drawn when the application first started up. When you scroll windows, Windows doesn't ask the application to redraw what was already on the screen. Windows is smart enough to figure out for itself which bits of what's currently displayed on the screen can simply be moved around to match where the scrollbars now are. That's a much more efficient process, since it may be able to use some hardware acceleration to do it too. The bit in this screenshot that's wrong is roughly the bottom third of the window. This part of the window didn't get drawn when the application first appeared, since before we started scrolling it was outside the client area. This means that Windows asks our ScrollShapes application to draw this area. It raises a Paint event, passing in just this area as the clipping rectangle. And that's exactly what our OnPaint() override has done. This rather strange screenshot results from the application having done exactly what we told it to do!
One way of looking at the problem is that we are at the moment expressing our coordinates relative to the top left corner of the start of the 'document'; we need to convert them so they are expressed relative to the top left corner of the client area instead. The diagram should make this clear. In the diagram, the thin rectangles mark the borders of the screen area and of the entire document (to make the diagram clearer, we've actually extended the document further downwards and to the right, beyond the boundaries of the screen, but this doesn't change our reasoning; we've also assumed a small horizontal scroll as well as a vertical one). The thick lines mark the rectangle and ellipse that we are trying to draw. P marks some arbitrary point that we are drawing, which we're going to take as an example. When calling the drawing methods, we've supplied the graphics instance with the vector from point B to (say) point P, expressed as a Point instance. We actually need to give it the vector from point A to point P.
The problem is that we don't know what the vector from A to P is. We know what B to P is; that's just the coordinates of P relative to the top left corner of the document, the position where we want to draw point P in the document. We also know what the vector from B to A is; that's just the amount we've scrolled by, which is stored in a property of the Form class called AutoScrollPosition. However, we don't know the vector from A to P. Now, if you were good at math at school, you might remember what the solution to this is: you just have to subtract vectors. Say, for example, that to get from B to P you move 150 pixels across and 200 pixels down, while to get from B to A you have to move 10 pixels across and 57 pixels down. That means to get from A to P you have to move 140 (=150 minus 10) pixels across and 143 (=200 minus 57) pixels down. In code, we just have to do this subtraction.
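To make the arithmetic concrete, here is that calculation as a minimal console sketch (the numbers are the hypothetical ones just quoted; in the real sample the B-to-A vector comes from the form's AutoScrollPosition property):

```csharp
using System.Drawing;

class VectorSubtraction
{
    static void Main()
    {
        // Vector from B to P: where point P sits within the document.
        Point bToP = new Point(150, 200);
        // Vector from B to A: how far the client area is offset into the document.
        Point bToA = new Point(10, 57);
        // Vector from A to P: the coordinates we must actually hand to GDI+.
        Point aToP = bToP - new Size(bToA);
        System.Console.WriteLine(aToP);   // {X=140,Y=143}
    }
}
```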
However, it's actually a bit easier than that. I've gone through the process in detail so you know exactly what's going on, but the Graphics class actually implements a method that will do these calculations for us: TranslateTransform(). How it works is that you pass it the horizontal and vertical coordinates that say where the top left of the client area is relative to the top left corner of the document (our AutoScrollPosition property, which is the vector from B to A in the diagram). The Graphics object will then work out all its coordinates taking into account where the client area is relative to the document.
After all that explanation, all we need to do is add this line to our drawing code:
dc.TranslateTransform(this.AutoScrollPosition.X, this.AutoScrollPosition.Y);
In fact, in our sample it's a little more complicated, because we are also separately testing whether we need to do any drawing at all by looking at the clipping region. We need to adjust this test to take the scroll position into account too. When we've done that, the full drawing code for the sample (downloadable from the Wrox Press website as the ScrollShapes sample) looks like this:
protected override void OnPaint( PaintEventArgs e )
{
   Graphics dc = e.Graphics;
   dc.TranslateTransform(this.AutoScrollPosition.X, this.AutoScrollPosition.Y);
   // Convert the clipping rectangle to document coordinates before testing
   // whether it overlaps the area occupied by our shapes.
   if (e.ClipRectangle.Top - this.AutoScrollPosition.Y < 350 ||
       e.ClipRectangle.Left - this.AutoScrollPosition.X < 250)
   {
      Rectangle RectangleArea = new Rectangle(RectangleTopLeft, RectangleSize);
      Rectangle EllipseArea = new Rectangle(EllipseTopLeft, EllipseSize);
      dc.DrawRectangle(BluePen, RectangleArea);
      dc.DrawEllipse(RedPen, EllipseArea);
   }
   base.OnPaint(e);
}
Now that we have our scroll code working correctly, we can at last obtain a correctly scrolled screenshot!
The distinction between measuring position relative to the top-left corner of the document and measuring it relative to the top-left corner of the client area is so important that GDI+ has special names for these coordinate systems: coordinates measured relative to the top-left corner of the document are known as world coordinates, while coordinates measured relative to the top-left corner of the client area are known as page coordinates.
Developers familiar with GDI will note that world coordinates correspond to what in GDI were known as logical coordinates, and page coordinates correspond to what used to be known as device coordinates. Those developers should also note that the way you code up the conversion between logical and device coordinates has changed in GDI+. In GDI, conversions took place via the device context, using the LPtoDP() and DPtoLP() Windows API functions. In GDI+, it's the Graphics object that maintains the information needed to carry out the conversion.
GDI+ also distinguishes a third coordinate system, known as device coordinates. Device coordinates are similar to page coordinates, except that we do not use pixels as the unit of measurement; instead we use some other unit that can be specified by setting the Graphics.PageUnit property. Possible units, besides the default of pixels, include inches and millimeters. Although we won't use the PageUnit property in this chapter, it can be useful as a way of getting around the different pixel densities of devices. For example, 100 pixels on most monitors will occupy something like an inch. However, laser printers can have anything up to thousands of dpi (dots per inch), which means that a shape specified to be 100 pixels wide will look a lot smaller when printed on such a laser printer. By setting the units to, say, inches, and specifying that the shape should be 1 inch wide, you can ensure that the shape will look the same size on different devices.
In this section, we're going to look at the ways that you can specify what color you want something to be drawn in.
Colors in GDI+ are represented by instances of the System.Drawing.Color struct. Generally, once you've instantiated this struct, you won't do much with the corresponding Color instance; you'll just pass it to whatever other method you are calling that requires a Color. We've encountered this struct once before, when we set the background color of the client area of the window in each of our samples: the Form.BackColor property actually returns a Color instance. In this section, we'll look at this struct in more detail. In particular, we'll examine several different ways that you can construct a Color.
The total number of colors that can be displayed by a monitor is huge: over 16 million. To be exact, the number is 2 to the power 24, which works out at 16,777,216. Obviously we need some way of indexing those colors, so we can indicate which of them we want to display at a given pixel.
The most common way of indexing colors is by dividing them into their red, green, and blue components. This idea is based on the principle that any color the human eye can distinguish can be constructed from a certain amount of red light, a certain amount of green light, and a certain amount of blue light. These lights are known as components. In practice, it's found that if we divide the intensity of each component light into 256 possible values, that gives a sufficiently fine gradation to display images that the human eye perceives as photographic quality. We therefore specify colors by giving the amounts of these components on a scale of 0 to 255, where 0 means the component is not present and 255 means it is at its maximum intensity.
We can now see where our quoted figure of 16,777,216 colors comes from, since that number is just 256 cubed.
This gives us our first way of telling GDI+ about a color: you can indicate a color's red, green, and blue values by calling the static function Color.FromArgb(). Microsoft has chosen not to supply a constructor to do this task. The reason is that there are other ways, besides the usual RGB components, to indicate a color, so Microsoft felt that the meaning of the parameters passed to any constructor they defined would be open to misinterpretation. FromArgb() also allows you to specify something called an alpha blend (that's the A in the name of the method!). Alpha blending, which is beyond the scope of this chapter, allows you to paint a color semi-transparently by combining it with whatever color was already on the screen. This can give some beautiful effects and is often used in games.
Constructing a Color using FromArgb() is the most flexible technique, since it literally means you can specify any color that the human eye can see. However, if you want a simple, standard, well-known color such as red or blue, it's a lot easier to be able to just name the color you want. Hence, Microsoft has also provided a large number of static properties in Color, each of which returns a named color. It is one of these properties that we used when we set the background color of our windows to white in our samples:
this.BackColor = Color.White;
// has the same effect as:
// this.BackColor = Color.FromArgb(255, 255, 255);
There are several hundred such colors; the full list is given in the MSDN documentation. They include all the simple colors: Red, White, Blue, Green, Black, and so on, as well as such delights as MediumAquamarine, LightCoral, and DarkOrchid.
Incidentally, although it might look that way, these named colors have not been chosen at random. Each one represents a precise set of RGB values, and they were originally chosen many years ago for use on the Internet. The idea was to provide a useful set of colors right across the spectrum whose names would be recognized by web browsers � thus, saving you from having to write explicit RGB values in your HTML code. A few years ago these colors were also important because early browsers couldn't necessarily display very many colors accurately, and the named colors were supposed to provide a set of colors that would be displayed correctly by most browsers. These days that aspect is less important since modern web browsers are quite capable of displaying any RGB value correctly.
Although we've said that in principle monitors can display any of the over 16 million RGB colors, in practice this depends on how you've set the display properties on your computer. You're probably aware that by right-clicking on the desktop background in Windows and selecting Properties from the context menu, you get the option (on the Settings tab of the resultant property sheet) to choose the display color resolution. There are traditionally three main options here (though some machines may provide other options depending on the hardware): true color (24-bit), high color (16-bit), and 256 colors. (On some graphics cards these days, true color is actually marked as 32-bit for reasons to do with optimizing the hardware, though in that case only 24 of the 32 bits are used for the color itself.)
Only true-color mode allows you to display all of the RGB colors simultaneously. This sounds the best option, but it comes at a cost: 3 bytes are needed to hold a full RGB value, which means 3 bytes of graphics card memory are needed for each pixel displayed. If graphics card memory is at a premium (a restriction that's less common now than it used to be), you may choose one of the other modes. High color mode gives you 2 bytes per pixel. That's enough to give 5 bits for each of the red and blue components and 6 bits for green (the extra bit goes to green because the eye is more sensitive to it). So instead of 256 gradations of red intensity you get just 32, and the 16 bits together give a total of 65,536 colors. That is just about enough to give apparent photographic quality on a casual inspection, though areas of subtle shading tend to be broken up a bit.
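As a quick sanity check on those numbers, here is how a high-color pixel is typically packed. This is an illustrative sketch: it assumes the common 5-6-5 bit layout; some hardware instead used a 5-5-5 layout with one bit unused, which halves the color count.

```csharp
using System;

class HighColorPacking
{
    static void Main()
    {
        // Reduce each 8-bit component to its high-color precision,
        // then pack all three into a single 16-bit value.
        int r = 200, g = 100, b = 50;
        int packed = ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3);
        Console.WriteLine("Packed value: 0x{0:X4}", packed);

        // Total colors available with 16 bits fully used (5-6-5):
        Console.WriteLine(1 << 16);   // 65536
        // ...but with a plain 5-5-5 split, only 15 bits carry color:
        Console.WriteLine(1 << 15);   // 32768
    }
}
```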
256-color mode gives you even fewer colors. However, in this mode you get to choose which colors they are. What happens is that the system sets up something known as a palette: a list of 256 colors chosen from the 16 million RGB colors. Once you've specified the colors in the palette, the graphics device will be able to display just those colors. The palette can be changed at any time, but the graphics device can still only display 256 different colors on the screen at any one time. 256-color mode is only really used when high performance is needed and video memory is at a premium. Most games will use this mode, and they can still achieve decent-looking graphics because of a very careful choice of palette.
In general, if a display device is in high color or 256-color mode and it is asked to display a particular RGB color, it will pick the nearest mathematical match from the pool of colors it is able to display. It's for this reason that it's important to be aware of the color modes. If you are drawing something that involves subtle shading or photographic-quality images, and the user does not have 24-bit color mode selected, they may not see the image the way you intended. So if you're doing that kind of work with GDI+, you should test your application in different color modes. (It is also possible for your application to programmatically set a given color mode, though we won't go into that in this chapter.)
For reference, we'll quickly mention the safety palette here, a very commonly used default palette. The way it works is that we set six equally spaced possible values for each color component: 0, 51, 102, 153, 204, and 255. In other words, the red component can have any of these values; so can the green component; so can the blue component. Possible colors from the safety palette therefore include (0,0,0) (black), (153,0,0) (a fairly dark shade of red), (0,255,102) (green with a smattering of blue added), and so on. Because the safety palette used to be so widely used, you'll still find a fair number of applications and images that exclusively use colors from it.
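A short sketch that enumerates the safety palette, just to confirm the arithmetic: six levels per component gives 6 x 6 x 6 = 216 colors, which leaves room in a 256-entry palette for the extra Windows colors.

```csharp
using System;
using System.Collections.Generic;
using System.Drawing;

class SafetyPaletteDemo
{
    static void Main()
    {
        // The six equally spaced intensity levels for each component.
        int[] levels = { 0, 51, 102, 153, 204, 255 };
        var palette = new List<Color>();
        foreach (int r in levels)
            foreach (int g in levels)
                foreach (int b in levels)
                    palette.Add(Color.FromArgb(r, g, b));

        Console.WriteLine(palette.Count);                                // 216
        Console.WriteLine(palette.Contains(Color.FromArgb(153, 0, 0)));  // True
    }
}
```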
If you set Windows to 256-color mode, you'll find the default palette you get is the safety palette, with 20 Windows standard colors added to it, and 20 spare colors.
In this section, we'll review two helper classes that are needed in order to draw shapes. We've already encountered the Pen class, used to tell the graphics instance how to draw lines. A related class is System.Drawing.Brush, which tells it how to fill regions. For example, the Pen is needed to draw the outlines of the rectangle and ellipse in our previous samples. If we'd needed to draw these shapes as solid, it would have been a brush that would have been used.
We will look at brushes first, then pens.
Incidentally, if you've programmed using GDI before, you may have noticed from the first couple of samples that pens are used in a different way in GDI+. In GDI, the normal practice was to call a Windows API function, SelectObject(), which associated a pen with the device context. That pen was then used in all drawing operations that required a pen, until you informed the device context otherwise by calling SelectObject() again. The same principle held for brushes and other objects such as fonts or bitmaps. With GDI+, as mentioned earlier, Microsoft has instead gone for a stateless model in which there is no default pen or other helper object; you simply specify with each method call the appropriate helper object to be used for that particular method.
GDI+ has several different kinds of brush, more than we have space to go into in this chapter, so we'll just explain the simpler ones to give you an idea of the principles. Each type of brush is represented by an instance of a class derived from System.Drawing.Brush (this class is abstract, so you can't instantiate Brush objects themselves, only objects of derived classes). The simplest brush simply indicates that a region is to be filled with solid color. This kind of brush is represented by an instance of the class System.Drawing.SolidBrush, which you can construct as follows:
Brush solidBeigeBrush = new SolidBrush(Color.Beige);
Brush solidFunnyOrangyBrownBrush = new SolidBrush(Color.FromArgb(255, 155, 100));
Alternatively, if the brush is one of the Internet named colors, you can construct the brush more simply using another class, System.Drawing.Brushes. Brushes is one of those classes that you never actually instantiate (it's got a private constructor to stop you from doing that). It simply has a large number of static properties, each of which returns a brush of a specified color. You'd use Brushes like this:
Brush solidAzureBrush = Brushes.Azure;
Brush solidChocolateBrush = Brushes.Chocolate;
The next level of complexity is a hatch brush, which fills a region by drawing a pattern. This type of brush is considered more advanced, so it lives in the Drawing2D namespace and is represented by the class System.Drawing.Drawing2D.HatchBrush. The Brushes class can't help you with hatch brushes; you'll need to construct one explicitly, by supplying the hatch style and two colors: the foreground color followed by the background color (you can omit the background color, in which case it defaults to black). The hatch style comes from an enumeration, System.Drawing.Drawing2D.HatchStyle. There are a large number of HatchStyle values available, so it's easiest to refer to the MSDN documentation for the full list. To give you an idea, typical styles include ForwardDiagonal, Cross, DiagonalCross, SmallConfetti, and ZigZag.
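For example, a hatch brush can be constructed like this (an illustrative sketch; the particular styles and colors are arbitrary choices):

```csharp
using System.Drawing;
using System.Drawing.Drawing2D;

class HatchBrushExamples
{
    static void Demo()
    {
        // Blue diagonal cross-hatching on a white background.
        Brush crossHatchBrush =
            new HatchBrush(HatchStyle.DiagonalCross, Color.Blue, Color.White);

        // Foreground color only; the background defaults to black.
        Brush confettiBrush = new HatchBrush(HatchStyle.SmallConfetti, Color.Red);
    }
}
```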
We won't go into the gradient brushes in this chapter. We'll note, though, that they can give some spectacular effects if used carefully; the Bezier sample in Chapter 9 uses a linear gradient brush to paint the background of the window.
Unlike brushes, pens are represented by just one class, System.Drawing.Pen. A pen is, however, actually slightly more complex than a brush, because it needs to indicate how thick lines should be (how many pixels wide) and, for a wide line, how to fill the area inside the line. For this purpose a pen may contain a reference to a Brush instance, which is quite powerful, as it means you can draw lines colored using hatching or linear shading. Pens can also specify a number of other properties, such as Alignment, which are beyond the scope of this chapter. There are four different ways that you can construct a Pen instance: from a color, from a color and a width, from a brush, or from a brush and a width. Where a width is given, it is a float (it needs to be a float in case we are using non-default units such as millimeters or inches for the Graphics object that will do the drawing, so that we can, for example, specify fractions of an inch). Alternatively, as with brushes, there is a class, System.Drawing.Pens, that simply contains a number of stock pens. These pens all have width one pixel and come in the usual set of Internet named colors. This allows you to construct pens in this way:
Pen SolidYellowPen = Pens.Yellow;
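To illustrate the four Pen constructors mentioned above (a sketch; the particular colors, styles, and widths are arbitrary):

```csharp
using System.Drawing;
using System.Drawing.Drawing2D;

class PenExamples
{
    static void Demo()
    {
        // From a color alone; the width defaults to one pixel.
        Pen thinBluePen = new Pen(Color.Blue);
        // From a color and a width; note that the width is a float.
        Pen wideBluePen = new Pen(Color.Blue, 4.5f);
        // From a brush, so the line is drawn using the brush's pattern.
        Pen hatchPen = new Pen(new HatchBrush(HatchStyle.ZigZag, Color.Red));
        // From a brush and a width.
        Pen wideHatchPen =
            new Pen(new HatchBrush(HatchStyle.ZigZag, Color.Red), 10f);
    }
}
```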
We've almost finished the first part of the chapter, in which we've covered all the basic classes and objects required to draw specified shapes and so on to the screen. We'll round off by reviewing some of the drawing methods the Graphics class makes available, and presenting a short sample that illustrates the use of several brushes and pens.
System.Drawing.Graphics has a large number of methods that allow you to draw various lines, outline shapes and solid shapes. Once again there are too many to provide a comprehensive list here, but the following table gives the main ones and should give you some idea of the variety of shapes you can draw.
Before we leave the subject of drawing simple objects, we'll round off with a simple sample that demonstrates the kinds of visual effect you can achieve by use of brushes. The sample is called ScrollMoreShapes, and it's essentially a revision of ScrollShapes. Besides the rectangle and ellipse, we'll add a thick line and fill the shapes in with various custom brushes. We've already explained the principles of drawing so we'll present the code without too many comments. First, because of our new brushes, we need to indicate we are using the System.Drawing.Drawing2D namespace:
using System;
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Collections;
using System.ComponentModel;
using System.Windows.Forms;
using System.Data;
Next some extra fields in our Form1 class which contain details of the locations where the shapes are to be drawn, as well as various pens and brushes we will use:
private Rectangle rectangleBounds = new Rectangle(new Point(0,0), new Size(200,200));
private Rectangle ellipseBounds = new Rectangle(new Point(50,200), new Size(200,150));
private Pen BluePen = new Pen(Color.Blue, 3);
private Pen RedPen = new Pen(Color.Red, 2);
private Brush SolidAzureBrush = Brushes.Azure;
private Brush CrossBrush = new HatchBrush(HatchStyle.Cross, Color.Azure);
static private Brush BrickBrush = new HatchBrush(HatchStyle.DiagonalBrick,
   Color.DarkGoldenrod, Color.Cyan);
private Pen BrickWidePen = new Pen(BrickBrush, 10);
The BrickBrush field has been declared as static so that we can use its value in the initializer for BrickWidePen that follows. C# won't let us use one instance field to initialize another instance field, because it's not defined which one will be initialized first. Declaring the field as static solves the problem; since only one instance of the Form1 class will be instantiated, it is immaterial whether the fields are static or instance fields.
Here is the OnPaint() override:
protected override void OnPaint( PaintEventArgs e )
{
   Graphics dc = e.Graphics;
   Point scrollOffset = this.AutoScrollPosition;
   dc.TranslateTransform(scrollOffset.X, scrollOffset.Y);
   if (e.ClipRectangle.Top - scrollOffset.Y < 350 ||
       e.ClipRectangle.Left - scrollOffset.X < 250)
   {
      dc.DrawRectangle(BluePen, rectangleBounds);
      dc.FillRectangle(CrossBrush, rectangleBounds);
      dc.DrawEllipse(RedPen, ellipseBounds);
      dc.FillEllipse(SolidAzureBrush, ellipseBounds);
      dc.DrawLine(BrickWidePen, rectangleBounds.Location,
                  ellipseBounds.Location + ellipseBounds.Size);
   }
   base.OnPaint(e);
}
Now the results:
Notice that the thick diagonal line has been drawn on top of the rectangle and ellipse, because it was the last item to be painted.
One of the most common things you may want to do when drawing with GDI+ is display an image that already exists in a file. It's also possible to perform some manipulations on the image, such as stretching it or rotating it, and you can choose to display only a portion of it.
In this section, we'll reverse the usual order of things in this chapter: We'll present the sample, then we'll discuss some of the issues you need to be aware of when displaying images. We can do this, because the code needed to display an image really is so simple.
The class we need is the .NET base class System.Drawing.Image. An instance of Image represents one image (if you like, one picture). Reading in an image takes one line of code:
Image MyImage = Image.FromFile("FileName");
FromFile() is a static member of Image and is the usual way of instantiating an image. The file can be any of the commonly supported graphics file formats, including .bmp, .jpg, .gif, and .png.
Displaying an image also takes just one line of code, assuming you have a suitable Graphics instance to hand:
dc.DrawImageUnscaled(MyImage, TopLeft);
In this line of code, dc is assumed to be a Graphics instance, MyImage is the Image to be displayed, and TopLeft is a Point struct that stores the device coordinates of where you want the image to be placed.
It could hardly be easier, could it?
Images are probably the area in which developers familiar with GDI will notice the biggest difference with GDI+. In GDI, the API for dealing with images was arcane, to say the least. Displaying an image involved several nontrivial steps. If the image was a bitmap, loading it was reasonably simple, but if it was any other file type, loading it would involve a sequence of calls to OLE objects. Actually getting a loaded image onto the screen involved getting a handle to it, selecting it into a memory device context, and then performing a bitmap block transfer to the screen.
We'll illustrate the process of displaying an image with a sample called DisplayImage. The sample simply displays a .jpg file in the application's main window. To keep things simple, the path of the .jpg file is hard coded into the application (so if you run the sample you'll need to change it to reflect the location of the file in your system). The .jpg file we'll display is a group photograph of attendees from a recent COMFest event.
As usual for this chapter, the DisplayImage project is a standard C#, Visual Studio .NET-generated Windows application. We add the following field to our Form1 class:
Image Piccy;
We then load the file in our InitializeComponent() routine:
private void InitializeComponent()
{
   this.components = new System.ComponentModel.Container();
   this.Size = new System.Drawing.Size(600, 400);
   this.Text = "Display COMFest Image";
   this.BackColor = Color.White;
   Piccy = Image.FromFile(
      @"c:\ProCSharp\Chapter21\DisplayImage\CF4Group.jpg");
   this.AutoScrollMinSize = Piccy.Size;
}
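The paint handler itself isn't shown in the listing above; presumably it looks something along these lines (a sketch, using the Piccy field we just loaded):

```csharp
protected override void OnPaint(PaintEventArgs e)
{
   Graphics dc = e.Graphics;
   // Draw the image with its top left corner at this.AutoScrollPosition,
   // so that the image scrolls correctly within the client area.
   dc.DrawImageUnscaled(Piccy, this.AutoScrollPosition);
   base.OnPaint(e);
}
```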
The choice of this.AutoScrollPosition as the device coordinate ensures that the window will scroll correctly, with the image located starting at the top left corner of the client area before any scrolling has taken place.
Finally, we'll take particular note of the modification made to the wizard-generated Form1.Dispose() method:
public override void Dispose()
{
   base.Dispose();
   if (components != null)
      components.Dispose();
   Piccy.Dispose();
}
Disposing of the image when it's no longer needed is important, because images generally eat a lot of memory while in use. After Image.Dispose() has been called the Image instance no longer refers to any actual image, and so can no longer be displayed (unless you load a new image).
Running this code produces these results:
By the way, if you're wondering, COMFest is an informal group of developers in the United Kingdom who meet to discuss the latest technologies, swap ideas, and so on. The picture includes all the attendees at COMFest 4, with the exception of the author of this chapter, who was (conveniently) taking the picture!
Although displaying images is very simple, it still pays to have some understanding of the underlying technology.
The most important point to understand about images is that they are always rectangular. That's not just a convenience for people; it's because of the underlying technology. All modern graphics cards have hardware built in that can very efficiently copy blocks of pixels from one bit of memory to another, provided the block of pixels represents a rectangular area. This hardware-accelerated operation can occur virtually as a single operation, and as such is extremely fast. Indeed, it is the key to modern high-performance graphics. The operation is known as a bitmap block transfer (or BitBlt, usually pronounced something like 'BITblert' or 'BITblot'). Graphics.DrawImageUnscaled() internally uses a BitBlt, which is why you can see a huge image, perhaps containing as many as a million pixels, appear apparently instantly. If the computer had to copy the image to the screen pixel by pixel, you'd see the image gradually being drawn over a period of up to several seconds.
Because BitBlts are so efficient, almost all drawing and manipulation of images is carried out using them. Even some editing of images will be done by BitBlt'ing portions of images between device contexts that represent areas of memory. In the days of GDI, the Win32 API function BitBlt() was arguably the most important and widely used function for image manipulation, though with GDI+ the BitBlt operations are largely hidden by the GDI+ object model.
It's not possible to BitBlt areas of images that are not rectangular, although similar effects can easily be simulated. Such operations are supported by hardware acceleration, and can be used to give a variety of subtle effects. We're not going to go into the details here, but we'll remark that the Graphics object implements another method, DrawImage(). This is similar to DrawImageUnscaled(), but comes in a large number of overloads that allow you to specify more complex forms of BitBlt to be used in the drawing process. DrawImage() also allows you to draw (BitBlt) only a specified part of the image, or to perform certain other operations on it, such as scaling (expanding or reducing it in size) as it is drawn.
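For instance, one of the DrawImage() overloads lets you specify both a destination rectangle and a source rectangle, which draws the chosen portion of the image scaled to fit the destination (a sketch, assuming dc is a Graphics instance and MyImage the Image loaded earlier):

```csharp
// Take the 100x100-pixel top left corner of the image and stretch it
// to fill a 200x200-pixel area of the window.
Rectangle destination = new Rectangle(10, 10, 200, 200);
Rectangle sourcePortion = new Rectangle(0, 0, 100, 100);
dc.DrawImage(MyImage, destination, sourcePortion, GraphicsUnit.Pixel);
```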
We've left the very important topic of displaying text until later in the chapter because drawing text to the screen is in general more complex than drawing simple graphics. Actually, I ought to qualify that statement. Just displaying a line or two of text, when you're not that bothered about the appearance, is extremely easy: it takes a single call to one method of the Graphics instance, Graphics.DrawString(). However, if you are trying to display a document that has a fair amount of text in it, you rapidly find that things become a lot more complex. This is for two reasons: first, if you're concerned about getting the appearance just right, you need to understand fonts; and second, laying text out means working out how much space on the screen each piece of text is going to occupy before you can position it.
Having said all that, I don't want to scare you off too much. Good quality text processing is not impossible; it's just tricky to get right. As we've mentioned, the actual process of putting a line of text on the screen, assuming you know the font and where you want it to go, is very simple. Therefore, the next thing we'll do is present a quick sample that shows how to display a couple of pieces of text. After that, the plan for the rest of the chapter is to review some of the principles of fonts and font families before moving on to our more realistic text-processing sample, the CapsEditor sample, which will demonstrate some of the issues involved when you're trying to lay out text on-screen, and will also show how to handle user input.
The sample is our usual Windows Forms effort. This time we've overridden OnPaint() as follows:
protected override void OnPaint(PaintEventArgs e)
{
   Graphics dc = e.Graphics;
   Brush blackBrush = Brushes.Black;
   Brush blueBrush = Brushes.Blue;
   Font haettenschweilerFont = new Font("Haettenschweiler", 12);
   Font boldTimesFont = new Font("Times New Roman", 10, FontStyle.Bold);
   Font italicCourierFont = new Font("Courier", 11,
      FontStyle.Italic | FontStyle.Underline);
   dc.DrawString("This is a groovy string", haettenschweilerFont,
      blackBrush, 10, 10);
   dc.DrawString("This is a groovy string " +
      "with some very long text that will never fit in the box",
      boldTimesFont, blueBrush,
      new Rectangle(new Point(10, 40), new Size(100, 40)));
   dc.DrawString("This is a groovy string", italicCourierFont,
      blackBrush, new Point(10, 100));
   base.OnPaint(e);
}
Running this sample produces this:
The sample demonstrates the use of the Graphics.DrawString() method to draw items of text. DrawString() comes in a number of overloads, of which we demonstrate three. All the overloads, however, require parameters that indicate the text to be displayed, the font the string should be drawn in, and the brush that should be used to construct the various lines and curves that make up each character of text. There are a couple of alternatives for the remaining parameters, but in general it is possible to specify either a Point (or, equivalently, two numbers) or a Rectangle. If you specify a Point, the text will start with its top left corner at that Point and simply stretch out to the right. If you specify a Rectangle, the Graphics instance will lay the string out inside that rectangle; if the text doesn't fit into the bounds of the rectangle, it'll be cut off, as you can see from the screenshot. Passing a rectangle to DrawString() means that the drawing process will take longer, since DrawString() will need to figure out where to put the line breaks, but the result may look nicer (if the string fits in the rectangle!).
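Incidentally, if you need to know in advance how much room a string will occupy (for instance, to decide whether a given rectangle is big enough), the Graphics class also provides a MeasureString() method. A sketch, reusing the names from the sample above:

```csharp
// Ask the Graphics instance how much space this string needs in this font.
SizeF textSize = dc.MeasureString("This is a groovy string", boldTimesFont);
if (textSize.Width > 100)
{
   // Too wide for a 100-pixel box; pass a rectangle so that DrawString
   // wraps the text onto multiple lines instead.
   dc.DrawString("This is a groovy string", boldTimesFont, blueBrush,
                 new RectangleF(10, 40, 100, 40));
}
```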
This sample also shows a couple of ways of constructing fonts. You always need the name of the font and its size (height). You can also optionally pass in various styles that modify how the text is to be drawn (bold, underline, and so on).
We all think intuitively that we have a fairly good understanding of fonts; after all, we look at them almost all the time. A font describes exactly how each letter should be displayed, and selection of the appropriate font, as well as providing a reasonable variety of fonts within a document, is an important factor in improving the readability of that document. You just have to look at the pages of this book to see how many fonts have been used to present you with the information. In general, you will need to choose your fonts carefully, because a poor choice of font can badly damage both the attractiveness and the usability of your applications.
Oddly, our intuitive understanding usually isn't quite correct. Most people, if asked to name a font, will say things like 'Arial', 'Times New Roman', or 'Courier'. In fact, these are not fonts at all; they are font families. A font would be something like, say, Arial 9-point italic. Get the idea? The font family tells you in generic terms the visual style of the text. The font family is a key factor in the overall appearance of your application, and most of us have become used to recognizing the styles of the most common font families, even if we're not consciously aware of it. In casual speech, font families are often mistakenly described simply as fonts. More correctly, a font adds more information by specifying the size of the text and whether any of certain modifications have been applied to it: for example, whether it is bold, italic, underlined, or displayed in small caps or as a subscript. Such modifications are technically referred to as styles, though in some ways the term is misleading, since, as we've just noted, the visual appearance is determined as much by the font family.
The size of the text is measured by specifying its height. The height is measured in points, a traditional unit that represents 1/72 of an inch (or, for people living outside the UK and the USA, a point is 0.351 mm). So, for example, letters in a 10-point font are 10/72 of an inch (roughly 1/7'', or 3.5 mm) high. You might think that this means you would get seven lines of 10-point text into one inch of vertical screen or paper space. In fact, you get slightly less than this, because you need to allow for the spacing between the lines as well.
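The arithmetic above is easy to confirm with a couple of lines of code. This little console program is just an illustration of the unit conversion; the names are our own:

```csharp
using System;

class PointArithmetic
{
   static void Main()
   {
      const double PointsPerInch = 72.0;
      const double MmPerInch = 25.4;
      double fontSizeInPoints = 10.0;

      // Height of 10-point characters, in inches and in millimetres
      double heightInInches = fontSizeInPoints / PointsPerInch;
      double heightInMm = heightInInches * MmPerInch;

      Console.WriteLine("{0:F3} in, {1:F2} mm",
                        heightInInches, heightInMm);
      // roughly 0.139 in, 3.53 mm - about 1/7 of an inch, as stated
   }
}
```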
Strictly speaking, measuring the height isn't quite as simple as that, since there are several different heights that you need to consider. For example, there is the height of tall letters like A or f (this is the measurement we really mean when we talk about the height), the additional height occupied by any accents on letters (the internal leading), and the extra height below the baseline needed for the tails of letters like y and g (the descent). However, for this chapter we won't worry about that. Once you specify the font family and the main height, these subsidiary heights are determined automatically; you can't choose their values independently.
Incidentally, when you're dealing with fonts you may also encounter some other terms that are commonly used to describe certain font families.
Microsoft has provided two main classes that we need to deal with when selecting or manipulating fonts: System.Drawing.Font and System.Drawing.FontFamily. We have already seen the main use of the Font class: when we wish to draw text, we instantiate an instance of Font and pass it to the DrawString() method to indicate how the text should be drawn. A FontFamily instance is used (surprisingly enough) to represent a family of fonts.
One use of the FontFamily class is if you know you want a font of a particular type (Serif, SansSerif or Monospace), but don't mind which font. The static properties GenericSerif, GenericSansSerif, and GenericMonospace return default fonts that satisfy these criteria:
FontFamily sansSerifFont = FontFamily.GenericSansSerif;
Generally speaking, however, if you're writing a professional application, you will want to choose your font in a more sophisticated way than this. Most likely, you will implement your drawing code so that it checks which font families are actually installed on the computer, and hence which fonts are available, and then selects the appropriate one, perhaps by taking the first available entry in a list of preferred fonts. And if you want your application to be very user-friendly, the first choice on the list will probably be the one the user selected the last time they ran your software. Usually, if you're dealing with the most popular font families, such as Arial and Times New Roman, you'll be safe. However, if you do try to display text using a font that doesn't exist, the results aren't always predictable: you're quite likely to find that Windows just substitutes the standard system font, which is very easy for the system to draw, but it doesn't look very pleasant, and if it does appear in your document it's likely to give the impression of very poor-quality software.
You can find out what fonts are available on your system using a class called InstalledFontCollection, which is in the System.Drawing.Text namespace. This class implements a property, Families, which is an array of all the font families available to use on your system:
InstalledFontCollection insFont = new InstalledFontCollection();
FontFamily [] families = insFont.Families;
foreach (FontFamily family in families)
{
   // do processing with this font family
}
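The preferred-list strategy described a moment ago can be sketched on top of this enumeration. The following helper is illustrative rather than part of any sample in this chapter; the family names you pass in are whatever your application prefers:

```csharp
using System.Drawing;
using System.Drawing.Text;

class FontChooser
{
   // Returns the first family from the preference list that is
   // actually installed, falling back to the generic sans-serif
   // family rather than letting Windows substitute the system font.
   public static FontFamily ChoosePreferredFamily(string[] preferredNames)
   {
      InstalledFontCollection installed = new InstalledFontCollection();
      foreach (string name in preferredNames)
      {
         foreach (FontFamily family in installed.Families)
         {
            if (family.Name == name)
               return family;
         }
      }
      return FontFamily.GenericSansSerif;
   }
}
```

You might call it as ChoosePreferredFamily(new string[] {"Arial", "Tahoma"}); the exact result will, of course, depend on which fonts are installed on the machine.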
In this section, we will work through a quick sample, EnumFontFamilies, which lists all the font families available on the system and illustrates each one by displaying its name using an appropriate font (the 10-point regular version of that font family). When the sample is run, it looks like this:

Note, however, that depending on what fonts you have installed on your computer, you may get different results when you run it. For this sample we have, as usual, created a standard C# Windows application named EnumFontFamilies. We then add the following constant to the Form1 class:
const int margin = 10;
The margin is the size of the left and top margin between the text and the edge of the document; it stops the text from appearing right at the edge of the client area.
This is designed as a quick-and-easy way of showing off font families; therefore, the code is crude and in many cases doesn't do things the way you really ought to in a real application. For example, I've just hard-coded a guessed value for the document size instead of calculating how much space we actually need to display the list of font families. (We'll use a more correct approach in the next sample.) Hence, our InitializeComponent() method looks like this:
private void InitializeComponent()
{
   this.components = new System.ComponentModel.Container();
   this.Size = new System.Drawing.Size(300,300);
   this.Text = "EnumFontFamilies";
   this.BackColor = Color.White;
   this.AutoScrollMinSize = new Size(200,500);
}
And here is the OnPaint() method:
protected override void OnPaint(PaintEventArgs e)
{
   int verticalCoordinate = margin;
   Point topLeftCorner;
   InstalledFontCollection insFont = new InstalledFontCollection();
   FontFamily [] families = insFont.Families;
   e.Graphics.TranslateTransform(AutoScrollPosition.X,
                                 AutoScrollPosition.Y);
   foreach (FontFamily family in families)
   {
      if (family.IsStyleAvailable(FontStyle.Regular))
      {
         Font f = new Font(family.Name, 10);
         topLeftCorner = new Point(margin, verticalCoordinate);
         verticalCoordinate += f.Height;
         e.Graphics.DrawString(family.Name, f, Brushes.Black,
                               topLeftCorner);
         f.Dispose();
      }
   }
   base.OnPaint(e);
}
In this code we start off by using an InstalledFontCollection object to obtain an array that contains details of all the available font families. For each family, we instantiate a 10-point font of that family. We use a simple constructor for Font; there are many others that allow more options to be specified. The one we've picked takes two parameters: the name of the family and the size of the font:
Font f = new Font(family.Name, 10);
This constructor constructs a font that has the regular style (that is, it is not underlined, italic, or struck through). To be on the safe side, however, we first check that this style is available for each font family before attempting to display anything using that font. This is done using the FontFamily.IsStyleAvailable() method, and the check is important because not all fonts are available in all styles:
if (family.IsStyleAvailable(FontStyle.Regular))
FontFamily.IsStyleAvailable() takes one parameter, a FontStyle enumeration. This enumeration contains a number of flags that may be combined with the bitwise OR operator. The possible flags are Bold, Italic, Regular, Strikeout, and Underline.
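Combining the flags looks like this. The following fragment is a sketch of our own, not part of the sample; it checks for a bold-italic variant before constructing it:

```csharp
using System.Drawing;

class StyleCheck
{
   static void Main()
   {
      // Combine two style flags with the bitwise OR operator
      FontStyle boldItalic = FontStyle.Bold | FontStyle.Italic;

      FontFamily family = FontFamily.GenericSerif;
      if (family.IsStyleAvailable(boldItalic))
      {
         // Only construct the font if the family supports the style
         Font f = new Font(family, 10, boldItalic);
         // ... draw something with f here ...
         f.Dispose();
      }
   }
}
```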
Finally, note that we use a property of the Font class, Height, which returns the height needed to display text in that font, in order to work out the line spacing:
Font f = new Font(family.Name, 10);
topLeftCorner = new Point(margin, verticalCoordinate);
verticalCoordinate += f.Height;
Again, to keep things simple, our version of OnPaint() reveals some bad programming practices. For a start, we haven't bothered to check what area of the document actually needs drawing; we just try to display everything. Also, instantiating a Font is, as remarked earlier, a computationally intensive process, so we really ought to save the fonts rather than instantiating new copies every time OnPaint() is called. As a result of the way the code has been designed, you may notice that this sample actually takes a noticeable time to paint itself. In order to conserve memory and help the garbage collector, we do, however, call Dispose() on each font instance after we have finished with it. If we didn't, then after 10 or 20 paint operations there would be a lot of wasted memory storing fonts that are no longer needed.
We now come to our larger sample of the chapter. The CapsEditor sample is designed to illustrate how the principles of drawing that we've learned up till now need to be applied in a more realistic example. The sample won't require any new material, apart from responding to user input via the mouse, but it will show how to manage the drawing of text so the application maintains performance while ensuring that the contents of the client area of the main window are always kept up to date.
The CapsEditor program is functionally quite simple. It allows the user to read in a text file, which is then displayed line by line in the client area. If the user double-clicks on any line, that line will be changed to all uppercase. That's literally all the sample does. Even with this limited set of features, we'll find that the work involved in making sure everything gets displayed in the right place, while considering performance issues (such as only displaying what we need to in a given OnPaint() call), is quite complex. In particular, we have a new element here: the contents of the document can change, either when the user selects the menu option to read a new file, or when they double-click to capitalize a line. In the first case we need to update the document size so the scroll bars still work correctly, and redisplay everything. In the second case, we need to check carefully whether the document size has changed, and what text needs to be redisplayed.
We'll start by reviewing the appearance of CapsEditor. When the application is first run, it has no document loaded, and displays this:
The File menu has two options: Open, and Exit. Exit exits the application, while Open brings up the standard OpenFileDialog and reads in whatever file the user selects. This screenshot shows CapsEditor being used to view its own source file, Form1.cs. I've also randomly double-clicked on a couple of lines to convert them to uppercase:
The sizes of the horizontal and vertical scrollbars are, by the way, correct: the client area will scroll just enough to view the entire document. (It's a long program, and there are a couple of extremely long code-wizard-generated lines in it, hence the shortness of both scrollbars.) CapsEditor doesn't try to wrap lines of text; the sample is already complicated enough without doing that. It just displays each line of the file exactly as it is read in. There are no limits to the size of the file, but we are assuming it is a text file and doesn't contain any non-printable characters.
We'll start off by adding in some fields to the Form1 class that we'll need:
#region constant fields
private const string standardTitle = "CapsEditor";
   // default text in titlebar
private const uint margin = 10;
   // horizontal and vertical margin in client area
#endregion

#region Member fields
private ArrayList documentLines = new ArrayList();  // the 'document'
private uint lineHeight;         // height in pixels of one line
private Size documentSize;       // how big a client area is needed
                                 // to display document
private uint nLines;             // number of lines in document
private Font mainFont;           // font used to display all lines
private Font emptyDocumentFont;  // font used to display empty message
private Brush mainBrush = Brushes.Blue;
   // brush used to display document text
private Brush emptyDocumentBrush = Brushes.Red;
   // brush used to display empty document message
private Point mouseDoubleClickPosition;
   // location mouse is pointing to when double-clicked
private OpenFileDialog fileOpenDialog = new OpenFileDialog();
   // standard open file dialog
private bool documentHasData = false;
   // set to true if document has some data in it
#endregion
Most of these fields should be self-explanatory. The documentLines field is an ArrayList that contains the actual text of the file that has been read in. In a real sense, this is the field that contains the data in the 'document'. Each element of documentLines contains information for one line of text that has been read in. It's an ArrayList, rather than a plain C# array, so that we can dynamically add elements to it as we read in a file. You'll notice I've also liberally used #region preprocessor directives to block up bits of the program to make it easier to edit.
I said each documentLines element contains information about a line of text. This information is actually an instance of another class I've defined, TextLineInformation:
class TextLineInformation
{
   public string Text;
   public uint Width;
}
TextLineInformation looks like a classic case where you'd normally use a struct rather than a class, since it's just there to group together a couple of fields. However, its instances are always accessed as elements of an ArrayList, which stores its elements as object references, so declaring TextLineInformation as a class makes things more efficient by saving a lot of boxing and unboxing operations.
Each TextLineInformation instance stores a line of text, which can be thought of as the smallest item that is displayed as a single unit. In general, for each such item in a GDI+ application, you'd probably want to store the text of the item, as well as the world coordinates of where it should be displayed and its size. Note world coordinates, not page coordinates: the page coordinates change frequently, whenever the user scrolls, whereas world coordinates normally change only when other parts of the document are modified in some way. In this case we've only stored the Width of the item. The reason is that the height is just the height of whatever our selected font is; it's the same for all lines of text, so there's no point storing it separately for each one. Instead, it's stored once, in the Form1.lineHeight field. As for the position: the x-coordinate is just equal to the margin, and the y-coordinate is easily calculated as:
Margin + LineHeight*(however many lines are above this one)
If we'd been trying to display and manipulate, say, individual words instead of complete lines, then the x-position of each word would have to be calculated using the widths of all the previous words on that line of text, but I wanted to keep it simple here, which is why we're treating each line of text as one single item.
Let's deal with the main menu now. This part of the application is more the realm of Windows Forms, the subject of Chapter 9, than of GDI+. I added the menu options using the design view in Visual Studio .NET, but renamed them menuFile, menuFileOpen, and menuFileExit. I then modified the code in InitializeComponent() to add the appropriate event handlers, as well as perform some other initialization:
private void InitializeComponent()
{
   // stuff added by code wizard
   this.menuFileOpen = new System.Windows.Forms.MenuItem();
   this.menuFileExit = new System.Windows.Forms.MenuItem();
   this.mainMenu1 = new System.Windows.Forms.MainMenu();
   this.menuFile = new System.Windows.Forms.MenuItem();
   this.menuFileOpen.Index = 0;
   this.menuFileOpen.Text = "Open";
   this.menuFileExit.Index = 3;
   this.menuFileExit.Text = "Exit";
   this.mainMenu1.MenuItems.AddRange(
      new System.Windows.Forms.MenuItem[] {this.menuFile});
   this.menuFile.Index = 0;
   this.menuFile.MenuItems.AddRange(
      new System.Windows.Forms.MenuItem[] {this.menuFileOpen,
                                           this.menuFileExit});
   this.menuFile.Text = "File";
   this.menuFileOpen.Click +=
      new System.EventHandler(this.menuFileOpen_Click);
   this.menuFileExit.Click +=
      new System.EventHandler(this.menuFileExit_Click);
   this.AutoScaleBaseSize = new System.Drawing.Size(5, 13);
   this.BackColor = System.Drawing.Color.White;
   this.Size = new Size(600,400);
   this.Menu = this.mainMenu1;
   this.Text = standardTitle;
   CreateFonts();
   fileOpenDialog.FileOk += new System.ComponentModel.CancelEventHandler(
      this.OpenFileDialog_FileOk);
}
We've added event handlers for the Open and Exit menu options, as well as for the FileOk event of the dialog that gets displayed when the user selects Open. CreateFonts() is a helper method that sorts out the fonts we intend to use:
private void CreateFonts()
{
   mainFont = new Font("Arial", 10);
   lineHeight = (uint)mainFont.Height;
   emptyDocumentFont = new Font("Verdana", 13, FontStyle.Bold);
}
The actual definitions of the handlers are pretty standard stuff:
protected void OpenFileDialog_FileOk(object Sender, CancelEventArgs e)
{
   this.LoadFile(fileOpenDialog.FileName);
}

protected void menuFileOpen_Click(object sender, EventArgs e)
{
   fileOpenDialog.ShowDialog();
}

protected void menuFileExit_Click(object sender, EventArgs e)
{
   this.Close();
}
We'll examine the LoadFile() method now. It's the method that handles the opening and reading in of a file (as well as ensuring a Paint event gets raised to force a repaint with the new file):
private void LoadFile(string FileName)
{
   StreamReader sr = new StreamReader(FileName);
   string nextLine;
   documentLines.Clear();
   nLines = 0;
   TextLineInformation nextLineInfo;
   while ( (nextLine = sr.ReadLine()) != null)
   {
      nextLineInfo = new TextLineInformation();
      nextLineInfo.Text = nextLine;
      documentLines.Add(nextLineInfo);
      ++nLines;
   }
   sr.Close();
   documentHasData = (nLines > 0);
   CalculateLineWidths();
   CalculateDocumentSize();
   this.Text = standardTitle + " - " + FileName;
   this.Invalidate();
}
Most of this function is just standard file-reading stuff, as covered in Chapter 14. Notice how, as the file is read in, we progressively add lines to the documentLines ArrayList, so this list ends up containing information for each of the lines in order. After we've read in the file, we set the documentHasData flag to indicate whether there is actually anything to display. Our next task is to work out where everything is to be displayed and, having done that, how much client area we need to display the file: the document size that will be used to set the scroll bars. Finally, we set the title-bar text and call Invalidate(). Invalidate() is an important method supplied by Microsoft, so we'll break for a couple of pages to explain its use, before we examine the code for the CalculateLineWidths() and CalculateDocumentSize() methods.
Invalidate() is a member of System.Windows.Forms.Form that we've not met before. It's an extremely useful method to call when you think something needs repainting. Essentially, it marks an area of the client window as invalid and, therefore, in need of repainting, and then makes sure a Paint event is raised. There are a couple of overloads of Invalidate(): you can pass it a rectangle that specifies (in page coordinates) precisely which area of the window needs repainting, or, if you don't pass any parameters, it will mark the entire client area as invalid.
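A sketch of the rectangle overload in use might look like this. The form class, the field, and the coordinates here are purely illustrative, not part of the CapsEditor sample:

```csharp
using System.Drawing;
using System.Windows.Forms;

class DocumentForm : Form
{
   private int lineHeight = 16;   // illustrative line height in pixels

   // Called when a single line of the document changes: mark only the
   // strip that line occupies as invalid, rather than the whole
   // client area, and let the resulting Paint event do the drawing.
   private void InvalidateLine(int lineTopInPageCoords)
   {
      Rectangle changedArea = new Rectangle(
         0, lineTopInPageCoords, this.ClientSize.Width, lineHeight);
      this.Invalidate(changedArea);
   }

   // If everything may have changed, invalidate the whole client area:
   private void InvalidateEverything()
   {
      this.Invalidate();
   }
}
```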
You may wonder why we are doing it this way. If we know that something needs painting, why don't we just call OnPaint() or some other method to do the painting directly? Occasionally, if there's some very precise, small change you want made to the screen, you might do that, but calling painting routines directly is generally regarded as bad programming practice: if your code decides it wants some painting done, it should normally call Invalidate().
There are a couple of reasons for this:
The bottom line from all this is that it is good practice to keep all your painting in the OnPaint() routine, or in other methods called from it. Try not to have lots of other places in your code that call methods to do odd bits of painting, though all aspects of program design have to be balanced against other considerations. If, say, you want to replace just one character or shape on the screen, or add an accent to a letter, and you know perfectly well that it won't affect anything else you've drawn, then you may decide that it's not worth the overhead of going through Invalidate(), and just write a separate drawing routine.
In a very complicated application, you may even write a full class that takes responsibility for drawing to the screen. A few years ago, when MFC was the standard technology for GDI-intensive applications, MFC followed this model, with a C++ class, C<ApplicationName>View, that was responsible for painting. Even in that case, however, this class had one member function, OnDraw(), which was designed to be the entry point for most drawing requests.
We'll return to the CapsEditor sample now and examine the CalculateLineWidths() and CalculateDocumentSize() methods that are called from LoadFile():
private void CalculateLineWidths()
{
   using (Graphics dc = this.CreateGraphics())
   {
      foreach (TextLineInformation nextLine in documentLines)
      {
         nextLine.Width = (uint)dc.MeasureString(nextLine.Text,
                                                 mainFont).Width;
      }
   }
}
This method simply runs through each line that has been read in and uses the Graphics.MeasureString() method to work out, and store, how much horizontal screen space the string requires. We store the value because MeasureString() is computationally intensive; it's not the sort of method we want to call more times than necessary if we want to keep performance up. If we hadn't made the CapsEditor sample so simple that we can easily work out the height and location of each item, this method would almost certainly have needed to compute all those quantities too.
Now that we know how big each item on the screen is, and can calculate where each item goes, we are in a position to work out the actual document size. The height is basically the number of lines multiplied by the height of each line. The width needs to be worked out by looking through each line to see which one is the longest, and taking the width of that one. For both height and width, we also want to allow for a small margin around the displayed document, to make the application look more attractive; we don't want text squeezed up against any edge of the client area.
Here's the method that calculates the document size:
private void CalculateDocumentSize()
{
   if (!documentHasData)
   {
      documentSize = new Size(100, 200);
   }
   else
   {
      documentSize.Height = (int)(nLines*lineHeight) + 2*(int)margin;
      uint maxLineLength = 0;
      foreach (TextLineInformation nextWord in documentLines)
      {
         uint tempLineLength = nextWord.Width + 2*margin;
         if (tempLineLength > maxLineLength)
            maxLineLength = tempLineLength;
      }
      documentSize.Width = (int)maxLineLength;
   }
   this.AutoScrollMinSize = documentSize;
}
This method first checks whether there is any data to be displayed. If there isn't, we cheat a bit and use a hard-coded document size, which I happen to know is big enough to display the big red <Empty Document> warning. If we'd wanted to do it properly, we'd have used MeasureString() to check how big that warning actually is.
Once we've worked out the document size, we tell the Form instance what the size is by setting the Form.AutoScrollMinSize property. When we do this, something interesting happens behind the scenes: in the process of setting this property, the client area is invalidated and a Paint event is raised, for the very sensible reason that changing the size of the document means scroll bars will need to be added or modified and the entire client area will almost certainly need repainting. Why do I say that's interesting? It perfectly illustrates what I was saying earlier about using the Form.Invalidate() method. If you look back at the code for LoadFile(), you'll realize that our call to Invalidate() in that method is actually redundant: the client area will be invalidated anyway when we set the document size. I left the explicit call to Invalidate() in the LoadFile() implementation to illustrate how, in general, you should normally do things. In this case, all that calling Invalidate() again will do is needlessly request a duplicate Paint event. However, this in turn illustrates what I was saying about how Invalidate() gives Windows the chance to optimize performance. The second Paint event won't in fact be raised: Windows will see that there's a Paint event already sitting in the queue and will compare the requested invalidated regions to see if it needs to do anything to merge them. In this case both Paint events specify the entire client area, so nothing needs to be done, and Windows will quietly drop the second Paint request. Of course, going through that process takes up a little processor time, but it's a negligible amount compared with how long it takes to actually do some painting.
Now we've seen how CapsEditor loads the file, it's time to look at how the painting is done:
protected override void OnPaint(PaintEventArgs e)
{
   Graphics dc = e.Graphics;
   int scrollPositionX = this.AutoScrollPosition.X;
   int scrollPositionY = this.AutoScrollPosition.Y;
   dc.TranslateTransform(scrollPositionX, scrollPositionY);
   if (!documentHasData)
   {
      dc.DrawString("<Empty document>", emptyDocumentFont,
                    emptyDocumentBrush, new Point(20,20));
      base.OnPaint(e);
      return;
   }

   // work out which lines are in clipping rectangle
   int minLineInClipRegion =
      WorldYCoordinateToLineIndex(e.ClipRectangle.Top - scrollPositionY);
   if (minLineInClipRegion == -1)
      minLineInClipRegion = 0;
   int maxLineInClipRegion =
      WorldYCoordinateToLineIndex(e.ClipRectangle.Bottom -
                                  scrollPositionY);
   if (maxLineInClipRegion >= this.documentLines.Count ||
       maxLineInClipRegion == -1)
      maxLineInClipRegion = this.documentLines.Count - 1;

   TextLineInformation nextLine;
   for (int i = minLineInClipRegion; i <= maxLineInClipRegion; i++)
   {
      nextLine = (TextLineInformation)documentLines[i];
      dc.DrawString(nextLine.Text, mainFont, mainBrush,
                    this.LineIndexToWorldCoordinates(i));
   }
   base.OnPaint(e);
}
At the heart of this OnPaint() override is a loop that goes through each line of the document, calling Graphics.DrawString() to paint each one. The rest of the code is mostly to do with optimizing the painting: the usual business of figuring out exactly what needs painting instead of rushing in and telling the Graphics instance to redraw everything.
We start off by checking whether there is any data in the document. If there isn't, we draw a quick message saying so, call the base class's OnPaint() implementation, and exit. If there is data, we start looking at the clipping rectangle. We do this by calling another method that we've written, WorldYCoordinateToLineIndex(). We'll examine this method next, but essentially it takes a given y-position relative to the top of the document and works out which line of the document is displayed at that point.
The first time we call the WorldYCoordinateToLineIndex() method, we pass it the coordinate value e.ClipRectangle.Top - scrollPositionY. This is just the top of the clipping region, converted to world coordinates. If the return value is -1, we play safe and assume we need to start at the beginning of the document (as would be the case if the top of the clipping region were in the top margin).
Once we've done all that, we essentially repeat the same process for the bottom of the clipping rectangle, in order to find the last line of the document that is inside the clipping region. The indices of the first and last lines are stored in minLineInClipRegion and maxLineInClipRegion respectively, so we can then just run a for loop between these values to do our painting. Inside the painting loop, we need to do roughly the reverse of the transformation performed by WorldYCoordinateToLineIndex(): given the index of a line of text, we need to work out where it should be drawn. This calculation is actually quite simple, but we've wrapped it up in another method, LineIndexToWorldCoordinates(), which returns the required coordinates of the top left corner of the item. The returned coordinates are world coordinates, but that's fine, because we have already called TranslateTransform() on the Graphics object, so we need to pass it world, rather than page, coordinates when asking it to display items.
In this section, we'll examine the implementation of the helper methods that we've written in the CapsEditor sample to help us with coordinate transforms. These are the WorldYCoordinateToLineIndex() and LineIndexToWorldCoordinates() methods that we referred to in the last section, as well as a couple of other methods.
First, LineIndexToWorldCoordinates() takes a given line index, and works out the world coordinates of the top left corner of that line, using the known margin and line height:
private Point LineIndexToWorldCoordinates(int index)
{
   Point topLeftCorner = new Point((int)margin,
                                   (int)(lineHeight*index + margin));
   return topLeftCorner;
}
We also used a method that roughly does the reverse transform in OnPaint(). WorldYCoordinateToLineIndex() works out the line index, but it only takes into account a vertical world coordinate. This is because it is used to work out the line index corresponding to the top and bottom of the clip region.
private int WorldYCoordinateToLineIndex(int y)
{
   if (y < margin)
      return -1;
   return (int)((y - margin)/lineHeight);
}
There are three more methods, which will be called from the handler routine that responds to the user double-clicking the mouse. First, we have a method that works out the index of the line being displayed at given world coordinates. Unlike WorldYCoordinateToLineIndex(), this method takes into account both the x- and y-positions of the coordinates. It returns -1 if there is no line of text covering the coordinates passed in:
private int WorldCoordinatesToLineIndex(Point position)
{
   if (!documentHasData)
      return -1;
   if (position.Y < margin || position.X < margin)
      return -1;
   int index = (int)(position.Y - margin)/(int)this.lineHeight;

   // check position isn't below document
   if (index >= documentLines.Count)
      return -1;

   // now check that horizontal position is within this line
   TextLineInformation theLine =
      (TextLineInformation)documentLines[index];
   if (position.X > margin + theLine.Width)
      return -1;

   // all is OK. We can return answer
   return index;
}
Finally, we also, on occasion, need to convert between a line index and page, rather than world, coordinates. The following methods achieve this:
private Point LineIndexToPageCoordinates(int index)
{
   return LineIndexToWorldCoordinates(index) +
      new Size(AutoScrollPosition);
}

private int PageCoordinatesToLineIndex(Point position)
{
   return WorldCoordinatesToLineIndex(position -
      new Size(AutoScrollPosition));
}
Although these methods don't look particularly interesting by themselves, they illustrate a general technique that you'll probably need to use often. With GDI+, we often find ourselves in a situation where we have been given some coordinates (for example, the coordinates of where the user has clicked the mouse) and we need to figure out what item is being displayed at that point. Or it could happen the other way round: given a particular display item, where should it be displayed? Hence, if you are writing a GDI+ application, you'll probably find it useful to write methods that do the equivalent of the coordinate transformation methods illustrated here.
So far, with the exception of the File menu in the CapsEditor sample, everything we've done in this chapter has been one way: The application has talked to the user, by displaying information on the screen. Almost all software of course works both ways: The user can talk to the software as well. We're now going to add that facility to CapsEditor.
Getting a GDI+ application to respond to user input, is actually a lot simpler than writing the code to draw to the screen, and indeed we've already covered how handle user input in Chapter 9. Essentially, you override methods from the Form class that get called from the relevant event handler � in much the same way that OnPaint() is called when a Paint event is raised.
For the case of detecting when the user clicks on or moves the mouse the functions you may
wish to override include:
If you want to detect when the user types in any text, then you'll probably want to override these methods.
Notice that some of these events overlap. For example, if the user presses a mouse button this will raise the MouseDown event. If the button is immediately released again, this will raise the MouseUp event and the Click event. Also, some of these methods take an argument that is derived from EventArgs, and so can be used to give more information about a particular event. MouseEventArgs has two properties X and Y, which give the device coordinates of the mouse at the time it was pressed. Both KeyEventArgs and KeyPressEventArgs have properties that indicate which key or keys the event concerns.
That's all there is to it. It's then up to you to think about the logic of precisely what you want to do. The only point to note, is that you'll probably find yourself doing a bit more logic work with a GDI+ application than you would have with a Windows.Forms application. That's because in a Windows.Forms application you are typically responding to quite high-level events (TextChanged for a text box, for example). By contrast with GDI+, the events tend to be more basic � user clicks the mouse, or hits the key h. The action your application takes is likely to depend on a sequence of events rather than a single event. For example, in Word for Windows, in order to select some text the user will normally click the left mouse button, then move the mouse, then release the left mouse button. If the user simply hits, then releases the left mouse button Word doesn't select any text, but simply moves the text caret to the location where the mouse was. So at the point where the user hits the left mouse button, you can't yet tell what the user is going to do. Your application will receive the MouseDown event, but assuming you want your application to behave in the same way that Word for Windows does, there's not much you can do with this event except record that the mouse was clicked with the cursor in a certain position. Then, when the MouseMove event is received, you'll want to check from the record you've just made whether the left button is currently down, and if so highlight text as the user selects it. When the user releases the left mouse button, your corresponding action (in the OnMouseUp() method) will need to check whether any dragging took place, while the mouse was down and act accordingly. Only at this point is the sequence complete.
Another point to consider, is that because certain events overlap, you will often have a choice of which event you want your code to respond to.
The golden rule really is to think carefully about the logic of every combination of mouse movement or click and keyboard event which the user might initiate, and ensure that your application responds in a way that is intuitive and in accordance with the usual expected behavior of applications in every case. Most of your work here will be in thinking rather than in coding, though the coding you do will be quite fiddly, as you may need to take into account a lot of combinations of user input. For example, what should your application do if the user starts typing in text while one of the mouse buttons is held down? It might sound like an improbable combination, but sooner or later some user is going to try it!
For the CapsEditor sample, we are keeping things very simple, so we don't really have any combinations to think about. The only thing we are going to respond to is when the user double clicks � in which case we capitalize whatever line of text the mouse is hovering over.
This should be a fairly simple task, but there is one snag. We need to trap the DoubleClick event, but the table above shows that this event takes an EventArgs parameter, not a MouseEventArgs parameter. The trouble is that we'll need to know where the mouse is when the user double clicks, if we are to correctly identify the line of text to be capitalized � and you need a MouseEventArgs parameter to do that. There are two workarounds. One is to use a static method that is implemented by the Form1 object, Control.MousePosition to find out the mouse position, like so:
protected override void OnDoubleClick(EventArgs e) { Point MouseLocation = Control.MousePosition; // handle double click
In most cases this will work. However, there could be a problem if your application (or even some other application with a high priority) is doing some computationally intensive work at the moment the user double clicks. It just might happen in that case that the OnDoubleClick() event handler doesn't get called until perhaps half a second later. You don't really want delays like that, because they annoy users really quickly, but even so, such situations do come up occasionally. Half a second is easily enough for the mouse to get moved halfway across the screen � in which case you'll end up executing OnDoubleClick() for completely the wrong location!
A better way here, is to rely on one of the many overlaps between mouse event meanings. The first part of double clicking a mouse involves pressing the left button down. This means that if OnDoubleClick() is called then we know that OnMouseDown() has also just been called, with the mouse at the same location. We can use the OnMouseDown() override to record the position of the mouse, ready for OnDoubleClick(). This is the approach we take in CapsEditor:
protected override void OnMouseDown(MouseEventArgs e) { base.OnMouseDown(e); this.mouseDoubleClickPosition = new Point(e.X, e.Y); }
Now let's look at our OnDoubleClick() override. There's quite a bit more work to do here:
protected override void OnDoubleClick(EventArgs e) { int i = PageCoordinatesToLineIndex(this.mouseDoubleClickPosition); if (i >= 0) { TextLineInformation lineToBeChanged = (TextLineInformation)documentLines[i]; lineToBeChanged.Text = lineToBeChanged.Text.ToUpper(); Graphics dc = this.CreateGraphics(); uint newWidth =(uint)dc.MeasureString(lineToBeChanged.Text, mainFont).Width; if (newWidth > lineToBeChanged.Width) lineToBeChanged.Width = newWidth; if (newWidth+2*margin > this.documentSize.Width) { this.documentSize.Width = (int)newWidth; this.AutoScrollMinSize = this.documentSize; } Rectangle changedRectangle = new Rectangle( LineIndexToPageCoordinates(i), new Size((int)newWidth, (int)this.lineHeight)); this.Invalidate(changedRectangle); } base.OnDoubleClick(e); }
We start off by calling PageCoordinatesToLineIndex() to work out which line of text the mouse was hovering over when the user double-clicks. If this call returns �1 then we weren't over any text, so there's nothing to do (except, of course, call the base class version of OnDoubleClick() to let Windows do any default processing. You wouldn't ever forget to do that, would you?)
Assuming we've identified a line of text, we can use the string.ToUpper() method to convert it to uppercase. That was the easy part. The hard part, is figuring out what needs to be redrawn where. Fortunately, because we kept the sample so simplistic, there aren't too many combinations. We can assume for a start, that converting to uppercase will always either leave the width of the line on the screen unchanged, or increase it. Capital letters are bigger than lowercase letters therefore, the width will never go down. We also know that since we are not wrapping lines, our line of text won't overflow to the next line and push out other text below. Our action of converting the line to uppercase won't therefore, actually change the locations of any of the other items being displayed. That's a big simplification!
The next thing the code does is use Graphics.MeasureString() to work out the new width of the text. There are now just two possibilities:
In either case, we need to get the screen redrawn, by calling Invalidate(). Only one line has changed therefore, we don't want to have the entire document repainted. Rather, we need to work out the bounds of a rectangle that contains just the modified line, so that we can pass this rectangle to Invalidate(), ensuring that just that line of text will be repainted. That's precisely what the above code does. Our call to Invalidate() will result in OnPaint() being called, when the mouse event handler finally returns. Bearing in mind our comments earlier in the chapter about the difficulty in setting a break point in OnPaint(), if you run the sample and set a break point in OnPaint() to trap the resultant painting action, you'll find that the PaintEventArgs parameter to OnPaint() does indeed, contain a clipping region that matches the specified rectangle. And since we've overloaded OnPaint() to take careful account of the clipping region, only the one required line of text will be repainted.
In this chapter we've focused entirely on drawing to the screen. Often, you will also want your application to be able to produce a hard copy of the data too. Unfortunately, in this book we don't have space to go into the details of this process, but we'll briefly review the issues you'll face if you do wish to implement the ability to print your document.
In many ways printing is just the same as displaying to a screen: You will be supplied with a device context (Graphics instance) and call all the usual display commands against that instance. However, there are some differences: Printers cannot scroll � instead they have pages. You'll need to make sure you find a sensible way of dividing your document into pages, and draw each page as requested. Also, beware � most users expect the printed output to look very similar to the screen output. This is actually very hard to achieve if you use page coordinates. The problem is that printers have a different number of dots per inch (dpi) than the screen. Display devices have traditionally maintained a standard of around 96 dpi, although some newer monitors have higher resolutions. Printers can have over a thousand dpi. That means, for example, that if you draw shapes or display images, sizing them by number of pixels, they will appear too small on the printer. In some cases the same problem can affect text fonts. Luckily, GDI+ allows device coordinates to address this problem. In order to print documents you will almost certainly need to use the Graphics.PageUnit property to carry out the painting using some physical units such as inches or millimeters.
.NET does have a large number of classes designed to help with the process of printing. These classes typically allow you to control and retrieve various printer settings and are found mostly in the System.Drawing.Printing namespace. There are also predefined dialogs, PrintDialog and PrintPreviewDialog available in the System.Windows.Forms namespace. The process of printing will initially involve calling the Show() method on an instance of one of these classes, after setting
some properties.
In this chapter, we've covered the area of drawing to a display device, where the drawing is done by your code rather than by some predefined control or dialog � the realm of GDI+. GDI+ is a powerful tool, and there are a large number of .NET base classes available to help you draw to a device. We've seen that the process of drawing is actually relatively simple � in most cases you can draw text or sophisticated figures or display images with just a couple of C# statements. However, managing your drawing � the behind the scenes work involving working out what to draw, where to draw it, and what does or doesn't need repainting in any given situation � is far more complex and requires careful algorithm design. For this reason, it is also important to have a good understanding of how GDI+ works, and what actions Windows takes in order to get something drawn. In particular, because of the architecture of Windows, it is important that where possible, drawing should be done by invalidating areas of the window and relying on Windows to respond by issuing a Paint event.
There are many more .NET classes concerned with drawing than we've had space to cover in this chapter, but if you've worked through and understood the principles involved in drawing, you'll be in an excellent position to explore them, by looking at their lists of methods in the documentation and instantiating instances of them to see what they do. In the end, drawing, like almost any other aspect of programming, requires logic, careful thought and clear algorithms. Apply that and you'll be able to write sophisticated user interfaces that don't depend on the standard controls. Your software will benefit hugely in both user-friendliness and visual appearance: There are many applications out there that rely entirely on controls for their user interface. While this can be effective, such applications very quickly end up looking just like each other. By adding some GDI+ code to do some custom drawing you can mark out your software as distinct and make it appear more original � which can only help your sales!
This chapter is written by Simon Robinson, Burt Harvey, Craig McQueen, Christian Nagel, Morgan Skinner, Jay Glynn, Karli Watson, Ollie Cornes, Jerod Moemeka,and taken from "Professional C#" published by Wrox Press Limited in June 2001; ISBN 1861004990; copyright � Wrox Press Limited 2001;.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/books/1861004990.aspx | crawl-002 | refinedweb | 22,870 | 60.24 |
Join the community to find out what other Atlassian users are discussing, debating and creating.
Hi,
If someone could please help, a thanks in advance. :)
I wanted to display an HTML report into a new tab on Bamboo Job Link (Default job). I have implemented the new tab using xwork and web-item in atlassian-xml. The class "XYZ" defined in action tab is extended from "PlanResultsAction". I am using a free-marker template which would display the artifact HTML report.
I was going through a similar question in link
""
Question 1:
How can I initialize BuildContextImpl and BuildContextHelper in class "XYZ" to get the Build Working Directory and hence the artifacts?
My task runs on the agent, it would then publish artifacts generated on the agent using the below,
"artifactManager.publish(taskContext.getBuildLogger(), taskContext.getBuildContext().getPlanResultKey(),
143 fileWorkingDir, artifact, new Hashtable<String, String>(), 1);"
Question 2:
Does the shared artifact get copied on the server?
I have created an artifact definition too, so I can see 2 artifacts (image)
Question 3:
I am always publishing an artifact (using publish API). Is my assumption correct that it always gets copied on the server from an agent?
What is the right way to display the content of the artifact in the new tab?
I did a bit of investigation. Now I am extending my class from "ViewBuildResults" instead of "PlanResultsAction".
I now use the function "getSharedArtifactPath" and "getArtifactPath" to get the artifacts. But now I am getting the below error
ERROR [http-nio-6990-exec-1] [BambooStrutsUnknownHandler] There is no Action mapped for namespace [/] and action name [XYZ] associated with context path [/bamboo].. | https://community.atlassian.com/t5/Bamboo-questions/How-to-access-job-artifacts-regular-shared-in-a-new-tab-in-the/qaq-p/786469 | CC-MAIN-2019-43 | refinedweb | 272 | 55.03 |
An introduction to creating source code faster with less typing using NetBeans 4.0.
Did you know that the keys on a standard
keyboard are deliberately laid out in a way which slows down your typing? They
are. This layout, called the QWERTY layout, was originally chosen for the first
typewriter in 1872 because it kept people from typing so fast the mechanical type
bars would collide.
This is just one of many reasons why you
should stop typing your source code, or rather, why you should let NetBeans create
the source code for you. This article contains a series of tips on how you can
write source code faster with NetBeans 4.0 using as little keyboard input as
possible.
Once you have typed a word in a source file
you can retype this word anywhere by entering the first few letters of it followed
by pressing Ctrl+K. This feature is called word matching and it works by
looking for words which start with the letters you have just typed. Ctrl+K
looks upwards in your source file and Ctrl+L looks downwards.
Once you have written the for loop shown below
for(Person person : personSet) {
}
which iterates through a set of persons, you
can place your cursor between the curly braces and type "pe" followed by pressing
Ctrl+K. NetBeans will immediately suggest the word "personSet" as it is the
nearest word which starts with "pe", and pressing Ctrl+K a second time will
make NetBeans suggest "person" which is the second nearest word.
In fact, in this special case you do not
even have to type "pe" � just pressing Ctrl+K inside the loop will also automatically
type the identifiers "personSet" and "person"; this works because these two identifiers
are the first words encountered when NetBeans searches upwards in the source
file. However, had you written ten words between the top of the loop and the
cursor, just pressing Ctrl+K would make NetBeans cycle through all those ten words
before finally suggesting "personSet" and then "person". In this case, typing "pe"
before pressing Ctrl+K helps by making NetBeans skip all words which do not
start with "pe".
Should you accidentally press Ctrl+K too
many times and reach a word which is placed before the word you want, just
press Ctrl+L a few times and NetBeans will move onwards to the word you missed.
Finally take note that this feature works everywhere in all kinds of files � in
fact it is quite handy in XML files where you can use it to type the names of
tags already present in the file.
Writing long identifier names makes code
more readable. Remembering that an identifier called "i" actually identifies an
instance of type "Invoice" and not an "Iterator" or an "ItemListener" can be hard,
especially in lengthy methods; if it had not been for all the typing involved, a
name like "invoice" would have been a better choice. Now, if only we could make
NetBeans write "invoice" for us, there would be no reason to choose
"i". This section presents a macro which does just that.
Imagine the cursor is placed right after "Invoice" (denoted with a "|") in the
code below:
public class Order {
public void setInvoice(Invoice |) {
}
}
When triggering
the macro NetBeans will automatically
type "invoice" by using the type
name "Invoice" already present in your code. The macro simply retypes the
previous word and changes the first letter of the retyped word to lower case. Now,
once you have made NetBeans type the first occurrence of the word "invoice"
remember to use the word matching feature (Alt+K) explained in "Tip 1" to
retype the identifier.
Creating the macro is done by following these
five steps:
selection-previous-word copy-to-clipboard
caret-next-word " " paste-from-clipboard caret-previous-word
selection-forward to-lower-case caret-next-word
Finally click the OK button of the "Macros"
window to close it but keep the "Options" window open - you need it in
order to create a shortcut key combination for your new macro:
When creating macros you would normally use the
macro recording feature which let you record mouse and keyboard gestures and
save them as a macro. However, the macro shown here contains the
"to-lower-case" action and this action is not yet represented by a mouse or
keyboard gesture, so you need to add this action manually.
This is one of my favourite macros! Usually
when you define a constructor you need to assign the value of the constructor
parameters to class variables as in this example:
public Person(String name, String emailAddress) {
this.name = name;
this.emailAddress = emailAddress;
}
If you define the following macro and bind
it to Alt+=, instead of typing "this.emailAddress = emailAddress;" you� can
simply type "this.e" followed by Ctrl+K to get "this.emailAddress" followed by
Alt+= which will type the rest.
See "Tip 2" for instructions on how to create
the macro. Here is the macro code:
selection-previous-word copy-to-clipboard caret-next-word " = " paste-from-clipboard ";"
Once you get into the habit of using the
Alt+= macro you will find it to be way faster than anything else.
NetBeans has a long range of built in
abbreviations for typing well-known words or even multi-line code constructs.
You can try this out by typing "sout" followed by pressing the Space bar � this
will make NetBeans type a complete standard output statement for you, i.e.
System.out.println("|");
NetBeans does not type a "|" character as
shown, rather the IDE places your cursor at that location after the
abbreviation has been replaced.
The built in abbreviations are a real time saver,
but you can benefit even more from the abbreviations feature by defining your
own custom abbreviations.
A good way to start is by looking through your
existing code files and locating code constructs which you use frequently. A
typical example of a frequently used construct could be the if/throw
construct found in this method:
public void remove(int lower, int upper) {
if(lower < 0)
throw new IllegalArgumentException(
"Parameter \"lower\" has to be a " +
"non-negative number."
);
}
This method starts with a guardian
statement which asserts that the first parameter is a legal argument � if not,
an IllegalArgumentException is thrown.
Since placing such guardian statement is good,
common practice, you can benefit from adding an abbreviation which NetBeans
will expand to the if/throw construct. This can be done as follows:
if(|)
throw new IllegalArgumentException("");
Note that NetBeans understands the use of
the "|" character to indicate where the cursor should be placed after the
expansion.
Now try typing "ill" and pressing Space - voila!
Of course you could create a macro which
types exactly the same thing, but an important benefit of abbreviations is the
fact that they are easy to remember because you can give them mnemonic names
like "ill" or even "illarg" instead of cryptic keyboard combinations like
"Ctrl+Alt+I". The downside to abbreviations is they cannot trigger a macro � at
least not yet...
The really easy-to-use template feature of
NetBeans is completely overlooked by many developers. If you need to create
many source files which have many aspects in common you should define a
template containing these common aspects.
For instance, in many systems you have
rules for how an exception class should be defined or rules which say that all
classes extending class X should contain certain methods, variables,
documentation or the like.
When writing a new class, what many
developers do is to start out from the empty "Java Class" template and write
everything by hand; often this involves typing code which you have written
before. Instead you should find a class which is a good starting point,
right-click its tree node and select "Save As Template...". Now, whenever you
need to create a similar class you just right-click the package in which the
new class should be placed, select the menu item "New > File/Folder..." and
choose the template.
Defining a template from an existing source
file might actually give you more content than needed, but once the template
has been defined you can customize it to remove any unnecessary code by
selecting the menu item "Tools > Options" and locating the template under
"Source Creation and Managent". You simply right-click its tree node and select
"Edit", edit the file, and save it.
If you provide an API to your co-workers in
which they need to implement the same interface quite often, you could help them
by providing a source template along with your API. Sharing a template is as
simple as copying it from one machine to the next. The templates are stored in
the folder
"... [user home dir]\.netbeans\4.0\config\Templates\[your template folder]"
If you use versioning, I warmly recommend
creating a set of project templates and sharing them through your versioning
system.
If you write Java beans by hand a huge
amount of time might be wasted on writing and maintaining the trivial getter and setter
methods. Fortunately, NetBeans can both generate these methods for you, and the
IDE even has a built in mechanism for maintaining them efficiently. Since
NetBeans 4.0 the refactoring mechanism has supported generating default getter
and setter methods from a class' variables. To create the getName and setName
methods shown here
public class Customer {
private String name;
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
all you have to do is to declare the name
property, right-click it, and select the popup menu item "Refactor... >
Encapsulate Fields...".
Once the methods have been generated, you
might want to change the name of the property "name". This could potentially
involve quite a lot of typing, since the word "name" or "Name" is now scattered
all over your source file. Luckily, NetBeans has a really nice feature called
"Bean Patterns" which solves this problem. If you locate your class in the
Project tree and expand its tree node, NetBeans displays a node called "Bean
Patterns" which contains a subnode for each bean property of the class. This
node can be renamed by hitting either F2 or by right-clicking and selecting "Rename...",
and if you rename your property this way, NetBeans will make all the necessary
changes for you.
The Bean Patterns feature also allows you to change the
property's type efficiently. If you right-click the property's tree node and
select the popup menu item "Properties", this opens a window containing
information about the property. Here you can change the type and NetBeans will
alter your source code accordingly.
In brief: You should never write import
statements. NetBeans 4.0 has introduced a really brilliant feature called "Fix
Imports" accessible by the keyboard combination Alt+Shift+F. This feature analyses
your source code and looks for class names which are not yet imported. If a
class X has not been imported yet and only one class in your whole project is
called X, NetBeans will import this class automatically. If two classes in two
different packages are named X, NetBeans will bring up a dialog asking which
class you prefer to import. "Fix Imports" works extremely well, and
it automatically removes imports which are no longer used.
By using these tips you will soon find yourself producing
better code with less syntactical errors in less time. Also, if you
are among the many developers who have experienced aching fingers in
coding intensive projects, making NetBeans write the code for you
is definitely the way to go.
The more you customize the IDE to fit your project's specific needs
the faster you can implement your system. NetBeans 4.0 is like the
perfect racing car - fast and fun to ride. But just
like the racing car, tuning it is essential. Tune the IDE by
customizing the abbreviations, macros, and templates to
fit your project's specific needs and unleash the true
power of NetBeans 4.0!
.... (Photo)
is
the founder of the Danish software company
ROCK IT, and the mastermind behind
the groundbreaking, all-Java,
component oriented content management system
Puls, which is
developed entirely with NetBeans (to be released in the spring 2005).
Bookmark this page | http://www.netbeans.org/kb/articles/stop-typing-source-code.html | crawl-002 | refinedweb | 2,058 | 57.5 |
Ever since optional static typing was added to Python 3.5+, the question of using type annotations keeps creeping back everywhere I work. Some see them as a step forward for the Future of Python™, but to me and many others it's a step back for what coding with Python fundamentally is. I've been in a number of debates over type annotations at work and so decided to compile some of the recurring points of discussion here.
Static typing will protect you
This is the argument universally put forward for type annotations: they'll save us from ourselves. Someone has already studied this claim for TypeScript, but I think looking at code will suffice.
Let's take some of the examples of type annotations you can easily find out there:
```python
def concat(a: int, b: int) -> str:
    return str(a) + str(b)
```
Okay so you've written a custom concat that only operates on integers. But does it really? Python's `str()` will work with anything that supports it, not just `int`, so really this function will work with any two arguments that can be cast to strings. Here's the way this function should be written so that typing is enforced at runtime:
```python
def concat(a: int, b: int) -> str:
    "Raises TypeError"  # <- Type annotations don't support this
    if type(a) != int or type(b) != int:
        raise TypeError()
    return str(a) + str(b)
```
The parameter types aren't checked at runtime, therefore it is essential to check them yourself. This is particularly true if the code you're writing will be used as a library. And given python's modular nature, any code can be imported and re-used. Therefore relying on type annotations isn't sufficient to ensure that your code is safe and honours the contract outlined by the type annotations.
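To make the point concrete, here is a minimal sketch showing that the interpreter ignores the annotations entirely at call time:

```python
def concat(a: int, b: int) -> str:
    return str(a) + str(b)

# Despite the `int` annotations, any arguments that str() accepts go through:
print(concat([1, 2], "three"))  # prints "[1, 2]three" -- no TypeError
```

Running a type checker like mypy over this call would flag it, but nothing in the interpreter itself will.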
Readability
Another claim I see often is that type hints improve readability. Let's take a look.
```python
def concat(a, b):
    ...

def concat(a: int, b: int) -> str:
    ...
```
Okay, at face value this is actually clearer. Or at the very least the typing doesn't hurt readability. Now let's look at real life.
```python
def serialize(instance, filename, content, **kwargs):
    ...

def serialize(instance: Instance, filename: str, content: Optional[Dict[str, Any]] = None, **kwargs: Any) -> bool:
    ...
```
Now that's becoming hairy. Don't laugh, this is inspired by real code I see daily.
So we have a function that serializes god knows what; it takes an instance, a filename and some content. If we have the type annotated version, we can tell that the instance is an `Instance` (confusingly), the filename is a `str`, and `content` is a horrible optional mess; it probably goes deeper, which is why the author gave up and just put `Any`. It returns a boolean, but we have no idea what the boolean value means.
So in this case, the type hints just let us ask more questions, which could be a good thing. However let's be honest, this function wouldn't pass code review in either case.
Here's a slightly better one:
```python
def serialize_foo_on_instance(instance, filename, content, **kwargs):
    ...

class Foo:
    data: dict[str, Any] = {}
    ...

def serialize_foo_on_instance(instance: Instance, filename: str, content: Optional[Foo], **kwargs: Any) -> bool:
    ...
```
Okay that's slightly better. The secret sauce here was just to improve our naming to make the function's role more explicit -- a best practice.
Note that to get rid of the lengthy type annotation we had to define a new class in the bottom option. This is the recommended way I've found. However there are times where adding abstraction layers isn't the right approach. They divorce the code from the original data and have a certain performance impact.
It's also possible to alias the type, but I still feel the typing is pushing me towards more abstraction.
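For reference, aliasing looks something like this (the names here are hypothetical, carried over from the example above):

```python
from typing import Any, Dict, Optional

# Hypothetical alias: one name instead of a nested annotation
FooData = Dict[str, Any]

def serialize_foo_on_instance(instance, filename: str,
                              content: Optional[FooData] = None,
                              **kwargs: Any) -> bool:
    ...  # serialization logic would go here
    return True
```

The signature reads better, but the reader now has to chase down what `FooData` is, which is the same indirection cost as introducing a class.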
Self-documenting code?
Let's have one more go to see if we can improve readability further:
def serialize_foo_on_instance(instance, filename, content, **kwargs):
    """
    Serializes foo on a specific instance of bar.

    Takes foo data, serializes it and saves it as ``filename`` on an
    instance of bar.

    :instance: instance to serialize the foo on
    :filename: file name to serialize to
    :content: foo data, just creates the file if None
    :returns: True on success, False on error
    """
    ...
Okay, now we know what the function does, and what the parameters are supposed to be. Let's see how that looks with type annotations:
def serialize_foo_on_instance(instance: Bar, filename: str, content: Optional[Foo], **kwargs: Any) -> bool:
    """
    Serializes foo on a specific instance of bar.

    Takes foo data, serializes it and saves it as ``filename`` on an
    instance of bar.

    :instance Bar: instance to serialize the foo on
    :filename str: file name to serialize to
    :content Optional[Foo]: foo data, just creates the file if None
    :returns bool: True on success, False on error
    """
    ...
Right, so we're a bit more verbose and have specified the types each parameter takes. We've introduced docstrings for both definitions; they explain what the function does, the role of each parameter, what happens to optional ones, and what the boolean return value means.
Could we do away with the docstring and rely solely on "self-documentation" through type annotations? Not a chance: -> bool doesn't say anything about what it means to receive either True or False, and in the same way Optional[Foo] doesn't give us a clue about what happens when the value is None.
Write generic code = reuse
Python is magnificent in how reusable it is. Every file you write is a module and can be reused for any purpose. Ages ago I wrote a software forge for Bazaar just by reusing modules from Bazaar itself, even though they were never intended to be used that way. This permeates the entire language, including function definitions.
By clamping down on types, are we making our code less reusable? Possibly, let's experiment. Let's assume that instance is an object obtained from a string ID, and that we'd really like to use some kind of string generator for filename. Let's have a look:
class FilenameGenerator:
    def __str__(self):
        return "blah.txt"

def serialize_foo_on_instance(instance, filename, content, **kwargs):
    if type(instance) == str:
        instance = Bar.by_name(instance)
    ...

filename_gen = FilenameGenerator()
serialize_foo_on_instance("bob", filename_gen, content)
Pretty straightforward. Now let's annotate this.
def serialize_foo_on_instance(instance: Union[Bar, str], filename: Union[str, ??], content: Foo, **kwargs: Any):
    if type(instance) == str:
        instance = Bar.by_name(instance)
Wow, that's already more involved. But is it even truly generic? In other languages we'd use interfaces or abstract types and inheritance to make functions generic. I couldn't find a type name for "any object that can be cast to a str", so I put ?? for now.
Without type annotations, our code is generic right off the bat. Possibly overly so, which is why we need to do some pre-flight checks. With annotations, our code is "specific" by default and we have to work hard to make it generic. Note the quotes around "specific": this is only enforced by linting tools like mypy, so you still need your pre-flight checks. This is a fundamental shift in the nature of the language.
Python vs the world
A lot of developers like to claim that type hints are the best thing for the language since sliced bread. However I get the feeling those people only look at Python in a vacuum.
Python's selling points as a language are readability and ease of coding, which translate into speed for developers. Adding type annotations greatly reduces those advantages, making Python less attractive in the programming world.
Below is a fibonacci implementation in Python:
from typing import List

def fibonacci(previous: List[int], length: int, depth: int = 0) -> List[int]:
    if depth == length:
        return previous
    previous.append(previous[-1] + previous[-2])
    return fibonacci(previous, length, depth + 1)

if __name__ == "__main__":
    start: List[int] = [1, 2]
    print(fibonacci(start, 100))
And the same with Rust:
fn fibonacci(previous: &mut Vec<u128>, length: u32, depth: u32) -> &Vec<u128> {
    if depth == length {
        return previous;
    }
    previous.push(previous[previous.len() - 2] + previous[previous.len() - 1]);
    fibonacci(previous, length, depth + 1)
}

fn main() {
    let mut start = vec![1, 2];
    println!("sequence: {:?}", fibonacci(&mut start, 100, 0))
}
Here we see that the difference between Python and a more advanced, powerful language has become minimal. Rust is much faster and allows me to do more than Python, so the question becomes: why choose Python?
Please don't pick on my choice of Rust, this argument would also work with Go, Java, C#, etc...
Conclusion
From my perspective, type annotations provide little benefit at the cost of extra work. They can lead to a false sense of security that the parameter types you're getting in a function are guaranteed, but there is no such check performed at runtime.
There's also a false sense that type annotations provide documentation, but they never explain what a function does or how the data within the types is affected during a function call. So they're no substitute for good docstrings.
Given this, I prefer to not use them, and to keep clean, well documented code.
Top comments (4)
I disagree with pretty much every point you made:
By avoiding overly loose Dict[str, Any] and str types you can typically improve this quite a bit. This also typically changes the architecture a bit, moving validation of data to an earlier stage. Hence the usage of type annotations leads to a better architecture and thus more readable/maintainable code.
In case you want to learn more about type annotations, have a look at medium.com/analytics-vidhya/type-a...
Good programmers always write pseudocode first, then implement that in the language of choice. A while back I wrote pseudocode for a function I was explaining to someone, and realized it was syntactically valid Python code. So Python's huge advantage is that it is pseudocode that runs. This makes development much easier, as we can start interacting with the pseudocode instead of waiting for it to be implemented. Putting types into Python completely strips away this use case.
I will go further and say that anyone who thinks Python should have static types has no idea what static types really do. I am all for statically typed languages, the compile time guarantee and the machine sympathy that they allow. But putting types into Python is giving programmers the worst of both worlds: the slow iteration time of formal languages, and the slow run time of scripted languages. None of the benefits of either.
I totally agree with these arguments. What I like about Python is that I write code as fast as I think; I do both at the same time. Having to handle type constraints when my code isn't stable yet takes away from my freedom to express myself as I want.
Okay, you should check out protocols and PEP 544.
It is the way to implement duck typing in type annotations, so anything which can be cast to a str could be defined by:

class CastableToStr(typing.Protocol):
    def __str__(self) -> str:
        ...
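A runnable sketch of that idea (assuming Python 3.8+, where typing.Protocol is available; the function name is mine):

```python
from typing import Protocol

class CastableToStr(Protocol):
    def __str__(self) -> str: ...

class FilenameGenerator:
    def __str__(self) -> str:
        return "blah.txt"

def serialize_to(filename: CastableToStr) -> str:
    # A static checker accepts any object whose type provides __str__,
    # so no Union[str, ??] is needed; plain str satisfies the protocol too.
    return str(filename)

print(serialize_to(FilenameGenerator()))  # blah.txt
print(serialize_to("plain.txt"))          # plain.txt
```

This is structural (duck) typing checked statically: callers never declare that they implement CastableToStr.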
Now all the other points are overly personal, and the comparison with Rust is funny, because Rust is one of those languages which prove that a good compiler with static typing prevents mistakes.
And the final question: why choose Python? Well, I never understood why people would choose Python. Using a language because "Python is easier to use" is just shooting yourself in the foot: saving one month of coaching for your new developers and ending up with more bugs and issues does not seem like a great choice to me anyway.
Why is there still the trend to create languages where most functionality is implemented in standard libraries and not in the language itself?
Quite often lots of time and effort are invested to make a language as extensible as possible and to create every feature (even standard datatypes like arrays, lists etc.) in the library and not as part of the language itself. I think that's really antiquated thinking from the early years of CS, when the usual usage patterns of programming languages were not so well known and an extensible language seemed much more powerful. But this has changed a lot: today most usage patterns are commonly known, and while there are lots of them, the total number of those patterns seems quite manageable.
Often it's unnecessarily difficult to use those 'library implemented' features. Think of using lists in a language without native list support (like Java, C++ etc.) and compare that to a language with native list support. It's so much easier and much more readable in the latter.
But why stop with those 'low-level' features? Why not try to build a language with ALL commonly used features directly in the language: from strings to maps to general graphs, from loops to threading to database access, from visitor to observer to MVC. A language without much of a 'standard lib' because it's mostly integrated in the language itself. Instead of inventing systems to create DSLs, why not simply create and integrate all those 'DSLs' in a single language?
Sure, such a language would be less 'cute' than a minimal one. But I suspect we could gain lots of productivity, safety and performance.
The smaller the initial language, the less code for its implementation and the easier it is to 'prove' that it is correct. This takes us more toward the mathematical idea of axiomatic reasoning, and also makes real-world implementation more feasible.
You're writing a language that is hopefully 'better' than what is already available, so why not show that you can write all of these common patterns in the new language itself? That serves the double purpose of showing that standard patterns can be handled in the new language, and thus that it should be able to handle any other non-standard pattern you can throw at it. Implementing such features in the implementation language gives greater efficiency, but shows little of what you can do with your new toy; putting everything in libraries does.
Putting other things in libraries also allows for them to be replaced easily (hopefully), or at least overridden during the development process if one finds something more suited to one's needs. A large static language is just that, and although such a language might suit the needs of its initial developer, you can be sure that others won't find it so suitable.
This question is a bit like how old mainframe operating systems were designed vs. Unix. The big iron systems were huge and integrated and as monolithic as they came, including kernel, drivers, shells and applications (from what I know; I wasn't there). Unix was small and disconnected, pieces could be replaced, and the flexibility of the system won out over the staticness of big iron.
But the idea of where to draw the line between language and library is never really known; libraries show the power and decoupling of the language, while putting features in makes it static and at risk of being called bloated. There's always an opinion, and usually the most correct one is somewhere on both sides of the line.
--
kruhft
To me, the core language with little in it is an old concept, certainly not one that's been the driving force behind language design over the last 20 years. Some counterexamples I can think of:
* Q (the latest revision of K) has database access built-in.
* Iverson's J has vectors and matrices and complex numbers built-in.
* Felix has regular expressions built-in.
* Perl has regular expressions and file I/O built-in (there's an operator for "read line from file").
* REBOL has support for dates/times, html/xml tags, and filenames built-in (filenames start with the percent symbol).
But why stop at those features? And why not combine them all in one language?
Why not have database access, vectors, matrices and complex numbers, regular expressions, file I/O, XML, dates, times and lots more, like all commonly used ADTs, support for all common usage patterns, GUI programming, server programming, scanner/parser generators and more built into the language instead of implementing them via libs or external code generators?
That seems to be the approach taken by Perl. ;-)
What does it gain you? Easy syntax? Code-generators can get that for you. Interoperability with other language features? Libraries can do that, and then you don't need to carefully consider the interaction between every pair of features.
I'm trying to imagine what a language like this would look like, and what I come up with isn't pretty. I like being able to load a database library and think in terms of "okay, to query, I call this function" or load a GUI and think "to display a button, create a Button object". It means I don't have to memorize new syntax for every single new task. If they were language features, I'd have to memorize a whole lot more irrelevant details to get my job done.
The problem with Perl is its horrible syntax, not too many features. Perl doesn't offer more than, for example, Ruby, but the latter has a much more readable syntax.
If you have to use something, you have to remember something. That's always true, whether you use a lib or a new syntax. But a syntax can be made more expressive (if the language designer has done his job well).
Take the GUI example: if the GUI is implemented via a lib, you need to know lots about the internal workings of the lib. But if you put it into the syntax of the language, the compiler can infer lots of things you don't need to specify explicitly anymore. In the case of a GUI, the compiler creates calls to some runtime library, which can then be replaced to fit different underlying systems. But for the user of the language this would be totally transparent.
Another big problem with libs is that they are easily replaceable. Many see that as an advantage, but is it really? Why use different GUI libs or even ADT libs? It only reduces maintainability and readability of the code, because everybody can use another lib, even for standard stuff.
This is exactly my thinking and why I raised the idea of our testing alternate surface structures with identical semantics.
If replacing Perl's syntax with something better could reduce errors and coding time, you would have a solid case for the value of putting more design effort into this sweet (pun intended) topic.
Most compilers are implemented as chains of tree rewriters which transform the original source in multiple steps into executable (or VM) code. In a 'fat' language, most of the 'fat' language features will be processed in the first stages of the compilation process (similar to syntactic sugar). If the later stages of the compiler use a solid model, you can prove lots of interesting things at those later levels without increasing the complexity compared to a simpler language.
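As a toy illustration of such a pipeline (all node names here are invented, not taken from any real compiler), a single rewrite step can lower a 'fat' list literal into core constructor calls:

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Const:                # core form: a literal value
    value: int

@dataclass
class Call:                 # core form: f(arg, ...)
    func: str
    args: List["Node"]

@dataclass
class ListLiteral:          # 'fat' surface form: [1, 2, ...]
    items: List["Node"]

Node = Union[Const, Call, ListLiteral]

def desugar(node: Node) -> Node:
    """Rewrite [a, b, ...] into cons(a, cons(b, ... nil())), recursively."""
    if isinstance(node, ListLiteral):
        result: Node = Call("nil", [])
        for item in reversed(node.items):
            result = Call("cons", [desugar(item), result])
        return result
    if isinstance(node, Call):
        return Call(node.func, [desugar(a) for a in node.args])
    return node

lowered = desugar(ListLiteral([Const(1), Const(2)]))
# lowered == Call("cons", [Const(1), Call("cons", [Const(2), Call("nil", [])])])
```

Later stages of the compiler then only ever see the small core language, which is what keeps the proofs tractable.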
Sure, the creation of useful libraries and compilers in a language is a common way to check how useful the language will be in the future. But that only shows how useful the language is in the domain of library programming. And it's not necessarily true that a language which is good for programming libs and ADTs is also good for common application programming tasks; yet the latter is a much more common task in every (productive) language, and the real goal someone should tackle when designing a new programming language.
Today we know very much about the usage patterns of programming languages. There are simply no big surprises anymore. In the earlier days that was different, and because of this, 'minimal languages' which implement as much as possible in extensible libraries were a good idea. But is this still true? I doubt it.
And I don't think that the Unix/mainframe comparison is really applicable here. Even a 'fat' language would be able to create new things by combining existing features. I simply suspect that those 'new things' are not very common in the lives of most programmers, and a language should support them by making the common things as usable and as safe as possible.
Think of the huge number of mistakes and lost opportunities for clever automatic optimisation caused by putting SQL statements into strings (like lots of languages do). If the language had a similar feature directly integrated, the compiler could do lots of checking and optimisation, and the program would probably be much more readable.
Or look at the success of those 'new' 'dynamic' languages like Ruby and Python. I suspect that the primary reason why many people like them is not the dynamism, but simply the fact that they have better built-in features like lists, maps etc. compared to a language like Java or C++. A huge percentage of all those "look how easy it is in Ruby/Python" examples are simply demonstrations of the expressiveness of those built-ins.
I think 'bloat' is a fear lots of language designers have. But why is 'bloat' in a language so problematic and 'bloat' in the standard libs not? Every programmer needs a certain amount of 'bloat' if they don't want to reinvent the wheel over and over again. So it's simply unavoidable; why not try to make it as usable as possible?
Bloat in the standard libs will at most swallow up namespace - and with a reasonable system to start off with, that's a fairly minor problem.
Language bloat starts to swallow up possible design space - existing features rule out new ones, whether by breaking invariants the new feature requires kept or just by hogging all the sensible bracket characters.
So you think it's impossible?
Or is it simply more difficult to do compared to designing minimal language?
Many features are easily reusable: a standard looping construct is reusable for all kinds of iterations, and built-in types can often be discriminated via access patterns or hints (e.g. whether a compiler uses a tree-based map or a hash-based map).
Sure, it's more difficult. But who said that language design has to be simple?
It's exponentially more difficult once the features have any kind of side-effects that can interact. And if you've got a pure language to start with you may as well just go the libraries+sugar route, or look at making the sugar definable.
But won't this mean that all the effort in the area of DSLs is totally wasted? And what about Lisp which seems to be able to integrate lots of different features and be also extendable? Why shouldn't a carefully designed language be able to accomplish the same, maybe with a bit more 'mainstream-friendly' syntax?
A 'fat' language would have lots of features, but not so many that it would be impossible to create. Of course it needs a totally different approach to design than the more common minimal languages.
Sufficiently good purist languages make great substrates for DSLs, no? I'm not arguing against purist DSLs, either. Lisp effectively goes the library route, it's not easy to tell a built-in special form from a macro and some externally-implemented code.
Lisp shows that it's possible to create a language with the potential to be really 'fat'. Simply imagine a well designed, huge macro-based standard-lib.
The problem with Lisp is simply that everybody can create their own extensions, and then there is no common ground anymore. That's not a problem for individuals or small teams, but for an 'industrial strength' language you need something more fixed.
I think you have some misconceptions about the nature of Lisp. Lisp allows us to put our syntactic extensions in an ordinary code library, so your statement simply amounts to, "An 'industrial strength' language requires a set of standard libraries."
I'm surprised that you mention Lisp. I would have thought that Lisp macros would give you exactly what you are asking for. (If you don't mind parentheses!) They allow us to write libraries containing both syntax and semantics.
Indeed, Paul Graham argues that macros give Lisp much of its power. He writes that the collection of functions and macros used to solve a problem form, in effect, a Domain Specific Language tailored to that particular problem domain. He has referred to this as "Language Oriented Programming".
When you call Lisp "fat", I think you are referring to something like Common Lisp, but this is an anomaly. Common Lisp could, in principle, easily shed a few pounds by shifting some code from the standard prelude into libraries. Check out Scheme to see what is possible in a lightweight dialect.
if_then_else True consequent alternative = consequent
if_then_else False consequent alternative = alternative
It's true that Lisp can do all of this, in principle. In practice it's different, simply because there is no such thing as a big comprehensive library. Creating something like this in Lisp would be quite similar to creating a new language itself.
And the problem with easily extensible languages remains: if everybody can extend the language, some will, and as a result you have more complexity than in a fat language where all features are carefully designed as a whole from the beginning.
And it's really important to have 'common ground'. The more, the better. Each user-definable thing has to be understood if you have to maintain code someone else wrote (or even your own code after some time). Also, with 'common ground', code reuse is simpler, because parts will simply fit better if they are created on the same foundation. The more you 'fix' a language and its frequently used features, the more of that 'common ground' you will get.
At the IEUC, we are thinking in terms of a set of standard dialects to capture the preferred nomenclature of key discourse communities and then map those to a set of standard computational models along the lines presented by Van Roy and Haridi.
When a new dialect is created you could use a formal Description Logic to classify it and automatically handle most of the implementation if its semantics are subsumed by an existing implementation. (i.e. we can decompose a dialect into a choice of computational model which would in most cases - outside the realm of programming language design itself - already exist as part of the standard, a set of support functions which could be written in any previously defined dialect, and a parsing expression grammar to supply the novel syntax).
The problem with most languages that take the library approach is that there is no structure to the set of libraries that would help an End User leverage prior knowledge to reason about them, so each library is more akin to learning a new DSL from scratch rather than reasoning by analogy (eg. this DSL is just like the 2-d subset of the matrix library without statistical operation, but in this domain we call matrices 'zoidbergs' and add a new 'bender' function to automatically generate hyperlinks). It would be a higher level approach than just saying DSL 2 imports DSL 1 as part of its implementation or that it is just a big ball of macros expanding to DSL 1 source.
*ahem* Strict evaluation doesn't preclude manual laziness; see: closures.
That is true; however, his point still stands. In order to delay evaluation using closures, you have to create the closure at the point of calling the function (assuming you don't have something like macros, which could be considered a form of laziness, since they don't evaluate their arguments). You couldn't just do something like this (Scheme code; and again, if-then-else is a procedure, not a macro, because macros introduce a type of laziness outside of what closures offer), because Scheme will evaluate the arguments before passing them to the if-then-else expression.
(if-then-else condition
(do-consequent)
(do-alternative))
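For comparison, here is the same manual thunking written as a Python sketch (the names are mine): the branches are wrapped in zero-argument closures, and only the chosen one is forced.

```python
def if_then_else(condition, consequent, alternative):
    # consequent and alternative are thunks; force only the taken branch.
    return consequent() if condition else alternative()

# Passing do_consequent() would evaluate BOTH branches eagerly at the
# call site; passing callables instead delays evaluation until dispatch.
print(if_then_else(True, lambda: "yes", lambda: "no"))  # yes

# The untaken branch is never evaluated, so this does not divide by zero:
print(if_then_else(False, lambda: 1 // 0, lambda: "safe"))  # safe
```

This is exactly the closure-injection the comment describes: laziness is available in a strict language, but the caller must opt in at every call site.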
That is true, however his point still stands. In order to delay binding using closures, you have to create the closure at the point of calling the function
That would be the difference between lazy and eager, yes. In any event, I wasn't trying to refute his point, just saying that you can do it in an eager language, too.
because macros introduce a type of laziness outside of what closures offer
Macros offer a simplified syntax for manipulating code; nothing more, nothing less. Manipulation (generally) is outside the scope of closures, although such cases are generally rather rare and exotic.
Let’s step back for a moment and reread Peter's comment:
Haskell's lazy, pure-functional semantics also eliminate much of the need for new syntax; for instance, consider the following, which would be impossible in a language with eager evaluation:
to which you replied:
Strict evaluation doesn't preclude manual laziness; see: closures.
Macros offer a simplified syntax for manipulating code; nothing more, nothing less.
Oh, oops. Sorry for the mix up, I must've skipped past the "new syntax" part :/.
I know. Manipulating code implies lazy evaluation--you can't manipulate code that's already evaluated ;-).
No, that's just not true. If your strict language doesn't have any constructs with implicit laziness, then the use of "blocks" or anonymous procedures to inject laziness simply does not look distinctly different. To make it practical, you just need a sufficiently light syntax (even lighter than in Haskell) for anonymous procedures. AFAIK, in Smalltalk (I've never programmed in Smalltalk), a strict language, conditionals are implemented as procedures and use of conditionals and other control constructs looks exactly like procedure calls.
Vesa Karvonen: AFAIK, in Smalltalk (I've never programmed in Smalltalk), a strict language, conditionals are implemented as procedures and use of conditionals and other control constructs looks exactly like procedure calls.
Yes, Smalltalk conditional syntax is the same as syntax to send a message sent to a block, which block executes or not depending on the test. Typically a Smalltalk compiler treats such block messages as special, the same way Lisp uses special forms to express conditionals for code that might execute or not; the fact code looks like a Smalltalk message or a Lisp function call doesn't require this occur at runtime.
It's best to think of these messages and functions as being processed at compile time. Syntax doesn't usually express when syntax gets processed. The same thing also occurs older languages like C.
For example, in C the syntax for 'if' and 'while' looks like a function call taking one argument. But it's understood that a compiler emits inline code at compile time for these bits of syntax; you can't define a function named if() or while() and get them to replace a standard runtime dispatch to such functions, because they aren't really functions.
Similarly, there are some messages sent to blocks in Smalltalk you can't override in a subclass, because the compiler intends to statically inline the standard implementation. (Well, technically you could override such block methods and a compiler might emit a runtime message dispatch; but this would evaluate args before dispatch, defeating lazy evaluation of code as in the base version.)
Most Smalltalk semantics are expressed as a messages sent to some object. An implementation could -- but most don't -- express method definition as syntax using a message to "the compiler" saying "here's some code to compile; please associate it with this class and this method name". (I did this once in a Smalltalk compiler; works fine of course.)
Some languages use different syntax for compile time (declarations and definitions) and runtime (executable code). But in principle the same syntax can be used for each all the time, as long as it's understood which parts occur when. There are pros and cons for each. Special compile-time syntax increases the apprehension that results are magic effected by the language as a system. But you could just see compile-time syntax as messages to the compiler and runtime.
What is needed is pretty easy: an expressive type system, so that all the actual implementation can be done as a library, plus some additional syntactic sugar in the compiler to integrate it into the language. But I think there is a limit to the amount of syntax a user can handle.
It's more of a design choice. Is the amount of work required to add syntactic sugar worth it compared to the time needed by the user to learn it and the number of times the feature will be used? Does it make such a big difference compared to using the feature as a library?
I guess that a good language will start with a lot of things in the standard library and slowly and carefully add syntactic sugar for the most interesting features without breaking compatibility or turning the syntax into something like Perl.
"What is needed is pretty easy: [...]"
... but if it is really that easy, why hasn't it worked yet?
The time to create the 'sugar' (I won't call it that, because it's not all sugar) would be comparable to writing a solid and usable lib, but it's much more flexible. Just compare something like "new ArrayList(new int[]{1, 2, 3})" with something like "[1, 2, 3]".
[1,2,3] vs 1:2:3:[] is entirely sugar in Haskell, your point? Lists're pretty fundamental to lazy functional programming due to them encoding most loop-like things, yet they're still just a datatype like any other.
Philippa's right. Also:
Just compare something like "new ArrayList(new int[]{1, 2, 3})" with something like "[1, 2, 3]".
Yes, let's compare them. Are they the same? You can define them to be equal, of course, and then every list will be an array. But then you will not be able to use the latter syntax for, say, linked list implementations of lists. Indeed, in Haskell the latter syntax denotes a linked list, and you would need to apply some function to [1,2,3] in order to convert it to a value of an array-based implementation. So, conversely, the constructor new ArrayList is not superfluous in Java, but extra information which chooses one implementation among many.
What about the int part? Again, we can define things so that brackets always introduce lists with integer elements, but then we need another syntax for lists with character or floating-point elements and so on. Or we could try to infer the types of the elements... and following this path gets you to type inference.
Adding features incrementally to a language is inevitable, but I think it necessary to start with an already quite comprehensive feature set, to prevent running into extensibility problems.
Yes, you are quite right, and that is why modern languages come equipped with features like type inference and recursive datatypes. But merely adding syntax does not buy you much; in this case, for example, it only addresses a tiny set of issues, which are not the real issues at all.
The reasons for making languages "minimal" are the same reasons that we make abstract datatype and class implementations minimal. You are reducing a large set of rules which are difficult to keep in your head at one time to a small set of rules which are easy to reason about. Also, by reducing the size of the language, it is easier to replace one implementation of it with another, since you only need to implement a generating subset. If things become more inconvenient by reducing them this way, then there is a deficiency in the abstraction capabilities of the language; anything which can be defined and put in a library should behave just as if it were built into the language.
In Java, before generics, it was impossible to define safe collections which could have elements of any type. This was such a deficiency, and generics addressed it. In functional languages, parametric polymorphism is the analogous feature.
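For what it's worth, Python's annotations now offer the analogous feature through typing; this sketch (the class name is mine) is parametric in its element type, though unlike Java the constraint is enforced only by static checkers such as mypy:

```python
from typing import Generic, List, TypeVar

T = TypeVar("T")

class Stack(Generic[T]):
    """A collection parametric in its element type, akin to Java generics."""
    def __init__(self) -> None:
        self._items: List[T] = []

    def push(self, item: T) -> None:
        self._items.append(item)

    def pop(self) -> T:
        return self._items.pop()

s: Stack[int] = Stack()
s.push(1)
# s.push("oops")  # a checker like mypy flags this; at runtime it would still succeed
print(s.pop())  # 1
```

The runtime itself stays dynamic, which is exactly the Java-erasure-like split between the checked and the executed program.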
Or we could try to infer the types of the elements... and following this path gets you to type inference.
And also to a numeric tower, or some other form of subtyping/coercions for numeric types. Or maybe different syntax for literals of different numeric types.
In Java, before generics, it was impossible to define safe collections which could have elements of any type.
It is still impossible. Generics are more like a convenience for cooperating programmers than a protection from malicious or dumb parties.
But more to the point - I also find a minimalistic language definition useful when it comes to meta-programming of all sorts, including, but not limited to, code generators, validity checkers, compilers, refactoring tools, and (even meta-circular) interpreters.
[on edit: we had a similar discussion here (look for "humane")
I guess it all boils down to - the simpler the PL, the simpler it is to talk about it, including: formal papers, informal chats, other PLs, or the PL itself]
There is of course a difference between the two. But that's my point: I don't want a language with some fixed 'syntaxified lib', but a language where the often-used things can be used in a compact, readable and safe way.
I've not yet talked about concrete ideas for how to build such a language, but of course it's not a simple task, because you need to integrate all those features into a common framework instead of simply defining some standard functions or classes.
The [1, 2, 3] in the above example would only say 'an unspecified container with entries 1, 2 and 3'. If the language is statically typed, the interpretation would depend on the context: if you assign the [1, 2, 3] to an int array, it initialises the int array with those values. If you assign it to a string array, you get an error. If you assign it to an int list, it initialises the list. Etc.
But merely adding syntax does not buy you much; in this case, for example, it only addresses a tiny set of issues, which are not the real issues at all.
That's true in principle - but not in practice, where you have to use those things very frequently. And of course I don't want to stop with the above example; there are lots and lots of those things.
In recent years much effort has gone into designing languages which are as extensible as possible: reflection, macros, DSLs, etc. Why not stop wasting all that effort and simply create the language right off the bat, instead of inventing complex extensibility schemes which only work to a certain degree and complicate the language tremendously?
I assume you mean it could only say that in your hypothetical language, as the meaning in Haskell is clearly not that. Funny thing is, if you had it "pre-folded" by returning something like Sequence a => a, where Sequence has methods cons and nil (and then the instance for [] would have cons as (:) and nil as [], to give the obvious example), then you'd have exactly what you're requesting, only using libs and a tiny speck of syntactic sugar again. Your talk of "assigning it to an int-list" is, er, revealing though.
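A minimal sketch of this "pre-folded" idea, with the class and method names (Sequence, cons, nil) taken from the comment rather than from any real library (GHC's actual mechanism, OverloadedLists, works somewhat differently); the literal is elaborated into a value polymorphic over any Sequence, and the context picks the container:

```haskell
{-# LANGUAGE FlexibleInstances #-}
-- Hypothetical desugaring of [1,2,3] into a class-polymorphic value.
import qualified Data.Set as Set

class Sequence f where
  cons :: Int -> f -> f
  nil  :: f

-- The obvious instance: ordinary lists, with cons = (:) and nil = [].
instance Sequence [Int] where
  cons = (:)
  nil  = []

-- Another container gets the same "literal" for free.
instance Sequence (Set.Set Int) where
  cons = Set.insert
  nil  = Set.empty

-- What the sugar [1,2,3] would elaborate to:
oneTwoThree :: Sequence f => f
oneTwoThree = cons 1 (cons 2 (cons 3 nil))

main :: IO ()
main = do
  print (oneTwoThree :: [Int])            -- context chooses a list
  print (Set.toList (oneTwoThree :: Set.Set Int))  -- or a set
```

The point is that "assigning it to an int-list" needs no language support at all: the instance resolution does the choosing.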
It's quite possible that the above-mentioned is possible in some existing languages. In Lisp it certainly is, by using some macros.
Most things are already implemented in some existing language. But not all those things together. Some languages have built-in complex numbers, others have lists and hashes, others can directly create XML trees, define GUIs or create scanners and parsers. But where is the language that tries to do all those things (and lots more)?
And even better: why not build abstractions based on those things? Have some kind of 'collection language'. Haskell, for example, has one with its list comprehensions. But why stop there? Why not also a syntax for folds and zips? It could be even easier to use.
Most languages do those things. Haskell for example has a special syntax for easy use of monads. And why not, it really makes things easier. So why stop there?
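For what it's worth, folds and zips in Haskell already read about as compactly as dedicated syntax would, precisely because they are ordinary library functions; a small sketch next to the sugared comprehension:

```haskell
-- List comprehensions are syntactic sugar, but folds and zips need none:
-- they are plain library functions, and the call sites stay compact.
main :: IO ()
main = do
  print [x * x | x <- [1 .. 5], odd x]        -- sugar: a comprehension
  print (foldr (+) 0 [1 .. 5])                -- a fold, no sugar needed
  print (zipWith (*) [1, 2, 3] [10, 20, 30])  -- a zip, no sugar needed
```

This is one reason the thread keeps returning to "library vs. syntax": here the library form is already minimal.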
We do all these things in Haskell already, we just don't define new concrete syntax[1] to do so - we go straight to an abstract syntax tree. In the case of scanners and parsers, the resulting code is as easy to read as that of any specialised tool I've seen. Lists have sugar, hashes benefit from some additional generalised sugar to pattern-matching. XML's very much a matter of taste, personally I'm happy using the ordinary function syntax for it. GUIs tend to be much the same, assuming you're not going to just let a designer app do the job.
Sometimes the extra sugar's good, but it's best when it's been designed with generality in mind! How do we get this? With a minimalistic core and an insistence on sugar with a high power-to-weight ratio.
You can always ask "why stop there?". "Because we can't do this in a sufficiently useful manner" tends to be a good answer... what you propose is a language full of everybody's favourite kludges of the day, all conflicting so if it turns out not to do things the right way you can't do them the right way without considerable effort and pain. We've got plenty of languages like that already...
[1] This may be a slight lie, there're some creative abuses of operator overloading and type classes floating around
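On the scanners-and-parsers point above, a tiny illustration of how far plain library code gets you, using ReadP from Haskell's base library (the grammar here, comma-separated integers, is just an invented example):

```haskell
import Data.Char (isDigit)
import Text.ParserCombinators.ReadP

-- A parser for comma-separated integers such as "1,2,3", written with
-- ordinary combinators; no external tool or special syntax involved.
number :: ReadP Int
number = fmap read (munch1 isDigit)

numbers :: ReadP [Int]
numbers = sepBy1 number (char ',')

main :: IO ()
main = print [r | (r, "") <- readP_to_S numbers "1,2,3"]  -- keep full parses
```

The combinator names mirror the grammar closely enough that the claim "as easy to read as a specialised tool" is at least defensible.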
A language should allow tuples as first class entities in order to be able to offer syntactic sugar like the one you mention. For example, it would be nice if in Java I could do this:
Object list1 = new List({1, 2, 3});
The List class would have a constructor like this:
List(tuple t) {
for(int i = 0; i < t.length; ++i) {
add(t.member(i));
}
}
Nested tuples could be a replacement for S-expressions:
Object myData = Tree({"a", {"b", "c"}});
Such S-expression declarations could spawn a "new" paradigm for writing complex trees (for example UI trees or XML trees):
Form loginForm = new Form({
    "My Form",
    {new Row({new Label("username:"), new TextBox(usernameModel)})},
    {new Row({new Label("password:"), new TextBox(passwordModel)})},
    {new Row({new Button("ok", new ActionListener() {void exec() { loginForm.close(); }})})},
    {new Row({new Button("Cancel", new ActionListener() {void exec() { loginForm.cancel(); }})})},
});
As Frank said, in order to get this working nicely, you need at least some type inference. First-class syntactic tuples are less useful than first-class tuple types.
For instance, in your List constructor:
List<T>(tuple<T> t) {
for(int i = 0; i < t.length; ++i) {
add(t.member(i));
}
}
tuple<T> being defined as (T*)
(T*)
I think it is worth noting that the next revision of C++ should include a "sequence constructor" just for this. Sadly I don't think the design committee has released an explanation of how that is supposed to work.
For example, a function that accepts a pair of ints could be written like:
void someFunction(tuple t);
All languages have the concept of tuple, at least at function/procedure/method level: the parameters of a function are a tuple, usually constructed on the stack.
What is necessary to prevent running into extensibility problems (or at least minimizing them) is not a comprehensive "feature-set", but an expressive language. As soon as you start needing to define features of greater expressiveness than your language you massively lose extensibility*. If you have a highly expressive language then in all likelihood (and as evidenced in practice) you will be able to easily express this "comprehensive feature-set" within the language; nice syntax then merely requires a localized syntactic sugar (something like Camlp4); "common ground" simply requires defining a large standard library (as others have implied). The point you have remaining is the compiler being able to make assumptions about language constructs that it would not be able to make about libraries. In my opinion, a much more interesting route to achieve this is to be able to specify properties about your code and perhaps have some further support to guide compilation (particularly optimization). Even today this can be partially done in some "widely-used" (in the LtU sense, i.e. including Haskell but not Epigram) languages.
That seems to cover all of your objections, but may bring up a new one: the highly expressive features may be complicated to understand and unnecessary if we had language support for everything that programmers "need". If you take this view, you are essentially explicitly giving up some extensibility and stating that we can bake in support for the vast majority of the tasks programmers do.
* Haskell suffers from this, but it is mitigated by a (de facto) fairly uniform and structured method of extension.
[On Edit]
A problem along the lines of the last paragraph (not the "footnote") is that the expressive features may not play well together; this is part of why I added the "(or at least minimizing them)" above. This suggests that it would further be prudent to be able to control and compartmentalize the features. Haskell's approach strongly lends itself to this, which is probably a further reason why many people don't find Haskell problematic to use despite its relative lack of expressiveness.
Again, I find research into how best to go about this much more interesting than some particular solution, which in all likelihood would be based on this research anyway.
When you're asking an extremely general design question, perhaps the first thing to do is find an appropriate audience. 2 questions I would ask are
1. what problems was the design intended to remedy
2. what resources was the designER (perhaps a team) given
so IMHO "why doesn't so-and-so include database access functionality?" is not a very helpful question to ask of EVERYONE.
It would be better to find someone who was tasked with designing a language and a database access scheme, and chose not to include anything in the language to aid the database access.
Has anyone been in a position where they've been required to design a language that integrated
regexes
database
hash tables
HTML/XML
and so on ....
Maybe your beef is more with those who set specifications than those who actually do the designs? (I admit this may be the same people wearing different hats).
I think that most people who are interested in language design are reading this blog. And I've read here about so many clever schemes to create minimal languages that I asked myself: doesn't anybody consider the possibility that this is in fact the wrong way if you want to create a really usable language? How many people complain that languages like Haskell or Lisp are so seldom used, while ugly languages like Java or C++ dominate the market?
The most frequent design imperative seems to be: how can we create a language using the smallest number of abstractions? While that's an interesting task (and necessary if you want to explore the usefulness of new abstractions), computer languages are also tools to get some work done. But this 'getting the work done' part of a language seems to be considered a mere distraction from the arcane task of creating type systems and new abstractions.
But identifying all commonly required usage patterns and integrating them into a single language could be an interesting and challenging task too. So I'm curious why only a few people seem to recognise that.
Because what Edsger Dijkstra said in his Turing Award Lecture 33 years ago is still true.
"I absolutely fail to see how we can keep our growing programs firmly within our intellectual grip when by its sheer baroqueness the programming language -our basic tool, mind you!- already escapes our intellectual control."
You're raising the same question the thread originator did.
Say you need a language plus a library to solve a problem.
Suppose that instead you included the library in the language, and in the combining you could change both language and library such that the whole is simpler than the parts.
karsten_w assumes that this combining will yield a simplification (I'm not so sanguine - if you contaminate the base language with complications meant to simplify library additions, the base language eventually must become more complex - the term "parasitic complications" comes to mind).
Preventing those "parasitic complications" would of course be a central point in designing such a language. But those complications exist even in normal libraries - people are simply accustomed to ignoring them, because it's 'only the lib' and not the language itself.
Let's look for example at standard ADTs: Sets (implemented via trees, arrays, hashmaps), Lists (single/double-linked), Arrays, Maps (misc. implementations), Stacks. And then there are slight modifications of those: a map could have an LRU feature or use weak refs as keys, arrays could be sparse, etc. Of course it would be stupid to create a new syntax for each of those ADTs.
Instead, use some kind of attributing/hinting system and let the compiler figure out the rest. OK, this would be difficult with dynamically typed languages, but in a statically typed language you can simply extend the declaration. Maybe declare a map like this:
map: Int[lru:100, weak-key:String]
and a list like this:
list: Int[list, double-linked]
And if you assign to it, you could use
list = [1, 2, 3]
map = ["a" -> 1, "b" -> 4]
without a need to remember what the real implementation of 'list' and 'map' is.
And if someone uses:
list.count
the compiler recognises that and maybe creates a count-field for the list. And if you use
list[10]
the compiler can warn that this operation is inefficient for a linked list - or even use an array list automatically, unless you specify the wanted implementation explicitly.
Choosing the implementation depending on usage patterns is starting to become more common with dynamic compilation and VM-based execution. Why not extend it a bit? But by defining everything explicitly in a plain library, the compiler doesn't have those optimizing opportunities, because it can't simply 'understand' the code in a lib.
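For comparison, GHC's OverloadedLists extension already realises part of this proposal as a library hook: the bracket literal elaborates through a fromList method, and the annotated type selects the implementation (though the compiler does not go further and pick implementations from usage patterns):

```haskell
{-# LANGUAGE OverloadedLists #-}
-- The same bracket syntax, interpreted by the surrounding type.
import qualified Data.Map as Map
import qualified Data.Set as Set

xs :: [Int]
xs = [1, 2, 3]                -- the literal builds a plain list here

s :: Set.Set Int
s = [3, 1, 2, 3]              -- the same syntax now builds a set

m :: Map.Map String Int
m = [("a", 1), ("b", 4)]      -- and here a map, much as in the example above

main :: IO ()
main = do
  print xs
  print (Set.toList s)
  print (Map.lookup "b" m)
```

So the context-dependent literal is achievable without baking any particular container into the language.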
in almost all design schools (except those that deal with ornamentation, and even there one finds many exhortations to minimalism and admonishments against "overdoing it", i.e. including too much).
Note, for example, that Java advocates are forever apologizing for having to type too much ("the editor takes care of it for you").
the same with the authors/users/advocates of visual wizard tools.
(a very general answer to a question that I said was too general).
But simplicity has its limits. Why would everybody use a sedan if minimality were everything? Why isn't everybody driving a minimalistic sports car instead?
My question seemed to be general, but the topic is general too. The goal of most language designs is minimality. There are a few slight exceptions (for example Ada, or certain 'scripting' languages), but even those don't attempt to create a really comprehensive language, and seem to excuse themselves for every additional language feature.

I was recently refreshing myself on Haskell's syntax, and I thought to myself "man, this is complicated". So I find your comment very strange.
For example, patterns, guards, case, and if/then/else are essentially four different ways of writing a conditional! List comprehensions? The "dot dot" notation? 9 different pattern types? String, list, and tuple support? Let vs where? I think I'll stop here.
Even if some of these are defined in the standard prelude, as far as the user is concerned the syntax is built-in.
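For reference, the four conditional forms listed above really are interchangeable in simple cases; here is the same sign function written with each (the function itself is just a toy example):

```haskell
-- One function, four conditional notations: equations with patterns,
-- guards, a case expression, and nested if/then/else.
sign1, sign2, sign3, sign4 :: Int -> Int

sign1 0 = 0                          -- pattern matching on equations
sign1 n = if n > 0 then 1 else (-1)

sign2 n                              -- guards
  | n > 0     = 1
  | n < 0     = -1
  | otherwise = 0

sign3 n = case compare n 0 of        -- case expression
  GT -> 1
  LT -> -1
  EQ -> 0

sign4 n = if n > 0 then 1 else if n < 0 then -1 else 0  -- if/then/else

main :: IO ()
main = print (map (\f -> map f [-2, 0, 5]) [sign1, sign2, sign3, sign4])
```

Whether this redundancy is "complication" or useful stylistic freedom is exactly the disagreement in this subthread.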
Haskell is minimal in the sense that it can implement in libraries many things other languages have to put into the syntax.
And I think that the mentioned complexities result from this: to have enough power to implement very basic things in libraries instead of making them part of the language, you need a more complex language.
It's much easier, for example, to build complex numbers right into the language (together with floats, ints, fixed coercion and conversion rules, etc.) than to build a type system which enables you to describe all those coercions and conversions in the language itself.
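Haskell's standard library is a concrete counterpoint here: Data.Complex defines complex numbers as an ordinary library type, and a Num instance plugs them into the same operators and numeric-literal machinery as the built-in types, with no special language support:

```haskell
-- Complex numbers as a pure library type: Data.Complex ships with base,
-- and its Num instance makes (+), (*) and literals just work.
import Data.Complex

main :: IO ()
main = do
  let z = 3 :+ 4 :: Complex Double   -- 3 + 4i, built by a library constructor
  print (z * z)                      -- multiplication via the Num instance
  print (magnitude z)                -- |z| = 5.0
```

So at least for this example, the "more complex type system" does pay for itself: the hard case the comment names is already solved in library code.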
Almost everything Jeff lists there is sugar rather than core language complexity. The type system turns out to be desirable for many other reasons, not least of which is the likely failure of the language designers to do exactly what the programmer wants in all situations...
But those features are only needed to make things usable even if they aren't defined in the language itself. Sure, if someone invents some fancy new data type, you can't simply modify some library to use it as simply as a built-in data type. But I think that this simply doesn't happen often enough to really count as a reason not to do it.
But those features are only needed to make things usable even if they aren't defined in the language itself.
Unless your language is designed with a prescient knowledge of all possible domains, it's necessary for things to be usable even if they're not part of the language.
That's not a "more simple" language at all, it's quite the opposite!
Much of the sugar can be stripped from Haskell without making the language unusable, too - a significant amount of it is there to support "coding style" (in a sense analogous to "writing style"). Typing aside, you could not improve the list support by any means other than adding more syntactic sugar. This has much to do with why most of the list support in Haskell is of a distinctly sweet nature...
It's useless to compare a minimal language without its libraries to a 'fat' language with much more built-in functionality. The libraries always add to the complexity of a language, so you have to consider them too.
Even in a simple language like Pascal it's quite easy to add additional functionality by creating libraries. It's always possible; only the degree of 'similarity' to built-in features and the scope of possible language extensions vary from language to language.
By using lots of built-in abstractions, instead of requiring language features that enable one to implement as much as possible in the language itself, the language can be simpler. Sure, the language as a whole will be more complicated than the minimal language by itself - but if you add the (necessary) libraries to the language, even the 'fat' one will win, because it needs less sugar and fewer tricks to enable easy implementation of complex things via libraries.
Haskell has added some sugar to make lists easy to use - but lists are a really natural data structure for every functional language so that's not really difficult. But what about arrays? Sets? Multidimensional sparse arrays?
Please don't get me wrong, I don't want to put Haskell down, it's a nice language with many interesting features. But I think that the primary goal for programming languages should be usability. To enable the programmer to build complex applications as quick and easy as possible. Does Haskell really accomplish this task?
IME, it does pretty well at it and many ills can be fixed by rigging up bindings to external libraries via the FFI. There're things it's less good at, especially if you're forced to stick to Haskell 98, but this is what research is for - monads are most definitely a feature from where I stand.
I've already given you some examples for how further things could be sugared with surprisingly little effort. If you want to start throwing around new things you'd like language-level support for, I'd like to see your idea of a good DSL for manipulating them before looking at including them into a larger language.
Your comments about requiring "sugar and tricks" to implement libraries just don't hold in my experience. Where they exist, the tricks're all ones I'd be exceedingly disappointed not to see in your 'fat language' anyway.
My experience is that Haskell libraries tend to be simple, and considerably easier to work with than those I've used in fatter languages. The mere fact libraries can be factored out into separate modules makes a huge difference to the overall complexity.
karsten_w: But I think that the primary goal for programming languages should be usability. To enable the programmer to build complex applications as quick and easy as possible. Does Haskell really accomplish this task?
I can't answer that last question. But with respect to ease of use, I have a relatively concrete metric that I apply, and that's orthogonality within the language. Languages that I appreciate don't necessarily have shallow learning curves—O'Caml doesn't, Common Lisp doesn't, Oz doesn't, C++ most certainly doesn't—but the ones that I choose to use on my own time, vs. using them for work, have the property that, once I learn a principle about them, it stays learned with a minimum of "but" and "except for" in any description of it. (Given this criterion, it should be obvious that I appreciate C++ and Common Lisp for different reasons than orthogonality!)
So suggestions of a "kitchen sink" language—particularly ones that sound as if it really is just a matter of tossing features together—tend to scare me. They sound extremely naïve when they don't present a very specific, concrete plan for achieving orthogonality or make reference to highly featureful languages with good orthogonality (e.g. Oz, O'Caml, Haskell, Icon, Smalltalk...) or featureful languages with poor orthogonality (Perl, C++, Java, Python, Ruby...). Thinking that you're going to design a good language ex nihilo is a shocking conceit, a hubris of Greek mythic proportions.
What do you mean by orthogonality?
for somewhat-related research that may analogize well to your question
Some of the original RISC research documented
1. how frequently esoteric instructions were used
2. the direct cost (CPU transistors) to implement rarely-used instructions
3. some indirect cost (loss of resources, clock slowdowns) for the simpler, frequently-used instructions
the one paper I'm thinking of was about a FORTRAN compiler
I mean look at x86, it still has a CISC instruction set interface, that gets translated into a RISC-like instruction set under the hood. Isn't that exactly what the OP is advocating anyway?
The CISC interface is most likely only there for legacy compatibility. I'm sure the engineers would love to strip away the CISC->RISC translator.
Actually, no.
CISC's compressed instructions require less storage. And the CISC->RISC translator gradually becomes a smaller and smaller fraction of the chip because of its constancy.
but enlarging a cache(*) would seem completely trivial compared to the complexity of a CISC->RISC translator in the instruction pipeline. (Maybe a bigger cache is more expensive, I dunno.) But as I say I'm no hardware engineer, so I'll be happy to concede that point.
(*) System RAM sizes have increased so much in recent years that code size is largely a non-issue except for cache.
Note that we're talking about x86 which is not only CISC, it is ugly, crufty, etc..
Yes, CISC reduces instruction cache usage, but there are other ways, such as ARM Thumb-2, whose mixed 16/32-bit ISA manages to have nearly the same density as x86 with simpler decoding.
I'm sure the engineers would love to strip away the CISC->RISC translator.
Not just the engineers...
[Exercise to reader: (Attempt to) write a half-decent code generator (in the compiler writer sense) for a language targeting the x86 and targeting a RISC architecture.]
Library design is language design.
Language design is library design.
Bjarne Stroustrup, of C++ fame (or is it infamy).
If the language designers make a standard library that's difficult to use, will a built-in of the same functionality be any easier to use?
On the other hand, if the language designers have a built-in that's easy to use, couldn't that same ease be built into the standard library?
Eric
Not every language design is good. But the design of a lib is always constrained by the language itself. So if a lib is ugly, maybe that's only because of the language itself?
On the other hand, if the language designers have a built-in that's easy to use, couldn't that same ease be built into the standard library?
That's exactly the thing I'm questioning here. Maybe it is possible, but besides Lisp I don't know about a language which is really able to do it. And even if Lisp could do it, it doesn't.
So why not stop trying and simply design a language in a 'new' way: make it simple to use, with a rich syntax and lots and lots of sugar and built-ins. I want, for example, a simple 'print': no complexities like importing packages, creating stream objects or using monads only to write something somewhere.
For example in my language Kogut:
WriteLine "2+2 = " (2+2);
No importing needed, no stream objects, no monads. But it's just a function. It's not worth special syntax nor special evaluation rules.
WriteLine "2+2 = " (2+2);
This is true for many features. And for most for which it's not true, a macro suffices. Putting things in the core language doesn't make them significantly easier, or often any easier at all.
And if you want to write to a different kind of stream? Use a different encoding? Modify the buffering strategy?
Macros are a way, but only if you have a very powerful macro language. Lisp is very good here, because the macro language is Lisp itself. But macros have the problem that it's difficult to prevent people from defining their own languages, which leads to worse maintainability compared to a fixed language.
If you want to do more complex things, the combination of functions needed to achieve that becomes more complex, obviously. It's all doable when the language is sensibly designed, no matter whether these features rely on built-in syntax or pure library interfaces.
Usually it's easier when they rely on pure library interfaces because custom abstractions needed for non-standard combinations fit into conventions established by standard things.
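Haskell's System.IO makes the same point concretely: writing a line is a plain function, and the "more complex" variants asked about above (another stream, another encoding, another buffering strategy) are the same functions parameterised by a handle:

```haskell
-- Writing output is just library functions; streams, encodings and
-- buffering are handle parameters, not language features.
import System.IO

main :: IO ()
main = do
  putStrLn ("2+2 = " ++ show (2 + 2))   -- the simple case
  hPutStrLn stderr "to a different stream"
  hSetEncoding stdout utf8              -- choose an encoding
  hSetBuffering stdout LineBuffering    -- choose a buffering strategy
  putStrLn "still the same interface"
```

The non-standard combinations fit the conventions the standard ones establish, which is the claim being made here.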
See Guy Steele's Growing a language. Notable quote:
APL was designed by one man, a smart man—and I love APL—but it had a flaw that I think has all but killed it: there was no way for a user to grow the language in a smooth way. In most languages, a user can define at least some new words to stand for other pieces of code that can then be called, in such a way that the new words look like primitives. In this way the user can build a larger language to meet his needs. But in APL, new words defined by the user do not look like language primitives at all. The name of a piece of user code is a word, but things that are built in are named by strange glyphs. To add what look like new primitives, to keep the feel of the language, takes a real hacker and a ton of work. This has stopped users from helping to grow the language. APL has grown some, but the real work has been done by just the few programmers that have the source code. If a user adds to APL, and what he added seems good to the hackers in charge of the language, they might then make it be built in, but code to use it would not look the same; the user would have to change his code to a new form, because in APL a use of what is built in does not look at all like a call to user code.
You write:
But macros have the problem that it's difficult to prevent people from defining their own languages
Languages designed for preventing people from doing bad things are pathetic. The only sensible approach is enabling people doing good things. Cultivating good code, and forgetting about bad code.
Qrczak: Languages designed for preventing people from doing bad things are pathetic. The only sensible approach is enabling people doing good things. Cultivating good code, and forgetting about bad code.
Nicely said: foster the good, forget the bad. Rinse and repeat.
Now if only you could get folks to forget the bad. :-) Coders often have so much short term and long term memory, both, that a sensible strategy of forgetting goes deeply against the grain of endemic packrat tech hoarding. Also, bright folks are very prone to obsessive compulsive disorder when it comes to deconstruction of terms and logic. For example, I can predict someone desperately wants to attack the word 'forget' by bringing out whatever free associations come to mind, just for the joy of dissection.
I wish we used koan oriented design more often: take a phrase like "foster the good, forget the bad" and squeeze out useful lessons without obsessing over ways it doesn't apply.
[Edit: oh yes, and prevention is pathetic. Isolation is more practical. Instead of forbidding cooking because heat is dangerous, just provide a kitchen and a stove where the finger burning activities belong.]
Today most usage patterns are commonly known and while there are lots of them, the total number of those patterns seems quite manageable.
This seems false to me, on a lot of levels. Today, a lot of usage patterns for computer programs certainly are known. A quick scan of just the practical pattern books on my shelves puts the number of common usage patterns in the low thousands, far too many to build into a reasonable programming language. This is particularly true since so many of them conflict with one another, and have to be carefully firewalled apart to avoid catastrophe. Hell, the number of different encapsulation usage patterns is probably in three figures.
Moreover, a lot of these usage patterns, while common, are still pretty bad. This reflects the facts that computer programming is an immature craft, that our ability to train adequate numbers of even journeyman computer programmers is poor, and that resource restrictions of all sorts still require many unfortunate tradeoffs to be made.
For example, it was thought by some a decade ago that the GoF Singleton pattern was central enough to object-oriented software development that direct language support for it would be a good idea.
It would be easy: just an annotation on a class that it had precisely one instance, and some clever way of referencing it. This made some sense given the application domains of the time, and the restrictions on CPU speeds and process spaces that many applications had to cope with. Nowadays, the Singleton pattern looks a lot less sensible, and even pretty quaint. Pretty much every computational resource is remotable, fallible, and upgradeable, and even wristwatches have enough computational power and connectivity to take advantage of those facts. In that context, the idea of any computational object being coded as globally unique and process-eternal is pretty quaint, and is already being discussed for "anti-pattern" status.
On the other hand, the value of libraries is set to explode, but that's a topic for another time.
Many of the conflicting patterns are paradigm-dependent. You have to use different patterns for functional and OO programming. The other conflicting ones often tackle the same problem with different methods, where no single method is really superior.
If you support pattern A in the language and not the alternative pattern B, then A is used and thus no problems arise; in fact, the limitation to using A makes the design process easier and the code more maintainable. Sure, maybe B would be a little bit better for a certain problem, but who cares, as long as you can use A and have direct support for it in the language?
Most bad patterns are a result of the limitations of the implementing language. An example is the visitor pattern, which only exists because most OO languages have no pattern matching. But if a language has pattern matching, why would anybody use the visitor pattern instead? Just put support in the language instead of requiring the programmer to use bad patterns.
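A sketch of that claim: with algebraic data types and pattern matching, what a visitor expresses as double dispatch collapses into one plain function per operation (the Expr type here is just a toy example):

```haskell
-- An expression tree plus an operation over it. In visitor-style OO code
-- this would need a Visitor interface and an accept method per node type;
-- with pattern matching it is one function with one clause per constructor.
data Expr = Lit Int
          | Add Expr Expr
          | Mul Expr Expr

eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

main :: IO ()
main = print (eval (Mul (Lit 2) (Add (Lit 3) (Lit 4))))  -- 2 * (3 + 4)
```

Adding another operation (say, pretty-printing) is just another function, with no changes to the node types.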
True, lots of patterns are dependent on the underlying capabilities of the hardware. But that's also a reason to put as much support as possible in the language, instead of requiring it to be written in the code.
It's not that programmers want fancier singletons, it's that there are fewer and fewer valid uses for any sort of singletons, no matter how fancy. If you put singletons in your language in 1990, you'd still be stuck with them in 2006, when they are looking kind of stupid. The history of software architecture is filled with answers like that which were good at the time but have since been outdated.
Requirements change quickly, compilers change infrequently, and viable languages change very, very slowly. Worse, the changes to viable languages usually need to be backward-compatible (or else the language quickly ceases to be viable). I would probably have to specifically recommend against adoption of a non-minimalist language like you describe, as it would involve investing a large amount of effort (and my sponsors' money) in something destined to become quickly obsolete.
I doubt that requirements really change that fast anymore. Most basic things have already settled, and if you put them in a language it's highly probable that you will need them over the next years too.

And if things really change too much: use a different language. If your design uses lots of stuff nobody uses anymore, it's probable that you have to rewrite it from scratch or maintain it in its flawed state.

And just as libs and frameworks change, languages can change too. If you want to migrate some app from Struts to Spring you have to rework lots of stuff as well. Why is this more difficult than migrating a program from an earlier release of some language to a later one?

But by 'putting patterns into the language' the problem isn't that severe, because the compiler can 'refactor' your program to a certain degree itself, for example by automatically creating code to distribute those pesky singletons over a network in a later compiler revision which is more network-aware. If you've written everything explicitly in your code, you have to rework it yourself - a much more difficult task.
Quoth the parent:
Most basic things have already settled
Ha! Why, then, is it that we have dozens of object serialization libraries, dozens of database interface libraries, dozens of database types even, hundreds of different implementations of various ADTs, etc...?
Granted, lots of people/places suffer from the NIH syndrome and we could probably stand to lose many of these implementations, but remember that many ADT (just to pick one category) implementations which support the same interface may have conflicting characteristics. Example: implementation A may support O(1) lookup but require O(n^3) space, while implementation B may only support O(log n) lookup but require only O(n) space. Which you end up choosing depends heavily on where your code is actually going to be running.

So we've hopefully established that it is impossible to trim away many of these different implementations of ADTs, DBs, etc. Otherwise your language will not make anyone happy. To see why, consider this simple example: Alice may need ADT implementation X while also needing DB implementation A, while Bob may need ADT implementation Y while also needing DB implementation B. Unfortunately you chose ADT implementation X and DB implementation B, satisfying nobody's requirements. This gets exponentially worse as the number of requirements increases. How are you going to a) write all that code, and b) test all that code? With libraries, the code writing and testing is distributed to library implementors who do not need to know anything about the language implementation's internals to be able to write their libraries.
In short: You're simply wrong and only need to think about what you're suggesting a little bit more to realize it.
(Why do I get the distinct feeling I've been trolled?)
because discussions are off-topic here.
By inventing a good abstraction instead of providing lots of similar but slightly different implementations. This abstraction can then be translated by the compiler into a concrete implementation, letting it choose one via some heuristic or via hinting.

I may be wrong, but I'm still waiting for someone to show why. Maybe the problem is talking too much on a meta-level instead of presenting a concrete implementation - which is of course much more difficult. Before I spend months or even years of time to try it, I simply wanted some feedback.
abstraction or interface. To satisfy all the requirements programmers have you still need to implement N types of tree ADTs (all with the same interface), N types of DBs, N types of sets, etc., since they have conflicting performance/space/cache-locality/... characteristics for different operations(*). There is no single type of tree which can satisfy everyone, so you have to implement all types -- this follows from the "combination problem" I described in the parent post with the example of Alice and Bob. Implementing everything isn't feasible. To see why this is so, simply imagine a "complete" Perl. It would have to include almost everything(**) on CPAN to satisfy even a fraction of developers, since their needs are so different.
You obviously don't seem to be listening to me, nor the many other people who've pointed out other flaws in your reasoning, so I'm done.
(*) No, your compiler is not going to be able to discover new data structures to fulfill some arbitrary requirement set by the programmer. Not in any foreseeable future. So you cannot wiggle out of saying "the compiler will create a data structure to order". A human programmer still has to provide the basic implementation even though there may be tunable parameters which a compiler could deduce.
(**) Modulo Not Invented Here syndrome.
Last answer because discussions are off-topic here.
To clarify our rationale: in the first 36 hour period after you posted your topic, the number of comments you posted made you the most prolific poster on LtU not just for that 36 hour period, but for the past week.
If you examine your own comments, I think you'll find that many of them are simply clarifying or adapting your own position, and in quite a few cases, speculating about very sophisticated solutions as a way to make the approach you're suggesting viable.
You've received plenty of feedback. What we're asking is that you demonstrate that you've put some effort into thinking about the topic and incorporating the feedback you've received, and provide something more solid to form a basis for further discussion. You may find that taking the time to clarify your own thoughts on the matter by writing about them will help answer some of the questions that you feel haven't been answered here.
Say you put associative arrays in the language. There are different ways to implement an associative array, with different memory-usage/CPU-usage tradeoffs: what if the built-in implementation doesn't fit your needs?

You then have to implement your own feature, but usually it will be 'less nice' to use, as you don't have the same syntactic sugar at your disposal.
That's why the trend is improving the level of 'syntax sugar' that the user can use and putting everything in libraries.
For standard cases: use hinting or a clever compiler with runtime profiling. I assume that there will be no really new algorithms in the area of ADTs for some time (and if there are: just put them into the next compiler revision and be able to use them automatically). Has anybody really used anything other than the standard hashes/lists/quicksorts etc. recently? Frequently?

I think that's part of the fallacy which leads to all those minimal languages: the fear that you might need something the language doesn't support. But does this really happen in practice? Aren't those few basic datatypes everybody uses every day sufficient for the huge majority of situations?

And if you really need something else at some point, fine, just implement it via the language. It will look more ugly and may be more difficult to use than a 'self-made' datatype in your favourite minimal language which is designed with the goal of extensibility in mind, but who cares, if it happens only a few times in a really big program?

Wouldn't it be much better to support those 99% standard use cases as well as possible, even if it makes the rest maybe 3 times as complicated?
To be honest, I wouldn't want most people from the software pattern camp to design language features (as opposed to library features) that deal with numeric problems. Just have a look at the mis-designed complex number C extension or C++ class (as an example of a language that claims to do it right:).

When it comes to numerics and algebra, I rely on numeric and algebra experts, not on language designers. And I'm tired of waiting for languages like D to become mainstream, let alone waiting for language-design-inclined experts on tensors, quaternions, or Clifford algebras to come up with a language that then eventually will become acceptable to my employer - it'll never happen. Thus, I greatly prefer the library approach. Unfortunately, while this might be an option with a well-designed macro system, it isn't with the mess of mainstream OO languages.
All that remains for me is to do the fun part, R&D and prototyping, in languages such as Mathematica, and then do it the hard way, probably with code generation, in C and C++.
Why would that never happen? I think it's because of the 'small is beautiful' thinking many language designers have, and I ask myself whether this thinking is flawed and ultimately the reason for the lack of a really useful programming language after years of research in this area.

And if it's possible to have worldwide collaboration via the internet in the area of library design, why not in the area of language design? If the language designer(s) present the first prototype and ask for ideas and critique, most problems should become obvious very fast and could be prevented before the first release of the language.

I also don't think that languages should be created for eternity. Why should 'revision 2' be 100% source compatible with 'revision 1' of a language? Why not simply write a converter which translates programs automatically into the new version (should be simple, if you create 'r2' of the language with that in mind)? That would free the language designers from lots of compatibility concerns and would serve the evolution of the language.
As a user of PLs (not a designer), I appreciate PLs that are extensible, especially ones that make the code I write appear to be first class within the language. I would rather not have to deal with a kitchen sink PL that has all sorts of complexity in the base syntax. I'd rather have standard libraries that integrate seamlessly with the syntax of the language - making the libraries appear to be first class. I consider all the code I write to be kind of an act of writing libraries - even if I'm the only one that will ever use them as such.
Now one can lament that not building these features into the base syntax results in a plethora of libraries, non-standard implementations, etc... But is that the fault of the PL? Or is it the fault of the users not coming together to agree on a standard set of APIs? Sure, there are fewer people involved in the design of the PL, so it's theoretically possible for the PL designers to mandate by fiat what features get integrated into the language. But when you push stuff out of the libraries and back into the base syntax, you have to get many more people involved in the process (thus killing the advantage of a small group). It's not by accident that languages with complex syntax like Ada are designed by committees. And even after you've thrown all this stuff at the PL, you still have to establish massive libraries to do the chores that the PL hasn't got a prayer of integrating.

As Stroustrup has said in the past, there is a certain amount of complexity that you cannot resolve simply. You can either build it into the PL or push the problem to the libraries, but that complexity still exists. Perhaps others agree that having that complexity in the PL is the best balance, but I tend to side the other way. Of course, the bigger issue is that PL design and library design are interrelated issues (as alluded to by others in this thread). Trying to make it a PL-only issue just rearranges the chairs on the deck, but doesn't really solve the fundamental issue that PL design and library design are hard. Even harder is getting everyone on the same page.
Why should your code appear 'first class'? It never is (ok, besides maybe in Lisp).
Now one can lament that not building these features into the base syntax results in a plethora of libraries, non-standard implementations, etc... But is that the fault of the PL?
It doesn't matter whose fault it is. It's the result that counts. Sure, having nice standard libs will help too, but as someone who does lots of work in Java, I know the limitations of the library approach (and it's still getting worse). Sometimes you have to do really hair-raising stuff to accomplish relatively easy things. Sure, maybe that's all Java's fault, but I've seen similar things in other languages.
Why should your code appear 'first class'? It never is (ok, besides maybe in Lisp).
Every time you write a new function, a new method, and give it a name, you have invented a new word. If you write a library for a new application area, then the methods in that library are a collection of related words, a new technical jargon for that application domain. Look at the Collection API: it adds new words (or new meanings for words) such as "add", "remove", "contains", "Set", "List", and "LinkedHashSet". With that API added to Java, you have a bigger vocabulary, a richer set of concepts to work with.
Sure, having nice standard-libs will help too, but as someone who does lots of work in Java, I know the limitations of the library approach.
Because the complexities introduced in the language tend to impact all code written for all solutions. Whereas the complexity in the libraries can be more localized.
That's only true for high-level libs, but not for the more basic stuff.

It's obvious that every language needs some kind of libraries; a language which didn't would have to solve every problem in a single line of code. So libraries are a must, and of course some means of extensibility.

I'm simply questioning the goal of creating languages which are able to do as much as possible via libraries, compromising expressivity and usability for it.

A perfect example of how this went wrong is Java generics. In practice you need them only for (somewhat) typesafe collection classes. But look at the complexity they added to the language. I suspect that if Java had full-scale collections directly integrated into the language, nobody would have thought about creating generics (not that I'm against generics in general, but in Java it went very wrong).

It's all about overhead: if you have to use a certain feature very often (as for most of the basic stuff), then its usage should be as easy as possible. If you use it only seldom (like most high-level stuff), the overhead is not so important. Because of this, a language with a rich built-in feature set can afford a more cumbersome lib usage, because that usage happens relatively seldom. And because of this, the language doesn't need complex mechanisms to integrate features via libraries.

Something like "list.get(1)" is much more annoying than "list[1]" because you need it very often. But if you create a web service via a lib function, it doesn't matter if you need an additional line of code. Of course you can use operator overloading instead, but that would create other kinds of problems. And operator overloading is really only useful for that 'basic stuff' which could simply be part of the base language.
This particular example is a good indication of where simplicity is the answer. The immediate problem is that you want to have an index accessor to a list. So the knee jerk reaction is to add special syntax for lists that translates list[1] into list.get(1). But this misses the opportunity for better abstraction. The index accessor is not just useful for lists, but rather for all collections that have some semblance of ordering (arrays, sets, trees, ...). So the answer is not to complicate the syntax just for the special case of lists. Rather, the language must simplify to a common pattern of syntactical sugar for all collections in all the current and future data structures. Not only that, I really want these collections to be treated as iterable in a foreach type of construct.
So the drive to simplicity is not necessarily aimed at cutting out syntactic shortcuts. Rather it is aimed at finding the essence of the abstraction. Because the abstraction can be made to cover more than just lists, the drive to simplicity means that the language should actually gain in expressitivity.
What I don't want is one set of sugar that applies to lists, another to arrays, another to trees... Find something that decouples the syntax from the type of data structure being manipulated, and you have a generic framework that can be applied to all manner of problems.
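The decoupled accessor the parent describes can be sketched as a library interface in Java; `Indexed` and the adapter classes are hypothetical names invented for this example:

```java
import java.util.List;
import java.util.TreeSet;

// Hypothetical single "index accessor" contract: one abstraction for every
// ordered structure, instead of per-type syntactic sugar.
interface Indexed<T> {
    T at(int i);
}

// A list adapts trivially.
class IndexedList<T> implements Indexed<T> {
    private final List<T> list;
    IndexedList(List<T> list) { this.list = list; }
    public T at(int i) { return list.get(i); }
}

// A sorted set has an ordering too, so it can also offer positional access
// (an O(n) walk here, which is fine for a sketch).
class IndexedSortedSet<T> implements Indexed<T> {
    private final TreeSet<T> set;
    IndexedSortedSet(TreeSet<T> set) { this.set = set; }
    public T at(int i) {
        int k = 0;
        for (T t : set) if (k++ == i) return t;
        throw new IndexOutOfBoundsException(String.valueOf(i));
    }
}
```

A language wanting `xs[1]` sugar would then need only one lowering rule - `xs[1]` becomes `xs.at(1)` - valid for any type implementing the interface, present or future.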
Anyhow, that's the theory behind the drive to find common patterns. Now, sometimes it'd be nice if I had special syntax for things like handling XML or Strings or Databases or... But as often as not, adding things to a language causes as much grief as it solves. I'm not sure why you cite Java generics, as that's the kind of thing that happens when you start stacking more stuff on top of an existing language. Well, perhaps we could redesign Java from scratch to make the generics blend more seamlessly with the historical baggage of Java. But then perhaps the real solution to Java is to return to square one and try to make it simpler.

One other thing I should mention, while I'm wasting bits, is that there are many different languages being designed for many different purposes. The suggestions you make would be totally against the philosophy of some languages like Standard ML, which has rigor as its goal (it's probably one of the most conservative languages). However, there are languages being developed which are much more open - the design of the Fortress language is probably more to your liking, though it is aimed at massive scientific computing. Not all languages are designed to solve every known problem in the universe - but there are some that pursue the goal of hegemony.
I also see the goal of creating simplicity by finding the 'essence' of abstractions. The problem is simply that that's much more difficult if you have to implement it in a library than build it into the language itself.

For example with collections: one possibility to simplify them is to have only a single type of collection, with the compiler creating the 'real' collection based on the usage pattern. If you always access the elements in order and do random inserts, it uses a linked list. If you do random access, it uses an array. If you search for elements, it uses a hash or a sorted list with binary search. To specify additional constraints or hint at your expected usage patterns, you can define additional attributes, like 'use it as a map with string keys', 'use it as a set' or 'use a fixed size and an LRU algorithm to free space if required'.
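No mainstream compiler infers collection representations from usage patterns, so the following is only a hand-written approximation of the idea: the usage pattern is declared as a hint, and a facade picks the backing implementation. All the names (`UsageHint`, `AdaptiveCollection`) are invented for this sketch:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.LinkedList;

// Hypothetical usage hints; in the idea above a profiling compiler would
// infer these, here the programmer declares them.
enum UsageHint { SEQUENTIAL_WITH_INSERTS, RANDOM_ACCESS, MEMBERSHIP_TESTS }

final class AdaptiveCollection<T> {
    final Collection<T> backing;   // exposed only so the sketch is inspectable

    AdaptiveCollection(UsageHint hint) {
        switch (hint) {
            case SEQUENTIAL_WITH_INSERTS: backing = new LinkedList<>(); break;
            case RANDOM_ACCESS:           backing = new ArrayList<>();  break;
            default:                      backing = new HashSet<>();    break;
        }
    }

    void add(T t) { backing.add(t); }
    boolean contains(T t) { return backing.contains(t); }
    int size() { return backing.size(); }
}
```

A real compiler-driven version would additionally migrate the representation as observed usage changes; a library facade like this can only honour the hint it was given at construction time.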
But collections are just one example. What about creating an 'observer' by simply using an 'observe' declaration, with everything else created by the compiler? Same for serialisation, database access etc. Or look at the field of GUIs: you want to edit a datatype? Simply add some attributes to the definition and the compiler creates a GUI dialog to edit this datatype if you want it to. And because all those features are combinable, you can use them with those fancy collections, database access etc.

All this is also possible in minimal languages, but often only with lots of metaprogramming, use of reflection, macros etc. Integrating these features right into the language would in most cases be much simpler and the result would be better. And if you design all of this 'from scratch', but with knowledge of the wanted features, those features would be better integrated than the work of different programmers who have created lots of libraries. And that better integration would lead to much more simplicity, even if the language itself is much 'fatter'.

Java generics are just a case of 'useless' complexity: they are primarily needed to build typesafe collections in Java via libraries, and they are relatively complex. If instead all those standard collections had been built into the language, that would have been easier to implement, and the result would be much easier to use too.
One advantage of macros that you've missed: it's possible to control which macros are in scope and thus which new 'language' features are available - useful if some of them conflict. If your monolithic language can do this, it becomes the weak cousin of a minimal language with macros that happens to include all the monolithic language's features in its libraries...
... that rather than defining a language and writing a standard library, and worrying about which content should go into each, one were to create a "minimal" language for describing languages. The language itself could be small, while the expressions one could create with it would be unlimited. You ask "Why doesn't a language have special syntax for GUIs, SQL, etc.?", but I say the real question is "Why isn't there a language that lets me write a special syntax for my task du jour?".
If you want special syntax for common tasks, I think what you really want is more powerful macros. In a world where macros can fundamentally extend the language implementation itself, the dividing line between language and library ceases to exist.
Karsten_w,
This discussion is following a pattern we've seen repeated on multiple occasions recently, which was described in the recent blog not forum post as "ungrounded discussion".
You've posted many comments clarifying or expanding on your perspective, and have received a lot of feedback. This would be a good time to pause and write up your ideas in a more considered fashion, taking into account the feedback you've received. When you're done, post it on your own web site or blog, and post a link to it in this thread. Discussion can then proceed with a firmer foundation.
Apart from that I never longed back to the days of 4GL programming.
New features can only be introduced by means of the language itself, so only the developers of the 4GL are able to extend the language; once you really need something within a certain project you are stuck (which in my book is a disaster).
Built-in features are not easily deprecated, which means that lousy design decisions tend to stick around. Refactoring a library is no small task and deprecating functions/methods is not something to take lightheartedly, but it is being done and sometimes appropriately. Within a big language new constructs are added, but badly designed old constructs stay forever for backwards compatibility reasons. That way the old constructs will never die out, for people will keep using them.
Namespace bloat is horrible, leading to long names, resulting in very verbose code.

General stuff like control flow, the module system etc. did not, in the case of this 4GL, get the attention needed. Most of the attention went into adding new features (XML, sockets, message queuing, COM, Corba etc.), while the basic language remained too spartan to be enjoyable.
The language becomes brittle. Once a feature contains a bug it can cause trouble all over the place. If a library contains a bug, don't include it and the pain is not more than missing the functionality of that one library.
Language style can easily become inconsistent. This might sound counter-intuitive, but in my experience libraries in eg. Java, C, Haskell have a more consistent style across all standard libraries than this particular 4GL had. New constructs were added in an ad hoc fashion and in order not to confuse old time programmers who didn't need newer features, hence didn't want to know about it, new features were often different in style from older constructs.
PHP is also a kitchen sink style language. Actually, it seems to do most of what the original poster suggested - it's developed by a community, has just about everything built in, and makes incompatible changes with every release.
PHP's size creates issues that weren't present when say perl was the major cgi language. Because most of its features seem to be added directly to the interpreter rather than as runtime loadable modules, the base interpreter has to be recompiled to add new features, which isn't desirable as a hosting provider, or as an independent developer.
Aside from deployment issues, the language's naming conventions are also all over the place. Some functions use '_', others use studlyCaps to separate words. This adds mental overhead for developers, to the point that most only code with the specs at hand.

PHP would seem to be a solid counterexample for this design methodology.
It's possible to do this in the database library given a sufficiently advanced type system. It could also be done, in a more ugly manner, using a macro system. Think dependent types.
Since the discussion really went a bit overboard, I will try to clarify my ideas a bit.
I'm not really a theorist; I consider programming languages as tools to get some kind of job done. The job (writing big software systems) is very complicated because of the complexity of the system you have to build. Because of this I view programming languages not only from the viewpoint of their theoretical beauty, but also from the viewpoint of their practical usefulness. And I think that lots of very interesting approaches simply failed to deliver the latter. I've looked at lots of programming languages and tried to work with some of them - but I'm still doing my work primarily in Java. And I asked myself often enough: why, oh why? I think the primary reason is practicability. While Java is relatively ugly, it simply is a language which lets you get your work done. Not in a particularly elegant, clever or even beautiful way, but with all those libraries and a good IDE you can be really productive.
But do I really like Java? No. I would always prefer a more elegant and clever language. But why are those elegant languages so difficult to use to do real-world work? Is it only my stupidity? Or is it (at least to some part) a problem of the language?
I've thought a lot about this topic, and one idea I've stumbled upon is that those really brilliant language designers who are able to create a language like Haskell approach the topic from a very different angle than a practitioner who wants a language to get work done, not a research tool/object. If you try to build a work of art you simply don't consider usefulness. And the result is the programming language equivalent of a super-hot designer chair which looks awesome - but is really uncomfortable if you try to sit in it.
The trend with those elegant languages seems to go in the direction of creating meta languages: languages to create real languages in. Look at the Let's make a programming language!-thread: lots of proposals were simply for creating languages which can be used to create programming languages. And look at Lisp, which got lots and lots of admiration because of its macro system, which enables the creation of new language constructs directly in the language.
But is this approach really useful in practice? Language design isn't easy, even for people who put lots of effort into doing it. I've designed around 15 languages over the last 20 years and until now none was good enough to publish. Maybe it's again my stupidity, but I think it's also a very hard job if you really want to create something good and useful. And if that's so, why should the creation of a language-creation language be able to change this? If parser generators made language design easier, then since the invention of tools like yacc there should have been an abundance of good programming languages on the market. But where are they? Why do I still have to use something like Java? The answer is: tools don't create good languages, good language designers do.
But if this is true (and I'm sure it is), who are the people who would use all those fabulous meta languages to create real languages, languages which enable the common programmer to get his work done? The common programmers themselves? They simply can't do it, because it's a hard task and they have neither the time nor the knowledge to do it.
I think that's the fallacy lots of modern languages fell into: thinking that the creation of a tool to build good abstractions would let those 'common' people really create those abstractions. But they won't, because they're unable to. And because of this, a language like Java, which is easy to use and actually prevents the creation of new abstractions (because of its limitations), is so useful: people stay on the path the language provides and thus have some kind of guide that prevents them from doing 'stupid' things. This leads to lots of useful tools and libraries and in the end to a much better productivity than you would expect if you look at the limitations of the language.
How does this relate to the initially mentioned 'fat' languages I asked about? My idea: if a language designer doesn't create a meta language but instead a language which is directly useful without requiring the creation of a 'real' language first, it will automatically be such a 'fat' language, because it needs lots of things built in.
But is it really necessary? Why not build a minimal language and also a comprehensive library which can be used to get the work done? Maybe this would work too, but I see two problems:

First: it's more difficult. Creating a meta language plus a real language (in the libs) is simply more work than creating only the language itself. And because of using a meta language (with additional limitations) and not a standard compiler approach, the task will get even more difficult, and the resulting language will inherit some of the limitations of the meta language. And because of this additional complexity the resulting language would need more time and would be worse than necessary.
And second: who prevents all those bad language designers from creating their own libraries? Sure, nobody forces you to use those bad 'languages' (and yes, they are languages, even if they are implemented as libraries), but what if the 'bad designer' is part of the team who created the language and his work is part of the standard libs (he wasn't good enough to create the language itself, so he had to work on the libs)? Or if using his work is simply necessary because there is no alternative implementation? Or if you're forced by a customer to use his work because of compatibility or political concerns? If it's possible, there will be lots of cases where the common programmer has no choice but to suffer. I know that this kind of thinking really goes against the belief of many programmers that freedom is always a good thing. But freedom can be misused, and even if you are able to use it wisely, the guy who sells your boss a new framework or the colleagues who work on the same project as you do maybe aren't. And freedom of choice always requires that someone chooses. But every choice can be the wrong one, and even if it's a good one, it always takes some time and effort to find it - time one could also use to get work done.
The only way I can imagine right now to prevent those problems is to prevent people from building those bad languages by giving them a good, useful and complete language instead. Sure, extensibility is unavoidable, but the more features you have built right into the language, the more you can afford less elegant means of extensibility.
Of course the next problem is how to avoid the complexity explosion many posters seem to fear if you make a very complex language. But I'm still not sure why that complexity should really be unavoidable. Why should only the limitations of the 'meta language' (which is then used to create a real usable language 'by library') prevent the explosion of complexity? I really see no inevitability here.
Yes, it's true, there are lots of really bad examples of 'fat' or 'monolithic' languages. But I think that the good language designers are simply too interested in creating 'works of art', so that they lose sight of the real goal ('build a language which lets you get work done'). And the relatively bad ones who aren't able to create those really elegant languages are left to create the 'workhorse' languages - and that shows. And then there is the 'Cobol trauma': if someone talks about 'fat languages', everybody thinks of Cobol and goes into some kind of defensive mode.
I won't write here about my ideas on how to create a 'good, fat language'. No, I'm not thinking about a 'kitchen sink' language in the tradition of 4th GL languages or even Cobol. I'm thinking about using better abstractions, but not abstractions to abstract the creation of other abstractions, but abstractions to directly create applications.
...is that when someone finally gets around to creating the perfect PL, the language likely won't require programmers - so we'll all be out of jobs. :-)
I personally think you're operating under several logical fallacies like (a) everybody that uses PLs has the same job description, solving the same sorts of problems with the same considerations; (b) that theoreticians should be juxtaposed against practitioners - i.e. that simplicity is in conflict with pragmatics; (c) that all these syntactical features can seamlessly operate together in an ueber language and make the life of the average programmer easier; .......
From what I gather, LtU Administration thinks you should shell out these ideas offline, as they are mostly open-ended - of the ineffable variety - i.e. why can't I have a programming language that does everything I want it to do? Answers to such questions are mostly matters of opinion, so they are going to drive conversations in directions that are not necessarily productive. It's like starting a thread that discusses everything about PLs in general, but ends up being a discussion about nothing in particular.
Thanks Chris, you nailed it in every respect.
If there are regular members who disagree with this editorial decision, then please post about it in the blog not forum thread. A major purpose of that thread was precisely to ask the question: do regular, long-time members actually get anything from threads like this one? There didn't seem to be much disagreement with the proposed policy of "avoiding ungrounded discussions", which is the main objection in this case.
I'll say one other thing specifically related to this thread: I'm sure many, many LtU readers have, or have had, similar feelings to karsten_w about the various shortcomings of programming languages. I think we can take it as a given that we all understand and perhaps empathize with those kind of feelings. However, it's one thing to recognize that situation, and quite another to claim that you've found the problem, or see a way to a solution. If you're going to make such a claim, then it should meet a higher standard than usual, in terms of explaining the thesis clearly, and as concretely as possible, and demonstrating a good understanding of the existing PL landscape. Otherwise, as Chris said, all you have is "a discussion about nothing in particular", at best.
I maybe don't really understand than what the purpose of this blog is. Only list new papers from researchers? Or only talk about a certain very concrete features of a certain very concrete programming language or type system? If thats true, then this topic is really of topic here (like lots of other topics I've read here in the past and maybe I got a wrong impression out of them, because I always lurk for some time, before I post somewere).
Please put those rules somewhere on the FAQ-page then, I really hate being of-topic. In the moment I just still don't see the point or rule in the FAQ I've violated with this topic.
I have to add, that the reply Chris Rathman is quite unfounded and results from drawing conclusions which are simply not drawable out of my last posting without heavily over-interpreting it. I (a) nowhere promoted the idea of having only a single programming language, (b) that theoreticians should juxtaposed to practitioners (in fact I wrote the opposite) and (c) that I want to create a 'übersprache' and don't see the potential problems of combining features. I'm really a bit surprised that you consider a reply of this quality adequate to your high standards.
I have to add, that the reply Chris Rathman is quite unfounded and results from drawing conclusions which are simply not drawable out of my last posting without heavily over-interpreting it.
. I'm really a bit surprised that you consider a reply of this quality adequate to your high standards.
It might be nice if all aspects of civilized society could be codified as rules, but this isn't the case. While we repeatedly stated that we are working on a more complete set of policies, it should still be clear that if an administrator (Anton) and a one of the founding father of the site (Chris) comment that your posts are not appropriate, the civil thing to do is to think again, try to clarify or stop posting - not to complain about rules. I direct your attention to this thread which is linked from the FAQ which - after explaining the communal aspects of LtU - includes the following:.
I will not go over everything you posted to justify why I feel you are not sincerely trying to engage in civilized and professional discussion. I think others pointed some of the main problems already, but you didn't pause to reflect after reading their polite requests.
I hope that this blunt message will do the trick, since it seems you have a genuine interest in programming languages. If this is the case, please review the responses you received already, and take them to heart. In the hope that this will happen, I am not removing your posting privileges at this point.
I tried to clarify (the 'civil thing' you mentioned), but that is obviously not wanted or even disregarded rudely as 'stream of consciousness' here.
I suggest to remove (or rename) the point 'discussion' at the right menu of the site, because I have really no clue how to discuss something if it's not allow to make clarifications if a reply shows that there was some kind of misunderstanding or poses new questions.
In the future I will refrain from posting here because it's simply useless under a imperative which only allows for 'sniper comments' and I will use this blog solely as a news site with links to interesting papers.
This is my last post here, so please remove my account or even this whole thread, I see no reason to see myself being insulted without the right to answer.
...as I'm really mixing two things here - (1) my opinions on what you are saying; and (2) my opinions on whether this is appropriate for LtU. I really should've left off the first part before delving in the second part
For the first part, perhaps when I see sentences like "But do I really like Java? No.", I made an incorrect inference?
For the second part, Anton in his admin hat asked you to post this stuff in another venue, and use that as a basis for further LtU discussion. As much as getting mad at me might be justified, I still don't see why the original admin request was ignored?
Please put those rules somewhere on the FAQ-page then, I really hate being of-topic. In the moment I just still don't see the point or rule in the FAQ I've violated with this topic.
For the benefit of other members who may not understand the current policies and the action in this case, I'm going to respond to this.
One of the relevant points on the FAQ page is that "Unfounded generalizations about programming languages are usually frowned on". The entire thesis being developed by karsten_w here has depended primarily on a number of such generalizations, many of which appear to be derived from a fairly limited perspective of the field.
Ten days ago, and a week before the before the current topic was posted, Ehud posted the blog not forum topic, which included the following:
One policy which clearly seems needed is that we should try to avoid ungrounded discussions: discussions in which someone defends an idea that they haven't clearly described, and for which there are no existing references. We should not be playing "twenty questions" with people who haven't taken the trouble to express themselves clearly — it's unproductive, and tends to reduce the quality of discussion. LtU is best used to discuss ideas that were published and argued elsewhere. It is not usually a good place for design discussions and the like.
This thread has been a poster-child for the kind of discussion described by this quote. The general lack of links or references to relevant work is one symptom of that.
However, none of this on its own was sufficient to trigger an admin post, partly because we're still working on developing policies, and we don't want to unfairly single out individuals without a strong reason. In this case, what prompted an admin post was the volume of comments from karsten_w in such a short time period: a volume that put him in a category otherwise occupied almost exclusively by a very few previous problem posters.
In this case, we felt that the volume of comments was symptomatic of a quality issue. Unfortunately, attempting to be gentle about this didn't help. A request to post a more detailed and considered writeup elsewhere was ignored, and what was subsequently posted here was basically more of the same. This is in contrast to the previous two posters who recently received similar admin messages, and took them quite well.
That's how we reached the current point. We certainly share some responsibility here: we don't have explicit enough policies, and lacking explicit policies, we've had too little enforcement. This has resulted in a situation in which some people understandably think that LtU is supposed to be an "anything goes" discussion forum for PL-related matters. I apologize to karsten_w if he received that impression. We've already taken some steps to correct that, and we'll be saying more in the very near future.
As always, community feedback on this is welcome. For the moment, any feedback not specific to the current thread should be posted in the "blog not forum" thread linked above.
Anything that might change is better off in a library than in the language. Libraries are easy to change or replace, whereas languages can usually only be extended.
For example, the original Java GUI library was AWT. Now it's Swing. Could the change have occurred if it had been a language feature rather than a library?
I think even standard libraries are dangerous. The C++ standard library contains some parts which are ill thought-out (not just my opinion - Josuttis says so in his book) - but we're probably stuck with them now. The numeric classes in the Haskell Prelude make it harder to write maths code than it ought to be - but because it's automatically loaded, it's hard to work around. | http://lambda-the-ultimate.org/node/1565 | CC-MAIN-2018-05 | refinedweb | 17,315 | 59.94 |
Dino Esposito
Download the Code Sample
In the past two installments of this column I discussed how to build an ASP.NET solution for the evergreen problem of monitoring the progress of a remote task from the client side of a Web application. Despite the success and adoption of AJAX, a comprehensive and widely accepted solution for displaying a context-sensitive progress bar within a Web application without resorting to Silverlight or Flash is still lacking.
To be honest, there aren't many ways in which one can accomplish this. You might craft your own solution if you want, but the underlying pattern won’t be that different from what I presented—specifically targeting ASP.NET MVC—in the past columns. This month, I’m back to the same topic, but I’ll discuss how to build a progress bar using a new and still-in-progress library: SignalR.
SignalR is a Microsoft .NET Framework library and jQuery plug-in being developed by the ASP.NET team, possibly to be included in future releases of the ASP.NET platform. It presents some extremely promising functionality that's currently missing in the .NET Framework and that more and more developers are demanding.
SignalR is an integrated client-and-server library that enables browser-based clients and ASP.NET-based server components to have a bidirectional and multistep conversation. In other words, the conversation isn’t limited to a single, stateless request/response data exchange; rather, it continues until explicitly closed. The conversation takes place over a persistent connection and lets the client send multiple messages to the server and the server reply—and, much more interesting—send asynchronous messages to the client.
It should come as no surprise that the canonical demo I’ll use to illustrate the main capabilities of SignalR is a chat application. The client starts the conversation by sending a message to the server; the server—an ASP.NET endpoint—replies and keeps listening for new requests.
SignalR is specifically for a Web scenario and requires jQuery 1.6 (or newer) on the client and ASP.NET on the server. You can install SignalR via NuGet or by downloading the bits directly from the GitHub repository at github.com/SignalR/SignalR. Figure 1 shows the NuGet page with all SignalR packages. At minimum, you need to download SignalR, which has dependencies on SignalR.Server for the server-side part of the framework, and SignalR.Js for the Web-client part of the framework. The other packages you see in Figure 1 serve more specific purposes such as providing a .NET client, a Ninject dependency resolver and an alternate transportation mechanism based on HTML5 Web sockets.
Figure 1 SignalR Packages Available on the NuGet Platform
Before I attempt to build a progress bar solution, it would be useful to get familiar with the library by taking a look at the chat example distributed with the downloadable source code (archive.msdn.microsoft.com/mag201203CuttingEdge) and other information referenced in the (few) related posts currently available on the Web. Note, though, that SignalR is not a released project.
In the context of an ASP.NET MVC project, you start by referencing a bunch of script files, as shown here:
<script src="@Url.Content("~/Scripts/jquery-1.6.4.min.js")"
type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery.signalr.min.js")"
type="text/javascript"></script>
<script src="@Url.Content("~/signalr/hubs")"
type="text/javascript"></script>
Note that there’s nothing specific to ASP.NET MVC in SignalR, and the library can be used equally well with Web Forms applications.
An interesting point to emphasize is that the first two links reference a specific script file. The third link, instead, still references some JavaScript content, but that content is generated on the fly—and that depends on some other code you have within the host ASP.NET application. Also note that you need the JSON2 library if you intend to support versions of Internet Explorer prior to version 8.
Upon the page loading, you finalize the client setup and open the connection. Figure 2 shows the code you need. You might want to call this code from within the ready event of jQuery. The code binds script handlers to HTML elements—unobtrusive JavaScript—and prepares the SignalR conversation.
Figure 2 Setting Up the SignalR Library for a Chat Example
<script type="text/javascript">
$(document).ready(function () { // Add handler to Send button
$("#sendButton").click(function () {
chat.send($('#msg').val());
});
// Create a proxy for the server endpoint
var chat = $.connection.chat;
// Add a client-side callback to process any data
// received from the server
chat.addMessage = function (message) {
$('#messages').append('<li>' + message + '</li>');
};
// Start the conversation
$.connection.hub.start();
});
</script>
It's worth noting that the $.connection object is defined in the SignalR script file. The chat object, in contrast, is a dynamic object in the sense that its code is generated on the fly and is injected into the client page via the Hub script reference. The chat object is ultimately a JavaScript proxy for a server-side object. At this point it should be clear that the client code in Figure 2 means (and does) little without a strong server-side counterpart.
The ASP.NET project should include a reference to the SignalR assembly and its dependencies such as Microsoft.Web.Infrastructure. The server-side code includes a managed class that matches the JavaScript object you created. With reference to the code in Figure 2, you need to have a server-side object with the same interface as the client-side Chat object. This server class will inherit from the Hub class defined in the SignalR assembly. Here’s the class signature:
using System;
using SignalR.Hubs;
namespace SignalrProgressDemo.Progress
{
public class Chat : Hub
{
public void Send(String message)
{
Clients.addMessage(message);
}
}
}
Every public method in the class must match a JavaScript method on the client. Or, at least, any method invoked on the JavaScript object must have a matching method on the server class. So the Send method you see invoked in the script code of Figure 2 ends up placing a call into the Send method of the Chat object, as defined earlier. To send data back to the client, the server code uses the Clients property on the Hub class. The Clients member is of type dynamic, which enables it to reference dynamically determined objects. In particular, the Clients property contains a reference to a server-side object built after the interface of the client object: the Chat object. Because the Chat object in Figure 2 has an addMessage method, the same addMessage method is expected to be exposed also by the server-side Chat object.
Now let’s use SignalR to build a notification system that reports to the client any progress being made on the server during a possibly lengthy task. As a first step, let’s create a server-side class that encapsulates the task. The name you assign to this class, while arbitrarily chosen, will affect the client code you’ll write later. This simply means you have one more reason to choose the class name with care. Even more important, this class will inherit from a SignalR provided class named Hub. Here’s the signature:
public class BookingHub : Hub
{
...
}
The BookingHub class will have a few public methods—mostly void methods accepting any sequence of input parameters that makes sense for their intended purpose. Every public method on a Hub class represents a possible endpoint for the client to invoke. As an example, let’s add a method to book a flight:
public void BookFlight(String from, String to)
{
...
}
This method is expected to contain all the logic that executes the given action (that is, booking a flight). The code will also contain at various stages calls that in some way will report any progress back to the client. Let’s say the skeleton of method BookFlight looks like this:
public void BookFlight(String from, String to)
{
// Book first leg var ref1 = BookFlight(from, to); // Book return flight
var ref2 = BookFlight(to, from);
// Handle payment
PayFlight(ref1, ref2);
}
In conjunction with these main operations, you want to notify the user about the progress made. The Hub base class offers a property called Clients defined to be of type dynamic. In other words, you’ll invoke a method on this object to call back the client. The form and shape of this method, though, are determined by the client itself. Let’s move to the client, then.
As mentioned, in the client page you’ll have some script code that runs when the page loads. If you use jQuery, the $(document).ready event is a good place for running this code. First, you get a proxy to the server object:
var bookingHub = $.connection.bookingHub;
// Some config work
...
// Open the connection
$.connection.hub.start();
The name of the object you reference on the $.connection SignalR native component is just a dynamically created proxy that exposes the public interface of the BookingHub object to the client. The proxy is generated via the signalr/hubs link you have in the <script> section of the page. The naming convention used for names is camelCase, meaning that class BookingHub in C# becomes object bookingHub in JavaScript. On this object you find methods that match the public interface of the server object. Also, for methods, the naming convention uses the same names, but camelCased. You can add a click handler to an HTML button and start a server operation via AJAX, as shown here:
bookingHub.bookFlight("fco", "jfk");
You can now define client methods to handle any response. For example, you can define on the client proxy a displayMessage method that receives a message and displays it through an HTML span tag:
bookingHub.displayMessage = function (message) {
$("#msg").html(message);
};
Note that you’re responsible for the signature of the displayMessage method. You decide what’s being passed and what type you expect any input to be.
To close the circle, there’s just one final issue: who’s calling displayMessage and who’s ultimately responsible for passing data? It’s the server-side Hub code. You call displayMessage (and any other callback method you want to have in place) from within the Hub object via the Clients object. Figure 3 shows the final version of the Hub class.
Figure 3 The Final Version of the Hub Class
public void BookFlight(String from, String to)
{
// Book first leg
Clients.displayMessage( String.Format("Booking flight: {0}-{1} ...", from, to));
Thread.Sleep(2000);
// Book return
Clients.displayMessage( String.Format("Booking flight: {0}-{1} ...", to, from));
Thread.Sleep(3000);
// Book return
Clients.displayMessage( String.Format("Booking flight: {0}-{1} ...", to, from));
Thread.Sleep(2000);
// Some return value
Clients.displayMessage("Flight booked successfully.");
}
Note that in this case, the displayMessage name must match perfectly the case you used in the JavaScript code. If you mistype it to something such as DisplayMessage, you won’t get any exception—but no code will execute, either.
The Hub code is implemented as a Task object, so it gets its own thread to run and doesn’t affect the ASP.NET thread pool.
If a server task results in asynchronous work being scheduled, it will pick up a thread from the standard worker pool. The advantage is, SignalR request handlers are asynchronous, meaning that while they’re in the wait state, waiting for new messages, they aren’t using a thread at all. When a message is received and there’s work to be done, an ASP.NET worker thread is used.
In past columns, as well as in this one, I used the term progress bar frequently without ever showing a classic gauge bar as an example of the client UI. Having a gauge bar is only a nice visual effect and doesn’t require more complex code in the async infrastructure. However, Figure 4 shows the JavaScript code that builds a gauge bar on the fly given a percentage value. You can change the appearance of the HTML elements via proper CSS classes.
Figure 4 Creating an HTML-Based Gauge Bar
var GaugeBar = GaugeBar || {};
GaugeBar.generate = function (percentage) {
if (typeof (percentage) != "number")
return;
if (percentage > 100 || percentage < 0)
return;
var colspan = 1;
var markup = "<table class='gauge-bar-table'><tr>" +
"<td style='width:" + percentage.toString() +
"%' class='gauge-bar-completed'></td>";
if (percentage < 100) {
markup += "<td class='gauge-bar-tobedone' style='width:" +
(100 - percentage).toString() +
"%'></td>";
colspan++;
}
markup += "</tr><tr class='gauge-bar-statusline'><td colspan='" +
colspan.toString() +
"'>" +
percentage.toString() +
"% completed</td></tr></table>";
return markup;
}
You call this method from a button click handler:
bookingHub.updateGaugeBar = function (perc) {
$("#bar").html(GaugeBar.generate(perc));
};
The updateGaugeBar method is therefore invoked from another Hub method that just uses a different client callback to report progress. You can just replace displayMessage used previously with updateGaugeBar within a Hub method.
I presented SignalR primarily as an API that requires a Web front end. Although this is probably the most compelling scenario in which you might want to use it, SignalR is in no way limited to supporting just Web clients. You can download a client for .NET desktop applications, and another client will be released soon to support Windows Phone clients.
This column only scratched the surface of SignalR in the sense that it presented the simplest and most effective approach to program it. In a future column, I’ll investigate some of the magic it does under the hood and how packets are moved along the wire. Stay tuned.
Dino Esposito is the author of “Programming Microsoft ASP.NET MVC3” (Microsoft Press, 2011) and coauthor of “Microsoft .NET: Architecting Applications for the Enterprise” (Microsoft Press, 2008). Based in Italy, he’s a frequent speaker at industry events worldwide. You can follow him on Twitter at twitter.com/despos.
Thanks to the following technical expert for reviewing this article: Damian Edwards
I (similar to @toddysm) could not find the Chat app in the downloadable. I Googled it and I guess it is the one here: Dror
Never mind - found it :)
Where is the chat example that you say is distributed with the downloadable code? It doesn't seem to be packed with the C# code.
I have IE9 and win7, and it works smoothly in FireFox too, but strange ... It you have several opened browsers with the same url and if you start operation on one all other suddenly display feedback?????
Downloaded the code sample and tried to run but got the error message below: "SignalR: Connection must be started before data can be sent. Call .start() before .send()" Don't know what might be missing. Please help. I am using VS 2010 and running on IE8 and win 7.
More MSDN Magazine Blog entries >
Browse All MSDN Magazines
Receive the MSDN Flash e-mail newsletter every other week, with news and information personalized to your interests and areas of focus. | http://msdn.microsoft.com/en-us/magazine/hh852586.aspx | CC-MAIN-2014-23 | refinedweb | 2,492 | 65.52 |
I am trying to configure the B1 RFID hat on my Raspberry Pi, since it is a cleaner solution than using a USB RFID reader. However, I have run into some issues.
Firstly, the documentation supplied by Eccel is very minimal and offers no library or install guide specific to a Raspberry Pi. It only offers information on a previous model not suited to a Pi.
Using some data and know-how, I have tried to use the UART serial port to read back data from an RFID card; however, it returns nothing. Here is the code I am using:
import time, serial

port = serial.Serial("/dev/serial0", baudrate=9600, timeout=3.0)

while True:
    print(port.read(100))  # <-- I assume here the argument is bit count??
    time.sleep(0.5)
This code does not return any errors; it just doesn’t print any response when a card is presented.
If anyone has experience with this particular product and can offer an insight it will be much appreciated!
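In the meantime, the receive loop itself can be exercised without the hardware. The sketch below is a stand-in, not the B1's actual protocol: `FakePort` is a hypothetical stub that mimics pyserial's `read(size)` contract (the argument is a byte count, not bits, and an empty result means the timeout expired with no traffic), and the example UID frame is made up. On the Pi you would swap the stub for `serial.Serial("/dev/serial0", baudrate=9600, timeout=1.0)`.

```python
# Sketch of a polling loop for a byte-stream UART device.
# FakePort is a stub so the logic runs without hardware; the framing
# bytes below are illustrative only, not the Eccel B1 protocol.

class FakePort:
    """Mimics pyserial's read(size) contract: size is a BYTE count,
    and an empty/short result means the timeout expired."""
    def __init__(self, data):
        self.buf = bytearray(data)

    def read(self, size=1):
        chunk, self.buf = bytes(self.buf[:size]), self.buf[size:]
        return chunk  # b'' == nothing arrived before the timeout

def poll(port, chunk=100):
    """Accumulate bytes until a read times out, then return them."""
    frames = []
    while True:
        data = port.read(chunk)   # reads at most `chunk` BYTES (not bits)
        if not data:              # b'' => timeout with no traffic
            break
        frames.append(data)
    return b"".join(frames)

print(poll(FakePort(b"\x02\x0bCARD-UID\x03")))  # -> b'\x02\x0bCARD-UID\x03'
```

If `poll` against the real port always returns `b''`, the problem is upstream of Python: wiring, baud rate, or the Pi's UART configuration (e.g. `/dev/serial0` mapping and the serial console being disabled) rather than the read logic.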
- 1Why guess? Read the documentation. pythonhosted.org/pyserial/pyserial_api.html#serial.Serial.read read(size=1) Parameters: size – Number of bytes to read. Returns: Bytes read from the port. Return type: bytes Read size bytes from the serial port. If a timeout is set it may return less characters as requested. With no timeout it will block until the requested number of bytes is read. – joan 11 hours ago
- This is not an answer, it only confirms what I already thought. This does not contribute to solving my problem at all. This should have been a comment at the very most – EcSync 10 hours ago
- Actually it would seem to contradict what you already thought: “I assume here the argument is bit count??” — nope, docs say bytes. – goldilocks♦ 10 hours ago
- @goldilocks thanks for that – EcSync 10 hours ago
- I skimmed your RFID thing and wow! I have never seen such a deluxe RFID/NFC reader with a Swiss army knife with too many blades (see Fig 1.1 of user manual), no wonder it costs you £30+. Newbie starting suggestion: (1) Do not connect it to Rpi yet. Try to see if you can use Win10 RealTerm to say Hello (eg. “AT” command) talk to it using USB/UART 9k6bd8n1. If successful, then use Rpi python built in UART (TxD/RxD), or USB to TTL UART. – tlfong01 5 mins ago
- The module is very standard interface, so I guess it should be easy to mess around. Reference: (1) My comments of this UART projector Q&A – raspberrypi.stackexchange.com/questions/105405/…. (2) RaspberryPi-B1 – Product Details and Specifications- RS 2017apr24 docs.rs-online.com/9b30/0900766b815c553a.pdf (3) RaspberryPi-B1 User Manual V1.2 (UART COM port) – Eccel docs.rs-online.com/e911/0900766b815c553b.pdf – tlfong01 3 mins ago
- (4) Eccel Technology Ltd Hat RFID/NFC Raspberry Pi HAT for Raspberry Pi 3 Model B – RaspberryPi-B1 (000367) Eccel Tech £33 uk.rs-online.com/web/p/radio-frequency-development-kits/…-google--CSS_UK_EN_Semiconductors--Semiconductor_Development_Kits%7CRadio_Frequency_Development_Kits--PRODUCT_GROUP&matchtype=&aud-827186183926:pla-566139804874&gclid=EAIaIQobChMI1YC7npvE5gIVGLLtCh1uVAo3EAQYAiABEgJBMPD_BwE&gclsrc=aw.ds – tlfong01 3 mins ago
millions of temp files eating my hard drive
I'm running suds in a long running multi process service that makes tens of thousands of connections each day. Each connection is in a child process that exits on completion of the job, but the temp file is not deleted on exit of either the child or parent process. I'm generating about 10-20GB of temp files/day. The hard drive is filling up with temp files. I've tried everything I can think of to disable the cache.
Forgetting for the moment the issue of temp files not being removed, how do I prevent the creation of the temp files? setting fileCache=None doesn't work, setting cache=None doesn't work. Nothing seems to stop these tribbles from devouring my disk space
For reference I am using python 3.4 and suds-jurko (0.6) installed using pip3.
Had the same issue and the most expedient fix for me was to manually patch the file:
site-packages/suds/reader.py
add to imports:
import hashlib
change this line:
h = abs(hash(name))
to:
h = hashlib.md5(name.encode()).hexdigest()
For me, this fixed the issue. Now there are only 6 cache files and they haven't grown or added in months.
Thanks, Michael
@Michael Marziani Thank you for this fix. It worked. Sorry for replying late.
Looks like this is fixed on master. | https://bitbucket.org/jurko/suds/issues/126/millions-of-temp-files-eating-my-hard | CC-MAIN-2019-04 | refinedweb | 235 | 75.71 |
News Feed Item
RIMOUSKI, QUEBEC -- (Marketwired) -- 08/12/14 -- Puma Exploration (TSX VENTURE: PUM) has increased its strategic land holdings in northern New Brunswick with the acquisition of 5,552 hectares of prospective porphyry system targets. and are shown on a map attached and available at
Marcel Robillard, President of Puma Exploration, said: ".
The objective of the acquisition is to secure land with prospective porphyry targets in northern New Brunswick and apply the discovery model defined at Nicholas-Denys with a minimum of investment required for the next two years as we continue to conduct intensive exploration on the Nicholas-Denys porphyry system. The works planned will include mainly data compilation and field reconnaissance program to define the most promising projects within the Puma's project portfolio. Detailed project description and program will be provided following the data compilation and preliminary field reconnaissance.
The first priority for Puma is to continue exploring the Nicholas-Denys Porphyry by drilling the remaining IP targets currently being refined through the final interpretation of the new downhole IP survey conducted in drillhole FND14-02 and FND14-03.
Benjamin Project
The Benjamin project comprises two contiguous claim blocks totaling 2,269 hectares located 35km northwest of the Nicholas-Denys Project. The first block (Connors block; 21 claims for 454 ha.) was optioned from a local prospector. The second block (84 claims for 1,815 ha.) was staked subsequently by Puma to secure prospective land located north of the porphyry deposit.
The Benjamin River copper-molybdenum-silver deposit contains a historical resource of 21M tons @ 0.21% Cu as defined by Soquem in 1968 from 22 drilholes. The historical resource estimate was prepared prior to the implementation of NI43-101 and includes terminology not compliant with current reporting standards. Puma has not made any attempt to re-classify the estimates according to current standards of disclosure and is not treating the estimate as current mineral resources or mineral reserves. Historical estimates should not be relied upon.
The molybdenum and silver content were never considered in the resources estimate and the deposit is open in all direction and at depth. Of the 22 holes drilled by Soquem in 1968 some returned significant copper-molybdenum-silver mineralization such as hole 7008 with 0.37% Cu, 0.011% Mo and 1.7g/t Ag over 70.1 meters.
Pabineau Project
The Pabineau project consists of two contiguous claim blocks totaling 1,110 hectares located 25km southeast of the Nicholas-Denys Project. These two claim blocks where optioned from the same prospector as the Benjamin Project (South Pabineau Lake; 46 claims for 1,001 ha. and Pabineau; 5 claims for 109 ha.). Little exploration work has been conducted on the property but shows promising molybdenum results from collected grab samples and short pack sack drill holes such as 0.65% MoS2 over 2.3 meters in hole ddh#95. No copper and silver assays were completed on the samples.
Green River Project
The Green River project consists of one claim block totaling 2,173 hectares (100 claims) located 170km west of the Nicholas-Denys Project in Northern New Brunswick. The property was staked by Puma to covers a potential porphyry system with no work done in the area.
Terms of the option agreement
The option agreement covers the combined Pabineau and Benjamin Projects located in Northern New-Brunswick. The properties included in the agreement comprise 72 claims. All the claims are in good standing until 2015. No work commitment is required over the option period but Puma should keep the claims in good standing. To acquire 100% of the properties, Puma has to:
Upon signing: $10,000 of field work and issue 50,000 shares.
1st Anniversary: $10,000 of field work and issue 50,000 shares.
2nd Anniversary: $10,000 of field work and issue 75,000 shares.
3rd Anniversary: $10,000 of field work and issue 100,000 shares.
4th Anniversary: $10,000 of field work and issue 100,000 shares.
As of the 5th Anniversary to the 10th anniversary, an annual pre royalty payment of $10,000 will be made to the vendor. A net smelter return (NSR) of 2% for all minerals remains with the claims owner and Puma keeps the right to purchase 1% of the NSRs for $1,000,000 and the second 1% of the NSRs for $2,000,000.-Au-Pb-Zn fault corridor. Marcel Robillard a Qualified Person as defined in NI 43-101.
More information
Toll free: (800) 321-8564
Published August 12,. | http://news.sys-con.com/node/3144918 | CC-MAIN-2017-43 | refinedweb | 753 | 63.8 |
I see here a chance to acquire a startling lack of interest, along with, likely, a handful of idle downvotes ... just because the thread's already so long and "done". So I'll go for it (as is my style) with one final observation:
I disagree with the characterization of Module-Authors as an automatic
resource guaranteed to give quality or authoritative guidance. My experience with it was
that it simply wasn't. Nobody with any real wisdom was paying any attention, maybe,
that week. Or who knows. For whatever reason, when I asked about a namespace for a CPAN
module I was preparing, I got mostly the flakiest kind of bandwidth and time -wasting
idiocy from the kind of compulsive person who doesn't yet know what they are talking about
(and may never) but cannot restrain themselves from making an appearence anyway. A real
flame-bait-resisting waste of time, all in all. I doubt that I'll ever seek guidance there
again.
In my opinion, which is based on my direct experience with it (not hearsay, not theory, not
wishful thinking, not an assumption), you'll get good advice from the Module-Authors List if
you are both lucky and, likely, persistent. If you are neither, you won't.
There's a larger issue here. I would be far more hesitant than I see many Monks being here,
about advocating in answer to queries like this one, the use of external (to Perlmonks) resources
whether they are Web-based community sites, Mailing lists or newsgroups, or whatever newer
kind of forum appears next. I humbly suggest that: If you aren't actively
involved in that site or forum on a continuing basis and not taking responsibility for
some degree for how well it is fulfilling its charter, then: think twice before recommending it
to a fellow Monk! Just the fact that you know that something exists that the asker
seems not to, is not sufficient grounds to endorse it to them. And furthermore,
jusy because something emanates from a particular domain, e.g. perl.org, this
also (sad tho it may be to say) does not make it valid to assume that it is a really
helpful resource at the current time.
Of course there are people here at Perlmonks, like everywhere, that just like to see their
own name on a node, and whether others take them seriously, or whether they have a reputation
for being astute or sensible, doesn't seem to concern them. Thus the old expression
"consider the source."
Soren A / somian / perlspinr / Intrepid
In reply to Re: "Vanity Tagging" on CPAN?
by Intrepid
in thread "Vanity Tagging" on CPAN?
by BUU
No recent polls found | https://www.perlmonks.org/?parent=392568;node_id=3333 | CC-MAIN-2020-50 | refinedweb | 453 | 59.84 |
The aim of this example is to illustrate defining and implementing modules which are constructed as a hybrid of other modules and behavioral models. We take a simple example wherein a 2-bit fulladder module is built using two 1-bit fulladder components. One of these 1-bit fulladder is the off-the-shelf fulladder module provided in libLCS. The other fulladder module is implemented as a behavioral model. The block diagram for such a scheme is as follows.
The following code is the complete program which implements the above 2-bit fulladder module.
#include <lcs/lcs.h> using namespace lcs; // Since the module is going to have few behavioral constructs, we should derive // it from the base class lcs::Module. class TwoBitFullAdder : public Module { public: // The constructor takes arguments which establish the external connections. TwoBitFullAdder(const Bus<3> &sum, const InputBus<2> &a, const InputBus<2> &a); ~TwoBitFullAdder(); // This is a virtual function which has to be overidden. It is declared in the // base class Module. This is the function which should incorporate the behavioral // model for a one bit fulladder. virtual void onStateChange(int portId); private: InOutBus<1> c1; // The carry input for the fulladder with a behavioral model. InputBus<2> a_, b_; // The inputs to our 2-bit fulladder. Bus<3> s; // The sum output of our full adder. FullAdder *fa; // The fulladder module which will be created in the constructor. }; TwoBitFullAdder::TwoBitFullAdder(const Bus<3> &sum, const InputBus<2> &a, const InputBus<2> &b) : Module(), c1(0), a_(a), b_(b), s(sum) { Bus<> cin(0); // The carry input to the fulladder module. fa = new FullAdder(s[0], c1, a[0], b[0], cin); // Initialising the fulladder module. // The 2-bit fulladder module has to respond to state changes in the relevant // input bits and the carry output of the fulladder module. Hence the module has to request // notification of line events from these signals. Note here that the carry input c1 for the // behavioral model is declared to be of type lcs::InOutBus. If it were to be declared // as a normal lcs::Bus object, then it will not be able to notify the behavioral // model about the events on its lines. 
c1.notify(this, LINE_STATE_CHANGE, 1); a_.notify(this, LINE_STATE_CHANGE, 2, 1); b_.notify(this, LINE_STATE_CHANGE, 3, 1); } TwoBitFullAdder::~TwoBitFullAdder() { delete fa; c1.stopNotification(this, LINE_STATE_CHANGE, 1); a_.stopNotification(this, LINE_STATE_CHANGE, 2, 1); b_.stopNotification(this, LINE_STATE_CHANGE, 3, 1); } void TwoBitFullAdder::onStateChange(int portId) { // When a state change occurs on of the lines c1[0] or a_[1] or b[1], // The behavioral model should compute the bit values for the second // and third sum output bits and write them onto the output bus. // This is done as follows. s[1] = (~a_[1] & ~b_[1] & c1[0]) | (~a_[1] & b_[1] & ~c1[0]) | (a_[1] & ~b_[1] & ~c1[0]) | (a_[1] & b_[1] & c1[0]);; s[2] = (~a_[1] & b_[1] & c1[0]) | (a_[1] & ~b_[1] & c1[0]) | (a_[1] & b_[1] & ~c1[0]) | (a_[1] & b_[1] & c1[0]); } int main(void) { Bus<2> a, b; // Inputs Bus<3> sum; // Sum output // Initialising a 2-bit fulladder module. TwoBitFullAdder adder(sum, a, b); // A tester to generate inputs for the 2-bit fulladder module. Tester<4> tester((a, b)); ChangeMonitor<2> amon(a, "A"); ChangeMonitor<2> bmon(b, "B"); ChangeMonitor<3> smon(sum, "SUM"); Simulation::setStopTime(2000); Simulation::start(); return 0; }
The following is the output when the following is compiled and run.
At time: 0, A: 00 At time: 0, B: 00 At time: 0, SUM: 000 At time: 200, A: 01 At time: 200, SUM: 001 At time: 300, A: 10 At time: 300, SUM: 010 At time: 400, A: 11 At time: 400, SUM: 011 At time: 500, A: 00 At time: 500, B: 01 At time: 500, SUM: 001 At time: 600, A: 01 At time: 600, SUM: 010 At time: 700, A: 10 At time: 700, SUM: 011 At time: 800, A: 11 At time: 800, SUM: 100 At time: 900, A: 00 At time: 900, B: 10 At time: 900, SUM: 010 At time: 1000, A: 01 At time: 1000, SUM: 011 At time: 1100, A: 10 At time: 1100, SUM: 100 At time: 1200, A: 11 At time: 1200, SUM: 101 At time: 1300, A: 00 At time: 1300, B: 11 At time: 1300, SUM: 011 At time: 1400, A: 01 At time: 1400, SUM: 100 At time: 1500, A: 10 At time: 1500, SUM: 101 At time: 1600, A: 11 At time: 1600, SUM: 110 | http://liblcs.sourceforge.net/two_bit_fa_mixed_module.html | CC-MAIN-2017-22 | refinedweb | 761 | 58.82 |
The last decade has been nothing short of a whirlwind in the mobile space. Phones have been transformed from simple conveniences to indispensable extensions of everyday life. With high-resolution displays, GPS, cameras capable of both still photography and recording high-definition videos, full-featured web browsers, rich native applications, touchscreens, and a constant connection to the Internet, the phone has evolved into a powerful mobile computer. The evolution has gone so far that the actual telephone functionality has essentially become secondary to the rest of the features. Today’s mobile phone is now more than the sum of its parts. It is your connection to the world.
As with any fast-moving market, there are many players with skin in the mobile game at any given time. This book, however, is going to be focused on three of the bigger names right now:
It can be argued that Apple is responsible for being the catalyst in bringing about the modern smartphone generation. Back in early 2007, Apple announced the iPhone, which marked the company’s first foray into building their own mobile phone. The product included many features, such as a touchscreen and a focus on a polished user experience, that would quickly become standard in smartphones across the board. In many ways, the iPhone remains the gold standard for smartphones today, even as the market continues to evolve and innovate. Apple’s mobile operating system, iOS, is also found on its tablet offering, the iPad, as well as the iPod, Apple’s portable music player. Since the company produces both the devices and operating system, it maintains a high level of control over its ecosystem.
Since Google purchased it in 2005 and began releasing versions in 2008, Android has taken the smartphone market by storm. Just a few years and numerous versions after its initial release, as of February 2012, Android accounts for just over 50% of the US smartphone market, a number that continues to climb every month (). Most of Android is open source and licensed in a way that gives hardware vendors a lot of flexibility, so the ecosystem of Android phones is very diverse. Because of that flexibility, many vendors make significant changes to the versions of Android that ship on their devices, so very few devices are actually running a stock version of the operating system. With the release of Honeycomb, Android has also started to stake its claim in the tablet market as well. Additionally, Android can be found in Google’s television platform, Google TV, as well devices such as Barnes & Noble’s Nook Color and Amazon’s Kindle Fire, which bring the richness of tablets to the world of e-readers. Ice Cream Sandwich, the version of Android following Honeycomb, aims to help bridge the growing divide between Android smartphones and tablets.
In 2010, Microsoft released Windows Phone 7, which marked a long-overdue shift away from its legacy Windows Mobile platform that had long since stagnated. The user interface in Windows Phone 7, dubbed Metro, is decidedly unlike the approach taken by both iOS and Android. A strong emphasis is placed on simplicity, typography, and expansive interfaces that aim to provide a sense of depth and a natural user experience. Device vendors are given a small amount of freedom in designing their devices, but Microsoft maintains a strict set of requirements they have to meet in order to ensure stability and quality, as well as avoid some of the fragmentation problems seen in the Android realm. While the platform is still in the very early stages of its life, Microsoft seems dedicated to pushing the platform forward to try and gain back some of the market share the company has lost over the years. In late 2011, Microsoft shipped the Windows Phone 7.5 update, codenamed Mango, which started to bring in many features missing from the first releases, such as local databases and camera access.
iOS, Android, and Windows Phone, despite the fact that they are all mobile platforms, have very distinct ways of doing things and their own required languages in which to do them. iOS applications are written in Objective-C, while Android makes use of Java. Windows Phone leverages the .NET Framework, where the primary languages are C# and Visual Basic .NET. You can also use C and C++ on iOS and Android, but they are not currently supported on Windows Phone (see Table 1-1). As developers, we dread the idea of having to repeat all of our work three times in three different programming languages. Aside from the upfront overhead of doing the work three times, bug fixes found later on will also likely have to be fixed three times. For any non-trivial application, the technical debt can add up quickly, so the natural response is to seek out some sort of cross-platform solution to minimize the cost of building and maintaining applications for these devices.
The promise of a “write once, run anywhere” solution is nothing new in the development world. It tends to come around whenever there’s a need to publish applications on multiple platforms, whether on the desktop or on mobile devices. The mantra was originally coined by Sun when it was pushing for Java to be the unifying language for all platforms and devices. The concept is certainly not unique to Java, though, nor was that the first time such a solution was proposed.
It has a natural appeal to us as developers. Who wouldn’t want a silver bullet like that at our disposal? We could write everything once, get it just the way we want it, and then instantly be able to target users on all platforms. Unfortunately, things that seem too good to be true often are; there’s a reason why Java, over a decade and a half into its life, has yet to become the common language for developing cross-platform desktop applications. I think Nat Friedman, CEO of Xamarin, put it best in an interview he did on the .NET Rocks! podcast:
“‘Write once, run anywhere perfectly’ is a unicorn.”
Now, let me take a step back for just a moment to provide some clarification here. I don’t intend for anything in this chapter, or this book for that matter, to be taken as a slight against frameworks that decided to take this approach to solving the problem. The silver bullet trap works in both directions. No matter the context, there is never a solution so perfect that it solves all problems. Instead, what I will outline in this book is meant to demonstrate only one approach to solving things. It’s another set of tools for your developer tool belt.
Having said that, let’s take a moment to think about who stands to benefit the most from the “write once, run anywhere” method. You could make the argument that the user benefits from you being quicker to market or supporting his platform, and though there is some legitimacy to that, I would tend to disagree. Instead, when all is said and done, it is we, the developers, who really benefit by cutting down the amount of time it takes to write and publish our applications. However, this reduced development time often involves making concessions that sacrifice user experience. Each platform has its own hardware configurations, with varying screen sizes, resolutions, and buttons. Each has its own set of user interaction metaphors and guidelines for how an application should look and behave. In order for your application to look and feel native, it should act like the other applications on that platform.
Writing to the lowest common denominator can end up making your application feel foreign to all of them. Applications on Windows Phone are designed to look and behave differently than those on iOS, and that is something that should be embraced rather than glossed over or abstracted away. The experience you present to your users should be the primary concern when designing your application’s interface. Ultimately, that is what will set your application apart from others who take shortcuts along the way.
By now, you’re probably thinking to yourself, “So if I’m not writing in the platform’s native language, and I’m not leveraging one of these cross-platform frameworks, how do you expect me to write my applications?”
What I am going to propose is an alternative approach where you can leverage the power of the .NET Framework, along with the powerful C# language, across all three platforms. While this may sound similar to “write once, run anywhere,” the key difference is that C# and the Base Class Libraries are used as a universal language and library where the device-specific and user interface-specific elements are not abstracted, but are instead exposed to developers. This means that developers build native applications using three different user interface programming models, one for each platform, while using C# across the board.
As mentioned earlier, .NET is exposed natively on Windows Phone, so there’s no friction there. However, we know that on both iOS and Android it is not, so how can we make this work? To help bridge this gap, a company named Xamarin has created two products, MonoTouch and Mono for Android. We will explore these products in more depth in later chapters, but the elevator pitch is that they allow for writing native applications in C# (see Table 1-2), providing bindings to the platform’s native libraries and toolkits so that you’re targeting the same classes you would in Objective-C for iOS or Java for Android. Because you’re working against the platform’s native user interface toolkits, you don’t need to worry about how to make your application look and feel native to the platform, since it already is.
Apple is known for being strict about what gets into the App Store, so you might be wondering whether it will only accept applications written natively in Objective-C. There are actually thousands of MonoTouch applications in the store. In fact, iCircuit, a MonoTouch application, was shipped with their demo iPad 2 units that were sent out to stores.
As the names imply, MonoTouch and Mono for Android expose .NET on iOS and Android by leveraging Mono, an open-source, cross-platform implementation of the Common Language Infrastructure (CLI), an ISO standard that describes the virtual execution environment that is used by C#. Despite the fact that they are commercial products, both offer free evaluation versions that do not expire, and allow you to deploy to the iOS simulator or the Android emulator. There is no risk in taking them out for a spin. In the next chapter, we will explore how to get started with these tools, and build your first application along the way.
Apple and Google release new versions regularly, so you might be wondering what that means for you if you’re using MonoTouch or Mono for Android. Generally, there is no waiting period here, as both products track the beta programs for iOS and Android. For example, MonoTouch typically releases the bindings for a new operating system within 24 hours of the official Apple release. In addition to the quick release cycle, Xamarin offers first-class support for its products, providing prompt responses through mailing lists and IRC, and maintaining thorough, user-friendly documentation. Even outside of Xamarin, the Mono community in general is very active and helpful as well. You can find information about the company and products at.
If you’re already a .NET developer, you can immediately hit the ground running, still having the familiar namespaces and classes in the Base Class Library at your disposal. Since the Mono Framework is being used to run your code, you don’t need to worry about whether Objective-C or Java implements a particular C# feature you want to use. That means you get things like generics, LINQ to Objects, LINQ to XML, events, lambda expressions, reflection, garbage collection, thread pooling, and asynchronous programming features. Taking things even further, you can often leverage many existing third-party .NET libraries in your applications as well. You can turn your focus towards solving the business problem at hand instead of learning and fighting yet another set of new languages.
As great as this is, the bigger win with this approach is the ability to share a large percentage of your core application code across all platforms. The key to making the most out of this is to structure your applications in such a way that you extract your core business logic into a separate layer, independent of any particular user interface, and reference that across platforms. By doing so, each application essentially becomes a native user interface layer on top of that shared layer, so you get all the benefits of a native user experience without having to rewrite the application’s main functionality every time (see Table 1-3). In Chapter 3, we will explore some techniques available to help keep as much code as possible in this shared layer and maximize code reuse across platforms.
The classes and methods exposed by a framework make up what is
referred to as its profile. The .NET profile exposed
by both MonoTouch and Mono for Android is based on the Mono Mobile
profile. The mobile profile is a version of the .NET 4.0 API that has some
of the desktop and server features removed for the sake of running on
small embedded devices, such as the
System.Configuration namespace. This profile is
very similar to the core of Silverlight, which is also a subset of the
full .NET profile for the same reasons. Since Windows Phone is also based
on Silverlight, there is a large amount of overlap between all three of
these profiles, meaning that non-user interface code you write for one
platform is very likely to be compatible with the other two.
Technically, the Windows Phone platform supports developing applications using both the Silverlight and XNA frameworks. Since XNA is more suited for game development, this book will focus on building applications with Silverlight. By definition, games define their own user interfaces, so not all of the problems outlined earlier with regards to providing a quality cross-platform user experience will necessarily apply when developing games.
The MonoGame project provides an XNA 2D implementation that runs on both MonoTouch and Mono for Android. This is a third-party community project that is continuously evolving. More information about the MonoGame project can be found at.
In later chapters, we’ll explore various patterns and techniques to help maximize the amount of functionality that can go into the shared layer. After walking through the process of creating a simple application for iOS, Android, and Windows Phone, we’ll go through many of the common tasks you’ll want to perform in your applications, such as consuming data from the Internet, persisting data to the filesystem or a database, and accessing the device’s location information and mapping capabilities. As we go through these topics, we’ll discuss how you can achieve some code reusability there as well.
It’s also worth noting that since .NET is being used across the board, reusing code in this shared layer isn’t just limited to mobile applications, or even just Silverlight-based applications. Since the Silverlight profile is essentially a subset of the full .NET Framework, in addition to Silverlight for the web or desktop, the same code can be applied to applications written in ASP.NET, WPF, Windows Forms, or even a Windows 8 Metro application. Using Mono, you can also take your code onto Linux or even the Mac using MonoMac, which takes a similar approach to MonoTouch and Mono for Android. In the end, your code can follow you anywhere the .NET Framework goes. That’s pretty powerful.
In this chapter, we looked at some of the big names in the smartphone world and evaluated some of the options available to us as developers for building applications on these devices. We then explored how to target all three platforms using the .NET Framework, as well as the benefits this method brings with it. By using this approach, you can develop fully native applications across each platform without having to abstract away the user interface, while still being able to reuse the bulk of your code across them all. In the next chapter, we will walk through setting up your development environment and building your first application on all three platforms.
No credit card required | https://www.oreilly.com/library/view/mobile-development-with/9781449338282/ch01.html | CC-MAIN-2018-43 | refinedweb | 2,772 | 58.62 |
A simple Django app to build reports.
Project description
Django - OnMyDesk
A Django app to build reports in a simple way.
Installation
With pip:
pip install django-onmydesk
Add ‘onmydesk’ to your INSTALLED_APPS:
INSTALLED_APPS = [ # ... 'onmydesk', ]
Run ./manage.py migrate to create OnMyDesk models.
Quickstart
To create reports we need to follow just two steps:
- Create a report class in our django app.
- Add this report class to a config in you project settings to enable OnMyDesk to see your reports.
So, let’s do it!
Create a module called reports.py in you django app with the following content:
myapp/reports.py:
from onmydesk.core import reports class UsersReport(reports.SQLReport): name = 'Users report' query = 'SELECT * FROM auth_user'
On your project settings, add the following config:
ONMYDESK_REPORT_LIST = [ 'myapp.reports.UsersReport', ]
Each new report must be added to this list. Otherwise, it won’t be shown on admin screen.
Now, access your OnMyDesk admin screen and you’ll see your Users report available on report creation screen.
After you create a report, it’ll be status settled up as ‘Pending’, to process it you must run process command. E.g:
$ ./manage.py process Found 1 reports to process Processing report #29 - 1 of 1 Report #29 processed
Collaboration
There are many ways of improving and adding more features, so feel free to collaborate with ideas, issues and/or pull requests.
Let us know!
We’d be really happy if you sent us links to your projects where you use our component. Just create an issue and let us know if you have any questions or suggestion regarding the library
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/django-onmydesk/0.1.0/ | CC-MAIN-2018-30 | refinedweb | 295 | 66.64 |
Subject: Re: [boost] [#pragma once]
From: Eugene Wee (crystalrecursion_at_[hidden])
Date: 2009-04-10 11:48:10
Hi,
On Fri, Apr 10, 2009 at 11:23 PM, Marcus Lindblom <macke_at_[hidden]> wrote:
> I've seen benchmarks that say some compilers (gcc, msvc) are smart enough to
> recognize #ifndef/#endif and do the #pragma once equivalent. (i.e. there's
> no discernable performance difference.)
Sutter and Alexandrescu say the same in C++ Coding Standards Item #24,
but with admonishment to keep all other code and comments in between
to cater for less intelligent detection of include guards (whereas
many header files in Boost have comments right at the top).
Regards,
Eugene Wee
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2009/04/150654.php | CC-MAIN-2019-47 | refinedweb | 133 | 63.09 |
24 March 2010 20:28 [Source: ICIS news]
HOUSTON (ICIS news)--The potential for new Middle East and Asian ethylene crackers to flood the market and weaken downstream ethylene glycol (EG) prices in the second quarter is already affecting trader activity, a trader said on Wednesday.
Buyers, expecting EG prices to slide in the coming weeks, are beginning to buy only as much product as they need.
“No one wants to be long on supply when prices begin to slide,” the trader said, adding that it was only a matter of when and by how much prices would weaken, and not if they would weaken.
Through the first quarter of 2010, EG prices were well supported on tight feedstock ethylene supply amid a number of planned and unplanned outages and production issues. However, traders said the April price-hike proposals of 2-cents/lb ($44/tonne, €33/tonne) could be the last for some time if new capacity floods the global market.
In Asia, plants in ?xml:namespace>
In the Middle East, new Sabic Basic Industries Corp (SABIC) plants - Sharq 4 and Yansab in Al Jubail, Saudi Arabia - had just gone to or were about to be moved to online status, a trader said.
“It’s about to get skinny over here” for US producers, the trader said, explaining that domestic prices would soon be at the floor of the market.
Domestic price weakening was not a sure thing, however, a reseller noted. A number of SABIC plants were scheduled for maintenance in the second quarter. The production loss will offset at least some of the capacity from the new facilities, the reseller said.
“When those plants are taken down for maintenance, it could help keep the legs under the market for a while longer,” the reseller said.
Truck and railcar fibre-grade ethylene glycol (EGF) prices at the end of March were 50-52 cents/lb, according to global chemical market intelligence service ICIS pricing.
US EG suppliers include Equistar, Huntsman, MEGlobal,
($1 = €0.74)
For more on ethylene glycol, | http://www.icis.com/Articles/2010/03/24/9345575/new-capacity-could-depress-us-ethylene-based-markets.html | CC-MAIN-2014-52 | refinedweb | 341 | 66.17 |
On 8/17/05, Anthony Baxter <anthony at interlink.com.au> wrote: > If you _really_ want to call a local variable 'id' you can (but shouldn't). Disagreed. The built-in namespace is searched last for a reason -- the design is such that if you don't care for a particular built-in you don't need to know about it. > You also can't/shouldn't call a variable 'class', 'def', or 'len' -- but I > don't see any movement to allow these... Please don't propagate the confusion between reserved keywords and built-in names! -- --Guido van Rossum (home page:) | https://mail.python.org/pipermail/python-dev/2005-August/055497.html | CC-MAIN-2018-05 | refinedweb | 102 | 73.68 |
Python Exec() with Syntax and Examples
Do you remember that fairy basket?
Nevertheless, it has so much knowledge to give, but it has one rule, one thing at a time.
And today’s secret of this basket is something Dynamic.
Not taking too much time to learn this new spell, here I introduce to you Exec() function, and of course, our buddy Python is going to do it all for us again. So let’s learn about Python Exec().
Keeping you updated with latest technology trends, Join TechVidvan on Telegram
What is Exec() function in Python?
Exec() function’s usage is for the dynamic execution of a Python program, which can either be a string or object code in user-defined runtime.
Do you think it is the same as Eval? Of course not.
Did you read it is used for dynamic execution of code? It works on the principle of the dynamic execution of code, which makes it very different from Eval.
What does this mean?
This means if it is a string, the string is parsed as a suite of Python statements and then it is executed.
If it is an object code, it is simply executed in a well-defined manner for any integral value in a code.
And one needs to keep in mind that the return statements may not be used outside of function definitions.
Neither even within the context of code which is passed to the exec() function.
This means if it is a string, it simply doesn’t return any value, hence displays ‘none’.
Python Exec() Syntax
The syntax of exec() is
exec(object[, globals[, locals]])
Like a choosy designer who accepts only perfect fabric to make a completely lavish robe, exec() does not take everything which is assigned to it.
Parameters of Exec() in Python
- Object: It can be a string or object code.
- Globals: This is a dictionary and the parameter is optional.
- Locals: It can be a mapping object and is also optional.
Globals and Locals Parameters in Python Exec()
Using globals and locals parameters in user-specific codes, a coder can restrict what variables and methods the users can access in runtime.
One can either provide both or just the globals, in such a case that value suffices for both- globals and the local variables can be parsed.
At the module level, globals and locals are the same dictionaries.
Python Exec() Example
>>> exec('print(tan(45))')
Output
Example of Python Exec() Function
from math import exec('print(fact(5))', {'fact': factorial}) #factorial re-coded as (fact)
Output
When should we use Exec in Python?
As a side effect, implementation of exec() may insert additional keys into dictionaries.
Besides, the code corresponds to variable names and is set by the executed code.
Python Exec Example
a=int(input(“enter a number”)) if: a= 8 exec('print(a==8)') If true: exec('print(a+3)')
Output
11
Statement Evaluation by Exec()
Not only the integers, strings, as earlier mentioned, can get evaluated by exec() function. Here is an example of the same:
Python Exec Function Example
for i in range(5):\n print('Hello world!')
Output
Hello world!
Hello world!
Hello world!
Hello world!
Warnings while using Python Exec()
Sometimes it happens when the coder uses a Unix system (macOS, Linux etc) or has simply imported an os module.
The os module provides a static way in user defined codes to use operating system functionalities such as the read library and appending a file.
Exec() Vs Eval() in Python
Eval accepts a single expression while exec operates on Python statements: loops, try: except:, class and function/method definitions in runtime.
An expression is whatever you can have as the value irrespective of its original domain in the variable assignment of the code.
Eval() returns the value of the given expression, whereas exec ignores the return value from its code, and always returns none in runtime.
Whereas in ‘eval’ mode, it compiles a single expression into the bytecode that returns user-specific value of that expression. In the ‘eval’ mode the compiler raises an exception in runtime if the source code contains statements that are user static and are also beyond a single expression for any integral value in a code in a definite runtime.
Restricting Usage of Available Methods and Variables in Exec()
If both, parameters are being omitted for user-specific codes, the code executed by exec() is executed in the current scope location of the library having any integral or variable value.
The globals and locals parameters (basically dictionaries) are used for global and local variables respectively.
If a local dictionary is omitted, it defaults to the globals dictionary. If the coder passes an empty dictionary as globals for the search, then only the built-ins are available to the object (the first parameter to the exec()) in runtime.
Summary
Unlike any other programming language, python has a lot to offer to its coders, with the ease and efficiency function is actually what gives its coders the best in python.
Exec() being one of them makes it so much easier for the coder to evaluate string/integer in a simple manner.
But unlike any other function, it performs dynamic real-time execution, which can be either for a string or a code. | https://techvidvan.com/tutorials/python-exec-syntax-examples/ | CC-MAIN-2021-17 | refinedweb | 883 | 51.89 |
How I would like to learn Scala?.
Forewords
After 3 years of Scala usage, I can confidently say that it’s awesome programming language. Looking back in the days when I started learn Scala, I feel myself stupid, because I didn’t follow an optimal learning way. My attempts to understand Scala concepts were unsuccessful. Only through persistence, I achieved first results.
So I want to share with you my vision of Scala learning process.
The problem
Here is a well know fact: Scala allows to write the same thing in many different forms. From the one side it’s nice, because you have a large variety of decisions. But from the other side, programming is not a good place for a writing thoughts in a free form.
Here is an example of function declaration in different forms:
def sum(a: Int, b: Int): Int = { a + b } def sum(a: Int, b: Int) = { a + b } def sum(a: Int, b: Int) { a + b } def sum(a: Int, b: Int) = a + b
As you see, even in so simple action, as a function declaration we have multiple choices, to do it. The further we go, the harder things become for understanding. Of course this can be easily solved by introducing a code guide. Then the only thing you need to do is to learn all basic forms for main language constructions.
Keep in mind, that I’m speaking from the Java developer perspective.
Additional complexity make a syntax and a functional programming. Furthermore you never know what topic need to be learned next. Which came first, the chicken or the egg? In Scala I can ask you: which came first, a case class or a pattern matching? Or maybe an
apply function?
Despite this circumstances, I choose Scala over Java 😉
How I would learn it
I decided to start teaching from the most easiest topic for Java developers – object oriented programming. Scala has its own OOP implementation, and it’s very similar to Java 8, including functional interfaces. So there is nothing more trivial than Scala OOP for Java developers.
Here is a short promo video:
As soon as I’ll collect enough comments regarding this mini video course, I’m going to complete the rest of the video lecture. I invite you to join to the first group of “beta” students and leave your feedback about it. Almost 1 hour of content is waiting for you. Try the most unappreciated JVM programming language!
Get your free copy of the course via the subscription form below. | http://fruzenshtein.com/how-i-would-like-to-learn-scala/ | CC-MAIN-2020-24 | refinedweb | 425 | 72.87 |
I've posted a Tweet about building my blog in less than an hour, and I'll be honest; it took me more time writing this post than actually putting the blog online.
Less than 1 hour. I've used NextJS for the first time and it's pretty amazing. And it goes without saying I've used @zeithq for the hosting 🔥— Telmo Goncalves (@telmo ) January 6, 2020
I might keep it updated, just wanted to check how long would take me to get a blog up and running.
I'll try to explain the steps I took.
If you don't want to follow the tutorial download the Source code.
I've decided to go ahead and create a personal page/blog for myself, and since I'm a massive fan of Zeit and Now, that meant no time wasted thinking about hosting and deployments.
I have a few projects running with using GatsbyJS, and to be honest, I love it, it's easy to use, and really powerful if you plug a third-party such as Contentful. Although this time, I wanted to try something different, and since I love hosting and deploy my projects with Zeit, why not give NextJS a try? First time using it, and let me tell you it's freaking amazing.
Let's get started
Run the following:
mkdir my-blog && cd my-blog
npm init -y && npm install react react-dom next --save
If you're wondering
-ymeans you don't have to answer all the
npm initquestions
Now, in your
package.json file replace
scripts with:
"scripts": { "dev": "next", "build": "next build", "start": "next start" }
If you go ahead and try to start the server
npm run dev, it should throw an error, because NextJS is expecting to find a
/pages folder.
So, let us take care of that, in the root of your project run:
mkdir pages && touch pages/index.js
Now you should be able to run
npm run dev and access your application on
If everything is going as expected you should see an error similar to the following:
The default export is not a React Component in page: "/"
That's alright; keep going.
Our first view
In your
pages/index.js file, paste the following code:
import React from 'react' export default function Index() { return ( <div> ✍️ My blog about Books </div> ) }
Check you should see My blog about Books
Getting props
NextJS comes with a function called
getInitialProps; we can pass props into our
Index component.
Let us start with something simpler; at the end of your component lets put the following code:
import React from 'react' export default function Index() { return ( <div> ✍️ My blog about Books </div> ) } + Index.getInitialProps = () => { + return { + blogCategory: 'Books' + } + }
Here we're passing a
blogCategory prop into our component, go ahead and change your component to look like the following:
export default function Index(props) { return ( <div> ✍️ My blog about {props.blogCategory} </div> ) } // ...
If you refresh the page, it should look exactly the same, although, if you change the value of
blogCategory you'll see that it changes your view with the new value. Give it a try:
// ... Index.getInitialProps = () => { return { blogCategory: 'ReactJS' } }
The content of your view should now be: ✍️ My blog about ReactJS
Awesome, next!
Dynamic Routes
So, to build a blog, you want dynamic routes, according to the route we want to load a different
.md file, which will contain our post data.
If you access we'll want to load a file called
hello-world.md, for that to happen let us follow the next steps:
First of all, NextJS is clever enough to allow us to create a
[slug].js file, which is pretty awesome, let's go ahead and create that file:
mkdir pages/post
Note the folder and file needs to be created inside
/pages
Now create a file inside
/post called
[slug].js, it's exactly like that, with the brackets.
Inside this file we'll create our post template, to display the post title, contents, etc.
Go ahead and paste the following code, we'll go over it in a minute:
import React from 'react' export default function PostTemplate(props) { return ( <div> Here we'll load "{props.slug}" </div> ) } PostTemplate.getInitialProps = async (context) => { const { slug } = context.query return { slug } }
In here we're accessing
context.query to extract the
slug from the URL, this is because we called our file
[slug].js, let's say instead of a blog post you want to display a product page, that might contain an id, you can create a file called
[id].js instead and access
context.query.id.
If you access you should see Here we'll load "hello-world"
Brilliant, let's keep going!
As a first step lets create a
.md file:
mkdir content && touch content/hello-world.md
In the
hello-world.md file paste the following:
--- title: "Hello World" date: "2020-01-07" --- This is my first blog post!
That's great; now we need to load the content of this file and pass it through
props in our
PostTemplate file.
Check the comments on the changed lines:
// ... PostTemplate.getInitialProps = async (context) => { const { slug } = context.query // Import our .md file using the `slug` from the URL const content = await import(`../../content/${slug}.md`) return { slug } }
Now that we have the data, we'll be using [gray-matter () to parse our file
frontmatter data.
frontmatterdata is the information between
---in our
.mdfile
To install
gray-matter run:
npm install gray-matter --save
We can now parse the data and pass it to the
PostTemplate props:
Don't forget to import
matter
import matter from 'gray-matter' // ... PostTemplate.getInitialProps = async (context) => { // ... // Parse .md data through `matter` const data = matter(content.default) // Pass data to the component props return { ...data } }
Awesome, now we should be able to access
data in our component
props. Let's try it, refresh the page... Ah, snap!
Are you getting a
TypeError: expected input to be a string or buffer error?
No worries, we need to add some NextJS configuration to tell it to load
.md files, this is a simple process, in the root of your project run:
touch next.config.js
Inside that new file paste the following code:
module.exports = { webpack: function(config) { config.module.rules.push({ test: /\.md$/, use: 'raw-loader', }) return config } }
This will be using the
raw-loader package, so we'll need to install that as well:
npm install raw-loader --save
Don't forget to restart your application
Now let's change our component to receive our new
props:
// ... export default function PostTemplate({ content, data }) { // This holds the data between `---` from the .md file const frontmatter = data return ( <div> <h1>{frontmatter.title}</h1> </div> ) }
Refresh your page, you should see Hello World.
It's missing rendering the
content, lets take care of that:
export default function PostTemplate({ content, data }) { // This holds the data between `---` from the .md file const frontmatter = data return ( <div> <h1>{frontmatter.title}</h1> <p>{content}</p> </div> ) }
Ok, this is great, you should be able to see This is my first blog post!
Markdown Format
Now that we can render our markdown files fine, let's add some formatting to our post file, go ahead and change
hello-world.md:
--- title: "Hello World" date: "2020-01-07" --- ### Step 1 - Install dependencies - Run locally - Deploy to Zeit
Hmmm, the format is not working like expected, it's just raw text.
Let's take care of that, we'll be using react-markdown to handle markdown formatting:
npm install react-markdown --save
Now lets update our
PostTemplate component:
import React from 'react' import matter from 'gray-matter' import ReactMarkdown from 'react-markdown' export default function PostTemplate({ content, data }) { // This holds the data between `---` from the .md file const frontmatter = data return ( <div> <h1>{frontmatter.title}</h1> <ReactMarkdown source={content} /> </div> ) }
That's it; we are done here! You can download the final code here.
If you liked this post, I would really appreciate if you could share it with your network and follow me on Twitter 👏
Discussion
Thanks! This article is very helpful!
Thanks Krisztian 🎉
Do you have any idea how to implement MDX support into this logic?
I do not, but I can explore it.
That is very helpful. Thanks
Thank you for writing this article!
It was helpful.
Thank you Kerry 😃
Hi Telmo, I think you have a typo in the code here -
const content = import(../../content/${slug}.md
), it should be
const content = await import(../../content/${slug}.md
)
Hey Lawrence, you're right. Thanks for the heads up, will change it 👍
So i have a doubt , Suppose if i have a count of 500 posts in my website ,should i need to create 500 markdown files for that?
Please help in an efficient approach
Awesome tutorial. This very helpful
Thanks this is very helpful! | https://practicaldev-herokuapp-com.global.ssl.fastly.net/telmo/build-a-markdown-blog-with-nextjs-4521 | CC-MAIN-2021-04 | refinedweb | 1,477 | 72.05 |
solexalex Wrote:it is a good idea, and keeping the autoexec.py seems to me a good idea too. Too many times when a new XBMC version is released, backwrd compatilbility is not made. I think it is sad and it adds the feeling that addons are not reliable...
Anyway keep going the good job !
Maybe a kind of feature request... It has been a while we have autoexec.py to run scripts at XBMC startup, what about a special script (or extension point) to do some job at XBMC "off time" (when XBMC close). It can be helpfull to stop home made python servers properly, or send messages to listening clients, or even cleanup some temporary files that an addon may have write anywhere...
blinkseb Wrote:No, it's not available on Dharma, it wasn't backported
lophie Wrote:so if we downloaded the latest how would we implements this?
## auto execute scripts when xbmc starts
## note:- use only one script at a time which asks for user input!
import os, xbmc
# create playlists from iTunes playlist file
print "running autoexec.py"
xbmc.executescript('special://home/scripts/myscript.py')
print "finished autoexec.py" | http://forum.kodi.tv/showthread.php?tid=92751&pid=734871 | CC-MAIN-2014-52 | refinedweb | 194 | 74.39 |
In-Depth
Like JSON, only in binary format, BSON is now easier to parse with built-in media type formatters that are included with ASP.NET Web API 2.2 Client Libraries. Here's how.
Binary JSON, or BSON, is a format similar to JSON, but as the name suggests is in a binary format. Developers like to use BSON because it's lightweight with minimal spatial overhead, it's easy to parse, and it means more efficient encoding and decoding. With the release of the Microsoft ASP.NET Web API 2.2 Client Libraries, parsing BSON can be done using the built-in media type formatters.
Before demonstrating the use of the BSON format, I'll first discuss the format. BSON objects consist of an ordered list of elements. Each element contains field type, name and value.
Field names are strings. Field types can be any of the designations outlined in Table 1 (32-bit integer, string, UTC datetime, embedded document and so on).
Because the BSON format allows for storing various data types, there's no need to convert a string to a given type. This accelerates parsing and data retrieval in comparison to JSON or other text-based formats.
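To see this element layout concretely, you can serialize a small object with Json.NET's BSON writer (the serializer the Web API BSON formatter builds on) and dump the raw bytes. This is a minimal sketch, not code from the article's project; it assumes the Newtonsoft.Json package is referenced:

```csharp
using System;
using System.IO;
using Newtonsoft.Json;
using Newtonsoft.Json.Bson;

class BsonDump
{
    static void Main()
    {
        var stream = new MemoryStream();
        using (var writer = new BsonWriter(stream))
        {
            // An anonymous object stands in for one record
            new JsonSerializer().Serialize(writer, new { Id = 1, Color = "Red" });
        }
        // Each field appears as: type byte, field name, null terminator, typed value
        Console.WriteLine(BitConverter.ToString(stream.ToArray()));
    }
}
```

In the dumped bytes you can pick out the same pattern discussed below: a type designation, the ASCII field name, and the raw value with no string conversion.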
To demonstrate BSON, I'll rely on a previous article I wrote as a starting point, "Implementing Binary JSON in ASP.NET Web API 2.1," which shows how to create a Web API service that renders BSON. Moving forward, here I'll modify the project to utilize more data types in BSON. In addition, I'll examine the data structure passed to the client application.
Looking at the Visual Studio solution for the Web API service, I modified the Car.cs file to include two DateTime fields called DateServiced and TimeServiced. These will be used later to illustrate the BSON format of DateTime types. The modified file can be seen in Listing 1.
using System;
namespace CarInventory21.Models
{
public class Car
{
public Int32 Id { get; set; }
public Int32 Year { get; set; }
public string Make { get; set; }
public string Model { get; set; }
public string Color { get; set; }
public DateTime DateServiced { get; set; } //Newly added field
public DateTime TimeServiced { get; set; } //Newly added field
}
}
In the CarController.cs file, I modified the instantiation of the Cars object to include the newly added fields, as seen here:
Car[] cars = new Car[]
{
new Car { Id = 1, Year = 2012, Make = "Cheverolet", Model = "Corvette", Color ="Red",
DateServiced=Convert.ToDateTime("07/21/2014"), TimeServiced=Convert.ToDateTime("08:32:00") },
new Car { Id = 2, Year = 2011, Make = "Ford", Model = "Mustang GT", Color = "Silver",
DateServiced=Convert.ToDateTime("08/16/2014"), TimeServiced=Convert.ToDateTime("08:33:00") },
new Car { Id = 3, Year = 2008, Make = "Mercedes-Benz", Model = "C300", Color = "Black",
DateServiced=Convert.ToDateTime("06/30/2014"), TimeServiced=Convert.ToDateTime("08:34:00") }
};
Next, I'll ensure the Web API service is configured to send the BSON format. To do this, I'll make sure the BsonMediaTypeFormatter object is being added to the config object (HttpConfiguration type). The complete listing of the WebApiConfig.cs is shown
using System.Net.Http.Formatting;
using System.Web.Http;
namespace CarInventory21
{
public static class WebApiConfig
{
public static void Register(HttpConfiguration config)
{
config.MapHttpAttributeRoutes();
config.Routes.MapHttpRoute(
name: "DefaultApi",
routeTemplate: "api/{controller}/{id}",
defaults: new { id = RouteParameter.Optional }
);
// Add the BSON formatter so the service can render BSON
config.Formatters.Add(new BsonMediaTypeFormatter());
}
}
}
After all coding changes have been completed, I'll set the index.html to be the start page. If you recall, this is performed by simply right-clicking on the page in Solution Explorer and selecting Set As Start Page. The completed CarInventory solution is shown in Figure 1.
Now, when I run the application, index.html will be rendered, as seen in Figure 2.
After I click the /api/car link, the Web API responds by returning a complete listing of the Car[] object in a BSON format. The results are returned to the browser and, hence, prompts me to save or open the file. I'll save the file as car.json and then view it in Visual Studio 2013 so I can examine the binary structure. Looking at the overall structure, you'll see records, or documents as referred to in the BSON specification, are automatically indexed with a 0-based number, as seen in Figure 3.
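On the consuming side, the Web API 2.2 client libraries can parse this same response directly instead of saving it to disk. The following is a hedged sketch rather than code from the article's project — the port number is hypothetical, and it assumes the Car model class is shared with the client:

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Formatting;
using System.Net.Http.Headers;

class BsonClient
{
    static void Main()
    {
        var client = new HttpClient { BaseAddress = new Uri("http://localhost:50231/") }; // hypothetical port
        // Ask the service for BSON rather than JSON
        client.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/bson"));

        HttpResponseMessage response = client.GetAsync("api/car").Result;
        response.EnsureSuccessStatusCode();

        // Deserialize the binary payload with the built-in BSON formatter
        var formatters = new MediaTypeFormatter[] { new BsonMediaTypeFormatter() };
        List<Car> cars = response.Content.ReadAsAsync<List<Car>>(formatters).Result;

        Console.WriteLine(cars.Count); // 3 records in the sample data
    }
}
```

The only client-side difference from a JSON call is the Accept header and the formatter passed to ReadAsAsync.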
In addition, each record index is preceded by 0x03, indicating it's an embedded document. This can be seen in Table 1, where all the field designations used in the BSON specification are outlined. The null character (0x00) is used as a field separator throughout the structure.
As mentioned previously, each field within each record contains information on the type, name and value of that field. The first record, highlighted in Figure 4, shows the first field in the record is "Id" represented by the Hex codes 49 64. Immediately before those bytes is a byte with value 0x10.
Looking at Table 1, the value 0x10 designates a 32-bit integer, which is the data type of the Id field.
The four bytes immediately after "Id" (an int32 value occupies four bytes) hold the value 1. This is the value of the Id field for that record. This completes the pattern of type-name-value for the Id field. The same cycle repeats for the other fields within that record, such as Year.
Similarly, Make, Model and Color follow the same pattern with a slight difference. They each have a type designation of 0x02 because they're all string values. The values for the string fields are represented by the ASCII character values (that is, "Red" = 52 65 64).
The remaining two fields, DateServiced and TimeServiced, are both DateTime types. In the BSON format, they both have a 0x09 designation, indicating they're UTC milliseconds in Unix epoch. This time format counts from the Unix epoch, 00:00:00 Coordinated Universal Time (UTC), Thursday, 1 January 1970. Because this format produces a large number, the BSON format stores the number of milliseconds in a 64-bit integer field, which appears in the dump as a hex value.
Looking at the code in CarController.cs, the DateServiced for the first record is 07/21/2014. Using an epoch converter, like the one at EpochConverter.com, July 21, 2014 at Midnight EST is equivalent to 1405915200000 milliseconds. This can be seen in Figure 5.
Notice the dropdown box is set to Local Time, which is EST for my location. Using a calculator, 1405915200000 milliseconds converts to 14757137A00 in Hex. However, the binary representation in BSON is reversed, as seen in Figure 6.
This is due to the date being stored least significant byte first (little-endian). You can read more about this at Wikipedia.
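The date arithmetic above can be checked in a few lines of standalone C#. This sketch assumes the fixed UTC-4 offset for the article's time zone rather than computing it:

```csharp
using System;

class EpochDemo
{
    static void Main()
    {
        // Midnight July 21, 2014 local time (UTC-4 in July) = 04:00 UTC
        var utc = new DateTime(2014, 7, 21, 4, 0, 0, DateTimeKind.Utc);
        var epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
        long ms = (long)(utc - epoch).TotalMilliseconds;

        Console.WriteLine(ms);               // 1405915200000
        Console.WriteLine(ms.ToString("X")); // 14757137A00

        // BSON writes the 64-bit value least significant byte first,
        // which is what BitConverter produces on a little-endian machine:
        Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(ms)));
        // 00-7A-13-57-47-01-00-00
    }
}
```

The last line matches the reversed byte order visible in Figure 6.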
With its use of binary data, BSON can be used to transport a variety of data types. Because the data is already in its native format, there's no need to convert strings to integers or other types. This helps to increase performance in parsing data. In addition, the built-in media formatter within the Web API framework makes data conversion easy with only a few lines of code. These combined features make BSON a powerful and easy format to use.
Perl6::Slurp::Interpret - Interpret slurped files
use Perl6::Slurp::Interpret;

my $columns = "Name, Birthdate";
my $table   = "Customer";

#### Use with regular file ... ####
my $interpreted = eval_slurp( $filename );

#### Use e.g. with Inline::Files ####
use Inline::Files;
my $sql = eval_slurp( \*SQL );   # Or any other token name

# Now do something useful with $sql

__SQL__
SELECT $columns
FROM $table
WARNING: This module allows code injection. Use with Caution
Perl6::Slurp::Interpret
Perl6::Slurp::Interpret exports two functions, eval_slurp and quote_slurp. Both functions slurp in a file and quote it. eval_slurp takes the additional step of eval'ing the result in the caller's namespace, e.g. global symbols will be interpolated.
The module was predominantly designed with Inline::Files in mind. It can be used as a way of separating content from code without the expense of multiple external files. It is a more elegant approach than the <<"HEREDOC" practice commonly found in Perl programs.
The power of such an approach should be striking to anyone who has written scripts that interact with any number of external programs or processes. eval_slurp'd files can be passed to function or system calls.
eval_slurp passes all it's arguments to Perl6::Slurp. So it is possible to slurp anything that Perl6::Slurp slurps. Perl6::Slurp's magic works...and the result is eval'd in the current scope.
It's a one line function. That's it.
This modules presents a serious security risk since it evals an external, possibly user supplied file. Do not use this module if you:
* Do not know what you are doing.
* Cannot ensure the friendliness of the slurped file or environment.
eval_slurp
quote_slurp
* Create a quoted and an unquoted version?
** eval_slurp_quoted ... evals Perl code.
* Make the eval in the namespace more robust.
Inline::Files, Perl6::Slurp, Inline::TT
Brown, <ctbrown@cpan.org>
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.8.8 or, at your option, any later version of Perl 5 you may have available. | http://search.cpan.org/dist/Perl6-Slurp-Interpret/lib/Perl6/Slurp/Interpret.pm | CC-MAIN-2015-32 | refinedweb | 337 | 59.6 |
Hello everyone. In this post, I’ll be showing you how to include the Syncfusion Charts control in a Xamarin.Forms application and configure the elements of a chart.
The Syncfusion Charts control includes more than 30 chart types ranging from basic charts to financial charts. In this tutorial, I am going to use a bar chart to compare a sales target to actual sales, as shown in the following picture.
Target vs. Sale bar chart in a Xamarin.Forms app
Create Xamarin.Forms project
1. Open Visual Studio and create a new cross-platform project with the name ChartGettingStarted and choose .NET Standard as the code sharing strategy.
2. Create a model class named SalesInfo. It should represent a data point in a chart and should contain three properties to store year, sale, and target amounts.
public class SalesInfo
{
    public string Year { get; set; }
    public double Target { get; set; }
    public double Sale { get; set; }
}
3. Create a view model class named SalesViewModel and it should contain a list of SalesInfo objects in it.
public class SalesViewModel
{
    public List<SalesInfo> SalesData { get; set; }

    public SalesViewModel()
    {
        SalesData = new List<SalesInfo>();
        SalesData.Add(new SalesInfo { Year = "2014", Target = 500, Sale = 340 });
        SalesData.Add(new SalesInfo { Year = "2015", Target = 520, Sale = 390 });
        SalesData.Add(new SalesInfo { Year = "2016", Target = 560, Sale = 430 });
        SalesData.Add(new SalesInfo { Year = "2017", Target = 600, Sale = 520 });
        SalesData.Add(new SalesInfo { Year = "2018", Target = 600, Sale = 580 });
    }
}
Install NuGet package
Now, let’s add the SfChart NuGet from nuget.org:
1. Select Manage NuGet Packages for Solution in the context menu that appears when right-clicking the solution file in the Solution Explorer.
2. In the Browse tab, search for Syncfusion.Xamarin.SfChart in the search bar. This is the package that needs to be installed in all Xamarin.Forms projects.
Syncfusion SfChart NuGet reference installation
3. Click Install. You’ll need to read and accept the license to finish installing.
The SfChart NuGet package will now be installed in all your Xamarin.Forms projects.
Initialize the chart and axis
Now, let’s configure the chart on the main page of the project:
1. First, set the binding context to bind the data between the ViewModel and the View. The ViewModel, in this case, will be SalesViewModel.
2. Next, add the namespace of Syncfusion chart on this page to access the SfChart classes. The SfChart class is available inside the Syncfusion.SfChart.XForms namespace.
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:local="clr-namespace:ChartGettingStarted"
             xmlns:chart="clr-namespace:Syncfusion.SfChart.XForms;assembly=Syncfusion.SfChart.XForms"
             x:Class="ChartGettingStarted.MainPage">
    <ContentPage.BindingContext>
        <local:SalesViewModel/>
    </ContentPage.BindingContext>
</ContentPage>
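If you prefer wiring up the binding context in code-behind rather than XAML, an equivalent sketch (assuming the default MainPage partial class generated for this project) would be:

```csharp
using Xamarin.Forms;

namespace ChartGettingStarted
{
    public partial class MainPage : ContentPage
    {
        public MainPage()
        {
            InitializeComponent();

            // Equivalent to the <ContentPage.BindingContext> element in XAML
            BindingContext = new SalesViewModel();
        }
    }
}
```

Either approach makes the SalesData collection available to bindings on the page.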
3. Then, create an instance of SfChart using the alias name of the SfChart namespace we declared before.
4. Finally, define the x and y axes using the PrimaryAxis and SecondaryAxis properties:
a. The x-axis will show the years as a string, so set the CategoryAxis to the PrimaryAxis property.
b. The y-axis will show the amount, as it is a double value, so set a NumericalAxis to the SecondaryAxis property.
<chart:SfChart>
    <chart:SfChart.PrimaryAxis>
        <chart:CategoryAxis/>
    </chart:SfChart.PrimaryAxis>
    <chart:SfChart.SecondaryAxis>
        <chart:NumericalAxis/>
    </chart:SfChart.SecondaryAxis>
</chart:SfChart>
Since the series has not been added to the chart, the numerical axis is rendered with its default range of 0 to 1, and the category axis does not have any default labels.
Add series
Next, we need to add the series to visualize the data. As I am going to compare the target and sales values, two column series need to be added to the chart to visualize those values:
1. Add the first column series and bind the data from the ViewModel using the ItemsSource property. The SalesData property is declared in the ViewModel class.
2. The XBindingPath and YBindingPath properties are used to map the properties to fetch the values from the model object. Set the XBindingPath to Year and the YBindingPath to Target. These properties are declared in the model:

<chart:SfChart>
    ...
    <chart:SfChart.Series>
        <chart:ColumnSeries ItemsSource="{Binding SalesData}"
                            XBindingPath="Year"
                            YBindingPath="Target"/>
    </chart:SfChart.Series>
</chart:SfChart>
Let’s add one more column series to visualize the sales values:
1. Pull the sales data from the same binding context as before.
2. Set the XBindingPath to Year, and the YBindingPath to Sale for this series:

<chart:SfChart>
    ...
    <chart:SfChart.Series>
        <chart:ColumnSeries ItemsSource="{Binding SalesData}"
                            XBindingPath="Year"
                            YBindingPath="Target"/>
        <chart:ColumnSeries ItemsSource="{Binding SalesData}"
                            XBindingPath="Year"
                            YBindingPath="Sale"/>
    </chart:SfChart.Series>
</chart:SfChart>
Add a title and legend
Then, to make this chart more meaningful, let’s configure the title and legend to represent each series:
1. Set an instance of ChartTitle to the Title property of SfChart. The title will be Target vs Sale.
2. Enable the legend using the Legend property of SfChart. The empty legend icons are added to the chart with different colors identifying the series. But we need labels to tell us what the colors represent.
3. Configure the labels using the Label property in each of the column series. We’ll name the first series Target and second series Sale.
<chart:SfChart>
    <chart:SfChart.Title>
        <chart:ChartTitle Text="Target vs Sale"/>
    </chart:SfChart.Title>
    <chart:SfChart.Series>
        <chart:ColumnSeries ItemsSource="{Binding SalesData}"
                            XBindingPath="Year"
                            YBindingPath="Target"
                            Label="Target"/>
        <chart:ColumnSeries ItemsSource="{Binding SalesData}"
                            XBindingPath="Year"
                            YBindingPath="Sale"
                            Label="Sale"/>
    </chart:SfChart.Series>
    <chart:SfChart.Legend>
        <chart:ChartLegend/>
    </chart:SfChart.Legend>
</chart:SfChart>
Customize the axis
Now let’s customize the axis by setting the title and format of the labels. We’ll set the title as Year for the primary axis using the Title property of CategoryAxis.
<chart:SfChart>
    <chart:SfChart.PrimaryAxis>
        <chart:CategoryAxis>
            <chart:CategoryAxis.Title>
                <chart:ChartAxisTitle Text="Year"/>
            </chart:CategoryAxis.Title>
        </chart:CategoryAxis>
    </chart:SfChart.PrimaryAxis>
    <chart:SfChart.SecondaryAxis>
        <chart:NumericalAxis/>
    </chart:SfChart.SecondaryAxis>
</chart:SfChart>
Then, let’s format the labels of the y-axis to show the dollar ($) symbol and represent the numbers in terms of millions. We can use the LabelStyle property to customize the axis labels. Set an instance of ChartAxisLabelStyle to the LabelStyle property. Format the label with the dollar sign, three-digit values, and M symbols.
<chart:SfChart> <chart:SfChart.PrimaryAxis> <chart:CategoryAxis> <chart:CategoryAxis.Title> <chart:ChartAxisTitle </chart:CategoryAxis.Title> </chart:CategoryAxis> </chart:SfChart.PrimaryAxis> <chart:SfChart.SecondaryAxis> <chart:NumericalAxis> <chart:NumericalAxis.LabelStyle> <chart:ChartAxisLabelStyle </chart:NumericalAxis.LabelStyle> </chart:NumericalAxis> </chart:SfChart.SecondaryAxis> </chart:SfChart>
You can now run this on an Android phone to see the chart.
The chart in an Android app
Deploy for iOS
Now we want to run the same app in an iPhone simulator. Before running it on an iOS platform, though, we need to take one additional step in the iOS project to load the assemblies of the renderer projects:
1. In the Solution Explorer, go to the iOS project and open the App.Delegate.cs file.
2. Inside the FinishedLaunching method, and after invoking the Xamarin.Forms Init method, call the Init method of SfChartRenderer.
public override bool FinishedLaunching(UIApplication app, NSDictionary options) { ... global::Xamarin.Forms.Forms.Init(); Syncfusion.SfChart.XForms.iOS.Renderers.SfChartRenderer.Init(); LoadApplication(new App()); ... }
3. Then, set the iOS project as the startup project and deploy the application. The output will be similar to the output we got on Android.
Deploy for UWP
Lastly, we’ll deploy this app in the UWP platform to make sure that we are getting the same appearance of the chart there, too. We just set the UWP project as the startup project and run it in the local machine.
This is the actual output of Xamarin.Forms Charts in a UWP application.
The chart in a UWP app
I hope this post was helpful in getting you started with the Charts control in Xamarin.Forms. If you have any requests for our next tutorial, please share them in the comments section below. You can download this sample project from here.
If you like this post, we think you’ll also enjoy:
[Ebook] Xamarin.Forms Succinctly
[Blog] What’s New in 2019 Volume 1: Xamarin Highlights
[Blog] What’s New in Xamarin.Forms 4.0
nice one. thanks | https://www.syncfusion.com/blogs/post/10-minutes-to-get-started-with-xamarin-charts.aspx | CC-MAIN-2021-43 | refinedweb | 1,279 | 51.24 |
A 32-bit x86 CPU context (register state) carried in a minidump file. More...
#include "minidump/minidump_context.h"
A 32-bit x86 CPU context (register state) carried in a minidump file.
This is analogous to the
CONTEXT structure on Windows when targeting 32-bit x86, and the
WOW64_CONTEXT structure when targeting an x86-family CPU, either 32- or 64-bit. This structure is used instead of
CONTEXT or
WOW64_CONTEXT to make it available when targeting other architectures.
dr4or
dr5, which are obsolete and normally alias
dr6and
dr7, respectively. See Intel Software Developer’s Manual, Volume 3B: System Programming, Part 2 (253669-052), 17.2.2 “Debug Registers DR4 and DR5”.
A bitfield composed of values of MinidumpContextFlags and MinidumpContextX86Flags.
This field identifies the context structure as a 32-bit x86 CPU context, and indicates which other fields in the structure are valid. | https://crashpad.chromium.org/doxygen/structcrashpad_1_1MinidumpContextX86.html | CC-MAIN-2019-13 | refinedweb | 142 | 56.86 |
Commando is a Flash debugger that lets you change variables at runtime and run your own custom commands. It will allow you to try out whatever tweaks you want, without the hassle of changing your code and recompiling every time. This debugger also comes with its own memory monitor, and an output panel that is similar to the output dialog in the Flash IDE.
See Commando in Action
Why Use Commando?
Using Commando you can change your code at runtime. Let's pretend you are making a platformer game. You have a
jumpPower variable, but when testing your game you feel that the player can't jump high enough. So instead of going back and changing your code, you can just type
set jumpPower(25) in Commando and you can try out the new value.
Of course, this is just a simple demonstration; Commando can be extended even more. Just continue reading...
Configuration
First, download the ZIP file included with this article. Then, add the SWC file to your project's library path.
Once you have added the SWC to your project's library path, all you need are three lines of code to add an instance of Commando on the stage:
import com.pxlcoder.debug.Commando; var commando:Commando = new Commando(flash.ui.Keyboard.ENTER, this); addChild(commando);
Now press CTRL+ENTER (CMD+ENTER on a Mac), and you will see Commando up and running in your Flash project!
Explore
Commando comes with eight built-in functions. In this section I will explain what they are and how to use them.
Math
Using the Math function you can do addition, subtraction, multiplication and division between two numbers. The Math function can also calculate the square root of a number. For example, type
math 1+1 or
math sqrt(144) in the Commando dialog. The answer will show up in the output dialog.
Hide
You can use the Hide function to hide objects. You can type
hide monitor or
hide output to hide the two panels at the bottom. You can also use the Hide function with movieclips or buttons by simply typing
hide myInstanceName.
View
You can use the View function to view hidden objects. You can type
view monitor or
view output to show the two panels at the bottom. You can also use the View function with movieclips or buttons by simply typing
view myInstanceName. If any of your objects have their
visible property set to
false, typing
view myInstanceName will set it to true.
Set
Using the Set function you can set values of your variables or you can set properties of your objects. To use the Set function on variables type
set myVariable(myValue). To use the Set function on objects, type
set myInstanceName(myPropertyName,myValue).
Get
Using the Get function you can get the values of your variables and properties. To use the Get function type
get myVariable. You can also get properties by typing
get myInstanceName.myPropertyName.The values will show up in the output dialog.
Probe
Using the Rrobe function you can get the probe all of the properties of an object. To use the Probe function type:
probe myObjectInstanceName. The properties will be traced in the Flash IDE, rather than in the Commando output dialog.
Remove
You can use the Remove function to remove objects from the stage. To use the Remove function type
remove myInstanceName.
Add
You can use the Add function to add objects back on to the stage. To use the Add function type
add myInstanceName.
Note: Commando's built-in functions each evaluate a single string, so after you type your function name and press space, make sure to type your arguments without any spaces. Instead, type your arguments as one continuous word, with commas if necessary.
Extend
While Commando has many great built-in functions, you may want something more. To solve this problem, Commando comes with a function to add your own custom commands.
Here is a quick code example of how you can create your own custom commands:
import com.pxlcoder.debug.Commando; var commando:Commando = new Commando(flash.ui.Keyboard.ENTER, this); addChild(commando); commando.addCommand("output", outputFunction); //Sets the command keyword to "output" and calls the outputFunction below public function outputFunction(s:String):void { commando.output(s); //A call to Commando's built-in output dialog }
Now press CTRL+ENTER (CMD+ENTER on a Mac), to run your code. In the Commando dialog, type
output hello, and press Enter. The output dialog will now say hello!
You can also remove commands from Commando by using the
removeCommand() function.
import com.pxlcoder.debug.Commando; var commando:Commando = new Commando(flash.ui.Keyboard.ENTER, this); addChild(commando); commando.removeCommand("output");
Recap: Commando has three functions that you can access;
addCommand(),
output() and
removeCommand().
Conclusion
At the end of the day, debugging is the most important part in the development process. Commando has everything you could ever ask for in a debugger. You can use it for anything and everything.
If you're a Tuts+ Premium member, you can download the source files for Commando - just log in and head to the Source File page.
Any questions, comments or concerns? Feel free to get in touch in the comments.
Take control of your Flash projects!
Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
| https://code.tutsplus.com/tutorials/free-flash-debugger-commando-with-premium-source-files--active-10775 | CC-MAIN-2018-17 | refinedweb | 900 | 58.58 |
Closed Bug 154931 Opened 19 years ago Closed 19 years ago
Make Math
ML handle the <semantics> tag implicitly
Categories
(Core :: MathML, defect)
Tracking
()
mozilla1.3final
People
(Reporter: jasonh, Assigned: rbs)
Details
Attachments
(3 files)
From Bugzilla Helper: User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.1a+) Gecko/20020628 BuildID: 2002062808 The two main generators of MathML generate MathML with both presentation and content. Unfortunately due to XSLT issues it is not always viable to use style sheets to transform the content away. Thus, it is vital for the support of MathML in Mozilla that Mozilla handles the content tags implicitly. A good stopgap measure might be just to ignore all other annotations besides the primary child. This would make Mozilla work with both Mathematica and Maple since by default both of these MathML generators put presentation first. Indeed the majority of users who perform computations with MathML highly desire the ability to have both presentation and content mathml in the same mathML fragment. Here is an example of a simple MathML fragment which is causing problems: <math xmlns=''> <semantics> <mrow> <msup> <mi>x</mi> <mn>2</mn> </msup> <mo>+</mo> <mi>y</mi> </mrow> <annotation-xml <apply> <plus/> <apply> <power/> <ci>x</ci> <cn type='integer'>2</cn> </apply> <ci>y</ci> </apply> </annotation-xml> </semantics> </math> Reproducible: Always Steps to Reproduce: 1.Create a text file containing the above MathML 2.Open the text file in Mozilla Actual Results: You will see the presentation renders correctly, but it will be followed by garbage semantics. Expected Results: You should see the presentation renders correctly, but the semantics should not appear at all.
I am sending this comment from a public booth at the Interntaional MathML Conference in Chicago where Mathematica & Maple folks have been hammering me with this issue. A stop-gap fix is simply to add the following style rule in mathml.css: annotation-xml[encoding='MathML-Content'] { display: none; } roc/waterson/asa, care to r/sr/a, thanks. hixie, would be so kind to checkin the fix too, if approved? As you might imagine, I don't have all the setup to pursue the fix in the proper channels from this end, but I am hoping that the one-liner can still make it before the final build for Netscape 7.0 is pulled -- which is really why Wolfram & Maple have been hammering me with the issue here.
Status: UNCONFIRMED → NEW
Ever confirmed: true
Keywords: mozilla1.0.1
Summary: Not handling semantics tags implicitly → [PATCH] Not handling semantics tags implicitly
Target Milestone: --- → mozilla1.0.1
That style rule should really be in a namespace. Otherwise it could slow things down. What namespace should it be in?
mathml.css has a default namespace declared
erm, i was about to make a patch for this, when i found that attachment 89658 [details] already works fine. I can't see why. The annotation-xml node is display:none but there are no corresponding rules. In principle, r=hixie, but I think something odd may be going on here.
Oh, I think actually that may just have been a bug in the DOM inspector. I think I changed the mathml.css file before loading any MathML content, and since the exported mathml.css file is just a symlink to the one I was editing, it took effect. Or something. Anyway, r=hixie on rbs' patch: Index: mathml.css =================================================================== RCS file: /cvsroot/mozilla/layout/mathml/content/src/mathml.css,v retrieving revision 1.14 diff -u -r1.14 mathml.css --- mathml.css 27 Feb 2002 01:35:24 -0000 1.14 +++ mathml.css 29 Jun 2002 22:57:23 -0000 @@ -407,3 +407,10 @@ } :-moz-math-font-style-anonymous { } + +/**********************************************************************/ +/* Hide embedded semantic MathML content (as opposed to presentational + content, which we render). */ +annotation-xml[encoding="MathML-Content"] { + display: none; +}
r=dbaron too, then.
sr=roc+moz
checked in to trunk
please checkin to the 1.0.1 branch. once there, remove the "mozilla1.0.1+" keyword and add the "fixed1.0.1" keyword.
Keywords: mozilla1.0.1 → mozilla1.0.1+
checked in to branch - marking fixed1.0.1
Status: NEW → RESOLVED
Closed: 19 years ago
Keywords: mozilla1.0.1+ → fixed1.0.1
Resolution: --- → FIXED
Summary: [PATCH] Not handling semantics tags implicitly → Not handling semantics tags implicitly
Summary: Not handling semantics tags implicitly → Make MathML handle the <semantics> tag implicitly
re-opening due to the following comment: From: David Carlisle (davidc@nag.co.uk) Subject: rendering of semantics elements in mozilla. Newsgroups: netscape.public.mozilla.mathml Date: 2003-02-03 03:55:20 PST As Mozilla doesn't render Content MathML it would be nice to be able to combine Content expressions together with Presentation MathML (and other things) in a semantics element, and have mozilla ignore everything except the presentation mathml. In addition to Content mathml, editors and computer algebra systems often use the semantics element to annotate an expression with processor-specific information (which is the intended use of this element). All a browser has to do is ignore the annotations. there was/is a bugzilla report for this which is marked as fixed however the css rule added at that time appears to be: /**********************************************************************/ /* Hide embedded semantic MathML content (as opposed to presentational content, which we render). */ annotation-xml[ render that annotation, and ignore the first child of the semantics element and all other annotations, else render the first child of semantics and ignore all annotations I don't think you can express this rule purely in css though. The simpler rule of just ignoring all annotations is still conformant with the specification and would cover most existing cases that are causing problems. (The XSLT stylesheet for MathML implements the more complete rule, as well as doing content-presentation transformation, for cases when the full functionality is needed). 
For example, asking maple to output the expression cos x as mathml produces <math xmlns=''> <semantics> <mrow xref='id3'> <mi xref='id1'>cos</mi> <mo></mo> <mfenced><mi xref='id2'>x</mi></mfenced> </mrow> <annotation-xml <apply id='id3'><cos id='id1'/><ci id='id2'>x</ci></apply> </annotation-xml> <annotation encoding='Maple'> cos(x) </annotation> </semantics> </math> which renders as cos(x)cos(x) in mozilla 1.1 (which I suppose is a bit old, but I believe newer mozilla are the same?) David
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Using Opera? I was bitten once... Resending with a more capable browser that knows how to wrap :-) ------- Additional Comment #13 From Jason Harris 2003-02-04 02:57 -------
So, the ideal is: if there is an <annotation-XML render that annotation, and ignore the first child of the semantics element and all other annotations, else render the first child of semantics and ignore all annotations I applied the patch on bug 135141 thinking that it might help. But there doesn't seem a way to get the desired effect -- or my CSS is a bit rusty to get there... As a stop-gap, we can either have: annotation { display: none; } annotation-xml { display: none; } or as suggested, just render the first child: semantics > :not(:first-child) { display: none; }
Comment on attachment 113534 [details] [diff] [review] stop-gap patch re-requesting r/sr
Attachment #113534 - Flags: superreview?(roc+moz)
Attachment #113534 - Flags: review?(dbaron)
The reason why the patch on bug 135141 didn't help is because 'adjacent' (CSS operator '~') is interpred as 'preceded by'. Since :first-child is preceded by nothing, there was no way to conditionally select it. If 'adjacent' really meant what its name suggests, CSS could have done the trick: semantics > :not(:first-child) { display: none; } semantics > annotation-xml[encoding="MathML-Presentation"] { display: inline; } annotation-xml[encoding="MathML-Presentation"] ~ semantics:first-child { display: none; }
Oops, s/ semantics:first-child / semantics > :first-child / With that correction, not sure if the CSS is still valid though.
Yeah, CSS can't do what you're describing. Should be quite easy to do at the C++ level though...
Would have been nice if '~' means 'sibling'.
Or rather if there was a 'sibling' operator, rather than just the 'preceded by' operator.
I've proposed :matches(), which is a superset of that, several times. It's very well defined, but has so far been rejected because it would be a MASSIVE performance hit.
Comment on attachment 113534 [details] [diff] [review] stop-gap patch Seeking a= on this simple patch to apply some CSS rules to the <semantics> tag so that a certain behavior is emulated.
Attachment #113534 - Flags: approval1.3b?
Comment on attachment 113534 [details] [diff] [review] stop-gap patch This is going to have to wait 'till we open for final. I think we've got what we need for beta and until we get the beta released we can't take further non-critical changes.
Attachment #113534 - Flags: approval1.3b? → approval1.3b-
>I've proposed :matches(), which is a superset of that, several times. It's very >well defined, but has so far been rejected because it would be a MASSIVE >performance hit. Got a reference? I am not that thrilled to add some special C++ for this <semantics>. Maybe a :-moz-matches() can be of wide use, here and elsewhere? XBL isn't particularly fast, yet it avoids repetition, thereby cutting the bloat.
Comment on attachment 113534 [details] [diff] [review] stop-gap patch per asa's comment 26, migrating request for a=1.3final.
Attachment #113534 - Flags: approval1.3?
Comment on attachment 113534 [details] [diff] [review] stop-gap patch a=asa (on behalf of drivers) for checkin to 1.3 final.
Attachment #113534 - Flags: approval1.3? → approval1.3+
Checked in mozilla1.3 final.
Status: REOPENED → RESOLVED
Closed: 19 years ago → 19 years ago
Resolution: --- → FIXED
Target Milestone: mozilla1.0.1 → mozilla1.3final | https://bugzilla.mozilla.org/show_bug.cgi?id=154931 | CC-MAIN-2021-31 | refinedweb | 1,627 | 55.03 |
The source code to life is the code written by God for the universe. Life would be so much simpler if we had the source code. I would settle for simply the API or even just some documentation. God having compiled and shipped the universe not under GPL we don’t have it. Attempts to reverse engineer, or decompile the code have to date been unsuccessful. Attempts to reverse engineer have been known to be stuck down by plague. I’m pretty sure there is no “thou shall not reverse engineer the universe in the license agreement though this could be the reason for their failure.
I've seen the source code it is
PRINT "Hello World!"
SLEEP 5
CLS
/* life.cpp
last revised: I B.C.
by God
released under version 1.0 of the DPL (Divine Public License) */
#include <stdio.h>
#include <math.h>
#include "divine_word.h"
#include "life.h"
//#include "meaning.h" // weird. it will not compile cleanly with this here.
// oh well. implement in version 2
int main(int argc, char *argv[])
{
Species s;
Gender g;
Name n;
NigglingBits nb;
int sp;
divine_word::parseArgs(argc, *argv[], *s, *g, *n ,*nb, *sp);
sp = salloc() // pointer to the soul
life::Being(sp, g, n, nb, *s) // constructor for Being
life::Being.childhood; // whee!
life::Being.adolescence; // angst. lots of angst.
life::Being.adulthood; // doin' the 9 to 5
life::Being.senescence; // kid! get me my Depends, for chrissakes!
life::~Being(life::deathMethod[rand() * MAXINT]);
// destructor. note that there are 32767 ways to die, in a 16-bit universe.
// deallocation of the soul occurs here.
exit(0); // that's all. nothing to see here.
// nota bene: hopefully there have been no soul leaks.
// remember to debug this with ElectricFence Purgatorio
}
Log in or register to write something here or to contact authors.
Need help? accounthelp@everything2.com | https://everything2.com/title/Source+Code+to+Life | CC-MAIN-2021-17 | refinedweb | 308 | 78.45 |
Dependency injection means giving an object its instance variables. Really. That’s it.
The best definition I’ve found so far is one by James Shore:
“Dependency Injection” is a 25-dollar term for a 5-cent concept. […] Dependency injection means giving an object its instance variables. […].
He explains it very well:.
Part I: Dependency Non-Injection
Classes have these things they call methods on. Let’s call those “dependencies.” Most people call them “variables.” Sometimes, when they’re feeling fancy, they call them “instance variables.”
public class Example { private DatabaseThingie myDatabase; public Example() { myDatabase = new DatabaseThingie(); } public void DoStuff() { ... myDatabase.GetData(); ... } }
Here, we have a variable… uh, dependency… named “myDatabase.” We initialize it in the constructor.
Part II: Dependency Injection
If we wanted to, we could pass the variable into the constructor. That would “inject” the “dependency” into the class. Now when we use the variable (dependency), we use the object that we were given rather than the one we created. really all there is to it. The rest is just variations on the theme. You could set the dependency (<cough> variable) in… wait for it… a setter method. You could set the dependency by calling a setter method that’s defined in a special interface. You can have the dependency be an interface and then polymorphically pass in some polyjuice. Whatever.
Part III: Why Do We Do This?
Among other things, it’s handy for isolating classes during testing.
public class ExampleTest { TestDoStuff() { MockDatabase mockDatabase = new MockDatabase(); // MockDatabase is a subclass of DatabaseThingie, so we can // "inject" it here: Example example = new Example(mockDatabase); example.DoStuff(); mockDatabase.AssertGetDataWasCalled(); } } it. Dependency injection is really just passing in an instance variable.
Or dependency injection can be understood as follows: an injection by the framework.
Answer #2:
Dependency Injection is passing dependency to other objects or framework( dependency injector).
Dependency injection makes testing easier. The injection can be done through constructor.
SomeClass() has its constructor as following:
public SomeClass() { myObject = Factory.getObject(); }
Problem: If
myObject involves complex tasks such as disk access or network access, it is hard to do unit test on
SomeClass(). Programmers have to mock
myObject and might intercept the factory call.
Alternative solution:
- Passing
myObjectin as an argument to the constructor
public SomeClass (MyClass myObject) { this.myObject = myObject; }
myObject can be passed directly which makes testing easier.
- One common alternative is defining a do-nothing constructor. Dependency injection can be done through setters.
- Martin Fowler documents a third alternative, where classes explicitly implement an interface for the dependencies programmers wish injected.
It is harder to isolate components in unit testing without dependency injection.
In 2013, when I wrote this answer, this was a major theme on the Google Testing Blog. It remains the biggest advantage to me, as programmers do not always need the extra flexibility in their run-time design (for instance, for service locator or similar patterns). Programmers often need to isolate the classes during testing.
Answer #3:(); //The does the
Dependency Injection do for us…?. We normally rely on DI frameworks such as Spring, Guice, Weld to create the dependencies and inject where needed.; } }
The advantages are:
- decoupling the creation of object (in other word, separate usage from the creation of object)
- ability to replace dependencies (eg: Wheel, Battery) without changing the class that uses it(Car)
- promotes “Code to interface not to implementation” principle
- ability to create and use mock dependency during test (if we want to use a Mock of Wheel during test instead of a real instance.. we can create Mock Wheel object and let DI framework inject to Car)
For example, consider these classes: – another explanation:
The first answer is a good one – but I would like to add to this that DI is very much like the classic avoiding of hardcoded constants in the code.
When you use some constant like a database name you’d quickly move it from the inside of the code to some config file and pass a variable containing that value to the place where it is needed. The reason to do that is that these constants usually change more frequently than the rest of the code. For example, if you’d like to test the code in a test database.
DI is analogous to this in the world of Object-Oriented programming. The values there instead of constant literals are whole objects – but the reason to move the code creating them out from the class code is similar – the objects change more frequently than the code that uses them. One important case where such a change is needed tests.
Hope you learned something from this post.
Follow Programming Articles for more! | https://programming-articles.com/what-is-dependency-injection/ | CC-MAIN-2022-40 | refinedweb | 778 | 56.45 |
How can I use makefiles with Java?
Created May 4, 2012
There is no simple answer to this question, and there are even conflicting answers. Here's what several jGuru community members have to offer on the subject:
According to Sandip Chitale:
- Standard 'make' is based on comparison of timestamps of the source and the target.
- In the case of Java, the source is a .java file and the target is the .class file. For various reasons there is not necessarily a one-to-one correspondence between .java and .class file. The reasons are:
- A single .java file may contain more than one class/interface in it. Each of those will get it's own .class file. For the public class, the name of the .java file has to match the name of the class. Therefore the .class file name for the top-level public class will match the .java file name. However for non-public classes, the name of the .class file may be completely different than name of .java file. For inner classes of the top-level public class, the name of the .class file will at least have the same prefix as the .java file (more than likely followed by '$' followed by the inner class name, anonymous or otherwise). For the inner classes of non-public top-level classes, the name of the .class file will have the prefix of the name of the outer class and not .java file.
- The 'package' statement also affects the place where the .class files are located. Thus it is impossible to predict the names and locations of .class files without parsing the .java file.
For the above reasons it is not possible to write a generic 'make' rule like the following:
.c.o:
cc -o @*.o @* or something like that
or even a one time make file so that it will always work.
Contrary to Sandip, according to Greg Brouelette:
To build make files for Java should be to buy the excellent O'Reilly book Managing Projects with make. You may also benefit from the O'Reilly book Learning the bash shell.
With tools in hand you're ready to build your development environment. For the sake of this example let's assume that our project is code named "Gumball".
I would have a Gumball directory with 3 sub-directories: com, lib, and minclude (for "make include"). Your source "com tree" starts under the com directory. This is where your version control manager places any code that you check out.
When you compile your code, the class files go into a "com tree" under Gumball/lib. As long as the Gumball/lib directory is in your classpath you'll have access to all of your compiled code.
Finally there is the minclude directory. I like to keep a minimal amount of work in my makefiles. I put all my rules and targets into a Rules.mk file that I keep in the minclude directory. That way, a change in my Rules.mk file will properly effect all files in my project.
OK, I've mentioned the "com tree" twice. What's a "com tree"? If you're using Java packages properly, then the suggested method of naming your packages is to reverse your domain name and make a directory structure out of it.
For example:
Assume that your domain name is jguru.com. If you had 3 directories of source code call gui, util, and app then you would have these package names:
com.jguru.gui com.jguru.util com.jguru.app
Since package names are related to the actual directory your code is in then your directory structure should look like this:
com | jguru | gui | util | app
Since you have a directory tree that spawns from the top level of "com" it's called the "com tree". The com tree that's under the src directory will contain all your Java files. The com tree that's under your lib directory (your make files will create this com tree) will contain all your class files. If you ever need to completely clean out all the class files then you simply delete the com tree under the lib directory and re-make it (although I generally have a target called "clean" to do that).
MakeFiles:
My make file (which is actually named "makefile") in my Gumball directory looks like this:
TOP = .
DIR = Gumball
SUBDIRS = com

include $(TOP)/minclude/Rules.mk
Notice that I'm including my rules file so my actual makefile doesn't have any rules in it. Also notice that TOP is equal to the dot meaning "this is the top directory".
In the Gumball/com directory my makefile looks like this:
TOP = ..
DIR = com
SUBDIRS = jguru

include $(TOP)/minclude/Rules.mk
DIR changed to show the current directory. SUBDIRS changed to show what directories are below it, and TOP changed to ".." to show that we're now one directory down from the top.
Let's keep going. The makefile in the Gumball/com/jguru directory looks like this:
TOP = ../..
DIR = com/jguru
SUBDIRS = util gui app

include $(TOP)/minclude/Rules.mk
Do you see the pattern? I should mention that the order of your subdirectories is the order in which they are compiled. So if gui has a dependency on a class in the util directory then the util directory must be compiled first (as it is here in this example).
Lastly, let's look at one of the leaf directories in your tree (the util, gui, and app directories will all have similar makefiles).
TOP = ../../..
DIR = com/jguru/util
SUBDIRS = NULL

JAVA_SRCS = AUtilClass.java \
	ADifferentUtilClass.java \
	ASortableVector.java \
	AGridBagPanel.java

include $(TOP)/minclude/Rules.mk
We have a new entry called JAVA_SRCS. This contains the list of Java source files in this directory. Notice that there is a backslash after each one except for the last one. This is a line continuation, which allows us to list multiple files to compile.
Also notice that the SUBDIRS is equals to NULL. We use this to tell our Rules.mk file to stop recursing the directories.
As you can see, it's pretty simple so far. That's because all the work is in the Rules.mk file. Here's a copy of that beast:
# A line that starts with a # is a comment
SRCDIR = .

# Point this to wherever your java home directory is
# In Unix/Linux it might be something like /usr/java1.2.2
JAVA_HOME = c:/jdk1.2.2

JAVAC = $(JAVA_HOME)/bin/javac
JAVAH = $(JAVA_HOME)/bin/javah
JAVADOC = $(JAVA_HOME)/bin/javadoc

CLASS_DIR = $(TOP)/lib
CP = "$(CLASSPATH):$(CLASS_DIR):."

RM = rm -f

# New suffixes
.SUFFIXES: .java .class .h

# Temp file for list of files to compile
COMPILEME = .compileme$(USER)
COPYME = .copyme$(USER)

CURRENTDIR = .
JAR_DIR = $(CLASS_DIR)
OUT_DIR = $(CLASS_DIR)/$(DIR)

JFLAGS = -g -deprecation -d $(CLASS_DIR) -classpath $(CP)
RMICFLAGS = -g -d $(CLASS_DIR)
JARFLAGS = cfm

# ':=' (simple expansion) so JAVAH can safely reference its own earlier value
JAVAH := $(JAVAH) -jni
RMIC = rmic $(RMICFLAGS)
JC = $(JAVAC) $(JFLAGS)

PACKAGE = $(subst /,.,$(DIR))

JAVA_OBJS = $(JAVA_SRCS:%.java=$(OUT_DIR)/%.class)
RMI_OBJS = $(RMI_SRCS:%.java=$(OUT_DIR)/%.class)
STUB_OBJS = $(RMI_OBJS:%.class=%_Stub.class)
SKEL_OBJS = $(RMI_OBJS:%.class=%_Skel.class)
H_FILES = $(JNI_SRCS:%.java=$(JNI_DIR)/%.h)

# Notice that each command line that starts with an @ is ONE LONG LINE
# It may wrap when printed, but it must stay a single line in the makefile

# Walk down the SUBDIRS first
all::
	@echo "subdirs is " $(SUBDIRS); if test "$(SUBDIRS)" != "NULL" ; then for i in $(SUBDIRS) ; do (cd $$i ; echo "making" all "in $(CURRENTDIR)/$$i"; $(MAKE) CURRENTDIR=$(CURRENTDIR)/$$i all); done ; fi

# Then compile each file in each subdir
all:: $(JAVA_OBJS) $(RMI_OBJS) $(JNI_OBJS)
	@if test -r ${COMPILEME}; then CFF=`cat ${COMPILEME}`; fi; $(RM) ${COMPILEME}; if test "$${CFF}" != ""; then echo $(JC) $${CFF}; fi; if test "$${CFF}" != ""; then $(JC) $${CFF}; fi
	@$(RM) ${COMPILEME}

# "make clean" will delete all your class files to start fresh
clean::
	$(RM) $(OUT_DIR)/*.class *~ $(COMPILEME)
	$(RM) $(OUT_DIR)/*.gif *~ $(COMPILEME)
	$(RM) $(OUT_DIR)/*.jpg *~ $(COMPILEME)
	$(RM) $(OUT_DIR)/*.py *~ $(COMPILEME)

# SUBDIRS
clean::
	@echo "2nd check: subdirs is " $(SUBDIRS); if test "$(SUBDIRS)" != "NULL"; then echo "Past the 2nd if then"; for i in $(SUBDIRS) ; do (cd $$i ; echo "making" clean "in $(CURRENTDIR)/$$i"; $(MAKE) CURRENTDIR=$(CURRENTDIR)/$$i clean); done ; fi

clean::
	@if [ "$(H_FILES)" != "/" ] && [ "$(H_FILES)" != "" ]; then echo $(RM) $(H_FILES); $(RM) $(H_FILES); fi
	@if [ "$(RMI_OBJS)" != "/" ] && [ "$(RMI_OBJS)" != "" ]; then echo $(RM) $(RMI_OBJS); $(RM) $(RMI_OBJS); fi
	@if [ "$(STUB_OBJS)" != "/" ] && [ "$(STUB_OBJS)" != "" ]; then echo $(RM) $(STUB_OBJS); $(RM) $(STUB_OBJS); fi
	@if [ "$(SKEL_OBJS)" != "/" ] && [ "$(SKEL_OBJS)" != "" ]; then echo $(RM) $(SKEL_OBJS); $(RM) $(SKEL_OBJS); fi

all:: $(STUB_OBJS) $(SKEL_OBJS) $(H_FILES)

### Rules

# .java -> .class rule: add the out-of-date file to the list
$(OUT_DIR)/%.class : %.java
	@echo $< >> $(COMPILEME)

# Rule for compiling Stub/Skel files
$(OUT_DIR)/%_Skel.class :: $(OUT_DIR)/%_Stub.class
$(OUT_DIR)/%_Stub.class :: %.java
	$(RMIC) $(PACKAGE).$(notdir $(basename $(<F)))

# Rule for compiling a .h file
$(JNI_DIR)/%.h : %.java
	$(JAVAH) -o $@ $(PACKAGE).$(notdir $(basename $<))
OK, I'm not going to explain every line; that's what the O'Reilly book is for. Essentially, this file tells the make utility where the javac program is (which means you can compile using different versions of Java just by changing the JAVA_HOME line). It then walks down all the subdirectories and compiles each Java file, putting the resulting class file into the com tree under the lib directory.
However, because of the magic of the make utility, it will only compile the Java files which have changed since the last time you ran make. If you want to force a recompile, you can either run "make clean" in the directory you want to re-make, or use the Unix "touch" utility to update the date-time stamp on a particular Java file.
This is only an example of the many ways you can use the make utility. With a little additional work you could add a target for "make jar", which could automatically build your jar file; you could add a "make doc", which would run javadoc on all your source code; and so on.
The make utility is quite a useful tool.
According to Sandip Chitale, Finlay McWalter:
'javac' includes implicit compilation of direct dependencies. More extensive support for compiling indirect dependencies is also provided through the '-depend' (JDK1.1) or '-Xdepend' (JDK1.2) 'javac' flags.
Many members also provided references to available articles or tools:
According to Bogdan Ghidireac, Robert Castaneda, Didier Trosset:
Ant is a Java-based build tool from the Apache project:
According to Greg Brouelette:
Cygwin is a Windows-based Unix-like environment and toolset:
According to Davanum Srinivas, [the linked article] seems to discuss something similar to Greg's comments above.
I want to add the smallest possible value of a float to a float. So, for example, I tried doing this to get 1.0 + the smallest possible float:
float result = 1.0f + std::numeric_limits<float>::min();
But after doing that, I get the following results:
(result > 1.0f) == false
(result == 1.0f) == true
I’m using Visual Studio 2015. Why does this happen? What can I do to get around it?
If you want the next representable value after 1, there is a function for that called
std::nextafter, from the
<cmath> header.
float result = std::nextafter(1.0f, 2.0f);
It returns the next representable value starting from the first argument in the direction of the second argument. So if you wanted to find the next value below 1, you could do this:
float result = std::nextafter(1.0f, 0.0f);
Adding the smallest positive representable value to 1 doesn’t work because the difference between 1 and the next representable value is greater than the difference between 0 and the next representable value.
The “problem” you’re observing is because of the very nature of floating point arithmetic.
In FP the precision depends on the scale; around the value 1.0 the precision is not enough to be able to differentiate between 1.0 and 1.0+min_representable, where min_representable is the smallest possible value greater than zero (even if we only consider the smallest normalized number, std::numeric_limits<float>::min(); the smallest denormal is another few orders of magnitude smaller).
For example, with double-precision 64-bit IEEE754 floating point numbers, around the scale of x = 10000000000000000 (10^16) it's impossible to distinguish between x and x+1.
The fact that the resolution changes with scale is the very reason for the name "floating point": the decimal point "floats". A fixed-point representation, instead, has a fixed resolution (for example, with 16 binary digits below units you have a precision of 1/65536 ~ 0.00001).
For example, in the IEEE754 32-bit floating point format there is one bit for the sign, 8 bits for the exponent and 23 bits for the mantissa.
The smallest value eps such that 1.0f + eps != 1.0f is available as a pre-defined constant, as FLT_EPSILON or std::numeric_limits<float>::epsilon(). See also machine epsilon on Wikipedia, which discusses how epsilon relates to rounding errors.
I.e. epsilon is the smallest value that does what you were expecting here, making a difference when added to 1.0.
The more general version of this (for numbers other than 1.0) is called 1 unit in the last place (of the mantissa). See Wikipedia’s ULP article.
min is the smallest non-zero value that a (normalized-form) float can assume, i.e. something around 2^-126 (-126 is the minimum allowed exponent for a float); now, if you sum it to 1 you'll still get 1, since a float has just 23 bits of mantissa, so such a small change cannot be represented in such a "big" number (you would need a 126-bit mantissa to see a change summing 2^-126 to 1).
The minimum possible change to 1, instead, is epsilon (the so-called machine epsilon), which is in fact 2^-23, as it affects the last bit of the mantissa.
To increment/decrement a floating point value by the smallest possible amount, use nextafter towards +/- infinity(). If you just use nextafter(x, std::numeric_limits<T>::max()), the result will be wrong in case x is infinity.
#include <iostream>
#include <limits>
#include <cmath>

template<typename T>
T next_above(const T& v) {
    return std::nextafter(v, std::numeric_limits<T>::infinity());
}

template<typename T>
T next_below(const T& v) {
    return std::nextafter(v, -std::numeric_limits<T>::infinity());
}

int main() {
    std::cout << next_below(1.0) - 1.0 << std::endl; // gives ~ -eps/2
    std::cout << next_above(1.0) - 1.0 << std::endl; // gives eps

    // Note:
    std::cout << std::nextafter(std::numeric_limits<double>::infinity(),
                                std::numeric_limits<double>::infinity()) << std::endl; // gives inf
    std::cout << std::nextafter(std::numeric_limits<double>::infinity(),
                                std::numeric_limits<double>::max()) << std::endl;      // gives 1.79769e+308
}
Hi all,

The "scrap your boilerplate with class" system [1] has two big advantages over the plain SYB system from Data.Generics, IMHO: one, it lets you declare an 'open' generic function as a type class, to which new cases can be added by adding new instances (emphasized in the paper); and two, it lets you write recursive functions that require other type class constraints in addition to Data (not emphasized in the paper, but something I've frequently found myself wanting with Data.Generics).

[1]

However, when trying to convert the codebase I'm working on to SYB-with-class, I've found that the type proxies and explicit dictionaries used to simulate type class abstraction over type classes are... annoying. Today, I've hit on an alternative approach to implementing SYB-with-class (YAGS, yet another generics scheme...), with less boilerplate per generic function. The approach may or may not be new (I haven't studied *all* of the generics proposals out there yet); in any case, it shares the use of type-level functions with Smash Your Boilerplate, and it uses the same underlying gfoldl operator as SYB, but implements it in a quite different way. I believe that the equivalent of everywhere, mkT and friends can be implemented as type-level functions in this framework, but I haven't actually tried it yet.

This mail is a literate script demonstrating the approach. I'm hoping to get some feedback on the idea. :)

On to the code:

> {-# OPTIONS_GHC -fglasgow-exts -fallow-overlapping-instances
>                 -fallow-undecidable-instances #-}

Yup, we need it all... I'll start with three example generic functions.

* 'size' calculates the number of constructors in a term, except for lists, for which it returns one plus the sum of the element sizes.
* 'inc' increases all Ints in a term by one.
* 'prints' prints out each subterm of a term on its own line, except for strings, for which it prints the string, but not its subterms.
Thus, the following code:

> test = ("Hello", 7::Int, [2,3::Int])
> main = do print (size test); print (inc test)
>           putStrLn ""; prints test; return ()

prints this:

----------------------------------------------------------------------
11
("Hello",8,[3,4])

("Hello",7,[2,3])
"Hello"
7
[2,3]
2
[3]
3
[]
----------------------------------------------------------------------

Here is the 'size' function:

> class Size a where size :: a -> Int
>
> data SizeF = SizeF
> instance Size a => Apply SizeF a Int where apply _ = size
>
> instance Size a => Size [a] where size xs = 1 + sum (map size xs)
> instance Apply (GMapQ SizeF) a [Int] => Size a where
>   size x = 1 + sum (gmapQ SizeF x)

The constraint (Apply f x r) means that 'f' is a type-level function that, when applied to 'x,' returns 'r':

> class Apply f x r | f x -> r where apply :: f -> x -> r

Here is the 'inc' function:

> class Inc a where inc :: a -> a
>
> data IncF = IncF
> instance Inc a => Apply IncF a a where apply _ = inc
>
> instance Inc Int where inc = (+1)
> instance Apply (GMapT IncF) a a => Inc a where inc = gmapT IncF

And here is the 'prints' function; for illustration, the implementation is in a slightly different style, which does without the declaration of a new type class:

> data PrintsF = PrintsF; prints x = apply PrintsF x
> instance Apply PrintsF String (IO String) where
>   apply _ x = print x >> return x
> instance (Show a, Apply (GMapM PrintsF) a (IO a)) =>
>          Apply PrintsF a (IO a) where
>   apply f x = print x >> gmapM f x

Note the 'Show' constraint: 'prints' can only be applied to values all of whose subterms implement 'Show.' This is the kind of constraint you can't have with the standard, not-with-class SYB code.

So much for the demo code; now, onwards to the actual library.
The core consists of the following three type classes:

> class Constr x f where constr :: x -> a -> f a
> class Param x p f where param :: x -> f (p -> a) -> p -> f a
>
> class GFoldl x a f where gfoldl :: x -> a -> f a

Together, these classes form the equivalent of the standard SYB's 'gfoldl' method. (I'm ignoring the rest of the Data class at this time, but I believe that it could be implemented in a similar fashion.)

* 'Constr' and 'Param' correspond to the first and second argument of the standard SYB's gfoldl.
* The parameter 'x' specifies the type of fold to perform (GMapQ, GMapT and GMapM in the present module).
* We give an instance of 'Constr' and 'Param' for each type of fold. We give an instance of 'GFoldl' for each type we want to fold over.

Here are the instances of GFoldl:

> instance Constr x f => GFoldl x () f where gfoldl = constr
> instance Constr x f => GFoldl x Char f where gfoldl = constr
> instance Constr x f => GFoldl x Int f where gfoldl = constr
>
> instance (Constr x f, Param x a f, Param x [a] f) => GFoldl x [a] f where
>   gfoldl x [] = constr x []
>   gfoldl x (y:ys) = constr x (:) `p` y `p` ys where
>     p a b = param x a b
>
> instance (Constr x f, Param x a f, Param x b f, Param x c f) =>
>          GFoldl x (a,b,c) f where
>   gfoldl x (a,b,c) = constr x (,,) `p` a `p` b `p` c where
>     p a b = param x a b

What remains is the code for GMapQ, GMapT and GMapM:

> newtype GMapQ f = GMapQ f; gmapQ f = apply (GMapQ f)
>
> newtype K a b = K { fromK :: a }
>
> instance GFoldl (GMapQ f) a (K [r]) => Apply (GMapQ f) a [r] where
>   apply (GMapQ f) x = reverse $ fromK $ gfoldl (GMapQ f) x
>
> instance Constr (GMapQ f) (K [r]) where constr _ _ = K []
> instance Apply f a r => Param (GMapQ f) a (K [r]) where
>   param (GMapQ f) (K xs) x = K (apply f x : xs)

> newtype GMapT f = GMapT f; gmapT f = apply (GMapT f)
>
> newtype I a = I { fromI :: a }
>
> instance GFoldl (GMapT f) a I => Apply (GMapT f) a a where
>   apply (GMapT f) x = fromI $ gfoldl (GMapT f) x
>
> instance Constr (GMapT f) I where constr _ = I
> instance Apply f a a => Param (GMapT f) a I where
>   param (GMapT f) (I x) y = I (x (apply f y))

> newtype GMapM f = GMapM f; gmapM f = apply (GMapM f)
>
> instance (Monad m, GFoldl (GMapM f) a m) => Apply (GMapM f) a (m a) where
>   apply (GMapM f) x = gfoldl (GMapM f) x
>
> instance Monad m => Constr (GMapM f) m where constr _ = return
> instance (Monad m, Apply f a (m a)) => Param (GMapM f) a m where
>   param (GMapM f) m x = do fn <- m; arg <- apply f x; return (fn arg)

That ends the example. Comments would be appreciated! :-)

Thanks,
- Benja
I'm sure that you've heard of the Kinect from Microsoft by now, the revolutionary new technology that recognises the human form and lets you control games and apps with it. When the Kinect was released, there was big news surrounding it, not due to what it could do, but what developers were doing with it. In this tutorial, you'll learn how to create a Pong game that can be controlled by moving your hands, and a picture gallery that can be navigated in the style of Minority Report -- all in the browser's Flash Player with no drivers required, if you're on the Mac.
The Kinect got hacked within a few weeks, and developers started using it to control things other than the Xbox 360. This was huge, suddenly every nerd's dream of living out that famous scene from minority report was coming to life. But for us flash dev's we couldn't get in on the action without having to learn C++. That makes it tricky, for someone like myself who just wants to make games with it.
Well, luckily for us, we now can, thanks to something called TUIO and some very nice developers who got kinect recognition streaming out the data via TUIO.
This description of TUIO is taken directly from their site, tuio.org:
That's quite a description. Basically, what it does is allow the decoding of touch interfaces like touch tables, through blob recognition. What clever dev's have done is transcode the Kinect data into "blobs" so that TUIO can recognise them. From there, Flash can read the data with the help of TUIO's native as3 libraries.
What TUIO can also do is recognise gestures. This is quite a nice feature, as it allows you to do things with the Kinect as if it were a big imaginary touch table. You can shrink or expand objects using a larger, more elaborate version of the pinch gesture; it can recognise swipes and touch gestures too.
For basic implementations though, a Kinect plus TUIO lets you track hands, which is what I'll be covering first since that is something I've most recently used with great success in my project Spaced Out.
Spaced Out is a very experimental game. It used the Kinect to track players' hands which served in game as the equivalent of a mouse. You can track the position of hands on screen and assign a cursor to them. In my game I used this in conjunction with a brainwave reading headset to blow up enemies on screen by tracking the players hands and also by reading their concentration levels.
You can see a video of all this in action here:
"Spaced Out Gameplay" from Jon Reid on Vimeo.
Different TUIO Trackers
There are a few different TUIO trackers for the Kinect right now. I'll explain briefly what each one does in a minute, but they all have their advantages and uses for different occasions. Being able to have the freedom to choose which one we can use is a great advantage for us.
The great thing about TUIO is that all of these will use the exact same AS3 code that I'm going to show you today. So for different uses, you can just run a different TUIO tracker and your game or app will need no rewriting whatsoever.
The downside to TUIO trackers for the Kinect right now is that they are generally Mac-only. This is due to driver issues when the Kinect first got hacked. I'm sure, though, that as time goes on, more developers will release TUIO trackers for Windows, especially since the official Microsoft SDK for Kinect has been released on Windows. In fact, there is a TUIO client for Windows right now! But it's still in beta and you can't control the depth of the camera, so I had to stand very far back to get it to work. It's not as easy to use as the Mac versions.
So the first one, known as TUIOKinect, tracks blobs, which it determines are hands. I personally find this one great as it needs no setup or calibration from the user. A player can just walk up to your game and use it instantly. The downside is that it doesn't know exactly what a hand is; it's just guessing. So if you are standing too close to the Kinect, it could think that any part of you is your hand. That's only a minor issue, though, given the zero-calibration convenience.
The second tracker, OpenNI2TUIO, tracks the entire skeleton to determine which part of you is a hand. Brilliant, very accurate tracking of hands. This one however does require the "superman" pose, so it's good for singular apps that you are using for lengths of time, or ones where you require more accurate recognition of hands.
The Windows version, if you are interested in playing around with that, is called Open Exhibits TUIO Kinect. I got this running pretty quickly; you have to follow the readme.txt that comes with it and install some drivers, but it does just about work. It doesn't give you the same control over what it tracks like the other two do, but it tracks what it thinks are hands just like the Mac version of TUIO kinect. But you Windows guys can play along too.
Step 1: Downloading Software
For this tutorial, I'll be using TUIOKinect, just because I like being able to sit at my desk and still use the kinect. Of course, you can use one of the other trackers, but if you follow along as I cover the basics, then hopefully using another tracker will become nice and easy.
Download one of the trackers from the links above. For Mac, the trackers don't need any installing and run straight away. For Windows you'll need to install some drivers; they cover that in their readme.txt, so I won't go over that here.
Step 2: Download the Bridge
Downloading TUIO kinect isn't the only step, you need a way to send that data specifically to flash. There is a bridge that does just that and is very simple to run. udp-flashlc-bridge by Georg Kaindl is the software I'll be using today. You can get that here:
NOTE, if you are running the Windows version, you don't need this as it comes bundled with similar software.
Step 3: Unzip Flash Bridge
Once you have downloaded that, extract to somewhere like your desktop or your documents folder, somewhere thats easy to get to. I say this because we have to run this bridge via a terminal command.
Step 4: Rename the Folder
Rename the folder to "flashBridge", without the quotes, just because its default name is very long-winded.
Step 5: Run the Bridge
Lets do that now, we can start it now and it'll be running for the rest of the tutorial. Once it's up and running, you don't need to worry about it again, it's a very stable bit of software.
Open up Terminal and enter the following. I use the desktop here, but replace that with wherever you put your folder:
cd desktop/flashBridge
./udp-flashlc-bridge
Your terminal window should now show two extra lines and confirm that it's running fine. If it isn't, it'll tell you, so make sure you go over the last couple of steps and confirm that you are following them exactly.
It should look something like this:
Step 6: Download the Necessary Libraries
Download the newest TUIO library from this link: TUIO Library
When you unzip the folder, you'll have two folders inside, one being demos, one being org. We just want the org folder. You don't need to move it anywhere just yet, I'll get to that in a minute, you just need to have it there ready.
You also need to download the greensock tweening library, TweenMax. You can find that here: TweenMax.
Create a new folder where we'll be saving all our code into. Call it KinectGame. You'll need to copy over the org folder from the TUIO library into this folder and also the com folder from the TweenMax library.
Step 7: Starting to Code
Download my source code and open up the folder KinectGameStart. You'll find a FLA called Main that contains all the assets you'll be needing. Copy that into your build folder and open it.
Type Main in the class box and click the pencil icon.

Save the file as Main.as.
Now, at the top of your file where we import other classes, add the following:
import org.tuio.ITuioListener;
import org.tuio.TouchEvent;
import org.tuio.TuioBlob;
import org.tuio.TuioClient;
import org.tuio.TuioCursor;
import org.tuio.TuioObject;
import org.tuio.connectors.LCConnector;
import org.tuio.osc.OSCManager;
import flash.events.Event;
This just imports the basic TUIO classes that we'll be working with for now.
Step 8: Adding Variables
Next we add in some variables and constants:
public static const TILT_MODE : int = 4;
public static const SEND_DATA_SIZE : int = 6;
protected var client : TuioClient;
protected var connectionNameIn : String = "_OscDataStream";
protected var connectionNameOut : String = "_OscDataStreamOut";
protected var lcConnector : LCConnector;
protected var oscManager : OSCManager;
These just handle the basic parameters to help set up our TUIO connection.
Step 9: The Constructor Method
Next you'll want to add the following into your constructor method:
lcConnector = new LCConnector(connectionNameIn, connectionNameOut);
client = new TuioClient(lcConnector);
client.addListener(this);
oscManager = new OSCManager(null, lcConnector);
oscManager.start();
You'll also want to alter the following line:
public class Main
to the following:
public class Main extends Sprite implements ITuioListener
This just allows us to create graphics and add in the listener function so that we can access the TUIO data directly in this class.
It's all been pretty straightforward at the moment, but nothing exciting has happened yet either. Let's change that shall we? Let's start accessing the Kinect data.
Step 10: Making a Cursor Class
When we move our hands around in the air, like we just don't care, we need to have something show up in Flash that shows us where our hands are. It's not much fun playing a game where you can't see the cursor.
For now we shall just make a simple class that adds a MovieClip to the stage every time it finds a hand.
Go ahead and create a new ActionScript file. Call it Cursor and save it in the same directory you saved Main.as.
Copy and paste the following code into this class:
package {
	import com.greensock.TweenMax;
	import com.greensock.easing.Quad;
	import flash.display.Shape;
	import flash.display.Sprite;
	import flash.events.Event;

	public class Cursor extends Sprite {
		private var cursor:Hand = new Hand();

		public function Cursor(name : String, parent : Sprite, _x : int, _y : int, size : int) {
			x = _x;
			y = _y;
			this.name = name;
			parent.addChild(this);
			addChild(cursor);
		}

		public function destroy() : void {
			removeChild(cursor);
		}

		public function moveTo(_x : Number, _y : Number) : void {
			TweenMax.to(this, .6, {x : _x, y : _y, ease : Quad.easeOut});
		}
	}
}
All this code does is add a movieclip and allow you to delete it from the display list. Pretty straightforward stuff.
You'll also notice the moveTo function. When we call the update in the Main.as class, this will move the cursor around the screen and ease it to the new coordinates. It just makes it feel a bit nicer.
Step 11: Adding Cursors to the Screen
So now that we have something to add to the screen, lets code in the ability to add the ball to the screen and then remove it, depending on whether your hands are in or out of range of the Kinect.
Now, this isn't overly complicated, you just need to have several functions in your Main class. These grab the data from TUIO and then you tell the cursor object to update its x and y position every frame.
Updating the position is already done for you in the Cursor.as file; you just need to tell it where to go.

Go back into your Main.as file.
Copy and paste the following after the constructor method in your Main.as file.
public function addTuioCursor(tuioCursor : TuioCursor) : void {
	new Cursor(tuioCursor.sessionID.toString(), this,
		tuioCursor.x * stage.stageWidth, tuioCursor.y * stage.stageHeight, 30);
}

public function updateTuioCursor(tuioCursor : TuioCursor) : void {
	try {
		var hands : Cursor = getChildByName(tuioCursor.sessionID.toString()) as Cursor;
		hands.moveTo(tuioCursor.x * stage.stageWidth, tuioCursor.y * stage.stageHeight);
	} catch(e : Error) {
	}
}

public function removeTuioCursor(tuioCursor : TuioCursor) : void {
	try {
		var hands : Cursor = getChildByName(tuioCursor.sessionID.toString()) as Cursor;
		hands.destroy();
	} catch(e : Error) {
	}
}
Now, if you try and run it as is, you'll get a handful of errors saying that it's missing some extra functions etc. That is because the TUIO library allows the use of all the different tracker output methods from cursor to blobs. What we need to do is edit a file within TUIO so that we only need these three functions.
In the folder org, you'll find the folder tuio. In that folder, open up the file ITuioListener.as.
What you need to do is comment out the following lines:
function addTuioObject(tuioObject:TuioObject):void;
function updateTuioObject(tuioObject:TuioObject):void;
function removeTuioObject(tuioObject:TuioObject):void;
function addTuioBlob(tuioBlob:TuioBlob):void;
function updateTuioBlob(tuioBlob:TuioBlob):void;
function removeTuioBlob(tuioBlob:TuioBlob):void;
function newFrame(id:uint):void;
And now save the file.
It should look something like the following:
You can run the file now, but you'll see that you can't control anything. That's okay; run TuioKinect, you'll need to set the far threshold to something so that when your hands are outstretched, TuioKinect looks like this:
As a vague guide, I sit about half a meter away from my Kinect, and I need to set my far threshold to 90.
When this is set, you should be able to see balls appear in the flash file on screen where your hands are! Fantastic stuff, you're controlling a Flash file with your hands! Imagine the possibilities for what you can do with this.
Step 12: Pong Kinect
Now that you have control of Flash with your Kinect, what on earth can you do with it? Well, you can use the Kinect to control all sorts of things in Flash. Often you can just use it to replace an object that could be controlled by the mouse or keyboard.
What I'm going to show you now is how to make a pong game where both paddles are Kinect controlled. Two player Kinect pong! A great party game.
To control the paddles, you'll need to add the following code to the updateTuioCursor function in Main.as; it goes inside the try { } block, after the hands.moveTo line:
if(hands.x < stage.stageWidth/2) {
	_playervy = (hands.y - (_player.y + _player.height/2)) * _easing;
	_player.y += _playervy;
}
if(hands.x > stage.stageWidth/2) {
	_compvy = (hands.y - (_comp.y + _comp.height/2)) * _easing;
	_comp.y += _compvy;
}
And add the following amongst the other variable definitions.
private var _player:Player = new Player();
private var _playervy:Number = 0;
private var _comp:Comp = new Comp();
private var _compvy:Number = 0;
private var _easing:Number = 0.8;
Also, add the following to the constructor method:
addChild(_player);
_player.x = 10;
addChild(_comp);
_comp.x = (stage.stageWidth - _comp.width) - 10;
Go ahead and try that out. What you've got now is a control mechanism that moves the left paddle if your hand is on the left half of the screen, and the right paddle if your hand is on the right half. It's a very easy-to-use and intuitive way for two players to control Pong with zero setup, and that's what I'm all about.
I haven't done anything too special here. What I've done is set up a check to see whether the hand is on the left portion of the stage or the right. It then moves the appropriate paddle up and down in relation to your hand movements.
Some of you may be wondering what the following line is all about:
_playervy = (hands.y - (_player.y + _player.height/2)) * _easing;
That is some basic easing code which I learned from reading the brilliant Foundation ActionScript 3.0 Animation book by Keith Peters. It works out how far the paddle is from where you want it, then moves it by a fraction of that distance (the "easing" number) each frame to smooth the transition. Keith can explain it a lot better than I can; I just figure out where to put it.
You might be wondering why you would put any easing code in at all when the paddle would move up and down perfectly well without it. Well, that's all just to make it nicer to play; it adds a little weight to the paddle, just to give it some feeling. Just a little player psychology there, making the things you control in the game feel real.
Step 13: Bouncing Balls
Now, I'm not going to go through and explain what this code is doing thoroughly, since I'm not doing a tutorial about making Pong, this tutorial is all about controlling a game with a kinect.
This code makes a ball bounce off the paddles and the top and bottom of the screen, the basic components needed to make pong work. With this and the Kinect controls, you can see for yourself how easy it is to make a game work with the Kinect.
Add the following to your variables:
private var _ball:Ball = new Ball();
private var _ballvx:Number = 0;
private var _ballvy:Number = 0;
In the constructor, add the following:
addEventListener(Event.ENTER_FRAME, update);
_ball.addEventListener(Event.ADDED_TO_STAGE, startBall);
addChild(_ball);
And then after the constructor, add the following:
private function startBall(e:Event):void {
	_ball.x = stage.stageWidth/2;
	_ball.y = stage.stageHeight/2;
	_ballvx = rand(-5, 5);
	_ballvy = rand(-15, 15);
	if(_ballvy == 0) {
		_ballvy = rand(-15, 15);
	}
}

private function rand(min:int, max:int):int {
	return Math.floor(Math.random() * (max - min + 1) + min);
}

private function resetBall():void {
	removeChild(_ball);
	addChild(_ball);
}

private function update(e:Event):void {
	if(_ball.hitTestObject(_comp)) {
		_ball.x = _comp.x - _ball.width;
		_ballvx = _ballvx * -1.2;
	}
	if(_ball.hitTestObject(_player)) {
		_ballvx = _ballvx * -1.2;
		_ball.x = _player.x + _player.width;
	}
	if(_ball.x > stage.stageWidth) {
		resetBall();
	}
	if(_ball.x < 0 - _ball.width * 2) {
		resetBall();
	}
	if(_ball.y + _ball.height > 400) {
		_ball.y = 400 - _ball.height;
		_ballvy *= -1;
	} else if(_ball.y < 0) {
		_ball.y = 0;
		_ballvy *= -1;
	}
	_ball.x += _ballvx;
	_ball.y += _ballvy;
}
What all that does is move the ball around the stage and bounce it off the paddles and the top and bottom of the screen. The code also sets the starting point of the ball randomly and adds it to the stage. To save repetitive code, I add an "added to stage" listener to the ball, so that when we remove the ball and then add it to the stage again, it assigns a random velocity and direction to the ball's movement.
This works well for our small game, and it manages the ball for us.
You can go ahead and try this all out with the kinect. The ball will bounce around between the two paddles that you are controlling with your hands. Pretty nifty ay?
If you are having any trouble with this and are getting errors, have a look at my source code in the folder KinectGameFinished. There you can see all the source code for this little game and check it against what you have got already.
Step 14: Gestures
Some of you may remember the film Minority Report, specifically the scene where Tom Cruise stands in front of a huge transparent display and with a few waves of his hands, he's moving images and data around the screen.
Look at that interface, it's gorgeous. It's very exciting that this can actually be made by us in our own homes now.
Well, with the help of TUIO, the Kinect and Flash, you can build such apps. I'm going to show you how to build a simple photo viewer app where you can move the photos around the screen, make them bigger and smaller with a "pinch" gesture, and rotate the images using another gesture.
It works in a similar way to gestures on a touch-screen phone or touch table, but bigger and involving two arms. In fact, the code I'll be showing you takes advantage of the built-in gesture handler in Flash.
In the source files I've provided for you, open up the folder KinectGesturesStart. This holds all the files that you need to get started with.
If you open up the class Main.as, you'll see that I've provided you with a base class that looks similar to the file we created for the Kinect game earlier. This time, we won't actually be assigning an object to our hands; we'll be using the debug feature instead to see them.
In an earlier step, we only used the functions add, remove and updateTuioCursor. Now you can see that there are a lot of empty functions for object and blob, which I told you to comment out of TUIO earlier. You may be wondering why they are suddenly back. It's simply because gestures won't work without them. I'm not sure of the reason behind that, but when I've been playing about with this, it wouldn't work without all the functions running.
You don't need to worry about having to go back and uncomment anything, I've already taken care of that for you in the TUIO library in this folder.
So let's get you set up with some gesture control.
Step 15: Setting Up Gestures
Firstly, you need to add in some extra variables. Add the following to your variables:
private var debug : TuioDebug; private var img:Images;
Now add the following to the constructor method:
var gm:GestureManager = GestureManager.init(stage, client);
gm.touchTargetDiscoveryMode = GestureManager.TOUCH_TARGET_DISCOVERY_MOUSE_ENABLED;
GestureManager.addGesture(new DragGesture());
debug = TuioDebug.init(stage);
client.addListener(debug);
for (var c:int = 1; c < 8; c++) {
    img = new Images(100 + Math.random() * (stage.stageWidth - 200), 100 + Math.random() * (stage.stageHeight - 200), 220, 160, c);
    stage.addChild(img);
}
Now let me guide you through what that chunk of code does.
var gm:GestureManager = GestureManager.init(stage, client);
This just sets up the gesture manager and initialises the stage with the TUIO connector client.
GestureManager.addGesture(new DragGesture());
This is quite a vital line, as it tells the gesture manager to let you use the drag gesture on objects. Other gestures work without this line, but you won't be able to drag objects around the screen. So if you wanted stationary objects that you can rotate or resize, remove this line.
This would be useful for a virtual DJ app, where you rotate a virtual vinyl to alter the playback of music. You can do a lot with a few simple gestures.
debug = TuioDebug.init(stage); client.addListener(debug);
This sets up TUIO's debug feature which you'll get to see in action soon.
for (var c:int = 1; c < 8; c++) {
    img = new Images(100 + Math.random() * (stage.stageWidth - 200), 100 + Math.random() * (stage.stageHeight - 200), 220, 160, c);
    stage.addChild(img);
}
Finally, this for loop adds our images to the stage. Here I take advantage of how for loops run to pass a number through to the class we are about to create, which tells a movie clip which frame it needs to show. This way we use one for loop to show all our images, saving the hassle of writing all that extra adding and positioning code.
Step 16: In a Class All of Our Own
So we just referenced a class called Images, which doesn't exist yet. We'd better do something to fix that.
Go ahead and create a new class file and call it Images.
Let's start at the top of the file and work our way down.
Step 17: Importing Goods
Add the following imports to the start of the file:
import org.tuio.TouchEvent;
import flash.display.Sprite;
import flash.events.TransformGestureEvent;
import flash.events.Event;
Step 18: Variable Success
You need to extend the Sprite class, so the main class definition line looks as follows:
public class Images extends Sprite
Now add the following variables:
private var id : int = -1; private var pic:Picture = new Picture();
Picture is a MovieClip that is already set up and made for you in the file Main.fla. It contains all the images that are shown when you load up the app. All the images are ones I've taken myself, so feel free to use them how you wish.
Step 19: Constructors
The main constructor line looks as follows, currently:
public function Images() {
Now, you may remember that we passed this constructor a bunch of parameters for setting up the image's position, size and frame. To accept these parameters, that line needs to change to this:
public function Images(x:Number, y:Number, width:Number, height:Number, image:Number)
Now you can add in the code that sets up the whole look of each image.
this.graphics.beginFill(0xffffff);
this.graphics.drawRect(-width / 2, -height / 2, width, height);
this.graphics.endFill();
this.addChild(pic);
pic.gotoAndStop(image);
this.x = x + width / 2;
this.y = y + height / 2;
What this code does is draw a white box and drops the picture MovieClip inside it. This creates a nice photo-like effect as it gives each image a border.
You can also see that it changes the frame of the movie clip to match the position it was given by the for loop.
Finally, we'll add the event listeners to set up the different gestures which we'll be handling in a second.
this.addEventListener(TransformGestureEvent.GESTURE_PAN, handleDrag);
this.addEventListener(TransformGestureEvent.GESTURE_ZOOM, handleScale);
this.addEventListener(TransformGestureEvent.GESTURE_ROTATE, handleRotate);
this.addEventListener(TouchEvent.TOUCH_DOWN, handleDown);
Step 20: Scale That Building. Of Gestures.
Add in the following function after the constructor:
private function handleScale(e : TransformGestureEvent) : void { this.scaleX += e.scaleX; this.scaleY += e.scaleY; }
This function just handles scaling when Flash detects you are doing the scale gesture. This is Flash's built-in method for doing scale gestures, so you don't need to do any math to figure anything out, great!
Step 21: Rotate Me Right Round
Add this function underneath the last one:
private function handleRotate(e : TransformGestureEvent) : void { this.rotation += e.rotation; }
Again, this uses the built-in function for handling rotation gestures. Simple.
Step 22: Such a Drag
Add the following function underneath the last:
private function handleDrag(event : TransformGestureEvent) : void { this.x += event.offsetX; this.y += event.offsetY; }
This function handles the dragging behaviour so you are able to put your hand over an object and drag it around the screen.
Step 23: Up and Down
Add in the following functions:
private function handleDown(e:TouchEvent):void {
    if (id == -1) {
        stage.setChildIndex(this, stage.numChildren - 1);
        id = e.tuioContainer.sessionID;
        stage.addEventListener(TouchEvent.TOUCH_UP, handleUp);
    }
}

private function handleUp(e:TouchEvent):void {
    if (e.tuioContainer.sessionID == id) {
        id = -1;
        stage.removeEventListener(TouchEvent.TOUCH_UP, handleUp);
    }
}
Right, these functions handle what happens when you move your hand over an object on screen. Basically, they bring the object you are hovering over to the top of the display list so that it becomes the topmost object. This just makes it easier to control objects that are underneath others.
Step 24: Work Those Arms
Give it a test! Hopefully, if you followed all the steps carefully, then you should now be able to move pictures around the screen, resizing and rotating them just by moving your hands around in thin air! Welcome to the future.
The gestures work as follows:
Zoom is two hands on one image and then you expand them away from each other.
Rotate is two hands on an image, one going up and the other going down.
Imagine all the cool apps you could build with this kind of tech. You could add in gesture recognition for your Kinect-enabled Flash games and merge the two things you've learned today together. Easily accessible menus by doing a zoom gesture to the stage, objects you can move around using the drag gesture, all kinds of stuff.
But you have to be aware that it's very difficult to keep your arms extended in the air for long periods of time. So if you are making a game, it's best to keep any arm-extension movements short.
It's no secret that I'm not a big fan of the Minority Report style interface because it's completely flawed as an interface. But, using it in small bursts with rest periods works well -- it's what I did in my game, Spaced Out: each round was one minute long with at least a minute if not more rest time in between. Any longer and people complained of arm fatigue.
Of course, what's even better is a touch-table type app, where you use the Kinect to detect fingertip touches on tables and then build apps that work using that. Touch tables everywhere! You could make a touch-table tower defence game using nothing more than a Kinect, Flash and a projector. Tabletop gaming with animations, which can react to your touches.
I have given you the knowledge, now use it responsibly and for good, not evil. Have fun playing with the future.
What Is Inheritance In Java?
Inheritance in Java is a process where a Java class gets the properties of another class. The properties can be methods and fields. With the inheritance process in Java, information is managed in a hierarchical order.
The class which inherits the properties of another class is called the subclass, also known as the child class. The class whose properties are inherited by the subclass is called the superclass, or parent class.
See the simple code structure below, which shows a superclass and a subclass:
class SuperClass {
    ...
}

class SubClass extends SuperClass {
    ...
}
In the above code structure, SuperClass is the parent class and SubClass is the child class.
Now I am going to show you some code so that you will understand the real purpose of using inheritance.
Inheritance example in Java
Below is the example Java code where you can see the parent class Person is inherited by its subclass Student:
public class Person {
    private int age;
    private String name;

    public void setAge(int a) {
        age = a;
    }

    public void setName(String n) {
        name = n;
    }

    public int getAge() {
        return age;
    }

    public String getName() {
        return name;
    }
}

public class Student extends Person {
    private int rollNo;

    public void setrollNo(int roll) {
        rollNo = roll;
    }

    public int getrollNo() {
        return rollNo;
    }
}
In the above code, all the properties of Person class are now available in Student class.
Now look at the below code example:
public class Get {
    public static void main(String[] args) {
        Student p1 = new Student();
        p1.setAge(15);
        p1.setrollNo(10895);
        p1.setName("Faruque Ahamed");
        System.out.println("Age is " + p1.getAge());
        System.out.println("Roll No is " + p1.getrollNo());
        System.out.println("Name: " + p1.getName());
    }
}
It will print the following result:
Age is 15 Roll No is 10895 Name: Faruque Ahamed
So you can see that we can call all the properties of the Person class via the Student class, as all the properties of the Person class are available in the Student class.
So in the Student class, you do not have to do the work of creating the age and name again. Avoiding this rework is the real purpose of using inheritance in Java.
28.14. site — Site-specific configuration hook
Source code: Lib/site.py
This module is automatically imported during initialization. The automatic import can be suppressed using the interpreter's -S option.
Importing this module will append site-specific paths to the module search path and add a few builtins...
site.getuserbase()
Return the path of the user base directory, USER_BASE. If it is not initialized yet, this function will also set it, respecting PYTHONUSERBASE.
New in version 2.7.
site.getusersitepackages()
Return the path of the user-specific site-packages directory, USER_SITE. If it is not initialized yet, this function will also set it, respecting PYTHONNOUSERSITE and USER_BASE.
New in version 2.7.
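Both helpers can be exercised directly; the paths differ per platform and per user, so the comments below are only typical Linux examples:

```python
import site

base = site.getuserbase()               # e.g. /home/me/.local
user_site = site.getusersitepackages()  # e.g. /home/me/.local/lib/pythonX.Y/site-packages

print(base)
print(user_site)
```

On every platform the user site-packages directory lives somewhere under the user base, which is why setting PYTHONUSERBASE moves both at once.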
The XML for a JDBC Cache Store is its own XML Schema, as referenced in the community doc here:
This documentation needs to be added to the enterprise documentation.
Without adding the appropriate cache store XML namespace/schema to your cache configuration, you will get XML parse errors.
In the 6.2 Admin Guide, Chapter 15 gives XML configurations that will not work without making the change to include multiple XML schemas.
For a specific example, look at 15.2.2. JdbcStringBasedStore Configuration (Library Mode).
The multiple XML schemas seem to be added in JDBC cache store Library configurations and not in server mode configs. The header of the server mode configuration remains the same for configs with and without JDBC configured.
John, am I correct with what I stated above? Also, there are no schemas found in the 6.3-related ISPN repo, so I assume that this is not relevant for JDG 6.3. My 2 queries are:
1. Should the multiple XML schemas be added in server mode configs as well?
2. Should the multiple XML schemas be added in JDG 6.3 docs or only in 6.2?
I have resumed office from today. I consulted with Tomas about this and he will provide his findings about adding the XML schemas in 6.3 JDBC configs later today.
Hey Bobb,
I've confirmed working configuration for both JDG 6.2 and 6.3.
Into header:
<infinispan
xmlns:xsi=""
xsi:schemaLocation="urn:infinispan:config:6.0
urn:infinispan:config:jdbc:6.0"
xmlns="urn:infinispan:config:6.0">
And add the xmlns="urn:infinispan:config:jdbc:6.0" definition to the specific JDBC cache store element, like this:
<stringKeyedJdbcStore xmlns="urn:infinispan:config:jdbc:6.0"
fetchPersistentState="false"
ignoreModifications="false"
purgeOnStartup="false" key2StringMapper="org.infinispan.loaders.keymappers.DefaultTwoWayKey2StringMapper">
This configuration works for me with both 6.1.1.ER1-redhat-1 and 6.0.3.Final-redhat-3.
Hope that helps.
Tomas
Hey Misha, could you please brew this and push it live?
I'll push the 6.2.1 changes now and the 6.3 ones will go out with the async update so we'll close off this bug once both are released.
Created ticket.
This content is now available on | https://bugzilla.redhat.com/show_bug.cgi?id=1122298 | CC-MAIN-2019-39 | refinedweb | 375 | 61.63 |
Rob,
On 2010-06-21 11:48, Sisyphus wrote:
>
> Yep, that's fine.
> However, I think you should find that both GCLP_HCURSOR and
> GWLP_USERDATA *are* defined for 32-bit compilers (and to the same values
> as their "P"-less counterparts). It's just that GCL_HCURSOR and
> GWL_USERDATA are not defined for Win64.
>
Hm, I based my comment on this piece of code (from c:\program files\microsoft sdks\windows\v6.0A\include\winuser.h):
#ifdef _WIN64
#undef GCL_MENUNAME
#undef GCL_HBRBACKGROUND
#undef GCL_HCURSOR
#undef GCL_HICON
#undef GCL_HMODULE
#undef GCL_WNDPROC
#undef GCL_HICONSM
#endif /* _WIN64 */
#define GCLP_MENUNAME (-8)
#define GCLP_HBRBACKGROUND (-10)
#define GCLP_HCURSOR (-12)
#define GCLP_HICON (-14)
#define GCLP_HMODULE (-16)
#define GCLP_WNDPROC (-24)
#define GCLP_HICONSM (-34)
#endif /* !NOWINOFFSETS */
As you can see, it undefines the GCL_* macros before redefining them when _WIN64 exists. This may be peculiar to this version and to the Microsoft Visual Studio version it comes with.
The header file that comes with the GCC compiler does not undefine
the GCL_* macros. That is the difference.
>
>> As I have no way to test this in the absence of a Windows 64 bits system
>> that I can easily use, I would like to ask you to check these changes.
>
> Just checked out revision 11081. Builds fine using MinGW64, except for
> the other issues I raised in my original post (and worked around):
>
Excellent.
> 1) I need to tell the build process that CC=x86_64-w64-mingw32-gcc,
> C++=x86_64-w64-mingw32-g++ and AR=x86_64-w64-mingw32-ar instead of the
> usual gcc, g++ and ar.
> I notice there's some output at the start of the cmake process that
> looks very much like './configure' output, so hopefully there's a way of
> passing those values along (as there is with ./configure).
>
> 2) libgdi32.a and libcomdlg32.a can't be found because of changes to the
> directory structure in the mingw64 compiler.
>
> I'll try to work out how those issues can be addressed - any pointers
> are welcome, as I'm not at all familiar with the cmake process, I'm
> hopeless at tracking down relevant documentation, I'm not very good with
> Makefiles, and I'm generally a bit dim anyway.
>
Hm, you have solved (or tracked down the solution to) those issues,
so the dimness seems to be wearing off.
I guess Windows 64 bits is a relatively new platform as far as PLplot
is concerned. We are bound to run into more such issues.
> Btw, by chance, I happened to notice that x17c.exe (strip chart demo)
> works correctly in the svn version. Whenever I've looked at it before,
> the graphs have just super-imposed over each other, resulting in quite a
> mess. Now we get one chart after another - which I'm assuming is what's
> intended. Nice !
>
Yes, that was actually a small correction by me and great improvement
to the demo (the wingcc was lying about it cleaning up the window - it
said it would but did not). | https://sourceforge.net/p/plplot/mailman/message/25574080/ | CC-MAIN-2017-17 | refinedweb | 495 | 70.73 |
You're going to have to tell me / us what's in the script, or at least what you're asking Get-WmiObject to do.
Chris
$Wmi = Get-WmiObject -class "Win32_OperatingSystem" -namespace "root\cimv2" -computer Guest-Dev
However, things got worse today. When I ran it this morning I got the "RPC server is unavailable" error (0x800706BA). So I didn't get as far as I did yesterday. Any ideas on this error? I had overcome that error a few days ago by loading PowerShell 2.0 on the target machine (Guest-Dev)--but now it's back.
Thanks.
"RPC server is unavailable" is typically associated with a connection failure. Whether that's because the machine is offline or firewalled, or the name cannot be resolved.
Check you can resolve Guest-Dev to an IP address first (nslookup)?
In this context the client does not need to be running PowerShell itself. Although it will if you're remotely executing the command.
Chris
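The two checks suggested here (name resolution, then basic reachability of the RPC endpoint mapper on TCP port 135) can be scripted from any machine with Python; this is an illustrative sketch, and "Guest-Dev" is just the placeholder hostname from this thread:

```python
import socket

def check_host(host, port=135, timeout=3.0):
    """Return (resolved_address_or_None, port_reachable).

    Port 135 is the Windows RPC endpoint mapper; an 'RPC server is
    unavailable' error usually means one of these two steps fails.
    """
    try:
        addr = socket.gethostbyname(host)   # what nslookup would tell you
    except socket.gaierror:
        return None, False
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    reachable = sock.connect_ex((addr, port)) == 0  # 0 means the port answered
    sock.close()
    return addr, reachable

print(check_host("Guest-Dev"))  # (None, False) here unless that name resolves
```

If the first element is None, fix DNS/NetBIOS resolution; if the address resolves but the port is unreachable, suspect a firewall or a machine that is off.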
Can you ping the system?
Either it's firewalled or switched off / not connected.
There's not much you can do about it in PowerShell though. It's a network-level error after all.
Chris
Firewall then I'd guess. There's not a lot I can tell you about that particular error. Are there any other machines you can test the script against so we can look at the original error message?
Chris
Do you get the error when you run the script, or when you run this command on its own?
Get-WMIObject Win32_OperatingSystem -ComputerName Guest-Dev
And if so, what operating system are you running on Guest-Dev?
Chris
How odd. It's a WMI / DCOM error rather than anything to do with PS itself, we should get it regardless of the tool we use to query WMI.
It appears to be related to name resolution although there's not all that much information about it. For example, see this ancient KB article:
Are you able to log onto Guest-Dev and see if the error occurs if you run the command locally?
Chris
heh cross-posting.
Anyway, the suggestion is that it's name resolution. How is name resolution configured on your network? DNS or just NetBIOS?
Chris
Out of curiosity, which OS on Gembarowski-Dev?
You could always install a packet sniffer and see if that gives any indication of the failure. It's vague, the the problem is extremely obscure.
Chris
I'm going to be limited to Googling it. I haven't encountered the error before which makes suggesting how to fix it rather tricky.
I assume it persists across reboot?
Chris
- Object Server: SC Manager
- Object Type: SERVICE OBJECT
- Object Name: RasMan
- Process ID: 1044
Does this mean anything to you?
I know what the object name is, but I can't really see how it would break WMI connections between two very specific computers.
It's difficult to know where to point the finger. I would guess that other hosts are able to connect to Guest-Dev using the script without problem? And I would guess they're also able to connect to Gembarowski-Dev?
Chris
What you say is correct. We loaded Powershell 2.0 onto one of the other machines on our little network yesterday and ran the script from it. The script worked for both Guest-Dev and Gembarowski-Dev.
The only thing I can think of is if one system were a clone of the other. Anything that would potentially prevent them from agreeing on security, or would cause a failure to negotiate the connection.
Chris | https://www.experts-exchange.com/questions/25976412/Object-exporter-specified-was-not-found.html | CC-MAIN-2018-30 | refinedweb | 683 | 75.71 |
I have a string to be executed inside my Python program and I want to change some variables in the string, like x[1] and x[2], to something else.
I had previously used eval with 2 arguments (the second being a dict with replaced_word: new_word), but now I noticed I can't use previously imported modules like this. So if I do this:
from math import log
eval('log(x[1])', {x[1]: 1})
Build your globals dict with globals() as a base:
from math import log

# Copy the globals() dict so changes don't affect real globals
eval_globals = globals().copy()
# Tweak the copy to add desired new global
eval_globals[x[1]] = 1
# eval using the updated copy
eval('log(x[1])', eval_globals)
Alternatively, you can use three-arg eval to use globals() unmodified, but also supply a locals dict that will be checked (and modified) first, in preference to global values:
eval('log(x[1])', globals(), {x[1]: 1})
In theory, the latter approach could allow the expression to mutate the original globals, so adding .copy() to make it eval('log(x[1])', globals().copy(), {x[1]: 1}) minimizes the risk of that happening accidentally. But pathological/malicious code could work around that; eval is dangerous after all, so don't trust it with arbitrary inputs no matter how sandboxed you make it.
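Putting the answer together into a runnable sketch: here the locals key is a plain identifier, x, because the keys of an eval namespace must be strings naming variables (the original x[1] placeholder is not a usable key on its own):

```python
import math
from math import log

expr = "log(x[1])"

# three-arg eval: globals() supplies imported names such as log,
# while the locals dict supplies the values the expression should see
local_vars = {"x": [0.0, math.e]}
result = eval(expr, globals(), local_vars)
print(result)  # log(e), i.e. approximately 1.0

# the copied-globals variant from the answer, same outcome
eval_globals = globals().copy()
eval_globals["x"] = [0.0, math.e]
print(eval(expr, eval_globals))
```

Either way, log stays visible to the expression; the difference is only whether the substituted values live in a separate locals mapping or in the copied globals.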
The SMK build tool automatically determines which pieces of a program need to be recompiled and issues commands to recompile them. It is similar in purpose to the popular make, scons and jam utilities.
The smk utility works by reading one or more smkfiles, usually named SMK. The smkfiles construct a database of targets, their prerequisites and the commands used to update the targets when one or more prerequisites change.
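The core decision smk (like make) takes for each target can be sketched in a few lines of Python: rebuild when the target is missing or older than any prerequisite. The function below is an illustration, not SMK's actual code:

```python
import os
import tempfile
import time

def needs_update(target, prerequisites):
    """make-style rule: rebuild if the target is missing or older than any prerequisite."""
    if not os.path.exists(target):
        return True
    target_time = os.path.getmtime(target)
    return any(os.path.getmtime(p) > target_time for p in prerequisites)

# tiny demonstration in a scratch directory
work = tempfile.mkdtemp()
src = os.path.join(work, "hello.c")
obj = os.path.join(work, "hello.o")

open(src, "w").close()
print(needs_update(obj, [src]))       # True: hello.o does not exist yet

open(obj, "w").close()
now = time.time()
os.utime(obj, (now, now))
os.utime(src, (now - 10, now - 10))   # source older than object
print(needs_update(obj, [src]))       # False: nothing to do

os.utime(src, (now + 10, now + 10))   # source edited after the build
print(needs_update(obj, [src]))       # True: recompile
```

The smkfile's job is then only to describe the graph of targets and prerequisites on which this check is applied.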
This edition documents SMK version 0.4.1.
[TBD: Explain what the following items actually mean.]
The smk utility is distributed under the terms of the GNU General Public License, version 2. A verbatim copy of the license is given in Appendix A, GNU General Public License.
In this section we will discuss a simple smkfile, which describes how to compile and link the venerable "Hello, World!" program.
We will have one source file, hello.c, with the following content:
$ cat hello.c #include <stdio.h> int main () { puts ("Hello, World!"); return 0; }
Here's a straightforward smkfile, which describes how the program hello is built from the source file hello.c:
$ cat SMK smk.make_program ('hello', srcs = ['hello.c'])
The smkfile instructs the tool to make an executable program via the function smk.make_program. The first parameter of the function is the name of the program and the second is the list of the sources to the program. It can't be any simpler, can it?
Now run smk to build the program.
$ smk
Surprisingly, nothing happens. The reason is simple - we haven't told smk what to build. Let's try again:
$ smk ./hello CC hello.o CC-LD hello $ ./hello Hello, World!
This time everything worked as expected: we successfully built the hello program and were able to execute it. Note, however, how we told smk what to build - using ./hello instead of simply hello. The difference and the need for it are explained in detail in the section called "Target specification"; for now it's sufficient to say that hello is taken as a literal name, whereas ./hello is replaced with the full path name of the target.
The smk command is not gratuitously verbose - in the normal mode of operation progress is reported via a short tag, designating the command being executed, and a short name of the target being built. This behavior can be modified using the -v (or --verbose) flag, which displays the full command line, and the -q (or --quiet) flag, which suppresses all progress messages.
Just like we are able to build the project, we should be able to clean it too. Fortunately smk can do this for us, by determining what should be cleaned, following the obvious rule - "Clean everything you can rebuild" or, put the other way, "Do not touch anything you CAN'T rebuild". The smk command line option for cleaning is -c (or --clean) and, similarly to building targets, we need to specify what to clean (ignore the SMK.cache file for now):
$ ls SMK SMK.cache hello hello.c hello.o $ smk -qc ./hello $ ls SMK SMK.cache hello.c
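The clean rule amounts to walking the dependency graph and deleting only the nodes the build knows how to regenerate; real source files have no build command, so they are never touched. A toy sketch (the dictionary graph and the helper names are illustrative, not SMK's API):

```python
# a target is "cleanable" exactly when the build can regenerate it
targets = {
    "hello":   {"cmd": "cc -g -o hello hello.o", "deps": ["hello.o"]},
    "hello.o": {"cmd": "cc -c -g -o hello.o hello.c", "deps": ["hello.c"]},
    "hello.c": {"cmd": None, "deps": []},   # a real source: never cleaned
}

def cleanable(name):
    return targets[name]["cmd"] is not None

def clean_set(root):
    """Everything reachable from root that the build can rebuild."""
    out, stack = set(), [root]
    while stack:
        name = stack.pop()
        if cleanable(name):
            out.add(name)
        stack.extend(targets[name]["deps"])
    return out

print(sorted(clean_set("hello")))  # ['hello', 'hello.o']
```

This is why, in the listing above, hello and hello.o disappear while hello.c (and the smkfile itself) survive the clean.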
Explicitly naming the targets on the command line can become tedious and, naturally, with smk we can do better. With no targets given on the command line, the smk command defaults to building the all target. This is a phony target, one that is not really the name of a file, but whose sole purpose is to provide a name to a group of targets. This grouping is achieved by making our targets prerequisites for the phony one. Here's how:
$ cat SMK hello = smk.make_program ('hello', srcs = ['hello.c']) smk.env.all.depends (hello) $ smk -v cc -c -g -o hello.o hello.c cc -g -o hello hello.o $ ls SMK SMK.cache hello hello.c hello.o $ smk -qc $ ls SMK SMK.cache hello.c
What did we do here? The function smk.make_program has a return value - a Python object [1], which represents our program. The variable smk.env.all is the object which stands for the built-in all target [2]. By invoking the method depends we set the hello program as a prerequisite for the all target; thus, whenever we build or clean all, we perform the corresponding operation on hello too.
Let's add another source file to our project, putting the key functionality of printing "Hello, World!" in a separate function:
$ cat hello-proc.c #include <stdio.h> void print_hello () { puts ("Hello, World!"); } $ cat hello.c extern void print_hello (); int main () { print_hello (); return 0; } $
As we have already learned, we can add the new file to the project by simply listing it along with the hello.c name in the srcs parameter to smk.make_program, as follows:
hello = smk.make_program ('hello', srcs = ['hello.c', 'hello-proc.c'])
An alternative way is to create a source object for the new source file:
hello_proc_c = smk.make_source ('hello-proc.c') hello = smk.make_program ('hello', srcs = ['hello.c', hello_proc_c])
As we can see, we can mix strings and objects in the srcs parameter of smk.make_program.
Yet another way to achieve our goal of adding more files to the project is to compile the source file and add the resulting object-file object as a prerequisite to the hello program target, using the objs parameter.
hello_proc_o = smk.compile_source ('hello-proc.c') hello = smk.make_program ('hello', srcs = ['hello.c'], objs = [hello_proc_o])
We will conclude the section by adding yet another source file and using yet another way to create objects and specify dependency relations: the function smk.make_object. The new source file is called goodbye-proc.c and contains the print_goodbye function. Accordingly, we modify main to call the new function.
$ cat goodbye-proc.c #include <stdio.h> void print_goodbye () { puts ("Goodbye, World!"); } $ cat hello.c extern void print_hello (); extern void print_goodbye (); int main () { print_hello (); print_goodbye (); return 0; } $ cat SMK hello_proc_o = smk.compile_source ('hello-proc.c') goodbye_proc_o = smk.make_object ('goodbye-proc') hello = smk.make_program ('hello', srcs = ['hello.c'], objs = [hello_proc_o, goodbye_proc_o]) smk.env.all.depends (hello)
There are many more ways to create various source, object file, library, etc. instances; these are specified in the section called "Factory functions". Keep in mind that in all of the above cases we actually ended up with the same set of objects [3], the only difference being which objects we specified explicitly and which objects were created for us by smk.
Since the number of source files has started to grow, it's a good idea to reorganize our project a bit. Let's put the sources hello-proc.c and goodbye-proc.c in the subdirectory libhello, and the main program source hello.c in the subdirectory src:
$ mkdir libhello $ mv goodbye-proc.c hello-proc.c libhello/ $ mkdir src $ mv hello.c src/ $ smk Source file ``hello-proc.c'' cannot be located
The smk program is no longer able to find our files. We can fix the situation in several ways:
$ cat SMK hello_proc_o = smk.compile_source ('libhello/hello-proc.c') goodbye_proc_o = smk.make_object ('libhello/goodbye-proc.o') hello = smk.make_program ('hello', srcs = ['src/hello.c'], objs = [hello_proc_o, goodbye_proc_o]) smk.env.all.depends (hello) $ smk -v cc -c -g -o libhello/hello-proc.o libhello/hello-proc.c cc -c -g -o libhello/goodbye-proc.o libhello/goodbye-proc.c cc -c -g -o src/hello.o src/hello.c cc -g -o hello src/hello.o libhello/hello-proc.o libhello/goodbye-proc.o
$ cat SMK smk.env.source_path = smk.make_match_list ({r'\.c$' : ['libhello', 'src']}) hello_proc_o = smk.compile_source ('hello-proc.c') goodbye_proc_o = smk.make_object ('goodbye-proc.o') hello = smk.make_program ('hello', srcs = ['hello.c'], objs = [hello_proc_o, goodbye_proc_o]) smk.env.all.depends (hello) $ smk -v cc -c -g -o libhello/hello-proc.o libhello/hello-proc.c cc -c -g -o goodbye-proc.o libhello/goodbye-proc.c cc -c -g -o src/hello.o src/hello.c cc -g -o hello src/hello.o libhello/hello-proc.o goodbye-proc.o $ ls SMK SMK.cache goodbye-proc.o hello libhello src
The function smk.make_match_list [4] takes as a parameter a dictionary in which the keys are regular expressions and the values are lists of directory names. Source files whose names match a regular expression are searched for in the corresponding directories.
The smk searches for the source files in
the paths specified by the global
variable
smk.env.source_path. Note,
however, that the object
file
goodbye-proc.o is still created in
the current directory, since we specified it that way.
We choose the first method, explicitly naming the parent directory for each target.
Next, instead of linking with the object files, we create a static library and link to it. The library is created with the function smk.make_static_lib as follows:
libhello = smk.make_static_lib ('libhello/hello', srcs = ['hello-proc.c', 'goodbye-proc.c'])
and we link to the library by passing it to smk.make_program via the libs parameter:
hello = smk.make_program ('hello', srcs = ['hello.c'], libs = [libhello])
Since we no longer need the variables hello_proc_o and goodbye_proc_o (which were put there for illustrative purposes anyway), we can simply delete them, ending up with the following smkfile:
$ cat SMK libhello = smk.make_static_lib ('libhello/hello', srcs = ['libhello/hello-proc.c', 'libhello/goodbye-proc.c']) hello = smk.make_program ('hello', srcs = ['src/hello.c'], libs = [libhello]) smk.env.all.depends (hello) $ hello src/hello.o -Llibhello -lhello
It is worth noting that SMK supports platform independent naming of build objects: it transformed the generic hello library name into a name appropriate for the current platform and build tools, namely libhello.a. On Windows, it would have created a library named hello.lib.
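The naming rule itself is simple enough to sketch. The function below is a hypothetical illustration; a real build tool consults the configured toolchain, not merely the host platform:

```python
import sys

def static_lib_filename(generic_name, platform=sys.platform):
    # Map a generic library name such as 'hello' to the filename
    # conventions of the platform's static librarian.
    if platform.startswith('win'):
        return generic_name + '.lib'    # MSVC-style librarian
    return 'lib' + generic_name + '.a'  # Unix ar convention

print(static_lib_filename('hello', platform='linux'))  # libhello.a
print(static_lib_filename('hello', platform='win32'))  # hello.lib
```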
Our smkfile contains build instructions for both a library and a program. With several such libraries or programs a single smkfile would become too unwieldy. We will separate the build instructions into the subdirectories we already have and let the top smkfile process the sub-smkfile instructions.
The subdirectory smkfiles look as follows:
$ cat libhello/SMK libhello = smk.make_static_lib ('hello', srcs = ['hello-proc.c', 'goodbye-proc.c']) $ cat src/SMK hello = smk.make_program ('hello', srcs = ['hello.c'], libs = [libhello]) smk.env.all.depends (hello)
Note that we have removed the parent directories from the target names. The reason is that non-absolute pathnames are taken relative to the current source or build directory.
The subdirectory smkfiles are processed by the function smk.subdirs. This function takes a single parameter, a list of subdirectory names. Thus, the top level smkfile becomes simply:
$ cat SMK smk.subdirs (['libhello', 'src'])
and we can see that everything works as expected:
$
We should immediately point out that what we have now are not recursive smkfiles. All our smkfiles work towards creating a single database of targets and prerequisites. This creates a global view of all the dependencies, and its immense utility will become obvious when we discuss building from within subdirectories.
Note that in our case the order of the directories in the parameter to smk.subdirs matters. In the src/SMK smkfile we refer to the variable libhello, which should already be assigned to; thus it is important that libhello goes before src.
Following the rule “Do only one thing and do it well”, we decide to create not one but two programs: one outputting Hello, World! and another outputting Goodbye, World!. The new program's source goes to src/goodbye.c and we modify src/hello.c accordingly. As is usual for libraries, we add a header file, libhello/hello.h, defining the public interface of the library, thus obtaining:
$ cat libhello/hello.h #ifndef libhello_hello_h #define libhello_hello_h 1 extern void print_hello (); extern void print_goodbye (); #endif /* libhello_hello_h */ $ cat src/hello.c #include "libhello/hello.h" int main () { print_hello (); return 0; } $ cat src/goodbye.c #include "libhello/hello.h" int main () { print_goodbye (); return 0; }
Since both the hello and goodbye programs are built in a very similar way, we can take advantage of the fact that our smkfiles are fully fledged Python sources and write src/SMK thusly (cool, eh?):
$ cat src/SMK programs = ['hello', 'goodbye'] for prog in [smk.make_program (x, srcs = [x + '.c'], libs = [libhello]) for x in programs]: smk.env.all.depends (prog)
Unfortunately, if we attempt to build now, the compiler will fail for being unable to find the header file. We need to modify the way the source files are compiled.
Each project target has an associated command object: an object which holds all the options, switches, settings, etc. necessary to execute the command that updates the target. The command objects need not be unique; in fact, all the C source files in our project were compiled using the same object, the default C compiler object, referenced via the global variable smk.env.cc [2].
In order to solve our problem, we should add the top level source directory to the include search path of the compiler. The top level source directory is held in the global variable smk.env.top_srcdir, and we can modify the default include file search path via the include method of the default C compiler, smk.env.cc:
$ cat SMK smk.env.cc.include ([smk.env.top_srcdir]) smk.subdirs (['libhello', 'src']) $ smk -v cc -c -g -I../step6 -o libhello/hello-proc.o libhello/hello-proc.c cc -c -g -I../step6
When specifying the include search path, we are completely unaware of the command line syntax of the concrete compiler tool. This knowledge is encapsulated in the compiler object itself, thus our smkfile is portable across different compilers.
Since we modified the default (and only) C compiler object, the include search path switch is passed unnecessarily when building the library objects too [5]. If this is considered undesirable, we should create a new compiler object, modify only it by means of the include method, and specify that the build of the hello.o and goodbye.o object files should use that compiler object. Probably the easiest way to obtain a new compiler object is to copy an existing one, via the SMK function smk.clone. We can use either of the smk.make_object or smk.compile_source functions to assign the new compiler object the task of building the object files, by using the named parameter build. Thus the src/SMK file becomes:
$.env.all.depends (prog)
and we remove the unnecessary include path setting in the top level smkfile, so it becomes simply:
$ cat SMK smk.subdirs (['libhello', 'src'])
And we will rebuild the project to make sure everything works as expected:
$ smk -qc $ smk -v
Having a header file, we would like to be able to recompile the sources and rebuild the targets, which are affected by changes to it, i.e:
$ touch libhello/hello.h $ smk -v cc -c -g -I../step6 -o src/hello.o src/hello.c cc -c -g -I../step6 -o src/goodbye.o src/goodbye.c cc -g -o src/hello src/hello.o -Llibhello -lhello cc -g -o src/goodbye src/goodbye.o -Llibhello -lhello
As we can see, smk solves this problem by itself, relieving us from the need to explicitly specify the header file dependencies. The smk scans the source files for preprocessor include directives. It uses the include file search path to find the header files and automatically makes them prerequisites of the target object file. Such a potentially time consuming operation is, of course, not performed on each build; instead, the implicit dependencies are stored (among other things) in a cache file, SMK.cache. SMK will recompute the implicit dependencies only when the corresponding source file changes. For all the other source files it will use the information in the cache file.
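The scanning step can be illustrated with a deliberately naive sketch. Real scanners also handle the <...> form, comments, and conditional compilation, and they resolve each name against the include search path before recording a prerequisite; none of that is attempted here:

```python
import re

# Match quoted include directives at the start of a line.
INCLUDE_RE = re.compile(r'^\s*#\s*include\s*"([^"]+)"', re.MULTILINE)

def scan_includes(source_text):
    # Return the quoted #include targets of a C translation unit.
    return INCLUDE_RE.findall(source_text)

src = (
    '#include <stdio.h>\n'
    '#include "libhello/hello.h"\n'
    'int main () { return 0; }\n'
)
print(scan_includes(src))  # ['libhello/hello.h']
```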
The generation of the implicit dependencies is customizable at the tool level. The generic C compiler (smk.tools.cc) uses a source file scan for preprocessor directives, whereas the GNU C compiler tool (smk.tools.gcc) uses the built-in GCC ability to automatically derive prerequisite header files [6].
A novel and, as of this writing, unique feature of smk is its ability to support a wide range of source and build directory dispositions with a single set of smkfiles, as well as its ability to perform builds from anywhere within the build tree, thus truly having a global view of all the dependencies.
In this configuration the source and the build trees are completely separate [7]. This configuration is achieved in the following ways:
Use the -f option to point to the top level smkfile. The parent directory of the smkfile is taken to be the top level source directory.
$ mkdir build $ cd build/ $ smk -v -f ../SMK $ cd ..
Use the -o option to refer to the top level build directory. The current working directory should contain the top level smkfile and is taken to be the top level source directory:
$ smk -v -o build2 cc -c -g -o build2/libhello/hello-proc.o libhello/hello-proc.c cc -c -g -o build2/libhello/goodbye-proc.o libhello/goodbye-proc.c cc -c -g -I../step6 -o build2/src/hello.o src/hello.c cc -c -g -I../step6 -o build2/src/goodbye.o src/goodbye.c ar ruv build2/libhello/libhello.a build2/libhello/hello-proc.o build2/libhello/goodbye-proc.o ar: creating build2/libhello/libhello.a a - build2/libhello/hello-proc.o a - build2/libhello/goodbye-proc.o cc -g -o build2/src/hello build2/src/hello.o -Lbuild2/libhello -lhello cc -g -o build2/src/goodbye build2/src/goodbye.o -Lbuild2/libhello -lhello
Combine the -o and -f options.
In this configuration we have multiple, identically named subdirectories of each source directory, which contain the built targets. The subdirectory name is given with the -s option [8]:
$ smk -v -s debug cc -c -g -o libhello/debug/hello-proc.o libhello/hello-proc.c cc -c -g -o libhello/debug/goodbye-proc.o libhello/goodbye-proc.c cc -c -g -I../step6 -o src/debug/hello.o src/hello.c cc -c -g -I../step6 -o src/debug/goodbye.o src/goodbye.c ar ruv libhello/debug/libhello.a libhello/debug/hello-proc.o libhello/debug/goodbye-proc.o ar: creating libhello/debug/libhello.a a - libhello/debug/hello-proc.o a - libhello/debug/goodbye-proc.o cc -g -o src/debug/hello src/debug/hello.o -Llibhello/debug -lhello cc -g -o src/debug/goodbye src/debug/goodbye.o -Llibhello/debug -lhello
This configuration is a combination of the above two, where several trees, each named via the -s option, are contained within a single build “supertree”.
$ smk -v -o build -s release cc -c -g -o build/libhello/release/hello-proc.o libhello/hello-proc.c cc -c -g -o build/libhello/release/goodbye-proc.o libhello/goodbye-proc.c cc -c -g -I../step6 -o build/src/release/hello.o src/hello.c cc -c -g -I../step6 -o build/src/release/goodbye.o src/goodbye.c ar ruv build/libhello/release/libhello.a build/libhello/release/hello-proc.o build/libhello/release/goodbye-proc.o ar: creating build/libhello/release/libhello.a a - build/libhello/release/hello-proc.o a - build/libhello/release/goodbye-proc.o cc -g -o build/src/release/hello build/src/release/hello.o -Lbuild/libhello/release -lhello cc -g -o build/src/release/goodbye build/src/release/goodbye.o -Lbuild/libhello/release -lhello
To summarize: the top level build and source directories default to the current working directory. The -o option changes the default build directory, the -f option changes the default source directory, and the -s option appends an additional pathname component to each build directory.
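How the three options compose into a target's build location can be sketched as follows; this is an assumption drawn from the transcripts above, not a statement of smk's internals:

```python
import os

def target_path(top_builddir, rel_dir, output_subdir, name):
    # Compose a build location: the -o directory first, then the
    # target's directory relative to the source tree, then the -s
    # subdirectory (if any), then the target name itself.
    parts = [top_builddir, rel_dir]
    if output_subdir:
        parts.append(output_subdir)
    parts.append(name)
    return os.path.join(*parts)

# Mirrors "smk -o build -s release" for src/hello.o:
print(target_path('build', 'src', 'release', 'hello.o'))
```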
Once we have initiated a build, smk writes a cache file to the top level build directory. The cache file contains all of the above information: the build directory, the source directory, and the output subdirectory name. This is the key to another unique feature of smk - the ability to build projects from anywhere within the build tree. Upon startup, smk searches for a cache file in the top level build directory or, if no build directory is specified, in the current working directory and upwards. The parent directory of the cache file then becomes the top level build directory, and the top level source directory and output subdirectory name are recovered from the information in the cache file.
$ touch src/hello.c $ cd build/src/ $ smk -vs release cc -c -g -I../../../step6 -o release/hello.o ../../src/hello.c cc -g -o release/hello release/hello.o -L../libhello/release -lhello
In this section we'll create a simple double-precision postfix (a.k.a. “reverse Polish notation”) calculator, using the example sources from the GNU Bison manual. The source files are as follows:
main.c.
#include <stdio.h> int main (void) { return yyparse (); } /* Called by yyparse on error. */ void yyerror (char const *s) { printf ("%s\n", s); }
rpnlex.c.
#include <stdio.h> #include <ctype.h> #include "rpncalc.h" /* The lexical analyzer returns a double floating point number on the stack and the token NUM, or the numeric code of the character read if not a number. It skips all blanks and tabs, and returns 0 for end-of-input. */; }
rpncalc.y
%{ #define YYSTYPE double #include <math.h> int yylex (void); void yyerror (char const *); %} ; } ; %%
In smk, Bison parsers are created with the function smk.make_parser. The function takes as its first parameter the parser name and, optionally, as a second parameter the grammar file name. The function returns an object which stands for the C source file of the parser. Since the output of bison consists of two files, the header file with the token definitions is available via the header member of the source file. The names of the parser source and the parser header are set to the value of the parser name with suffix .c or .h, respectively. However, unlike the case with other smk-generated filenames, the parser source and header files are generated in the current source directory, unless the parser name parameter is a full pathname.
Here's the initial version of our smkfile, building only the objects for the moment:
rpncalc_c = smk.make_parser ('rpncalc') smk.env.all.depends (smk.compile_sources (['main.c', 'rpnlex.c', rpncalc_c]))
However, when we attempt to build the project, we immediately get a problem:
$ smk -v cc -c -g -o rpnlex.o rpnlex.c rpnlex.c:4:21: rpncalc.h: No such file or directory ...
The file rpncalc.h is supposed to be generated by bison, but for some reason bison didn't even run. The explanation is simple: smk doesn't know it has to build rpncalc.h, because rpncalc.h is not a prerequisite of any target. How about automatic dependencies? Unfortunately, the smk support for automatic dependency generation is designed not to complain about missing dependencies, because it cannot in general decide whether a file is generated or there's insufficient information for finding it [9].
While it is possible in principle to keep building the project until the set of generated files (and thus the set of automatic dependencies) stabilizes, a design decision was made to explicitly specify such dependencies instead, like this:
rpncalc_c = smk.make_parser ('rpncalc') rpnlex_o = smk.compile_source ('rpnlex.c') smk.depends (rpnlex_o, rpncalc_c.header) smk.env.all.depends (smk.compile_sources (['main.c', 'rpnlex.c', rpncalc_c]))
And we can build the project and see everything works fine:
$
However, if we attempt to clean the project, we get another nasty “surprise” - the generated files are not removed:
$ smk -qc $ ls SMK SMK.cache main.c rpncalc.c rpncalc.h rpncalc.y rpnlex.c
This is not an error, though. Each build node can be marked as precious, using its precious attribute. Precious nodes are not cleaned by the -c or --clean options, but only by the -C and --realclean options [10]. In this particular case, the generated parser files were marked as precious by the smk.make_parser function.
We can now continue with building the calculator executable file: rpncalc = smk.make_program ('rpncalc', srcs = ['main.c', 'rpnlex.c', rpncalc_c]) smk.env.all.depends (rpncalc)
But [11] if we attempt to build, the linking will fail due to an unresolved reference to the function pow. We should link with the math library, libm.a, by passing its name to the function smk.make_program via the named parameter xlibs. Note the difference between the libs and xlibs parameters: we can pass either filenames or build node objects via the former, and smk will create a dependency between the libraries and the program, whereas we can pass only library names via the latter, and smk will pass them unchanged to the link editor tool for tool-specific processing.
So, the smkfile becomes: rpncalc = smk.make_program ('rpncalc', srcs = ['main.c', 'rpnlex.c', rpncalc_c], xlibs = ['libm.a']) smk.env.all.depends (rpncalc)
and we can finally build and run the calculator:
$ cc -g -o rpncalc main.o rpncalc.o rpnlex.o -lm $ ./rpncalc 6 7 * 42 10 2 5 ^ + 42
While one of the features of the SMK is having platform independent specification of commands and dependencies, it nevertheless allows for full customization of the build process.
The command line option -f / --file can be specified more than once. The smkfiles specified thusly are executed in the same order as on the command line. The last such file is taken to be the “main” smkfile, in the sense that the top level source directory is set to its parent directory. This feature allows a project configuration to be separated between several platform-dependent smkfiles and a fixed set of generic smkfiles. Consider the following (fictitious) examples:
$ smk -f conf/linux.cf -f ./SMK
$ smk -f conf/aix/main.cf -f conf/aix/xlc.cf -f ./SMK
$ smk -f conf/build/linux.cf -f conf/host/cygwin.cf -f conf/target/mips-elf.cf -f ./SMK
Of course, SMK does not limit one to static configurations.
After all but the last of the smkfiles given by the -f options have been loaded (the last being the one which defines the top level source directory), SMK looks in the top level build directory for a file named SMK.cf and loads it, if found. SMK.cf is intended to be produced by a configuration tool, for example a configure script produced by GNU Autoconf. It is loaded after all the static configuration files (if any), so it is able to override any settings in them.
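The resulting evaluation order can be made explicit with a short sketch (the function name is invented for illustration):

```python
def load_order(f_options, smk_cf_present):
    # All -f smkfiles except the last, then SMK.cf from the build
    # directory (if present), then the main smkfile.
    *configs, main_smkfile = f_options
    order = list(configs)
    if smk_cf_present:
        order.append('SMK.cf')
    order.append(main_smkfile)
    return order

print(load_order(['conf/linux.cf', './SMK'], smk_cf_present=True))
# ['conf/linux.cf', 'SMK.cf', './SMK']
```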
SMK leaves the choice of automatic vs. static configuration up to the package developer, without mandating any particular approach, and even allowing approaches to be mixed. Taking the Canadian cross example, the same configuration could be obtained by getting an automatic configuration tool to generate a SMK.cf file similar to the following one:
smk.load (smk.env.top_srcdir + '/conf/build/linux.cf') smk.load (smk.env.top_srcdir + '/conf/host/cygwin.cf') smk.load (smk.env.top_srcdir + '/conf/target/mips-elf.cf')
For the purposes of the tutorial, we will write our SMK.cf files by hand, but remember, it is intended to be automatically generated.
We'll continue the tutorial by going back to the “Hello, World!” example and making the static library into a shared one. The function which creates shared libraries is smk.make_shared_lib, thus the libhello/SMK file becomes:
libhello = smk.make_shared_lib ('hello', srcs = ['hello-proc.c', 'goodbye-proc.c'])
On most Unix systems, the shared objects must be compiled as position independent code. The generic compiler tool class, smk.tools.cc, is not capable of doing this, though, as POSIX does not specify ways to compile PIC objects and link shared libraries. Building shared libraries is inherently platform and tool specific; thus, in general, one needs additional platform and tool configuration settings. For the tutorial, we'll assume a generic ELF-based platform, using GCC and GNU ld [12].
We can modify the default C compiler by initializing the variable smk.env.cc to an instance of the class smk.tools.gcc. Likewise, we can modify the default linker by initializing the variable smk.env.ld to an instance of the class smk.tools.gcc_ld. An appropriate place to put such initializations is the file SMK.cf in the top level build directory.
$ cat SMK.cf smk.env.cc = smk.tools.gcc () smk.env.ld = smk.tools.gcc_ld ()
Each compiler tool must implement a boolean attribute named pic and emit the compiler options needed to produce a PIC file whenever this attribute is set to True. We will set this attribute in a copy of the default compiler object and build the library objects with the newly obtained compiler object.
$ cat libhello/SMK cc = smk.clone (smk.env.cc) cc.pic = True objs = smk.compile_sources (['hello-proc.c', 'goodbye-proc.c'], build = cc) libhello = smk.make_shared_lib ('hello', objs = objs)
At this point we are ready with the smkfiles and can build the project:
$ smk -v gcc -c -g -fPIC -Wall -W -o libhello/hello-proc.o libhello/hello-proc.c gcc -c -g -fPIC -Wall -W -o libhello/goodbye-proc.o libhello/goodbye-proc.c gcc -c -g -Wall -W -I../step7 -o src/hello.o src/hello.c gcc -c -g -Wall -W -I../step7 -o src/goodbye.o src/goodbye.c gcc -shared -g -o libhello/libhello.so libhello/hello-proc.o libhello/goodbye-proc.o gcc -g -o src/hello src/hello.o -Wl,-rpath,/home/velco/smk-tutor/step7/libhello -Llibhello -lhello gcc -g -o src/goodbye src/goodbye.o -Wl,-rpath,/home/velco/smk-tutor/step7/libhello -Llibhello -lhello
Several things are worth noting here. The compiler emits the proper compiler-specific option for creating PIC objects, -fPIC, as well as for creating a shared library, -shared. When linking the now-dynamic executables, the compiler provided the proper runtime library search path (-rpath), so we don't need any environment variable settings, wrapper shell scripts, or the like in order to execute or debug the resulting programs.
Building shared libraries on Windows is a little bit different, but we can nevertheless write a portable smkfile. First we need the usual dllexport/dllimport blurb in the library header and the library source files. In libhello/hello.h we have:
#ifndef libhello_hello_h #define libhello_hello_h 1 #ifdef _WIN32 # ifdef HELLO_DLL # define HELLO_FUNC __declspec (dllexport) # else /* !HELLO_DLL */ # define HELLO_FUNC __declspec (dllimport) # endif /* HELLO_DLL */ #else /* !_WIN32 */ # define HELLO_FUNC #endif /* _WIN32 */ HELLO_FUNC void print_hello (); HELLO_FUNC void print_goodbye (); #endif /* libhello_hello_h */
And in libhello/hello-proc.c and libhello/goodbye-proc.c, respectively:
#include <stdio.h> #include "hello.h" HELLO_FUNC void print_hello () { puts ("Hello, World!"); }
and
#include <stdio.h> #include "hello.h" HELLO_FUNC void print_goodbye () { puts ("Goodbye, World!"); }
When we compile the DLL itself, we have to define the macro HELLO_DLL, so that the functions are exported from the library. This is done via the define method of the compiler object:
cc = smk.clone (smk.env.cc) cc.pic = True cc.define ('HELLO_DLL') objs = smk.compile_sources (['hello-proc.c', 'goodbye-proc.c'], build = cc) libhello = smk.make_shared_lib ('hello', objs = objs)
We can proceed with building now:
d:\smk-tutor-win\step7>smk -v cl.exe -nologo -c -Zi -W4 -DHELLO_DLL -Folibhello\hello-proc.obj libhello\hello-proc.c hello-proc.c cl.exe -nologo -c -Zi -W4 -DHELLO_DLL -Folibhello\goodbye-proc.obj libhello\goodbye-proc.c goodbye-proc.c cl.exe -nologo -c -Zi -W4 -I..\step7 -Fosrc\hello.obj src\hello.c hello.c cl.exe -nologo -c -Zi -W4 -I..\step7 -Fosrc\goodbye.obj src\goodbye.c goodbye.c link.exe -dll -nologo -incremental:no -debug -out:libhello\hello.dll -implib:libhello\ehlo.lib libhello\hello-proc.obj libhello\goodbye-proc.obj Creating library libhello\ehlo.lib and object libhello\ehlo.exp link.exe -nologo -incremental:no -debug -out:src\hello.exe src\hello.obj -libpath:libhello ehlo.lib link.exe -nologo -incremental:no -debug -out:src\goodbye.exe src\goodbye.obj -libpath:libhello ehlo.lib
By default, when building a Windows DLL, SMK generates an import library with the same name as the DLL [13]. We can use the implib parameter of smk.make_shared_lib to control the building of the import library. The default value of the parameter is True, which means that an import library should be built and its name derived from the name of the DLL. A value of False means not to build an import library at all, useful when the DLL will be used by loading it with LoadLibrary. And, finally, a string value allows the developer to specify a name and/or a location for the import library, as in the following example:
libhello = smk.make_shared_lib ('hello', objs = objs, implib = 'ehlo')
The import library object is available via the implib member of the shared library object. On platforms where import libraries are meaningless, the value of the implib parameter is ignored and the value of the implib member is None.
SMK supports package installation as an integral part of the build process. Install destinations are targets, dependent on the files being installed. In other words, installation is a special case of building [14], and an attempt to install a file results in first making sure it is up to date.
Install targets are created with the functions smk.install or smk.install_as. In both functions, the first parameter specifies where to install, while the second parameter specifies what to install. We can install the libhello.so library as follows:
smk.install ('lib', libhello)
If the first parameter is a relative pathname (as in the above example), SMK will append it to the value of the variable smk.env.prefix, thus obtaining the parent directory of the installation target. The target will be installed under its base name; for example, if the value of smk.env.prefix were /usr/local, our library would be installed as /usr/local/lib/libhello.so.
Alternatively, if we wanted the library to be installed under a different name, we could have used the function smk.install_as, in which the first parameter must denote a file name, as in the following example:
smk.install_as ('lib/libhello.so.0.2', libhello)
In addition to the library, we will install the header file, hello.h, and, if we have it at all, the import library, thus obtaining the following libhello/SMK file:
$ cat libhello/SMK cc = smk.clone (smk.env.cc) cc.pic = True objs = smk.compile_sources (['hello-proc.c', 'goodbye-proc.c'], build = cc) libhello = smk.make_shared_lib ('hello', objs = objs) smk.install ('lib', libhello) smk.install ('include', ['hello.h']) if libhello.implib: smk.install ('lib', libhello.implib)
The two programs, hello and goodbye are installed thusly:
$.install ('bin', prog) smk.env.all.depends (prog)
The last thing left is to play autoconfigurator again and specify a proper value for smk.env.prefix in SMK.cf [15]:
$ cat SMK.cf smk.env.cc = smk.tools.gcc () smk.env.ld = smk.tools.gcc_ld () smk.env.prefix = '/home/velco/local'
Now we can execute the smk command with the -i (or --install) option and see what happens:
$ smk -qC $ smk -vi gcc -c -g -fPIC -Wall -W -o libhello/hello-proc.o libhello/hello-proc.c gcc -c -g -fPIC -Wall -W -o libhello/goodbye-proc.o libhello/goodbye-proc.c INSTALL /home/velco/local/include/hello.h gcc -c -g -Wall -W -I../step8 -o src/hello.o src/hello.c gcc -c -g -Wall -W -I../step8 -o src/goodbye.o src/goodbye.c gcc -shared -g -o libhello/libhello.so libhello/hello-proc.o libhello/goodbye-proc.o gcc -shared -g -o ../../local/lib/libhello.so libhello/hello-proc.o libhello/goodbye-proc.o gcc -g -o src/hello src/hello.o -Wl,-rpath,/home/velco/smk-tutor/step8/libhello -Llibhello -lhello gcc -g -o src/goodbye src/goodbye.o -Wl,-rpath,/home/velco/smk-tutor/step8/libhello -Llibhello -lhello gcc -g -o ../../local/bin/hello src/hello.o -Llibhello -lhello gcc -g -o ../../local/bin/goodbye src/goodbye.o -Llibhello -lhello
Since we cleaned the project, a request for installation resulted in building the installation candidates and their prerequisites (which happened to be the entire project in our case). We have emphasized the installation steps. Our header file is installed via an internal “install tool”; that's why its update command is denoted only by the tag INSTALL, regardless of the verbose mode setting. Note how the programs are installed: by linking new executables, but, unlike when linking their in-project counterparts, without the runtime library search path options. The same would have happened to the shared library, if it had in-project shared library dependencies. The reason is that for installed dynamic executables and shared libraries, the location of their shared library dependencies should be specified in operating-system-specific ways, e.g. via ldconfig(8) on GNU/Linux, and this is considered beyond the scope of SMK. Targets without in-project dynamic library dependencies, or on platforms lacking -rpath (e.g. Windows), are installed by simply copying them, just like the above header file.
Uninstallation is performed via the command line option -u (or --uninstall):
$ smk -u UNINSTALL libhello.so UNINSTALL goodbye UNINSTALL hello.h UNINSTALL hello
All --uninstall does is remove the installed files. It is not meant as a replacement for a real packaging tool.
[1] The available smk classes are specified in the section called “Build node classes”
[2] The available smk global variables are specified in the section called “Global variables”
[3] Throughout the guide, we will use the term object when talking about objects in the Object-Oriented Programming sense, whereas we will use the term object file when referring to filesystem entities.
[4] this and other functions are specified in detail in the section called “Miscellaneous functions”
[5] Never mind the step6 directory name; it is the top level directory in my source tree, on yours it would be different.
[8] The name given to the -s option is available to the smkfiles via the global variable smk.env.output_subdir.
[9] e.g. the standard system include directories are usually not searched for dependencies for at least two reasons: a) it's generally hard to detect or specify this information to the smk in a platform and compiler independent manner, and b) the standard system headers usually do not change and thus contribute nothing essential to the build process.
[12] On Windows, simply do not initialize the compiler and linker in the SMK.cf file; SMK defaults to using the cl and link commands.
[13] This knowledge is, of course, not hardcoded in SMK itself, but is rather determined by the smk.tools.msvc build tool.
[14] Likewise, SMK supports UN-installation as a special form of cleaning, although this fact is of no concern to ordinary users.
The non-option arguments, i.e. the arguments which do not begin with a dash (-), are considered to be target specifications. Target specifications come in three flavors:
Example 3.1. Target specifications
Suppose we have the following files in our project:
$ find build build build/src1 build/src1/x.o build/src1/y.o build/src2 build/src2/x.o build/src2/y.o
and the current working directory is build/src2. The command smk '*.o' builds all four objects. The command smk *.o builds src2/x.o and src2/y.o, due to shell expansion. The command smk '*/x.o' builds src1/x.o and src2/x.o.
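The difference between the quoted and unquoted forms can be reproduced with fnmatch-style matching, where '*' is allowed to span directory separators (an assumption consistent with the example above):

```python
import fnmatch

targets = ['src1/x.o', 'src1/y.o', 'src2/x.o', 'src2/y.o']

def select(pattern, known_targets=targets):
    # Pick every known target whose relative path matches the
    # pattern; '*' may cross '/' boundaries here.
    return [t for t in known_targets if fnmatch.fnmatch(t, pattern)]

print(select('*.o'))    # all four objects
print(select('*/x.o'))  # ['src1/x.o', 'src2/x.o']
```

The unquoted form smk *.o never reaches this matching step with the original pattern: the shell has already expanded it against the current directory.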
smk.env.top_builddir - top level build directory.
smk.env.top_srcdir - top level source directory.
smk.env.builddir - current build directory (the directory corresponding to smk.env.srcdir).
smk.env.srcdir - current source directory (the directory containing the current smkfile).
smk.env.output_subdir - generated files directory component (the value of the -s option).
smk.env.source_path - source search path.
smk.env.cc - the default C compiler.
smk.env.cxx - the default C++ compiler.
smk.env.ld - the default link editor.
smk.env.ar - the default static librarian.
The archiver tool creates and modifies static library archives.
Generic C compiler.
cc - the name of the compiler program, default cc.
debug - switch on/off the generation of debugging information, default True.
optimize - switch on/off optimization, default False.
warning - high/low warning level, default True.
pic - generate Position Independent Code, default False.
cpu - target architecture.
tune - target processor (a specific implementation of the target architecture).
Subversion error object. More...
#include <svn_types.h>
Subversion error object.
Defined here, rather than in svn_error.h, to avoid a recursive #include situation.
Definition at line 180 of file svn_types.h.
Pointer to the error we "wrap" (may be NULL).
Via this member, individual error objects can be strung together into an "error chain".
Definition at line 204 of file svn_types.h.
Details from the producer of the error.
Note that if this error was generated by Subversion's API, you'll probably want to use svn_err_best_message() to get a single descriptive string for this error chain (see the child member) or svn_handle_error2() to print the error chain in full. This is because Subversion's API functions sometimes add many links to the error chain that lack details (used only to produce virtual stack traces). (Use svn_error_purge_tracing() to remove those trace-only links from the error chain.)
Definition at line 198 of file svn_types.h.
The pool in which this error object is allocated.
(If Subversion's APIs are used to manage error chains, then this pool will contain the whole error chain of which this object is a member.)
Definition at line 210 of file svn_types.h. | https://subversion.apache.org/docs/api/latest/structsvn__error__t.html | CC-MAIN-2020-29 | refinedweb | 197 | 59.7 |
I'm working on a tool that's invoked primarily from the command line and scripts. The easiest way for me to interact with it is through the console. So in PyCharm, this is fairly easy to do, just Tools->Run Python Console... Then I can work with my code directly in there as per normal.
The problem is that PyCharm doesn't really seem to get me anything over running from the regular Python console! I really want to be able to set a breakpoint in my code, so I could do something like this:
D:\Tools\Python27\pythonw.exe -u C:\Program Files (x86)\JetBrains\PyCharm 1.2.1\helpers\pydev\console\pydevconsole.py 52626 52627
import sys; print('Python %s on %s' % (sys.version, sys.platform))
sys.path.extend(['C:\\Program Files (x86)\\JetBrains\\PyCharm 1.2.1\\helpers', 'D:\\Code\\Python\\PyHello'])
Python 2.7.1 (r271:86832, Nov 27 2010, 17:19:03) [MSC v.1500 64 bit (AMD64)] on win32
>>> from fibo import Fibo
>>> fib = Fibo(boundary=10)
>>> fib.show()
1
1
2
3
5
8
>>>
Pretty straightforward. But I'd like to set a breakpoint in the __init__ or show() functions and that doesn't work. Is there a way to start the console in debug mode?
I found this thread, but it's not quite what I want to do: that involves starting a debug session first, then opening the shell afterwards. I want to be able to debug from right within the console.
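(The fibo module itself is never shown in the thread; a minimal version consistent with the session output above would be something like this — my reconstruction, not the original code:)

```python
# Hypothetical reconstruction of the 'fibo' module used in the session
# above; only its printed output is known from the thread.
class Fibo:
    """Collect Fibonacci numbers strictly below a boundary."""

    def __init__(self, boundary=100):
        self.numbers = []
        a, b = 1, 1
        while a < boundary:
            self.numbers.append(a)
            a, b = b, a + b

    def show(self):
        # Print each collected number on its own line.
        for n in self.numbers:
            print(n)
```

With boundary=10, show() prints 1, 1, 2, 3, 5, 8 on separate lines, matching the console transcript.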
Hello Rick,
Right now there is no possibility to run the console in debug mode. Running
your script in the debugger, putting a breakpoint somewhere and then showing
the command line console is the only possibility at the moment.
>>>> from fibo import Fibo
>>>> fib = Fibo(boundary=10)
>>>> fib.show()
--
Dmitry Jemerov
Development Lead
JetBrains, Inc.
"Develop with Pleasure!"
Oh. Well, that'd be useful, so consider this a vote for that :-)
What I've done is to create a go.py script that contains the stuff I'd normally do in the console and then I run that. It's OK, but not as flexible, because of course I can't just try different things as I go along.
Thanks for the quick response.
Hi Rick,
There is already an issue for that so you can vote ;-) | https://intellij-support.jetbrains.com/hc/en-us/community/posts/205804329-Debugging-from-the-Python-Console | CC-MAIN-2020-29 | refinedweb | 390 | 75.81 |
Jen Garrett
2008-03-29
Anyone working with a custom tag library or idea on OpenCyc integration? Ive got some things working, but its all jerry rigged (no offense to anyone named Jerry). I am looking for a way to do queries, assertions, etc...
Great project Nicholas!
Jennifer
Nicholas Tollervey
2008-03-29
Jen,
Have you seen the following paper?
Very interesting "howto" for what you're proposing.
Nicholas.
Jen Garrett
2008-03-29
Yes, thats how I jerry rigged it, to use that as a proxy, I have started something similar in C#, as it will be much better to tie directly in as Cyn does, rather than using Cyn as a proxy. I can get basic stuff out of Cyc, but Im ready to take things to the next level, auto learning and whatnot.
Jennifer
Douglas R. Miles
2009-05-16
I am working on implementing these tags:
<cycterm> translates an English word/phrase into a Cyc symbol
<cycsystem> executes a CycL statement and returns the result
<cycrandom> executes a CycL query and returns one response at random
<cycassert> simple way to assert a CycL statement
<cycretract> simple way to retract a CycL statement
<cyccondition> controls the flow execution in a category template
- if then else if then else if then else
However this tag is giving me trouble:
<guard> processes a template only if the CycL expression is true
Kino did this for "Bitoflife AIML interpreter in java".
The way it worked is when a <pattern> was fully matched (stars bound), the <guard> tags from inside <template> had to be checked. This means a small side-effect-free version of <template> has to be rendered. So if a guard fails, then the <pattern> can be considered unmatched and the process moves on to the next <category>.
Now I'm looking to do this to the Program# SVN 2.5 Tagged version.
What I need to find is where in the Node.cs/SubQuery it commits that a <catagory>s <pattern> was matched well enough that it can move onto the <template> phase. and be able to back out and move onto the next <category>..
Maybe Graphmaster implemenation wise when all the nodes along the <pattern> is matched and its time to look at the "string Template" the <template> can be ran <guard> and the guard tag will have left a boolean arround saying this was a non matching node.
Advice?
Thanks in advance.
Nicholas Tollervey
2009-05-18
Take a look in Bot.cs around 639 in the "Chat" method. This basically controls the workflow of the reponse to a request. Perhaps this is what you are looking for...?
Douglas R. Miles
2009-05-20
Thank you, after inspecting Chat in Bot.cs,
It looks like the new <guard> tag i am working on would be implemented simular to a <that> tag.
I couldn't tell for sure, but do you check the conformity of <that> against the bot's last reponse before confirming the <pattern> against the user's input?
Nicholas Tollervey
2009-05-21
Douglas,
If I understand you correctly, you're asking if I validate "that" in some way before matching user input in the graphmaster.
Short answer: no. :-)
Sorry I can't be more helpful,
Nicholas.
Douglas R. Miles
2009-05-22
Actually that was helpful. I ended up getting <guard> to work as well! One thing I learned is that the words in sentences are matched backwards from the end.
In the form below, the <pattern> is only validated "I" first word is matched, and then I call <guard>'s OuterXML which is stored in the node where node.word is "I" evaluation is true, otherwise i return String.Empty
<category>
<pattern>I AM IN * </pattern>
<guard>(#$isa <cycterm filter="Location" ><star index="1"/></cycterm> #$Location)</guard>
<template>
<think><cycassert>(#$objectFoundInLocation #$TheUser <cycterm filter="Location" ><star index="1"/></cycterm> ) <get name="assertmt" default="#$AimlContextMt"/> )</cycassert></think> I will remember can be found at <star index="1"/>.
</template>
</category>
.
Nicholas Tollervey
2009-05-22
Douglas,
I have a question for you... Are you going to release the source code for your OpenCyc integration under an open-source license?
I'm about to start work on a release for version 3.0 of AIMLBot and having OpenCyc capabilities built in from the start (perhaps as an optional plugin) would be awesome.
I quite understand if you can't or don't want to. :-)
Best wishes,
Nicholas.
Douglas R. Miles
2009-05-22
That would be great if OpenCyc was part of the AIMLBot.
I ended up starting with the 2.5 tag first becasue the 3.0 seemed like it was in the early stages of a complete rewrite. But the design looked very good for 3.0 and would have rather used it.
My code is integrated with yours into a project opensim4opencog
I renamed the namespace AIMLBot -> RTParser (recursive template parser).
I was secretly afraid to have a top-level SVN directory containing the word "AIML" mainly becasue we wanted the bot to use some AIML methodology for selecting engine rules but not be yet another AIML bot for secondlife. For example, feeding low level server events and letting AIML translating them to high-level events.
Other renames in Code:
Bot --> RTBot
Three members called ".bot" became ".Processor",".Proc",".RProcessor"
Mainly to avoid accidental conflicts later.
It's totally opensrc:
The base DLL
The "Main"
The Interesting AIML files:
Some wilder changes:
<get>foo</get> is equiv to <get name="foo"/>
(this is so things like) <get><star/></get> was possible.
The same with:
<set>abc a b c</set> is equiv to <set name="abc">a b c</set>
Just the first token becomes the "name"
Added to Bot:
myBot.AddExecHandler(string lang,delegate string(string cmd, User user ));
for things like <system lang="lisp"> <system lang="subl"> <system lang="shell"> <system lang="c#"> <system lang="secondlife"> etc
(<guard> implicitly assumes it is also surrounded by a <system> tag)
<cycsystem> is pretty much a <system lang="subl">
That is a quick/incomplete overview of some of the things.
I tried to be a good boy at first using the CustomTagHandlers .. Added a the default constructor and things but couldn't quite get it working plus with some of the hacks to get/set ended up not loading the extra cyc tags thru the custom tag handlers. But possible I could give it a try again.
Although there will be some footprints anyways to AIMLBot such as the loading of custom <system> tag handler hooks I mentioned above.
The guard requires saving at each leaf node (where we save the template.OuterXML) maybe save the ref to additional XMLElements that were held inside <category> .
How ready is 3.0 right now? Can you write a quick ConsoleChat Program.cs wrapper around it? If so, I can probably help with some patches. We can go over decide how how much could be late loaded (Tag handlers or whatnot) vs the impact to the the final AIMLBot.dlls... Our hope is the lowest possible changes needed .. but any changes we do we make it even easier for people to support other things like SQL and other database interactions. | http://sourceforge.net/p/aimlbot/discussion/453390/thread/e355e84e | CC-MAIN-2015-40 | refinedweb | 1,202 | 72.36 |
CSC 321.01, Class 02: Getting started with Ruby
Overview
- Preliminaries
- Notes and news
- Upcoming work
- Good things to do
- Questions
- A brief introduction to Ruby
- Some exercises
- Duck typing
- Additional characteristics of Ruby.
- Reminder: Self Gov your workspace
Upcoming Work
Good things to do
Academic
- Community time today at 11:00 am in JRC 101.
- CS Table today in the Whale Room.
- CS Extras Thursday at 4:15 pm.
- Rosenfield Symposium next week.
Peer
- ???
Questions
A brief introduction to Ruby
- Computer scientists and computer programmers are passionate and opinionated about their programming languages.
- Two models:
- Programming languages are designed to help competent programmers get their work done as quickly as possible.
- Programming languages are designed to help competent programmers get their work done as correctly as possible.
- Model two: Think about Java
At compile time, Java checks carefully to make sure that what you are doing is legal.
public class Foo {
  int foo() { return 1; }
}
public class Bar extends Foo {
  int bar() { return 0; }
}
…
public Foo f;
f = new Bar();
// Does the object that f refers to have a bar method?  Yes.
// Can I use that method blithely?
System.out.println(f.bar());
“Java is your nanny”
- Model one: We know what we’re doing. Having to deal with those complaints just slows us down. (When we want to know about possible problems, we’ll run a “look for problems” program.)
- Type theory snobbery says: We can protect programmers without getting in their way. But type theory snobs have lost lots of oxygen to their brain due to triangle holds.
- Ruby (and much of agile) comes from the “trust the programmer world”
- There are also different paradigms (ways of thinking about how you express algorithms): functional, imperative, object-oriented, (declarative)
- In functional programming - think in terms of functions - as first-class value (inputs to other functions, outputs from other functions); functions are pure - given the same input you get the same output
- In imperative programming, we think about explicit sequences of steps and the state of the system
- In object-oriented programming, we think about objects (things that have both methods and state) along with three core ideas of object-oriented programming: inheritance, encapsulation, and polymorphism.
- Inheritance lets us define new objects or new classes in terms of other objects or classes.
- Enapsulation lets you deal with the complexities of state by hiding the state of your object from your clients.
- Encapsulation lets you group related things (data and the methods applicable to those data) into a single whole.
- Polymorphism lets you write general functions (or types) that can be applied to a wide variety of values, provided they meet some specification.
- I can write a double procedure that will work with any object that knows how to add two elements of the same type.
- I can write a HomogeneousList<T> class that will represent homogeneous lists of any underlying type.
- I can write a SortedList<T> class that will represent lists that are stored in order from least to greatest, provided that T is a class that provides a compareTo(T) method.
- Ruby, like most modern languages, embodies/includes all three paradigms. Ruby’s core umwelt is object-oriented.
- Ruby is interesting, in part, because it has a single primary designer, Matsumoto.
- Ruby is very popular
- Programmers like the “I can write what I want, quickly” model.
- Ruby on Rails.
- Way too multi-expressive. We have two numbers, a and b. In the cases in which b is strictly larger than a, print “Hello”
if a < b then puts "Hello" end
puts "Hello" if a < b
puts "Hello" unless b <= a
unless b <= a     # [some syntax that Sam doesn't use]
  puts "Hello"
end
- Wow, that’s a pain because it becomes harder to read. Given four programmers, you’ll find five different approaches to doing the same thing, and that adds cognitive load.
- Wow, that’s awesome because it gives you a lot of options and lets you match how you might say things in English
- Lets us match the other languages we think in.
- More variations - More chance that you’ll remember one
- Or if you’re Sam, you’ll remember one incorrectly.
Some exercises
We will explore the ways in which you solved some of the problems from the SaaSbook. Explain your high-level approach and then the details.
sum_to_n?
Define a method sum_to_n?(array, n) that takes an array of integers and an additional integer, n, as arguments and returns true if any two elements in the array of integers sum to n. sum_to_n?([], n) should return false for any value of n, by definition.
- Contextual question one: What is
sums_to_n?([2,3],4)?
- false
- Contextual question one: What is
sums_to_n?([2,2],4)?
- true
Contextual question two: What are some other “interesting” inputs?
What is the time and space overhead of each?
Strategy one: Nested loops
- Outer loop: For each index
- Inner loop: For each subsequent index
- Time efficiency: O(n^2)
- Space efficiency: O(n) for the arrays (given to us); O(1) for the additional values.
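A sketch of this strategy in Ruby (helper name and details are mine, not from the class session):

```ruby
# Strategy one: try every pair of indices -- O(n^2) time, O(1) extra space.
def sums_to_n?(array, n)
  (0...array.length).each do |i|              # outer loop: each index
    ((i + 1)...array.length).each do |j|      # inner loop: each later index
      return true if array[i] + array[j] == n
    end
  end
  false                                       # covers the empty array too
end
```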
Strategy two: Nested loops, with additional stuff
- Outer loop: For each value
- Inner loop: Clone of the array, remove elements, for each
- Time complexity: O(n^2)
- Space complexity: O(n)
Detour
- How would you implement array.drop(k), which returns an array like the original but without the first k elements? (If you were in a pure language? If you were in an impure language? If you could define the semantics?)
- O(1) or in between or O(n) or worse
Option 1
- Create a new array
- Copy over elements
- Space: O(n-k)
- Time: O(n-k)
Option 2
- Return something like what you get from arr+k in C.
- That is, it refers to a subset of the memory allocated to arr.
- Space: O(1)
- Time: O(1)
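For reference, Ruby's built-in Array#drop takes option 1: it allocates and returns a new array of the remaining elements (O(n-k) time and space) and leaves the receiver unchanged:

```ruby
arr  = [1, 2, 3, 4, 5]
rest = arr.drop(2)
p rest   # => [3, 4, 5]
p arr    # => [1, 2, 3, 4, 5]  (drop copies; the receiver is untouched)
```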
Strategy 3:
- Sort the array
- Start at the two ends, moving inward, and checking pairs
- Running time O(sort + n), which is almost certainly O(nlogn)
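In Ruby, the sort-then-shrink idea might look like this (again a sketch, not the class's code):

```ruby
# Strategy three: sort, then move two pointers inward -- O(n log n) time.
def sums_to_n?(array, n)
  sorted = array.sort
  lo = 0
  hi = sorted.length - 1
  while lo < hi
    sum = sorted[lo] + sorted[hi]
    return true if sum == n
    if sum < n
      lo += 1   # too small: advance the left pointer
    else
      hi -= 1   # too big: retreat the right pointer
    end
  end
  false
end
```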
Strategy 4:
- Make all pairs
- Add all pairs
- See if any of them equal the desired sum, using find.
- About two lines of Ruby!
- Making all pairs might be O(n^2)
Code
def sums_to_n?(arr, n)
  # note: the method is Array#combination (singular)
  !arr.combination(2).find { |x, y| x + y == n }.nil?
end
This, to me, is like the inner product you should write in Scheme.
Inner product: Given two lists, A and B, of the same length, compute a0*b0 + a1*b1 + a2*b2 + ….
(reduce + (map * A B))
Short code
- Is likely to have fewer mistakes
- May be clearer
- Is likely faster to write
- Makes assumptions about efficiency of underlying operations
binary_multiple_of_4?
Define a method binary_multiple_of_4?(s) that takes a string and returns true if the string represents a binary number that is a multiple of 4. NOTE: be sure it returns false if the string is not a valid binary number!
Solution 1
def binary_multiple_of_4?(s)
  return false if s == "0"
  return s.to_i % 4 == 0
end
Solution 2
def binary_multiple_of_4?(s)
  return false if s.length == 0 || s.match(/[^01]/)
  s.to_i(2) % 4 == 0
end
Solution 3
def binary_multiple_of_4?(s)
  last = s.length - 1
  i = 0
  until i == last
    if (s[i] != "0") && (s[i] != "1")
      return false
    end
    i += 1
  end
  return s.end_with?("00")
end
Solution 4
def binary_multiple_of_4?(s)
  n = 0
  for i in 0...s.length
    if (s[i] != '0' && s[i] != '1')
      return false
    end
    n *= 2
    n += s[i].to_i
  end
  return n % 4 == 0
end
Duck typing
Not covered
Additional characteristics of Ruby
Not covered | http://www.math.grin.edu/~rebelsky/Courses/CSC321/2017F/eboards/eboard02.html | CC-MAIN-2018-51 | refinedweb | 1,253 | 64.1 |
We think of our matrix multiplication example as “Hello world”, but it didn’t sit quite well with me. It was missing the essential ingredients for a “Hello world” demo which are: to be simple, require a complete code listing, fit on a single slide, and output the text “Hello world”. So in this blog post I’ll share an alternative one that has those ingredients.
Preparation
In Visual Studio 11 Beta (or later)
- Select “File” -> “New” -> “Project…” menu
- Select the “Empty Project” template and press “OK”
- Select the “Project” -> “Add New Item…” menu
- Select the “C++ File (.cpp)” item template and press “Add”
You now have an empty “Source.cpp” code file and have made no changes to any configuration of the project system or the compiler.
“Hello world” code
Type in (or copy paste) the following 14 lines of simplistic code and you are done!
#include <iostream>
#include <amp.h>

using namespace concurrency;

int main()
{
    int v[11] = {'G', 'd', 'k', 'k', 'n', 31, 'v', 'n', 'q', 'k', 'c'};

    array_view<int> av(11, v);
    parallel_for_each(av.extent, [=](index<1> idx) restrict(amp)
    {
        av[idx] += 1;
    });

    for(unsigned int i = 0; i < 11; i++)
        std::cout << static_cast<char>(av[i]);
}
…and this is the “Hello world” output when run from the command line.
Note
The naive code above does not exhibit any performance benefits compared to a serial CPU implementation. The reason is that the amount of data is not large enough and the body of the loop is not performing expensive operations, to pay for the synchronization and data transfer overhead. The code does serve its purpose of understanding the main elements of the C++ AMP programming model, and that is left as an exercise to the reader; to aid you with that exercise, the program is using: index, extent, restrict, parallel_for_each, and array_view. You can also watch a screencast introducing these with the “Hello World” example.
Nice. These type of posts are good for us just trying to wrap our head around AMP problems.
1 IntelliSense: illegal parameter type "void *" for amp-restricted function "Concurrency::details::_Texture_descriptor::_Texture_descriptor(Concurrency::details::_Texture *_Texture_ptr) restrict(cpu,amp)" (declared at line 538) d:Program Files (x86)Microsoft Visual Studio 11.0VCincludeamprt.h 1466 16 Project1
@Castaa, glad you liked it.
@ “I tried these instructions in the newest beta and I get an error”, that is a harmless IntelliSense error that you can ignore (or filter out in that window) – sorry, known Beta error. If you get an actual compiler error that prevents building and executing, please let us know.
You can eliminate the IntelliSense error if you just include <amprt.h> after the #include <amp.h> line
Failed to generate debug information when compiling the call graph for the concurrency::parallel_for_each at:
what's wrong
by the way, i have installed vs ultimate 2012
Version 11.0.50727.1 RTMREL
Do you get this error when compiling the "Hello world" code from the blog post above?
Also, what is the configuration you are building "Win32/x64"? Also some details about your environment would be helpful.
it seems my own computer can build and run Hello World, but the desktop with the same dev environment can not. I couldn't find the reason.
Never mind, thanks for your rely.
so good, thanks | https://blogs.msdn.microsoft.com/nativeconcurrency/2012/03/04/hello-world-in-c-amp/ | CC-MAIN-2016-50 | refinedweb | 550 | 61.97 |
Obviously we should reduce test dependencies, and avoiding capabilities also reduces the amount of set-up needed.
:js is particularly important to avoid. This must only be used if the feature test requires JavaScript reactivity in the browser, since driving a (headless) browser is much slower than parsing the HTML response from the app. Similarly, don't create or persist records when build, build_stubbed, attributes_for, spy, or instance_double will do. Every project we create will then use this
namespace, without us having to pass
it as
namespace: namespace. In order to make it work along with
let_it_be,
factory_default: :keep
must be explicitly specified. That will keep the default factory for every example in a suite instead of
recreating it for each example.
Maybe we don’t need to create 208 different projects - we can create one and reuse it. In addition, we can see that only about 1/3 of the projects we create are ones we ask for (76/208), so there is a clear benefit to setting a default. The fact that the most expensive examples here are in shared examples means that any reductions are likely to have a broad impact.
- On before and after hooks, prefer it scoped to :context over :all
- When using evaluate_script("$('.js-foo').testSomething()") (or execute_script), which acts on a given element, use a Capybara matcher beforehand (e.g. find('.js-foo')) to ensure the element actually exists; acting on elements blindly makes the tests more brittle
Debugging Capybara
Sometimes you may need to debug Capybara tests by observing browser behavior.
Live debug
You can pause Capybara and view the website in the browser by using the live_debug method in your spec. The current page will be automatically opened in your default browser. To watch the tests in a visible browser window, run the spec with CHROME_HEADLESS=0.
Fast unit tests
Some classes are well-isolated from Rails, and you should be able to test them
without the overhead added by the Rails environment and Bundler’s
:default
group’s gem loading. In these cases, you can
require 'fast_spec_helper'
instead of
require 'spec_helper' in your test file, and your test should run
really fast since:
- Gems loading is skipped
- Rails app boot is skipped
- GitLab Shell and Gitaly setup are skipped
- Test repositories setup are skipped
fast_spec_helper also supports autoloading classes that are located inside the lib/ directory. It means that as long as your class / module is using only code from the lib/ directory you will be fine. If it also depends on a gem, require the gem in fast_spec_helper itself, to make this requirement explicit, or you can add it to the spec itself, but the former is preferred.
spec itself, but the former is preferred.
It takes around one second to load tests that are using
fast_spec_helper
instead of 30+ seconds in case of a regular
spec_helper.
letwill suffice.
Feature flags in tests
This section was moved to developing with feature flags.
Pristine test environments
The code exercised by a single GitLab test may access and modify many items of data. Without careful preparation before a test runs, and cleanup afterward, data can be changed by a test in such a way that it affects the behavior of following tests. This should be avoided at all costs! Fortunately, the existing test framework handles most cases already.
When the test environment does get polluted, a common outcome is
flaky tests. Pollution will often manifest as an order
dependency: running spec A followed by spec B will reliably fail, but running
spec B followed by spec A will reliably succeed. The database is rolled back once the test completes, but ID sequences are not reset: don't assume that the first record created in a spec will have id=1, while the second will have id=2. Likewise, when stubbing a method that takes a file path, stub it only for the path you are interested in, and call the original implementation for other filepaths.
Filesystem
Filesystem data can be roughly split into “repositories”, and “everything else”.
Repositories are stored in
tmp/tests/repositories. This directory is emptied
before a test run starts, and after the test run ends. It is not emptied between
specs, so created repositories accumulate within this directory over the
lifetime of the process. Deleting them is expensive, but this could lead to
pollution unless carefully managed.
To avoid this, hashed storage is enabled in the test suite. This means that repositories are given a unique path that depends on their project's ID. Since the project IDs are not reset between specs, this guarantees that the repository paths will not conflict; but if two specs create a :legacy_storage project with the same path, they will share the same on-disk repository and pollute each other. An approach similar to the example for global variables, above, can be used, but this should be avoided if at all possible.
Test Snowplow events
To catch runtime errors due to type checks, you can enable Snowplow in tests by marking the spec with :snowplow and use the expect_snowplow_event helper, which will check for calls to Gitlab::Tracking#event.
describe '#show', :snowplow do
  it 'tracks snowplow events' do
    get :show

    expect_snowplow_event(
      category: 'Experiment',
      action: 'start'
    )
    expect_snowplow_event(
      category: 'Experiment',
      action: 'sent',
      property: 'property',
      label: 'label'
    )
  end
end
When you want to ensure that no event got called, you can use
expect_no_snowplow_event.
describe '#show', :snowplow do
  it 'does not track any snowplow events' do
    get :show

    expect_no_snowplow_event
  end
end

Shared contexts and shared examples can be placed in a subfolder if they apply to a certain type of specs only (e.g. features, requests etc.).
Fixtures
All fixtures should be placed under
spec/fixtures/.
Repositories
Testing some functionality, e.g. merging a merge request, requires a Git repository with a certain state. The :custom_repo trait lets a spec specify the exact files that will appear in the master branch of the project's repository. For example:
let(:project) do
  create(
    :project, :custom_repo,
    files: {
      'README.md'       => 'Content here',
      'foo/bar/baz.txt' => 'More content here'
    }
  )
end
This will create a repository containing two files, with default permissions and the specified content.
Configuration
RSpec configuration files are files that change the RSpec configuration (i.e. RSpec.configure do |config| blocks). They should be placed under spec/support/.
Each file should be related to a specific domain, e.g. spec/support/capybara.rb, spec/support/carrierwave.rb, etc.
Eric W. Biederman wrote:
> Oleg Nesterov <oleg@tv-sign.ru> writes:
>
>> Without rcu/tasklist/siglock lock task_pid_nr_ns() may read the freed memory,
>> move the callsite under ->siglock.
>>
>> Sadly, we can report pid == 0 if the task was detached.
>
> We only get detached in release_task so it is a pretty small window
> where we can return pid == 0. Usually get_task_pid will fail first
> and we will return -ESRCH. Still the distance from open to
>
> There is another bug in here as well. current->nsproxy->pid_ns is wrong.
> What we want is: ns = dentry->d_sb->s_fs_info;

Actually I thought about this recently - if we produce the list
of tasks based on the sb's namespace, then we should fill the
tasks' files according to the sb's namespace as well, not according
to the current namespace.

> Otherwise we will have file descriptor passing races and the like.

Can you elaborate?

> We could also do: proc_pid(inode) to get the pid, which is a little
> more race free, and will prevent us from returning pid == 0.
>
> In either event it looks like we need to implement some proper
> file operations for these proc files, maybe even going to seq file
> status.
>
> Eric
stm32plus: ILI9481 TFT driver
The TFT panel
The ILI9481 is a driver IC for 480×320 (HVGA) TFT panels. These panels are typically found in mobile phones (for example the iPhone 3G although the display in that phone probably does not have a controller) and other portable devices. HVGA panels contain double the number of pixels of the common 320×240 (QVGA) panels.
I picked up one of these panels on ebay. It didn’t come with a breakout board so I had to create one of my own.
The bare 3.5″ panel with the FPC tail
I could have tried to track down a connector for the 37-pin, 0.5mm FPC connector but decided against it and just soldered the FPC directly to the adaptor board that the e-bay seller included with the display.
Pinout
The data sheet for the panel included the pinout. It’s a familiar 8080-style interface that is easily connected to the FSMC of the STM32 microcontroller.
The pinout
The pins include the outputs from the resistive touch screen. It would be possible to create a breakout board that includes the popular ADS7843 to handle the decoding but I’m not going to do it here.
Backlight
The datasheet also included a schematic for the backlight. Since this is a relatively large 3.5″ panel it has a total of 6 white LEDs connected in parallel to act as a backlight.
The backlight is a parallel LED array
Having the LEDs connected in parallel means that I can power them directly from a 3.3V supply without the need for a step-up DC-DC converter that would be required had the LEDs been connected in series.
Driving the LEDs at 20mA means that my voltage regulator is going to have to supply 120mA to the backlight. It's always worth double-checking your development board's specification at times like these to ensure that the voltage regulator can cope.
If you're creating a real application that has a backlight like this then you should be using a dedicated constant-current backlight controller.
The ILI9481
The datasheet for the ILI9481.
stm32plus driver
stm32plus 2.0.0 comes with an updated ILI9481 demo application. Here’s an extract from the source code that shows how to set it up.
#include "config/stm32plus.h" #include "config/display/tft.h" using namespace stm32plus; using namespace stm32plus::display; class ILI9481Test { protected:]); // declare a panel _gl=new LcdPanel(*_accessMode); // apply gamma settings ILI9481Gamma gamma(0,0xf3,0,0xbc,0x50,0x1f,0,7,0x7f,0x7,0xf landscape mode, 16 bit colour (64K). If you take a look at TftInterfaces.h you will see that following modes are available:
ILI9481_Portrait_64K
ILI9481_Landscape_64K
ILI9481_Portrait_262K
ILI9481_Landscape_262K
The predefined drivers are just C++ typedefs that bring together the necessary combination of template instantiations to create a coherent graphics library.
Gamma correction
// apply gamma settings
ILI9481Gamma gamma(0,0xf3,0,0xbc,0x50,0x1f,0,7,0x7f,0x7,0xf);
It’s all in stm32plus 2.0.0 (or later) available from my downloads page.
Watch the videos
I’ve uploaded a pair of short, out-of-focus and badly produced videos that you can waste some of your valuable time by watching if you want to see the ILI9481 panel in action.
First the STM32F103. In the video I’m running the debug version of the code. The optimised build runs about 2.5x faster. For example, a full-screen clear takes 25ms when optimised and 65ms when in debug mode.
Secondly the STM32F4 Discovery board. This is running a build optimised for speed (-O3). | http://andybrown.me.uk/2012/04/09/stm32plus-ili9481-tft-driver/ | CC-MAIN-2017-17 | refinedweb | 603 | 65.12 |
Demo Instance
We run a demo instance which serves the Crossbar.io demos and can be used for light development work and testing.
Using the Demo Instance¶
The demo instance runs a WAMP-over-WebSocket listening transport at
wss://demo.crossbar.io/ws
This offers a single realm:
realm1
and accepts authentication as anonymous with permissions for all four WAMP roles set (for any URI).
There is no possibility to configure anything from your side.
This means that
- You have to carefully namespace your topics & call names to avoid conflicts with those of other users (e.g. have some part in the path unique to your project).
- You cannot try out any of the more advanced features such as authorization or component hosting.
The instance is solely intended for ephemeral use during initial testing and development. It's supposed to make your life easier when you start out with WAMP/Crossbar.io, but not to become a permanent fixture in your workflow.
Note: Other realms may exist on the demo server, but these are not intended for public use, and we may make breaking changes there at any time without announcement.
Rules of Use¶
The demo instance is a small virtual machine. This means that you shouldn't expect wonders regarding performance when you really hammer it (think thousands of connections and messages per second). For normal development and testing workloads, performance should be fine.
We don't want to give any specific usage rules. Just bear in mind that this is a free service for you, but that we have to pay for the traffic. So use common sense regarding the amount of usage. A rule of thumb: When you start thinking about whether this is still sensible usage it probably isn't! By this time you're obviously past initial exploration, and it's not hard to set up your own instance locally or in the cloud. | http://crossbar.io/docs/Demo-Instance/ | CC-MAIN-2017-26 | refinedweb | 323 | 64.3 |
I'm pretty new to python, but I'm trying to create a simplistic version of Mineweeper. I have developed functions that randomly place mines in a 9x9 matrix (nested lists) and then I have functions that assign values to every cell that doesn't contain a mine corresponding to the number of mines a given cell touches. This should make sense to anyone familiar with the game.
Anyway, I've started creating the GUI using Tkinter, which I am completely new to, and have run into a problem. As you can see, I have two nested for loops that create 81 buttons in Tkinter, these make up the grid that the user will interact with to "mine sweep". I've assigned them all to a list and, on mouse click, I have it call a function called "swept" which is where I will place all of the logic for determining if the user has clicked on a mine, etc. The problem is that I don't know how to figure out just which button has been clicked from inside the "swept" function. Any thoughts?
I'm sure there's a much more elegant way to create this game, but I'd like to keep going with the way I have come up with so far. Below I've added the code that I have written for the GUI.
from Tkinter import * class App: def __init__(self,master): global rows global grid frame = Frame(master) frame.grid() r1 = []; r2 = []; r3 = []; r4 = []; r5 = []; r6 = []; r7 = []; r8 = []; r9 = []; grid = [r1,r2,r3,r4,r5,r6,r7,r8,r9] for x in range(0,9): for y in range(0,9): grid[x].append(Button(frame, text=" ", command = self.swept, height = 1,width = 2)) grid[x][y].grid(row=x, column=y) ## for x in range(0,9): ## print rows[x] def swept(self): ## Don't know how to determine which button has been pressed from inside this function. print self root = Tk() app = App(root) root.mainloop() | https://www.daniweb.com/programming/software-development/threads/240519/getting-text-values-from-buttons-in-tkinter-to-make-a-version-of-minesweeper | CC-MAIN-2018-30 | refinedweb | 335 | 77.27 |
!
0. Prerequisites
I’m assuming we are working on Linux with Python 2.7, database sqlite3 and the pip software for managing Python package. First step is to install Django and south from pip by typing in a terminal as administrator:
# pip install django
Or, if you have Django already installed…
# pip install --upgrade django
Then south:
# pip install south
Check the version of Django in a Python console writing:
import south import django django.get_version()
1. Starting a Django project
Now, the first step will be to create the project with django-admin.py utility (sometime it is available without the .py extension) so please, type in a console (not as admin, just your regular user):
$ django-admin.py startproject djangoblog
The command creates the folder djangoblog as a container for your Django project:
djangoblog/ ├── djangoblog │ ├── __init__.py │ ├── settings.py │ ├── urls.py │ └── wsgi.py └── manage.py
One of the main differences from Django 1.4 and Django 1.3 is the change in project structure. The outer folder is irrelevant, you can rename it whenever you want. It is only a convenient place to put your custom applications. A project is nothing more than a set of Django applications working together. Normally the main application of your site is not very reusable but the rest of them could be installed separately and reused for other projects.
Starting a Django project is as easy as this. You can start a test server typing:
$ cd djangoblog $ chmod +x manage.py $ ./manage.py runserver
Check the output to navigate to the URL with the test server (normally) Stop the server by pressing Ctrl+C
But let’s edit the settings.py for our needs. We first apply some useful tricks I learned along time to make our applications more flexible and improving re-usability.
Enter folder djangoblog, choose your favorite code editor and open settings.py, add these lines at the beginning:
import os PROJECT_PATH = os.path.dirname(__file__)
This variable is useful to allow relative paths in our settings file.
Now let’s configure DATABASES variable to use SQLite3. Look for DATABASES variable and make it look like:
DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'. 'NAME': os.path.join(PROJECT_PATH, 'db/database.db'), # Or path to database file if using sqlite3. # The following settings are not used with sqlite3: 'USER': '', 'PASSWORD': '', 'HOST': '', # Empty for localhost through domain sockets or '127.0.0.1' for localhost through TCP. 'PORT': '', # Set to empty string for default. } }
We should modify following variables as well:
TIME_ZONE = 'Europe/Madrid' LANGUAGE_CODE = 'en-eu' STATIC_ROOT = os.path.join(PROJECT_PATH, 'static_files')', 'south' )
Note you are adding the own djangoblog application to the list of installed applications. This is because it is convenient to find only-application-related content inside the djangoblog folder. For instance, customized templates or static files so by declaring djangoblog as an installed application we are letting Django know there is relevant content inside djangoblog application.
Now, inside djangoblog folder, create db, static and templates sub directories:
$ mkdir db static templates
To automatically enable administration portal lets touch urls.py file and remove the commentaries in the lines related with the admin tool:
from django.conf.urls)), )
2. Synchronizing the database
Any Django project contains some default installed applications. Sometimes applications need their custom databases. We are going to create these databases. Do:
$ ./manage.py syncdb
Answer yes to the first questions and enter the admin username, e-mail and password to end configuring your Django site.
Do you remember we have installed an application called south and add it to the installed applications of our project? This is a schema evolution tool to assist us, developers, to evolve our databases. Using south we are not able to sync applications inside the project folder in a normal way. We need to aware south we are creating, deleting or changing the application model but this management is not very difficult and we will see it soon.
3. Testing the project
Just start the test server in the same way we did before:
$ ./manage.py runserver
If you want to customize where the project is started, use:
$ ./manage.py runserver IP:PORT
If you want to expose the server through your public IP, use:
$ ./manage.py runserver 0.0.0.0:PORT
The server shows a 404 error because there is no view associated to the default address but you can access /admin/ URL (normally) and enter username and password you provided in the previous step to get access to the administrator portal. Get used to it by modifying your user and filling name and surname.
4. Adding a new application
It’s time to create our first application. We will use it to manage blog posts and it will be named (in a rapture of originality and ingenious) posts. So we go to the container folder and type:
$ ./manage.py startapp posts
The commando will create a folder called posts with three files:
- models.py will keep the application data model
- tests.py will keep tests to verify application correctness
- views.py will keep application’s views
So the complete directory structure looks like:
djangoblog/ ├── djangoblog │ ├── db │ │ └── database.db │ ├── __init__.py │ ├── settings.py │ ├── static │ ├── static_files │ ├── templates │ ├── urls.py │ └── wsgi.py ├── manage.py └── posts ├── __init__.py ├── models.py ├── tests.py └── views.py
Now we must let know our project that application posts exist and is available for use. Modify settings.py to add posts to the list of installed applications before south:', 'posts', 'south' )
¡It is very important south is always the last installed application!
5. The posts application data model
One of the distinctive characteristics of Django is that you first model the solution and write it into models.py file and then lets the Django ORM to map it to a database automatically. We are going to model a blog post as follows, edit posts/models.py and write:
# -*- encoding: utf-8 -*- from django.db import models from django.contrib import auth class Post(models.Model): title = models.CharField(max_length=255) machine_name = models.SlugField(max_length=255, primary_key=True) content = models.TextField(blank=True) publication_date = models.DateTimeField(auto_now_add=True) def __unicode__(self): return self.title def excerpt(self): return self.content[:300] + u'…' class Meta: ordering = [u'-publication_date']
A model is no more than a regular class inheriting from models.Model base class able to be serialized by the Django ORM. Post class defines a title as a field of 255 characters, a primary key machine_name with the tipical_slug_format where only american alphanumeric characters, underscores and hyphens are allowed; the content of the post will be in the content field that is a text field which accepts the empty string as a valid value and the publication_date, with a default value being the moment when a new post is saved.
Furthermore we provide the excerpt() method and the __unicode__() magic method. The excerpt() method returns the first 300 characters of the post followed by the ellipsis character. Note the ellipsis symbol is only one character. As we are using unicode characters in the source file we need to specify the encode of the file with (see line 1):
# -*- encoding: utf-8 -*-
All strings in Django are unicode strings so my advice is to force all your strings to be unicode as well. To do this, prefix your string literal with a u as in the example:
u'An unicode string'
Internal class Meta allows the developer to set some metadata related with the model. In this case we establish the default order to be by publication date from earliest to latest (descending order).
6. Synchronizing database: the first snapshot
Now we have a new application declared, all that remains to get it working is to map models to the database. Without south application, this can be achieved by issuing the following command (don’t do it, you have south installed):
$ ./manage.py posts syncdb
But fortunately south is installed. South acts like a sort of version control software for DB schemes registering changes (snapshots) in the models, then applying them. To take the initial snapshot, type:
$ ./manage.py schemamigration posts --initial
The command for later changes is the same without the –initial parameter. Now you have the snapshot for the current state of posts application, apply it by writting:
$ ./manage.py migrate posts
7. Adding the post model to the administrator site
Before creating new posts, we need register Posts model into the administrator site. To do this, create an admin.py file inside posts folder and add these lines:
from posts.models import Post from django.contrib import admin admin.site.register(Post)
8. Creating posts
Now you have registered Posts model, an automatic panel to manage Posts instances is automatically added to the admin site. Try it by running the server and going to /admin/ URL:
$ ./manage.py runserver
You can create a new post. Don’t forget machine name and observe how Django transform class members’ names into humanized names for the fields in the form replacing underscores by spaces, lowering model names and adding an ‘s’ when used in plural. You can make errors (i.e. adding spaces o special characters to the machine name field) to see how validation works. Admin application from Django is really powerfull and we will return to it in a while. Lets create 4 – 6 moderate length posts.
9. Django views
In Django, views and templates conform the basis for user / application interaction. The procedure is as follows:
- The Django applications receive an request for an URL.
- The URL is compared with a pattern inside urls.py which determines which view will be used.
- The view processes the request, generates a dictionary called context and pass it to some template.
- The template is rendered by the Django template interpreter and filled with values in the context dictionary if proceeds.
- The rendered template is sent as the http response.
With class-based views introduced in Django 1.3 and the provided framework, to implement views is something extremely easy 🙂
Edit views.py inside the post folder and add the following code to create a list of posts:
# Create your views here. from django.views.generic import ListView from posts.models import Post class PostList(ListView): template_name = 'postlist.html' model = Post
ListView instances pass a context object called object_list to the template specified in template_name.
Now we need to associate this view with an URL, we will allow two ways of accessing the list of posts:
- / (as a shortcut)
Modify urls.py inside djangoblog folder to bind these URL to the former view:
from posts.views import PostList from django.conf.urls.defaults), name='adminpage'), url(r'^$', PostList.as_view(), name=u'mainpage'), url(r'^posts/', PostList.as_view(), name=u'postlist'), )
Please, pay attention to the import in line 1 and name parameters in lines 19 and 20.
10. Django templates
When designing user interfaces, webpages, some elements in the webpage are fixed. Usually main navigation bar, header and footer are not altered despite the section you are in the site. This introduces the idea of having HTML templates with placeholders for variable content and this is how Django templates language works.
It is important to note Django templates are only text templates, they are not coupled to any technology and they ignore if the render process is producing HTML, JavaScript or whatever.
So, let’s write a template using Django template language on a bootstrap-powered HTML in order to make our blog looking good. Download bootstrap and decompress it in the folder static you create on step 1:
djangoblog/ ├── djangoblog │ ├── db │ ├── static │ │ ├── css │ │ ├── img │ │ └── js │ ├── static_files │ └── templates └── posts └── migrations
Bootstrap is no more than some CSS and JS acting like a reduced but flexible webpage framework. We are writing some HTML 5 using CSS from bootstrap so in djangoblog/templates folder add a new file called postlist.html and paste this content:
{% load staticfiles %} <!DOCTYPE html> <html> <head> <title>My Django blog | {% block title %} Default title {% endblock %}</title> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <!-- Bootstrap --> <link href="{% static "css/bootstrap.min.css" %}" rel="stylesheet" media="screen"> <style>body { padding-top: 60px; }</style> </head> <body> <div class="navbar navbar-inverse navbar-fixed-top"> <div class="navbar-inner"> <div class="container"> <a class="brand" href="#">My Django blog</a> </div> </div> </div> <div class="container"> {% block content %} <h1>Default title</h1> <p>Default content</p> {% endblock %} </div> <script src=""></script> <script src="{% static "js/bootstrap.min.js" %}"></script> </body> </html>
Template instructions are called tags in Django slang and they have the following form: ‘{% <tag_name> [parameters] %}’. When they are blocks, the starting tag follow the same schema described but the block is ended by ‘{% end<tag_name> %}’. A tag can take parameters separated by commas. Some tags are available by default but other require to load special modules. That is exactly what the tag load does (see line 1). Tag block (lines 5 and 20) is a content placeholder. It takes a parameter, the name of the block, and this can be used in other templates to be referred and replaced by new content. Tag static, part of the module loaded in line 1; it takes a parameter and return the URL for the static content suffixed by the parameter.
Before running the server, you need to collect all the static content and put inside static_files folder you created in step 1. But do not despair, this is done automatically by Django. Just run the following command, answering yes:
$ ./manage.py collectstatic
Now you can run the server as usual. Then go to the root URL (normally) and check the result. Compare the previous template’s source code with the HTML source in the browser (keyboard shortcut Ctrl+U in Chrome and Firefox).
11. A useful view
We named former view as postlist.html for testing purposes but you probably noticed it is not listing any post. It looks more like the base page with the header and content sections so rename it to base.html:
$ mv djangoblog/templates/postlist.html djangoblog/templates/base.html
Now create a new postlist.html document and add the following content:
{% extends "base.html" %} {% block title %} Index {% endblock %} {% block content %} <h1>Index</h1> {% for post in object_list %} <article class="post"> <header> <h2>{{ post.title }}</h2> </header> <section> {{ post.excerpt }} </section> <aside> <p class="label label-info"> Published on <time>{{ post.publication_date|date:"r" }}</time> </p> <!-- Here will be commentaries --> </aside> </article> {% endfor %} {% endblock %}
Now we are extending a template because the first line is saying so. Tag extends take as parameter the template to be extended. Extending a template means replacing its blocks by the blocks in the current template. Look at lines 3 and 5 and see how we refer to the same block names as those in file base.html. Now the content is distinct and will replace blocks with the same name on the base template.
Do you remember the view from step 9? It was passing a context object called object_list to the template. You can iterate on lists by using the tag for … in as in line 9. The text inside the block will be executed once per iteration.
Inside Django templates, dictionaries and objects are read-only and they are accessed in a normalized form by using ‘dot.notation’ so getting the title of the post is:
post.title
Furthermore, you can call object’s methods as if they were attributes (see line 15) if they don’t take parameters:
post.excerpt
To print the content of a variable inside the template you must use ‘{{ <variable_name> }}’.
If you want to transform content before printing it, it is possible to pass the content of a variable through a pipe by using pipe notation:
{{ <variable_name>|<filter_name>[:<parameter>] }}
Filters are Django special functions that transform content. In line 20 we use filter date to format the publication date.
So now you can run the server again to see living results. Enjoy!
Conclusions
Now I finish to write the updated tutorial, it seems to me a lot bigger than the original and I realize I need to update the Spanish version as well because there are significant changes from Django 1.3 to Django 1.5. Content increasing is logical, the Spanish version I did was for a living code session meanwhile this update is focused on the mentoring program (say «Hi!» Tom). I’m not restricted to an hour so I extend the explanations trying to be clearer and I added bootstrap which I think is an original point and help people to feel more satisfied with the results.
Stay tuned for the second part of the tutorial!
6 comentarios en “Django 1.5 in a nutshell I”
Tom says Hi! 🙂
The tutorial is very easy to follow, and I think it would be hard to shorten it that much, it would have to be a step by step «do this do that, copy this, paste that» – which in turn would be even less appealing since it wouldn’t be teaching you anything.
I liked the ration of code snippets to text. I’ve found an error though. The tutorial doesn’t rune since django is searching for templates and statics in the default django folders (inside django/contrib. To make it work I had to add:
os.path.join(PROJECT_PATH, ‘static’), to STATICFILES_DIRS and
os.path.join(PROJECT_PATH, ‘templates’), to TEMPLATE_DIRS
in the settings.py file.
Although, this is possible, what happened here is that probably you forgot to include «djangoblog» as an installed app:djangoblog’, # <– Very important!
'south'
)
This way we can pack the static and templates files with the "main application" with minimum code.
True, I didn’t have it added to the INSTALLED_APPS. It works 🙂
Thanks tom.. I had worked out the TEMPLATE_DIRS addition but not the STATICFILES_DIRS .. Suddenly looking a lot better..! | https://unoyunodiez.wordpress.com/2013/03/16/django-in-a-nutshell-i/ | CC-MAIN-2022-27 | refinedweb | 2,986 | 56.86 |
The first level of the DOM focuses on defining the underlying structure of HTML and XML documents. DOM Levels 2 and 3 build upon this structure to introduce more interactivity and support for more advanced XML features. As a result, DOM Levels 2 and 3 actually consist of several modules that, although related, describe very specific subsets of the DOM. These modules are as follows:
DOM Core — Builds upon the Level 1 core, adding methods and properties to nodes
DOM Views — Defines different views for a document based on stylistic information
DOM Events — Explains how to tie interactivity to DOM documents using events
DOM Style — Defines how to programmatically access and change CSS styling information
DOM Traversal and Range — Introduces new interfaces for traversing a DOM document and selecting specific parts of it
DOM HTML — Builds upon the Level 1 HTML, adding properties, methods, and new interfaces
This chapter explores each of these modules except for DOM Events, which are covered fully in Chapter 12.
DOM Level 3 also contains the XPath module and the Load and Save module. These are discussed in Chapter 15.
The purpose of the DOM Levels 2 and 3 Core is to expand the DOM API to encompass all of the requirements of XML and to provide for better error handling and feature detection. For the most part, this means supporting the concept of XML namespaces. DOM Level 2 Core doesn't introduce any new types; it simply augments the types defined in ...
No credit card required | https://www.oreilly.com/library/view/professional-javascript-for/9780470227800/ch11.html | CC-MAIN-2018-47 | refinedweb | 252 | 54.46 |
The Q3CanvasItemList class is a list of Q3CanvasItems. More...
#include <Q3CanvasItemList>
This class is part of the Qt 3 support library. It is provided to keep old source code working. We strongly advise against using it in new code. See Porting to Qt 4 for more information.
Inherits Q3ValueList<Q3CanvasItem *>.
The Q3CanvasItemList class is a list of Q3CanvasItems.
Q3CanvasItemList is a Q3ValueList of pointers to Q3CanvasItems. This class is used by some methods in Q3Canvas that need to return a list of canvas items.
The Q3ValueList documentation describes how to use this list.
See also QtCanvas and Porting to Graphics View.
Returns the concatenation of this list and list l. | http://doc.trolltech.com/4.5-snapshot/q3canvasitemlist.html | crawl-003 | refinedweb | 110 | 52.97 |
Variable is a storage place to hold data which has some memory allocated to it. It is used to store data. it can be reused many times. We can change value of a variable during execution of a program. Different types of variables require different amounts of memory.
C is a strongly typed language. This means that the variable type cannot be changed once it is declared.
Syntax to declare a variable:-
type variable_name;
type variable_neme_1,variable_neme_2,variable_neme_3;
Example:-
int a b,c; float d; char e;
Here a, b,c,d,e are variables and int, float, char are data types.
The line int a, b, c; declares and defines the variables a, b, and c; which instruct the compiler to create variables named a, b and c of data type int.
Rules for naming a variable
- A variable name can start with the alphabet, and underscore only. It can’t start with a digit.
- No whitespace is allowed within the variable name.
- A variable can have alphabets, digits, and underscore.
- A variable name must not be any reserved word or keyword, e.g. int, char, void, goto , etc.
**There is no rule on how long a variable name (identifier) can be. However, you may run into problems in some compilers if the variable name is longer than 31 characters.
Types of Variables in C
- Local Variable
- Global Variable
- Static Variable
- Automatic variable
- External variable
Local Variable
A variable that is declared and used inside the function or block is called local variable. It’s scope is limited to function or block. It cannot be used outside the block. We must have to initialize the local variable before it is used.
#include <stdio.h> void function1() { int a = 5; // local variable printf ("value of local variable a is = %d",a); } int main() { function1(); return 0; }
Global Variable
A variable that is declared outside the function or block is called a global variable. It is declared at the starting of program. It is available to all the functions. Any function can change the value of the global variable.
Example
#include <stdio.h> int b = 10; // Global Varoable void function1() { int a = 5; // local variable printf ("value of local variable a is = %d\n",a); printf ("value of Globle variable b is = %d\n",b); b= b+a; printf ("value of Global variable b is = %d\n",b); } int main() { function1(); return 0; }
Output
value of local variable a is = 5 value of Globle variable b is = 10 value of Global variable b is = 15
Static Variable
A variable that retains its value between multiple function calls is known as static variable. It is declared with the
static keyword.
Example
#include <stdio.h> void function1(){ int a = 10;//local variable static int b = 20;//static variable a = a + 10; b = b + 10; printf("\n%d,%d",a,b); } int main() { function1(); function1(); function1(); function1(); function1(); return 0; }
Output
20,30 20,40 20,50 20,60 20,70
Automatic Variable
Automatic variables are similar as local variables. All variables in C that are declared inside the block, are automatic variables. We use
auto keyword to define Automatic variable.
External Variable
External Variable can be shared in multiple C source files. We need to use an
extern Keyword to declare an external variable.
Example
externvariable.h extern int a=5;//external variable testprogram.c #include "externvariable.h" #include <stdio.h> void main(){ printf("external variable: %d\n", a); }
Output:-
external variable: 5 | https://www.programbr.com/c-programming/variables-in-c/ | CC-MAIN-2022-27 | refinedweb | 580 | 65.52 |
This article discusses a small-scale benchmark test run on nine modern computer languages or variants: Java 1.3.1, Java 1.4.2, C compiled with gcc 3.3.1, Python 2.3.2, Python compiled with Psyco 1.1.1, and the four languages supported by Microsoft’s Visual Studio .NET 2003 development environment: Visual Basic, Visual C#, Visual C++, and Visual J#. The benchmark tests arithmetic and trigonometric functions using a variety of data types, and also tests simple file I/O. All tests took place on a Pentium 4-based computer running Windows XP. Update: Delphi version of the benchmark here.
Designing good, helpful benchmarks is fiendishly difficult. This fact led me to keep the scope of this benchmark quite limited. I tested only math operations (32-bit integer arithmetic, 64-bit integer arithmetic, 64-bit floating point arithmetic, and 64-bit trigonometry), and file I/O with sequential access. The tests were not comprehensive by any stretch of the imagination; I didn’t test string manipulation, graphics, object creation and management (for object oriented languages), complex data structures, network access, database access, or any of the countless other things that go on in any non-trivial program. But I did test some basic building blocks that form the foundation of many programs, and these tests should give a rough idea of how efficiently various languages can perform some of their most fundamental operations.
Here’s what happens in each part of the benchmark:
32-bit integer math: using a 32-bit integer loop counter and 32-bit integer operands, alternate among the four arithmetic functions while working through a loop from one to one billion. That is, calculate the following (while discarding any remainders):
1 – 1 + 2 * 3 / 4 – 5 + 6 * 7 / 8 – … – 999,999,997 + 999,999,998 * 999,999,999 / 1,000,000,000
64-bit integer math: same algorithm as above, but use a 64-bit integer loop counter and operands. Start at ten billion and end at eleven billion so the compiler doesn’t knock the data types down to 32-bit.
64-bit floating point math: same as for 64-bit integer math, but use a 64-bit floating point loop counter and operands. Don’t discard remainders.
64-bit floating point trigonometry: using a 64-bit floating point loop counter, calculate sine, cosine, tangent, logarithm (base 10) and square root of all values from one to ten million. I chose 64-bit values for all languages because some languages required them, but if a compiler was able to convert the values to 32 bits, I let it go ahead and perform that optimization.
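As a sketch only (not the benchmark's actual code), the trigonometry pass might look like the following in Python. Accumulating the five results into a running total is my own device for keeping every computation live; the upper bound here is far smaller than the benchmark's ten million.

```python
import math

def trig_pass(n):
    """Evaluate sin, cos, tan, log10 and sqrt for every value in 1..n."""
    total = 0.0
    x = 1.0
    while x <= n:
        total += math.sin(x) + math.cos(x) + math.tan(x)
        total += math.log10(x) + math.sqrt(x)
        x += 1.0
    return total

print(trig_pass(1000))
```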
I/O: Write one million 80-character lines to a text file, then read the lines back into memory.
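The I/O component follows the same write-then-read shape in every language. Here is a hypothetical Python sketch with a made-up file name and a much smaller line count than the benchmark's one million; each written line is 80 characters including the newline.

```python
import os
import tempfile

def io_benchmark(lines=1000, width=80):
    """Write `lines` fixed-width text lines sequentially, then read them back."""
    record = "x" * (width - 1)                      # 79 chars + newline = 80
    path = os.path.join(tempfile.gettempdir(), "bench_io.txt")
    with open(path, "w") as f:
        for _ in range(lines):
            f.write(record + "\n")
    with open(path) as f:
        read_back = f.readlines()                   # sequential read into memory
    os.remove(path)
    return len(read_back)

print(io_benchmark())
```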
At the end of each benchmark component I printed a value that was generated by the code. This was to ensure that compilers didn’t completely optimize away portions of the benchmarks after seeing that the code was not actually used for anything (a phenomenon I discovered when early versions of the benchmark returned bafflingly optimistic results in Java 1.4.2 and Visual C++). But I wanted to let the compilers optimize as much as possible while still ensuring that every line of code ran. The optimization settings I settled on were as follows:
Java 1.3.1: compiled with
javac -g:none -O to exclude debugging information and turn on optimization, ran with
java -hotspot to activate the just-in-time compiler within the JVM.
Java 1.4.2: compiled with
javac -g:none to exclude debugging information, ran with
java -server to use the slower-starting but faster-running server configuration of the JVM.
C: compiled with
gcc -march=pentium4 -msse2 -mfpmath=sse -O3 -s -mno-cygwin to optimize for my CPU, enable SSE2 extensions for as many math operations as possible, and link to Windows libraries instead of Cygwin libraries.
Python with and without Psyco: no optimization used. The
python -O interpreter flag optimizes Python for fast loading rather than fast performance, so was not used.
Visual Basic: used “release” configuration, turned on “optimized,” turned off “integer overflow checks” within Visual Studio.
Visual C#: used “release” configuration, turned on “optimize code” within Visual Studio.
Visual C++: used “release” configuration, turned on “whole program optimization,” set “optimization” to “maximize speed,” turned on “global optimizations,” turned on “enable intrinsic functions,” set “favor size or speed” to “favor fast code,” set “omit frame pointers” to “yes,” set “optimize for processor” to “Pentium 4 and above,” set “buffer security check” to “no,” set “enable enhanced instruction set” to “SIMD2,” and set “optimize for Windows98” to “no” within Visual Studio.
Visual J#: used “release” configuration, turned on “optimize code,” turned off “generate debugging information” within Visual Studio.
All benchmark code can be found at my website. The Java benchmarks were created with the Eclipse IDE, but were compiled and run from the command line. I used identical source code for the Java 1.3.1, Java 1.4.2, and Visual J# benchmarks. The Visual C++ and gcc C benchmarks used nearly identical source code. The C program was written with TextPad, compiled using gcc within the Cygwin bash shell emulation layer for Windows, and run from the Windows command line after quitting Cygwin. I programmed the Python benchmark with TextPad and ran it from the command line. Adding Psyco’s just-in-time compilation to Python was simple: I downloaded Psyco from Sourceforge and added
import psyco and
psyco.full() to the top of the Python source code. The four Microsoft benchmarks were programmed and compiled within Microsoft Visual Studio .NET 2003, though I ran each program’s
.exe file from the command line.
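The two-line Psyco addition described above can be made robust with a guarded import, so the same script runs with or without the package (Psyco is a real but long-obsolete library; the `JIT_ENABLED` flag and `benchmark_kernel` are my own illustrative additions):

```python
# Enable Psyco's JIT if it is installed; fall back to plain CPython otherwise.
try:
    import psyco
    psyco.full()
    JIT_ENABLED = True
except ImportError:
    JIT_ENABLED = False

def benchmark_kernel(n):
    # a stand-in inner loop of the kind Psyco accelerates
    total = 0
    for i in range(1, n + 1):
        total += i * i
    return total
```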
It should be noted that the Java
log() function computes natural logarithms (using e as a base), whereas the other languages compute logarithms using base 10. I only discovered this after running the benchmarks, and I assume it had little or no effect on the results, but it does seem strange that Java has no built-in base 10 log function.
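The base mismatch has an easy workaround via the change-of-base identity, log10(x) = ln(x) / ln(10), which works in any language that ships only a natural log. A quick sanity check of the identity:

```python
import math

def log10_via_ln(x):
    # change of base: log_10(x) = ln(x) / ln(10)
    return math.log(x) / math.log(10)
```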
Before running each set of benchmarks I defragged the hard disk, rebooted, and shut down unnecessary background services. I ran each benchmark at least three times and used the best score from each component, assuming that slower scores were the result of unrelated background processes getting in the way of the CPU and/or hard disk. Start-up time for each benchmark was not included in the performance results. The benchmarks were run on the following hardware:
Type: Dell Latitude C640 Notebook
CPU: Pentium 4-M 2GHz
RAM: 768MB
Hard Disk: IBM Travelstar 20GB/4500RPM
Video: Radeon Mobility 7500/32MB
OS: Windows XP Pro SP1
File System: NTFS
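The best-of-three protocol described above (keep the fastest of several runs, on the theory that slower runs were disturbed by background work) can be sketched as a small harness. This is illustrative, not the original benchmark code; `time.perf_counter` stands in for whatever timer each language's benchmark used:

```python
import time

def best_of(runs, fn, *args):
    """Run fn several times and keep the fastest wall-clock time."""
    best = float("inf")
    result = None
    for _ in range(runs):
        start = time.perf_counter()
        result = fn(*args)
        elapsed = time.perf_counter() - start
        best = min(best, elapsed)
    return best, result

# usage: seconds, _ = best_of(3, sum, range(1_000_000))
```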
Click the thumbnail for a full-sized graph of the results.
Analysis
Let’s review the results by returning to the five questions that motivated these benchmarks. First, Java (at least, in the 1.4.2 version) performed very well on most benchmark components when compared to the .NET 2003 languages. If we exclude the trigonometry component, Java performed virtually identically to Visual C++, the fastest of Microsoft’s languages. Unfortunately, the trigonometry performance of Java 1.4.2 can only be described as dismal. It was bafflingly bad–worse even than fully interpreted Python! This was especially puzzling given the much faster trigonometry performance of Java 1.3.1, and suggests that there may be more efficient ways to code the benchmark in Java. Perhaps someone with more experience with 1.4.2 can suggest a higher-speed workaround.
Java performed especially well (when discounting the strange trigonometry performance) compared to Microsoft’s syntactically equivalent Visual J#. This discrepancy may be due to the additional overhead of the CLR engine (as compared to the overhead of the JVM), or may have something to do with Visual J# implementing only version 1.1.4 of the Java spec.
Second, Microsoft’s claim that all four .NET 2003 languages compile into identical MSIL code seemed mostly true for the math routines. The integer math component produced virtually identical scores in all four languages. The long math, double math, and trig scores were identical in Visual C#, Visual Basic, and Visual J#, but the C++ compiler somehow produced impressively faster code for these benchmark components. Perhaps C++ is able to make better use of the Pentium 4’s SSE2 SIMD extensions for arithmetic and trigonometry, but this is pure speculation on my part. The I/O scores fell into two clusters, with Visual Basic and Visual J# apparently using much less efficient I/O routines than Visual C# or Visual C++. This is a clear case where functionally identical source code does not compile into identical MSIL code.
Fourth, fully interpreted Python was, as expected, much slower than any of the fully compiled or semi-compiled languages–sometimes by a factor of over 60. It should be noted that Python’s I/O performance was in the same league as the fastest languages in this group, and was faster than Visual Basic and Visual J#. The Psyco compiler worked wonders with Python, reducing the time required for the math and trig components to between 10% and 70% of that required for Python without Psyco. This was an astonishing increase, especially considering how easy it is to include Psyco in a Python project.
Fifth, Java 1.4.2 was much faster than Java 1.3.1 in the arithmetic components, but as already mentioned, it lagged way behind the older version on the trigonometry component. Again, I can’t help but think that there may be a different, more efficient way to call trigonometric functions in 1.4.2. Another possibility is that 1.4.2 may be trading accuracy for speed relative to 1.3.1, with new routines that are slower but more correct.
What lessons can we take away from all of this? I was surprised to see the four .NET 2003 languages clustered so closely on many of the benchmark components, and I was astonished to see how well Java 1.4.2 did (discounting the trigonometry score). It would be foolish to offer blanket recommendations about which languages to use in which situations, but it seems clear that performance is no longer a compelling reason to choose C over Java (or perhaps even over Visual J#, Visual C#, or Visual Basic)–especially given the extreme advantages in readability, maintainability, and speed of development that those languages have over C. Even if C did still enjoy its traditional performance advantage, there are very few cases (I’m hard pressed to come up with a single example from my work) where performance should be the sole criterion when picking a programming language. I would even argue that for very complex systems that are designed to be in use for many years, maintainability ought to trump all other considerations (but that’s an issue to take up in another article).
Expanding the Benchmark
The most obvious way to make this benchmark more useful is to expand it beyond basic arithmetic, trigonometry, and file I/O. I could also extend the range of languages or variants tested. For example, testing Visual Basic 6 (the last of the pre-.NET versions of VB) would give us an idea how much (if any) of a performance hit the CLR adds to VB. There are other JVMs available to be tested, including the open-source Kaffe and the JVM included with IBM’s SDK (which seems to be stuck at version 1.3 of the Java spec). BEA has an interesting JVM called JRockit which promises performance improvements in certain situations, but unfortunately only works on Windows. GNU’s gcj front-end to gcc allows Java source code to be compiled all the way to executable machine code, but I don’t know how compatible or complete the package is. There are a number of other C compilers available that could be tested (including the highly regarded Intel C compiler), as well as a host of other popular interpreted languages like Perl, PHP, or Ruby. So there’s plenty of room for further investigation.
I am by no means an expert in benchmarking; I launched this project largely as a learning experience and welcome suggestions on how to improve these benchmarks. Just remember the limited ambitions of my tests: I am not trying to test all aspects of a system–just a small subset of the fundamental operations on which all programs are built.
About the author:
Christopher W. Cowell-Shah works in Palo Alto as a consultant for the Accenture Technology Labs (the research & development wing of Accenture). He has an A.B. in computer science from Harvard and a Ph.D. in philosophy from Berkeley. Chris is especially interested in issues in artificial intelligence, human/computer interaction and security. His website is.
What about a Mono or Portable.NET test with the same benchmark? I would be *very* curious to know what level of performance they can achieve.
I’m curious as to how much better gcc and python would perform in a POSIX environment, especially gcc linked to glibc rather than to the windows C libraries.
Also, what happened to perl?
.. Any chance of trying the intel and the openwatcom compilers?
.. It would also be interesting to try the benchmarks on another operating system to see if the same level of differences are observed.
All .NET languages will perform exactly the same because they are all compiled down to the CLR (common language runtime).
So your VB.net app will perform the same as C# and so will your Delphi.net, cobol.net etc etc etc.
You only really need to benchmark C#
Perl is not a compiled language… perhaps that's why it was left out.
Mike
>Perl is not a compiled language… perhaps that's why it was left out.
>Mike
Python is interpreted and it wasn’t left out.
All .NET languages are not the same! While they might produce the same MSIL code in simple cases, in more complex situations they will not, of course leading to different results.
>In more complex situations they will not, of course leading to different results.
Well said Yoni.
>> Article: I first tried to eliminate the CLR from the Visual C++ benchmark by turning off the language’s “managed” features with the #pragma unmanaged directive, but I was surprised to see that this didn't lead to any performance gains.
Just start a new, unmanaged project and add your Standard C++ code to it.
Actually, if you read the posting, he benchmarked Python with Psyco, and also without, getting both. And, if you read the posting, he also mentioned it would be interesting to see what Perl, PHP, and Ruby results would look like.
Comparing server VM with client VM is invalid.
He should have tested both VMs in both cases.
I tested 1.4.2 vs. 1.3.1 (both SUN VMs) and 1.4.2 is 3 times slower than 1.3.1 (they rewrote the VM itself and System.arraycopy() that is being used everywhere got 3x slower).
I have filed a bug report on that. The reply was: “Too late”, which disappointed me, as this is a major regression and should have been caught by their QA people, not me.
Well, let’s hope that 1.5 they improve performance to 1.3.1 level.
The author states that he is surprised that Java performs better than compiled code… This really shouldn’t be a surprise. The Java virtual machine compiles its code just like a C++ compiler. There is just one big difference.
I suggest he should benchmark Common Lisp (perhaps using Corman Common Lisp, Allegro or LispWorks on Windows). Common Lisp is a native-compiled language which provides even more dynamism than the popular interpreted languages like Perl, Python, Ruby and PHP. It’s not hard to learn for someone used to these languages and may provide a nice surprise in the benchmark results.
Perl is compiled in a similar manner to python – it just isn’t written to disk. Perl 6 more so – read about Parrot if you’re interested. Why perl isn’t included is really a question to the author and if I were to guess it would perform similarly to C++ since it generally wraps the standard libraries unless i/o is included in the bench – in which case there’s a penalty for parsing the file…Then it should be comparable to python. How ’bout it Christopher – run a perl bench?
Will
Hello,
Given the large differences between VB.NET and C#, it is very likely you are doing something wrong. You may be mistakenly using a different construct. The ‘native’ VB IO functions may be much slower than the standard CLR classes (System.IO). If you are using the Visual Basic library, you are not fairly testing the language. Again, you must post your source code to allow independent review.
As well, I would love to run the C# tests on Mono.
Another thing I should point out: most applications *DO NOT* involve intensive IO or math alone. This is not a measure of true application performance. You are merely measuring how well the JIT or compiler emits code for a specific case. I am sure any of the JIT developers could optimize for this specific test case. I think perhaps the more interesting view you could take is ‘what language provides high-speed building blocks — such as collections classes, callback functionality, and object creation.’ The answer to this question is *MUCH* less of a micro-benchmark.
Also, I would add, for JIT’d languages, you should call the function you are benchmarking once before you make the call that you time. Depending on how you structure your run, you may end up counting JIT time. Although JIT time can matter in a 60-second benchmark, when running a web server for days, weeks or even months at a time it really does not matter. In fact, many applications use a native code precompiler to reduce startup time (under Mono, Miguel de Icaza often reports performance improvements of over 30% by using AOT compilation of our C# compiler mcs.exe [times are for the compilation of our mscorlib library, consisting of 1000 C# source files]). However, AOT does lose out in a large benchmark like this because it is forced to generate sub-optimal code (like a C++ compiler). So, it is much more fair to allow for a warm-up run to let the runtime JIT the code.
— Ben
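The warm-up advice above generalizes to any JIT'd runtime: call the code once untimed so compilation cost doesn't land inside the measurement. A sketch (under a JVM or PyPy the effect is dramatic; under plain CPython it mostly just warms caches):

```python
import time

def time_with_warmup(fn, *args, warmup=1, runs=5):
    """Time fn, discarding untimed warm-up calls so JIT/cache
    effects don't pollute the measurement."""
    for _ in range(warmup):          # untimed: let any JIT settle
        fn(*args)
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - start)
    return min(times)
```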
It’s funny that there are so many hardware sites that benchmark every aspect of CPU’s, chipsets, and graphics cards, but few people bother to benchmark software, programming languages, and operating systems.
I would also like to see mono and portable .NET benchmarks. Also gcc in a POSIX environment.
By the way, I liked the article – much higher quality than what you normally see here on osnews.
Firstly, Java code should, in the general best cases, perform in the same manner as a well compiled C++ program. If we are doing pure loops and integer/FP tasks, there should be virtually nothing in it. A C++ compiler doing this properly should produce the same output as Java as a base case. A good C++ compiler using architecture optimisations should be able to do even better, though. Java has the overhead of the VM and the JIT process, the additional predictiveness of which should be negated by a repetitive looping test anyway. Similarly, a well compiled benchmark from C and C++ should always be faster than a managed .NET application. The distance between the two will vary, but it should still be faster.
These benchmarks are rather daft, anyway, since they manage to avoid using any sort of objects. Java is meaningless for most real tasks without creating and manipulating objects (otherwise you’re basically writing C anyway), and objects are where Java really does slow down.
Very interesting results. Sadly the sorting criterion (using the total instead of a geometric average) is unusual, and favors the languages that optimize the slow operations. The results of double math and trig show some big variations between languages (3:1 for double, more than 15:1 for trig) but this is not properly reflected in the results (in my humble opinion).
Here are the numbers with the geometric average. Notice how Java 1.3.1 suddenly appears much slower than Visual J# or Java 1.4.2, and Python/Psyco is far ahead of Python (the arithmetic average doesn’t show the improvement on the trig test).
Visual C++: 8.4
Visual C#: 11.1
gcc C: 13.2
Visual Basic: 13.9
Visual J#: 14.2
Java 1.4.2: 14.8
Java 1.3.1: 18.6
Python/Psyco: 47.9
Python: 145.5
Ignoring Python for the moment, it’s interesting to see that Java 1.3.1 is the only one that is far off the lead on most tests. gcc only need to improve on trig and long math, Visual C#/Basic/J# all have issues with long math and double math, with Visual Basic and J# suffering from slow I/O.
Java 1.4.2 has a very obvious and severe issue with trig. If that test was as fast as 1.3.1, Java 1.4.2 would score 12.2, very close to the lead. If it could be made to score 4.2 like Visual J#, that score would fall to 8.7, barely slower than Visual C++.
The differences between the various MSIL/CLR languages are also very interesting. It’s obvious that VC++ manages to issue better 64-bit code than the rest of the pack, and that I/O is the only differentiator between Visual C#, Basic and J#.
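For reference, the geometric average the commenter used weights every component equally, so one slow component (like Java 1.4.2's trig) can't be hidden by fast ones. It can be computed like this (a sketch; the helper name is mine, and the sample values are illustrative, not the article's data):

```python
import math

def geometric_mean(values):
    # nth root of the product, computed in log space for numerical stability
    return math.exp(sum(math.log(v) for v in values) / len(values))

# e.g. geometric_mean([1.0, 100.0]) is 10.0,
# whereas the arithmetic mean would be 50.5
```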
If we’re freely violating EULAs and all the rest of it, can anyone test the C++ code on Linux and the Java code on IBM’s VM? Both should be quite different.
I’ve been strongly considering picking Ruby up as my next language to learn, but it’s hard to find a lot of information (recent info, at least) on it that’s in English.
I was hoping to see it benchmarked as well… That could have been the push I need to get learnin’ it.
Does anyone here know both Java, Python & Ruby? Any thoughts as to speed, or reccomendations one way or the other?
He did, on the second page of the article:
I would have run VC++ 6 instead of VC++ .NET (or whatever it is called). Since the author didn’t know how to create an unmanaged project in VC++, I don’t believe his results when it comes to VC++.
In addition, I use STLPort instead of MS’s STL, which is much faster.
My results for mingw (instead of cygwin) on my athlon xp 2.4
(I just felt like I should test it.)
Start C benchmark
Int arithmetic elapsed time: 6125 ms with intMax of 1000000000
i: 1000000001
intResult: 1
Double arithmetic elapsed time: 5687 ms with doubleMin 10000000000.000000, doubleMax 11000000000.000000
i: 11000000000.000000
doubleResult: 10011632717.388229
Long arithmetic elapsed time: 20016 ms with longMin 10000000000, longMax 11000000000
i: 11000000000
longResult: 776627965
Trig elapsed time: 6750 ms with max of 10000000
i: 10000000.000000
sine: 0.990665
cosine: -0.136322
tangent: -7.267119
logarithm: 7.000000
squareRoot: 3162.277502
I/O elapsed time: 5484 ms with max of 1000000
last line: abcdefghijklmnopqrstuvwxyz1234567890abcdefghijklmnopqrstuvwxyz1234567890abcdefgh
Total elapsed time: 44062 ms
Stop C benchmark
Great article btw.
Doing such simple math and I/O tests is completely useless.
The real differences lie in string/character manipulation, memory allocation, searching, sorting, garbage collection, virtual calls through classes, etc.
If you have some more time to spend on this benchmark, try those. The results will vary much more.
Please don’t post the best of only 3 runs, it’s silly to do so because you are not getting a good sample of data.
You should have done more like 10 runs, especially because the micro-benchmarks you have produced can be run automatically in the background many times without user intervention.
Then you should provide the mean and median of the runs, along with all the data from each run.
On the topic of the Java benchmark:
For the Java tests you should use both the server VM and the client VM and compare results. The two VMs are actually very different. The Java benchmark isn’t really doing much but testing the interpreter, as I doubt that much of that code is actually being compiled to native code. I think it’s fairly well known that Java is slow for short-running programs (but fairly competitive for long-running programs). In the Java benchmark you shouldn’t be using new Date().getTime() to get the time; you should instead use System.currentTimeMillis(), as it is faster and doesn’t involve the creation of more objects.
I think it would be best to wait until after 2.0… because Ruby is kind of slow, not thread-safe, and has many other issues. It forced me to learn a different language, but I will come back to Ruby when 2.0 is released. I love Ruby, it’s easy and clean to
If that is true, it’s rather astonishing! “BTW, if you use our product, you are forbidden to discuss its performance publicly.” It sounds like they have no faith in their product
Small synthetic benchmarks are generally not representative of real programs. Typically a benchmark suite of real applications that compute real things people are interested in are the best indicator, but unfortunately it is hard to find a large enough suite implemented well in a large enough number of languages to matter.
Even so, I will say this.
Java is the real star of this benchmarking effort. The conventional thinking of people who say “Java? Bytecode? VM? It will always be slow!” is clearly in error. A huge (and I do mean huge) amount of engineering effort by thousands of smart people from all kinds of institutions has gone into designing and building high-performance virtual machines, and Java, mainly through Sun’s and IBM’s efforts, has been the principal recipient of those benefits. JIT compilers are extremely advanced, far ahead of static compilers in many areas. It is no wonder that you see the performance gap rapidly closing–though it shouldn’t be called a gap, because the potential to also exceed static compilation is huge.
The speed of the language has less and less to do with the speed of the resulting application these days. What matters most now (and it has always mattered) is smart designs and efficient algorithms. For integer and float math, the design space is small, but for an application the size of a web server, a graphics program, or a web browser, the design space is huge. Even if it did break down to one language being X% slower than another (that kind of thinking is complete rubbish anyway), what does it matter?
Virtual machines get better every generation. And every single program ever written for that VM–anytime, anywhere, no matter who wrote it, how it was compiled, what platform it was on–gets faster right along with it. Static compilation is static–it has long slowed its evolution and stabilized. But dynamic compilation is evolving at an amazing rate.
Don’t be a naysayer, be excited about what the future brings for new languages!
Hello everybody,
In case anyone is interested, there is a very interesting benchmarking site (many languages, many tests) at:
It doesn’t include Microsoft’s new CLR languages/language implementations, IIRC, so the tests in the article are still interesting.
It’s weird to see gcc performing so badly. Maybe the Cygwin overhead is to blame?
I think Python was misrepresented a bit here, since most Python programmers will either write the ‘number crunching’ parts of their programs as a c library or use more low level python modules such as numpy or scientific python.
Serious mathematical operations in pure Python are a rarity.
I would be interested to see how FORTRAN does in a similar benchmark.
Heck, FORTRAN 77, even. Unfortunately, I could only do it for Linux and Tru64. I don’t have a F95 compiler for my WinXP box at home.
If anyone out there is using ifort on WinXP, please try out the program.
Java is the real star of this benchmarking effort. The conventional thinking of people who say “Java? Bytecode? VM? It will always be slow!” is clearly in error.
There seems to be general agreement that Java is fast on the server side, but most of the complaints about Java’s speed relate to its performance in desktop apps, something not tested in this benchmark.
Grrr…*shakes fist at Nate*
Actually, it’d be cool to see how gfortran does. I’m sure gcc would finish a hojillion times faster, but still.
Well, these benchmarks aren’t really very indicative. The I/O benchmark is, well, I/O bound, which is why interpreted Python performed as fast as compiled C++. The numeric benchmarks are just that, numeric benchmarks. Even for scientific code, this benchmark probably isn’t representative, because you often need proper mathematical semantics for your calculations, which C/C++/Java/C# don’t provide.
A more telling test would be to get higher-level language features involved. Test virtual C++ function calls vs Java method calls (which are automatically virtual). Test the speed of memory allocation. Test the speed of iterators in various languages. Do an abstraction benchmark (like Stepanov for C++) to test how well the compiler optimizes-away abstraction.
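A dispatch micro-benchmark of the kind suggested here is easy to sketch. In Python the analogue of a virtual call is a method resolved dynamically at each call site; the class names below are mine, purely for illustration:

```python
class Shape:
    def area(self):
        raise NotImplementedError

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):               # resolved dynamically, like a virtual call
        return self.side * self.side

def bench(n, shape):
    """Force n dynamically dispatched calls, the hot loop a
    dispatch benchmark would time."""
    total = 0
    for _ in range(n):
        total += shape.area()     # dispatch happens on every iteration
    return total
```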
@Brian: I can tell you how a Common Lisp result of the same benchmark would turn out. Given proper type declarations, and a good compiler (SBCL, CMUCL), you will get arbitrarily close to C++ for this task. The compiler should generate more or less the same code. See this thread for some good numbers:…
Note that CMUCL is very competitive with gcc. Intel C++ blew both CMUCL and gcc away, but that has nothing to do with the language. Intel C++ has an auto-vectorizer that will automatically generate SSE code if it finds algorithms (like the dot-prod and scale in this benchmark) that can be vectorized. GCC and CMUCL don’t support this feature.
Interestingly, there is evidence that Lisp performs extremely well for large programs:
See this links:
In the study, the fastest programs were C++, but the average of the Lisp programs was faster than the average of the C++ programs. The Java stats on the study are a bit outdated, because it was done with JDK 1.2.
Ok, I am an idiot for not seeing the sources (OTOH, I would usually expect to find them at the *END* of the article)
For VB’s file routines, it is no wonder they are so slow. You are, as I suspected, using the VB file routines. Just to give you an idea, here is what happens EVERY TIME you call PrintLine:
1) An object[] array is created (PrintLine takes a param array). This requires an allocation, and then requires copying to the array. Given the number of items you write, you will trigger quite a few GCs.
2) The VB runtime must walk the stack to find out what assembly you are calling from. This requires quite a bit of reflection, and involves acquiring a few locks. This is done to prevent conflicts between assemblies.
3) The VB runtime must find which FileStream is referred to by the specified handle.
4) Stream.WriteLine is called.
Well, it’s no wonder it is so slow… Something similar may be happening for J#.NET.
I would suggest you consider rewriting the VB file IO routines, and resubmitting your data.
As well, you should be aware that you are giving C/C++ a HUGE advantage in the I/O tests. In the read portion of the test, you do the following in C#:
while (i++ < ioMax) {
myLine = streamReader.ReadLine();
}
While in C you do:
char readLine[100];
stream = fopen("C:\\TestGcc.txt", "r");
i = 0;
while (i++ < ioMax)
{
fgets(readLine, 100, stream);
}
This is very unfair to the C# language. You are forcing it to allocate a new buffer for every line that is read. C is not forced to allocate a new array, which saves it a lot of time. If you would like to make the test fair, please use the StreamReader.Read(char[], int, int) method to read into a char[] buffer. This will prevent the repeated allocations, which should make the test fairer. A similar technique should be used for the other languages.
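The allocation asymmetry described here — C reusing one stack buffer while ReadLine allocates a fresh object per line — has a Python analogue: readinto() fills a preallocated buffer instead of building a new object per read. A sketch (helper name is mine):

```python
import io

def read_reusing_buffer(stream, bufsize=100):
    """Read a binary stream chunk by chunk into one preallocated
    buffer, the way the C benchmark reuses its 100-char stack array."""
    buf = bytearray(bufsize)
    total = 0
    while True:
        n = stream.readinto(buf)   # fills buf in place; no per-read allocation
        if not n:
            break
        total += n
    return total
```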
Really, you should have posted these items for review before claiming that you had a fair benchmark. The article should have been split into two postings: the first two sections in one, and then, after comments, the third. I would also encourage OSNews not to post benchmarks that have not obtained peer review.
So, Rayiner is right. These benchmarks are mostly testing parts of the operating system, not necessarily the runtimes all that much. That’s why the I/O scores are all so close. The only anomaly here is that Java is probably using strict IEEE arithmetic for the trig stuff, which is why it’s so slow. I think another poster mentioned how to turn that off.
It’s these kinds of benchmarks that I get nervous about when people start saying “Java is just as fast as C.” Well, I’m a Java programmer, and I love the language, and it really has gotten a LOT faster over the last few years, but there are some things in Java that are just inherently difficult to optimize away. I’m talking about things like array bounds checking, every data structure being a reference to an object, GC, and all data types being signed (try working a lot with byte values in Java and see how much ANDing you end up doing to combat sign extension issues). These structural choices in the language design cause extra overhead that C programs just don’t suffer. Now, those are also some of Java’s greatest strengths in terms of making programming safer, but they do have a price. Fancy VMs can reduce that price, sometimes to zero for certain sections of code (the latest HotSpot does an excellent job getting rid of array bounds checking in many instances), but it’s asymptotic.
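The sign-extension chore mentioned above looks like this: Java's byte is signed, so a raw byte value 0xF0 reads back as -16 and must be masked with & 0xFF before use as an unsigned value. The same arithmetic, checked in Python (helper names are mine):

```python
def to_signed_byte(b):
    """Interpret an 8-bit value the way Java's signed byte type does."""
    return b - 256 if b > 127 else b

def mask_unsigned(signed):
    """The `b & 0xFF` idiom that recovers the unsigned 0..255 value."""
    return signed & 0xFF
```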
Now, here’s what I think: for most types of programs, C or Java are just fine, particularly given today’s fast CPUs and spacious memory supplies. Both of those favor Java and tend to make the difference small for anything real-world. Even the purely interpreted languages like Perl or Python are fine for all but the heaviest workloads.
… people never include Delphi in any benchmarks? It’s always C, C++, Java etc … but I never see any of these benchmarks with Delphi included.
I believe that -fomit-frame-pointer is not enabled at any optimization, therefore it’s curious that it’s specifically enabled for Visual C++, but not for gnu. Also, there’s the potential that better performance would result from -O2, and then selectively optimizing from there, rather than going all the way to -O3.
Someone said that the Python guys would have used C libraries or written their own C routines. If that is true, maybe in Java I would have used JNI, or in VB I would have called a C DLL, or I would even have used assembler methods in C (not sure if any of those methods are faster, just making my point).
But he also tried straight uncompiled Python. I thought Perl was interpreted like Python?
Hello,
Some years ago I found a site with a collection of identical benchmarks for every programming language: one for Forth, one for C, one for C++…
The results were organized in a chart showing all the specs: the machine, the system, the compiler…
I wanted to send the URL to this guy but I’m not able to find it again…
(there’were some for perl and prolog for example)
if someone could send me the url, it would be nice
Cheers,
Dj
Who cares, the worst they can do is send a letter to the author to stop by which time the benchmarks are out. After that someone else can take over, etc.
I’m talking about things like array bounds checking, every data structure being a reference to an object, GC
———-
These three are not necessarily that bad. Analysis shows that most bounds checks can be eliminated ahead of time. I’m sure the Java compiler does this optimization. On the other hand, every data structure being a reference is something that is slow in Java, but doesn’t need to be.
It would seem that the VB code does NOT use FileStream for the file I/O like C#, but that he was using the old method of file manipulation, which would be a lot slower. Try using System.IO.File and System.IO.StreamReader; the performance will be closer to that of C#.
Good Job, interesting results.
But, as someone else said, on these small programs,
I’m not sure the Java HotSpot compiler/profiler even kicked in. I think the Java JIT only profiles and compiles after it’s sure the job isn’t going to end soon?
Interesting article Christopher! It was quite informative to see MS VC++ at the top of the speed marks, and also interesting to see the file IO in Python didn’t seem to show much significant difference with many of the other languages.
So, I’ll be the one to mention it: would it be possible to provide a benchmark for Python using the Numeric/NumArray libraries? These were written specifically for numerical operations (the benchmarks you used were bread and butter to Numeric), and they do provide a speed boost. I should imagine that the results still wouldn’t approach the fastest languages here, but it would probably improve the performance, possibly even faster than the Psyco compiled Python.
Or maybe you should include some Fortran/Forth too? (nag! nag!)
You’re looking for Doug’s Great Computer Language Shootout
It’s a bit outdated, and some of the code isn’t well-written (I know a lot of people on c.l.lisp complained about sub-optimal CL code), but it’s overall pretty good.
I would actually be more interested in how many lines/characters of code he has to write to achieve the same result in each language. Then let a third party read the code, and say which one was more “readable”.
Rambus,
Your assertion that ” is incorrect.
When you compare one language against another, it is not fair to take effort to optimize one language (like C) and not take time to optimize another (VB).
However, the real issue is this: the author is attempting to compare how the IO in C# stacks up to the IO in VB (in the IO test). He ended up, however, comparing how fast reflection is! A benchmark should generally be optimized as much as possible. Benchmarks are meant to simulate real-life, high-expense computations. In such a situation, people do not just write ‘common code’. They profile code, make it faster, and profile some more. This benchmark is not representative of such a situation, and thus is not valid for its intended purpose.
Here is the output of wc -l on his code. Although it is pretty useless IMO, it would be more interesting on a more complex program with +100 classes.
Benchmark.py 160
Benchmark.c 180
Benchmark.cpp 181
Benchmark.vb 186
Benchmark.java 211
Benchmark.jsl 212
Benchmark.cs 215
I appreciate all the comments–keep them coming! I’ll try to write up a response to the major (or recurring) points later tonight. — Chris
I would really like to see Watcom C, Intel C and Perl added.
Also the thing about IBM’s JDK and Kaffe would be very interesting. Maybe (GNU) Ada could also be added. I think GCJ has support for basic math, so it could be very interesting also! I would really like to see more!
And again, in performance critical code (which you probably wouldn’t be implementing in Java in the first place) you can always compile your C/C++ code using profile guided optimizations, which allow a process to be run and a report generated at run-time of how the code could be better optimized, at which point the code can be fed through the compiler again. Depending on how long it takes your codebase to compile (obviously it will be quite a pain if that’s in the several-hour range), this really is a trivial process, especially before the final gold master release of a particular piece of software.
The only cases where Java consistently outperforms native code compiled with profile guided optimizations (which allow for runtime optimization of compiled languages) are cases where a large number of virtual methods are used in C++ code. Java can inline virtual methods at run-time, whereas in C++ a vptr table lookup is incurred at the invocation of any virtual method.
Of course, the poor performance of virtual methods is usually pounded into the heads of C++ programmers during introductory classes (at least it was for me). If you are using virtual methods inside loops with large numbers of iterations, you are doing something wrong. In such cases, reimplementing with templates will solve the performance issues.
I would also really like to see all the same programs run on an AMD CPU (not to see if AMD beats Intel), but to see whether each compiler generates code that is generally efficient or just efficient on one CPU.
@Raynier.
You can add C# into that mix too.
@LinuxBuddy. C# also has the auto-boxing as Raynier mentioned.
I would like to see comparison between MS c# and Java(I guess you could add Mono and Pnet into the mix too) to see how well they optimize out bounds checking and also see what kind of a performance hit each takes from it.
I compiled the C and C++ benchmarks on an iBook G4 800MHz. I used the best possible optimization. I edited out the IO test because it was giving me a “Bus error” after creating a 70MB file. Weird.
C (gcc -fast -mcpu=7450 -o Benchmark Benchmark.c)
Integer: 8.8s
Double: 17.2s
Long: 56.2s
Trig: 12.0s
C++ (g++ -fast -mcpu=7450 -o BenchmarkCPP Benchmark.cpp)
Integer: 8.7s
Double: 16.9s
Long: N/A
Trig: 12.0s
(I was getting an “integer constant is too large for ‘long’ type” warning, so I left it out)
I didn’t have the patience to wait for the Python program to complete.
Yes, this benchmark could really give people the wrong idea about Java. Obviously HotSpot is doing its job, and it performs comparable to native code.
Even for scientific code, this benchmark probably isn’t representative, because you often need proper mathematical semantics for your calculations, which C/C++/Java/C# don’t provide.
Fortran is a wonderful language for the scientific community, not only for its language semantics but also its optimization potential. While this potential is not fully realized on most platforms (the Alpha is the only ISA where the Fortran compiler has been optimized to the point that, for scientific computing, Fortran is the clear performance winner over C), Fortran does have a distinct advantage: many mathematical operations which work as library functions in languages like C (i.e. exponents, imaginary numbers) are part of the language syntax in Fortran, and thus complex mathematical expressions involving things like exponents can be highly optimized, as opposed to C, where a function invocation is required to perform exponentiation. Algorithms for doing things like modular exponents can be applied to cases where they are found in the language syntax and be applied to the code at compile time, whereas C requires programmers to implement these sorts of optimizations themselves. With more and more processors getting vector units, a language which allows such units to be used effectively really is in order.
Lack of operator overloading is the biggest drawback to mathematical code in Java. Complex mathematical expressions, which are hard enough to read with conventional syntax, become completely indecipherable when method calls are used in their stead. I had the unfortunate experience of working on a Java application to process the output of our atmospheric model, which is an experience I would never like to repeat. Working with a number of former Fortran programmers, everyone grew quickly disgusted with the difficulty of analyzing matrix math as method calls, and they were quite amazed when I told them that with C++ operator overloading such code could be written with conventional mathematical syntax (although there are some issues differentiating dot products from cross products, it’s still much less ugly than method invocations).
A more telling test would be to get higher-level language features involved. Test virtual C++ function calls vs Java method calls (which are automatically virtual).
Java will almost certainly win on the speed of virtual methods because it can inline them at run-time. Again, the solution in C++ is not to use virtual methods within performance critical portions of the code, especially within large loops, and the simple solution is to replace such uses with templates where applicable.
His C++ is straight C. It even uses printf instead of cout… Where are the classes? Where is the standard C++ library usage?
Hmmm…..
Agreed. One of the first things I ever wrote in Java (about 9 years ago) was an implementation of IDEA, and I quickly learned why lack of unsigned types was a bad thing. I ended up using signed 32-bit integers to emulate unsigned 16-bit integers, and of course this was done in conjunction with a great deal of masking. This revealed to me one of the many hacks which were thrown into the Java syntax, the unsigned shift operator >>>. Sun, wouldn’t it have been simpler to support unsigned types?
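The masking discipline described above is easy to sketch. Here is a small Python illustration (helper names are mine, and Python stands in for Java here, since emulating narrow unsigned types inside a wider signed type comes down to the same `& 0xFFFF`-style masking after every operation):

```python
MASK16 = 0xFFFF  # keep only the low 16 bits, emulating an unsigned short

def add16(a, b):
    # Wrap-around addition, as a real unsigned 16-bit register would do
    return (a + b) & MASK16

def mul16(a, b):
    # IDEA-style arithmetic needs every product reduced back to 16 bits
    return (a * b) & MASK16

def ushr32(x, n):
    # Java's >>> on a 32-bit int: mask first so the vacated high bits
    # are zero-filled instead of sign-filled
    return (x & 0xFFFFFFFF) >> n

print(add16(0xFFFF, 1))  # 0: wraps around
print(ushr32(-1, 28))    # 15: zero-filled, not sign-extended
```

With unsigned types in the language, none of this masking would be needed; without them, every intermediate result has to be cleaned up by hand.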
I don’t know if auto-boxing is really the same thing. If it is, then why are there structs in C#? And why is there a distinction between allocating a struct on the heap vs the stack? It might be, but I’m not familiar enough with C# to make the comparison.
Of course, C#’s compiler might have such analysis. Microsoft has some smart compiler guys. They’re not very innovative, but they’ve got their fingers in some nifty pies. But I hear C# 2.0 will get lambdas with proper closures! At that point, it would be cool to do a benchmark to see how good their closure-elimination optimizations are compared to CL/ML compilers. Is type inference too much to ask for in C# 3.0? 😉
(Numerical accuracy is not just a theoretical concern. Early versions of Lotus 1-2-3 implemented calculation of standard deviation wrong, and consequently got the wrong answer for the set of numbers {999999, 1000000, 1000001}.)
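The Lotus-style failure is easy to reproduce. Below is a Python sketch; the data set is my own, chosen so that even 64-bit floats break the naive one-pass formula, but the principle is the same as with {999999, 1000000, 1000001}:

```python
def naive_variance(xs):
    # One-pass "textbook" formula: E[x^2] - E[x]^2.
    # Suffers catastrophic cancellation when the mean is huge
    # compared to the spread of the data.
    n = len(xs)
    return sum(x * x for x in xs) / n - (sum(xs) / n) ** 2

def two_pass_variance(xs):
    # Subtract the mean first, then square: numerically stable.
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / n

data = [1e8, 1e8 + 1, 1e8 + 2]    # true population variance: 2/3
print(two_pass_variance(data))    # ~0.6666666666666666
print(naive_variance(data))       # wildly wrong: the 2/3 is lost in rounding
```

A benchmark that only times the loop would happily reward the naive version; only a correctness check catches it.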
Forth is a powerful and, especially, very fast compiled programming language, too often forgotten.
With more and more processors getting vector units, a language which allows such units to be used effectively really is in order.
——–
APL anyone?
In addition to the advantages of Fortran you mentioned, I was also thinking about a full numeric tower like some languages have. Standard machine integers and floats are nice for a lot of scientific computing (and accounting, as I’m told), but for some computations, you need things like infinite precision rationals, arbitrary precision integers, etc.
I bet ocaml would kick all their asses.
Comparing only arithmetic/math operations is far from representative of the performance of any language. What about the other operations found in languages, such as tests or assignments, real memory allocations, object manipulation, GUI, file system and network access, etc.?
Plus we all know that GCC by default generates terrible code on Intel. It generates very clean code (making good use of the x86 instruction set) only in optimized mode, which was not used for the benchmark.
This benchmark could lead one to think that Java is just 2 times slower than C/C++, something that anyone who has used a large application written in Java knows can’t be right.
This benchmark has a very limited scope and its results are not representative of the real world.
The following code:
FileOpen(1, fileName, Microsoft.VisualBasic.OpenMode.Output)
Do While (i < ioMax)
PrintLine(1, myString)
i += 1
Loop
FileClose(1)
could be much improved by using the native .NET methods. In fact, the code should be identical to that of C#.
In addition, the C# IO code is using a try..catch construct that could slow the code down. It would be good to retest the code with these suggestions.
You can grab the binaries I made here:
These are optimized for Athlon XP/MP and will require SSE
b-gcc is compiled with gcc 3.3 with -O3 -march=athlon -msse
b-icc is compiled with icc 8.0 with -tpp6 -xiMK -O3
b-icc-opt has been optimized with Profile Guided Optimization. First, Benchmark.c was compiled with -prof_gen to create an “instrument” executable. Next, the instrument executable was executed, and a run-time profile was generated (in the form of a .dyn file). Finally, b-icc-opt itself was compiled with -prof_use -tpp6 -xiMK -O3.
Respective scores when executed on a dual Athlon MP 2.0GHz:
gcc 3.3:
Int arithmetic elapsed time: 6550 ms
Double arithmetic elapsed time: 6250 ms
Long arithmetic elapsed time: 16760 ms
Trig elapsed time: 3640 ms
I/O elapsed time: 1090 ms
Total elapsed time: 34290 ms
icc 8.0:
Int arithmetic elapsed time: 6740 ms
Double arithmetic elapsed time: 5560 ms
Long arithmetic elapsed time: 27140 ms
Trig elapsed time: 2510 ms
I/O elapsed time: 1230 ms
Total elapsed time: 43180 ms
icc 8.0 (with profile guided optimization):
Int arithmetic elapsed time: 6340 ms
Double arithmetic elapsed time: 5540 ms
Long arithmetic elapsed time: 27460 ms
Trig elapsed time: 2430 ms
I/O elapsed time: 1190 ms
Total elapsed time: 42960 ms
Ouch! Clearly icc has trouble with 64-bit math. But otherwise, icc clearly outperforms gcc 3.3 in all the other respects being tested, particularly when profile guided optimization is used.
If I recall correctly, the Camel book says that Perl is also byte-compiled internally before execution, like Python (and even Tcl nowadays). Benchmarks usually show Perl being a bit faster than Python, though I don’t know if Perl has an equivalent of Psyco (the Python specializing compiler used in the benchmark).
The C benchmark compiled with -O2 on an Athlon XP 1.4GHz, Fedora Core 1.
gcc version 3.3.2
gcc -o2 Benchmark.c
Int arithmetic elapsed time: 8330 ms
Double arithmetic elapsed time: 7850 ms
Long arithmetic elapsed time: 20810 ms
I/O elapsed time: 21750 ms
(I could not get the trig benchmark working, so left it out)
It’s interesting how much faster the int, double and long benchmarks are than his results… This chip can really crunch those numbers compared to the Pentium 4M 2GHz. I/O is slower.
Compiled with -O3 it gets slightly slower!
Int arithmetic elapsed time: 8320 ms
Double arithmetic elapsed time: 7860 ms
Long arithmetic elapsed time: 20840 ms
I/O elapsed time: 21850 ms
Could these better results be due to running gcc on native Linux, or is it the different processor?
Structs and primitives are value types and allocated on the stack, but they are also objects. The compiler automatically creates an object if needed instead of you having to do it. The main benefit, obviously, is that you only pay for what you use. I thought you were referring to the way in Java that you have to use wrapper classes for the primitives. Java, obviously, doesn’t have structs. Classes are always on the heap in C#.
Tested using J2SE v 1.4.2_03 on a dual Athlon MP 2.0GHz running Linux 2.6.0.
The code was compiled with javac -g:none and executed with java -server:
Int arithmetic elapsed time: 7271 ms
Double arithmetic elapsed time: 11501 ms
Long arithmetic elapsed time: 23017 ms
Trig elapsed time: 77649 ms
IO elapsed time: 3418 ms
Total Java benchmark time: 122856 ms
Well, Java trumps icc on 64-bit math, but thoroughly loses everywhere else, especially the floating point and trig benchmarks.
I assume you’re referring to floating point precision. Java by default follows the IEEE 754 international specification; however, Java also allows for EXTENDED PRECISION on platforms that support it.
As for the “faster/less accurate version of Math”, I assume you are referring to the standard library that is used.
gcc -lm -O2 Benchmark.c
Int arithmetic elapsed time: 6240 ms
Double arithmetic elapsed time: 5920 ms
Long arithmetic elapsed time: 16370 ms
Trig elapsed time: 3370 ms
I/O elapsed time: 890 ms
Total elapsed time: 32790 ms
gcc -lm -O0 Benchmark.c
Int arithmetic elapsed time: 8780 ms
Double arithmetic elapsed time: 9470 ms
Long arithmetic elapsed time: 18920 ms
Trig elapsed time: 3650 ms
Total elapsed time: 41930 ms
Looks like Cygwin is a lot slower.
I only wish. I’ve been on three large Visual C++ 6 projects
(100 to 300 classes). The VC++ compiler generated broken release code. We shipped the “Debug build”.
Could not convince the product manager to buy better tools or allocate any time to find the problem. Many memory leaks were from Microsoft’s MFC classes.
My point is, Java code profiling on the fly is the better solution.
Mostly because 99% of the programmers out there will never get the chance to profile their code, if they even know how to do it. Managers won’t spend the time or money.
gcj-3.3 -O2 –main=Benchmark Benchmark.java
Int arithmetic elapsed time: 6220 ms
Double arithmetic elapsed time: 5914 ms
Long arithmetic elapsed time: 16485 ms
Trig elapsed time: 26012 ms
IO elapsed time: 10229 ms
Int, Double and Long at the same speed as GCC; IO and trig a lot slower than GCC.
I’ve compiled my results into a more easy to interpret format, and drawn some different conclusions than I posted here:
In reply to MikeDreamingofabetterDay…
My point is, Java code profiling on the fly, is the better solution.
The primary drawback of Java’s run-time profiling is that all optimizations are discarded when the application exits. Profiling really helps optimize code which spends most of its time executing in a small number of places within the executable. Consequently, large applications which do an elaborate amount of startup processing take an additional performance hit from run-time optimization, in that the startup code will only be touched once, but the run-time’s optimization code still attempts to determine how best to optimize. Eclipse and NetBeans certainly come to mind… their start-up times are an order of magnitude worse than those of any other IDE I’ve used.
Profile guided optimization, on the other hand, is a one-time process, and the optimizations are permanent to the binary, thus no performance loss is incurred.
Mostly because 99% of the programmer’s out there will never get the chance to profile their code, if they even know how to do it.
Profiling should be (and often is) an additional automated function of the unit testing process. Intel’s icc can take a number of profiles from a number of different test runs and compile the collective results (a separate .dyn file is generated for each run of the instrument executable) to determine the best way to optimize the given module when a release build is performed.
I’ve never used Microsoft Visual C++ on a large project, but your woes there are not really pertinent to the use of profile guided optimization.
Object-oriented performance is really important with OO languages: creating and destroying objects, casting, and stuff like that.
Your tests are indeed interesting, but I think the main point is: Java, generally speaking, doesn’t lag behind significantly! We’re not talking orders of magnitude here; it’s the same ballpark!
@Rayiner
@RoyBatty
On boxing/unboxing in Java, yes, you are right that this can certainly be done. I believe the JDK 1.5 HotSpot is going to be doing this at some level. As I said, it isn’t the case that Java can’t go faster with better optimizations, just that such optimizations have to be done, thus adding to the complexity of the runtime.
These are language-level, structural issues that C just doesn’t have to deal with. C’s simple, “assembler with loops” sort of orientation is both a blessing and a curse. It’s a blessing when it comes to optimization as you don’t have these sorts of constraints to deal with, and frankly, the language leaves lots of implementation-dependent behavior to exploit. Java is more constrained, which eliminates broad classes of bugs that are very difficult to debug, but in return, the language exacts an overhead which the JVM compilers all seek to reduce to near-zero. Put another way, it’s a lot easier to write a passable C compiler than a passable Java VM (though very difficult to write sophisticated versions of either).
Again, I love Java. It’s my main programming language. I love its relatively small and simple language design and its resemblance to C (probably my next favorite language). With CPU speeds increasing and Java JVMs just getting better and better, I find myself programming almost exclusively in Java now.
mcs Benchmark.cs
Int arithmetic elapsed time: 9955 ms
Double arithmetic elapsed time: 21385 ms
Long arithmetic elapsed time: 55066 ms
Trig elapsed time: 3707 ms
IO elapsed time: 20949 ms
Total C# benchmark time: 115636 ms
“Agreed. One of the first things I ever wrote in Java (about 9 years ago) was an implementation of IDEA, and I quickly learned why lack of unsigned types was a bad thing. I ended up using signed 32-bit integers to emulate unsigned 16-bit integers….”
Characters are unsigned 16-bit quantities. They are the only unsigned type in Java. Why didn’t you use them?
You’re right from a syntax perspective, but there is no reason that speed has to suffer. A given JVM may optimize various library calls to inlined, optimal instruction sequences. This is done in some JVMs for basic java.lang classes (like String handling, etc.). Your point about not having inline operators that make your code readable is very true, however.
Visual C++ is fast on Windows – is that surprising?
I just hope that people will not conclude that gcc is slow in general – gcc is a lot faster on Linux: for example, the benchmark for C (AMD 1800+, Linux, gcc 3.3.1mdk) took a total of 54ms (41ms with -O2 -march=athlon-xp).
Right. In most every Lisp implementation, every value travels along with its type. Typically, a few of the low-order bits are used to encode the type. There is no real distinction between “primitive type” versus other types when it comes to function calls, etc.
HotSpot is heavily optimized for Solaris-SPARC, being Sun’s flagship platform and all. GCC is targeted towards x86 mostly (although I will stop short of saying it is heavily optimized, because honestly, it isn’t).
Compare the same benchmarks on a Solaris-SPARC system, especially a large-scale system, and you might find some very interesting results.
Right. In most every Lisp implementation, every value travels along with its type. Typically, a few of the low-order bits are used to encode the type.
————–
This isn’t necessarily correct. In the general case, every object has a header describing its type, just like Java/C# classes. However, there are a number of optimizations to this general case.
– Some implementations store certain special types (integers, cons cells, etc) right in the pointer word, with some bits reserved as a type tag.
– Some implementations don’t bother with tag bits, and instead use an analysis that determines when an object doesn’t need to be a full object. For example, when you use an integer as a loop counter, you can just use a regular (untagged) machine word.
– Some implementations support type specialization, and generate type-specialized versions of functions, like C++ templates do.
Thus, even though the programmer always deals with objects, the generated machine code will often deal directly with machine types. So it’s not strictly true that every value travels with its type. For the numeric benchmarks in these articles, for example, the machine code would deal with regular floats.
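The first optimization above, storing the value and a type tag right in the pointer word, can be sketched in a few lines of Python. The 2-bit tag layout here is hypothetical, but it is representative of fixnum tagging in real Lisp runtimes:

```python
FIXNUM_TAG = 0b00   # low two bits 00: integer stored directly in the word
POINTER_TAG = 0b01  # any other pattern would mark a heap pointer

def box_fixnum(n):
    # Shift the payload left and stamp the tag into the low bits
    return (n << 2) | FIXNUM_TAG

def is_fixnum(word):
    # A cheap bit test replaces a heap header lookup
    return (word & 0b11) == FIXNUM_TAG

def unbox_fixnum(word):
    # Arithmetic shift recovers the payload, sign included
    return word >> 2

w = box_fixnum(21)
print(is_fixnum(w), unbox_fixnum(w))  # True 21
```

The cost is two lost bits of integer range; the gain is that small integers never touch the heap at all.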
I have put online the famous UnixBench from Byte magazine:
you have to tweak the makefile in order to get the best optimization. For myself, I put this:
OPTON = -O3 -fomit-frame-pointer -fforce-addr -fforce-mem -ffast-math
-march=i686 -mcpu=i686 -pipe -malign-loops=2 -malign-jumps=2 -malign-functions=2
it can be found at
I know this is a bit off-topic but I would like to see results from various systems; I will put mine in this forum for a Celeron 450 running Mandrake 8.0 and for a PowerBook (old one) running LinuxPPC 2000…
if someone could make this bench run under Mac OS X, it would be great
Cheers
Djamé
I’m curious to know whether anyone has checked out the Java 1.5 Alpha?
Just one point: Java’s long startup time is caused by Java doing what most languages don’t: class verification. In essence, a scan of the class files to be sure they haven’t been hacked.
Again, if you “get” Java you put up with the “time” issue for the benefit you get from the language: the Java security model and the productivity of its huge class library.
The trig test is pointless, Windows libraries aren’t compiled with gcc. So is the I/O test.
I am astonished that C# performs so well, though.
Your code does not test for successful completion and accurate results!
All that is needed to win your benchmark is a library like this:
double tan (double x) {return 1;}
double sin (double x) {return 1;}
double cos (double x) {return 1;}
and so forth. What is the value of fast but wrong answers?
Testing gcc under Cygwin and against Windows libraries isn’t really fair, is it? You test Visual C++, which is quite good natively; why not also test gcc under a POSIX environment, for example Linux? Perhaps testing Visual C++ under Linux with Wine should be done too?
Of course, if you write crappy code in VB.NET and good code in C# that does the same thing, yes, you will get different results. The point is that if you write similar code that takes advantage of a particular .NET language, you are going to get almost identical results, which is what this benchmark reported.
Understand….
He left out three of the best languages – DELPHI, EUPHORIA and Assembly. Believe me they are really fast as hell – Especially DELPHI.
Can anyone give some benchmarks with these three languages?
How not to write a benchmark…. Your code does not test for successful completion and accurate results!
(…much like the real Paris Hilton, a basic conceptual understanding is present but a knowledge of details is lacking…)
As long as you’re using standard runtimes or linking against standard libms, there really isn’t going to be a problem.
Attempting to check the results may be especially problematic in certain areas, due to floating point round off error, unless you’re doing all your testing on platforms with IEEE floats. | https://www.osnews.com/story/5602/nine-language-performance-round-up-benchmarking-math-file-io/ | CC-MAIN-2022-27 | refinedweb | 10,757 | 63.39 |
Copyright © 2007.
This document describes the structure of an EXI document and introduces the notions of EXI header, EXI body and EXI grammar, which are fundamental to the understanding of the EXI format. Additional details about datatype representation, compression, and their interaction with other format features are presented. Finally, Section 3, Efficient XML Interchange by Example, provides a detailed, bit-level description of a schema-less EXI stream.
2.2.1 Datatypes
2.2.2 String Table
2.3 Compression
3. Efficient XML Interchange by Example
3.1 Notation
3.2 Options
3.3 Encoding without a schema
3.4 Schema-less Decoding
The intended audience of this document includes users and developers with basic understanding of XML and XML Schema. This document provides an informal description of the EXI format; the reader is referred to the Efficient XML Interchange (EXI) Format 1.0 for further details. Hereinafter, the presentation assumes that the reader is familiar with the basic concepts of XML and the way XML Schema can be used to describe and enforce constraints on XML document families.
The document is comprised of two major parts. The first part describes the structure, flexibility, and performance characteristics of the EXI format. EXI streams are the basic structure of EXI documents. As shown below, an EXI stream consists of an EXI header followed by an EXI body.
The EXI header conveys format version information and may also include the set of options that were used during encoding; if these options are omitted, then default settings can be represented in a single byte. This keeps the overhead and complexity to a minimum and does not sacrifice compactness, especially for small documents where a header can introduce a large constant factor.
The structure of an EXI header is depicted in the following figure. Note that even though EXI is a bit aligned format, the header is padded to the next byte to support fast header interpretation.
The EXI header, and hence every EXI document, starts with a pair of Distinguishing Bits, the two-bit sequence (1 0), that can be used to tell an EXI document apart from a textual XML document. The format version that follows is encoded as a sequence of 4-bit values: each non-final value contributes 15, and the final value contributes itself plus one. For example, the version bit sequence (1111 0001) is interpreted as 15 + 2, or version 17.
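That version arithmetic can be sketched in Python. This is my reading of the header rules, with function names of my own choosing, not code from the specification:

```python
def decode_version(groups):
    # Each non-final 4-bit group must hold 15 ("more to come") and adds 15;
    # the final group (0-14) adds its value plus one.
    version = 0
    for g in groups[:-1]:
        assert g == 15, "non-final groups must be 15"
        version += 15
    return version + groups[-1] + 1

def encode_version(version):
    # Inverse: peel off 15s, then emit the remainder as the final group
    groups, v = [], version - 1
    while v >= 15:
        groups.append(15)
        v -= 15
    groups.append(v)
    return groups

print(decode_version([15, 1]))  # 17, matching the (1111 0001) example
print(encode_version(1))        # [0]: the common case fits in four bits
```

The scheme keeps the usual case (version 1) down to a single 4-bit group while still allowing arbitrarily large version numbers.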
The EXI Options specify how the body of an EXI stream is encoded and, as stated earlier, their presence is controlled by the presence bit earlier in the header. The overhead introduced by the EXI options is comparatively small given that they are formally described using an XML schema and can therefore be encoded using EXI as well. The following table describes the EXI options that can be specified in the EXI header. Preserve is really a family of options that control which XML items are preserved in the encoded content.
For named XML items, such as elements and attributes, there are two types of events: SE(qname) and SE(*), as well as AT(qname) and AT(*). These events differ in their associated content: when SE(qname) or AT(qname) is used, the actual qname of the XML item is not encoded as part of the event. The decision to use one type of event over the other will be explained later, after introducing the notion of EXI grammars.
The fidelity options introduced in Section 2.1.1 EXI Header may be used to prune EXI events like NS, CM, PI, DT (DocType) or ER (Entity Reference). Grammar pruning simplifies the encoding and decoding process and also improves compactness by filtering out unused event types.
Consider a simple XML document from a notebook application:
<?xml version="1.0" encoding="UTF-8"?> <notebook date="2007-09-12"> <note date="2007-07-23" category="EXI"> … The corresponding EXI body is a sequence of events that starts with an SD (Start Document) event and ends with an ED (End Document) event. The order in which attributes are encoded may be different in schema-less and schema-informed modes, as is the exact content associated with each event.
The actual number of bits used to represent each type of event, excluding its content, differs depending on context. The more event types that can occur at a given point in the stream, the more bits are needed to distinguish among them.
An event code is represented by a sequence of one to three parts, where each part is a non-negative integer. Event codes in an EXI grammar are assigned to productions in such a way that shorter event codes are used to represent productions that are more likely to occur. Conversely, longer event codes are used to represent productions that are less likely to occur. In the first table, where each production is assigned a code of the same length regardless of popularity, a 4-bit code is needed to represent each entry. In the second table, on the other hand, code lengths vary from 2 bits to 6 bits after productions are grouped by popularity. Looking at XML documents in the wild, it is easy to verify that attributes occur more frequently than processing instructions and should therefore receive shorter event codes.
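The effect of grouping on code length can be illustrated numerically. The production counts below are made up for illustration, not taken from an actual EXI grammar:

```python
import math

def flat_bits(n_productions):
    # Flat assignment: every production gets a code of the same length
    return math.ceil(math.log2(n_productions))

def grouped_bits(n_popular, n_rare):
    # Two-level assignment: the first part distinguishes the popular
    # productions plus one escape code; the escape is followed by a
    # second part selecting one of the rare productions.
    first = math.ceil(math.log2(n_popular + 1))
    second = math.ceil(math.log2(n_rare))
    return first, first + second

print(flat_bits(9))        # 4: nine equally ranked productions need 4 bits each
print(grouped_bits(3, 6))  # (2, 5): popular events cost 2 bits, rare ones 5
```

When the popular events really do dominate the stream, the average cost per event drops well below the flat 4 bits.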
Further improvements in how grammars are designed are possible if schema information is also known at encoding time. In this case, we can not only take advantage of the structure declared in the schema, but also assign the shortest event codes to the events it declares, such as AT("category") and AT("date"). In strict mode, the document must be valid with respect to the schema; any deviation from the schema will result in an encoding error. In non-strict mode, deviations from the schema can still be represented, at the cost of slightly longer event codes.
Schema-Informed Grammar for SE(note)
Note that AT("category") is accepted before AT("date") even though their order is reversed in the schema. This is because attributes in schema-informed grammars must be lexicographically sorted.
EXI uses built-in types to represent so-called content value items in an efficient manner. In other words, all attribute and character values are encoded according to their type information. Type information can be retrieved from available schema information. The following table lists the mapping between XML Schema types and built-in types in EXI.
Note:
The datatype QName is used for structure coding only, such as qualified names for XML elements and attributes.
The interested reader is referred to the EXI specification, which describes in detail the encoding rules for representing built-in EXI datatypes. In the absence of external type information (no available schema information), or when the preserve.lexicalValues option is set to true, all attribute and character values are typed as String.
String tables are used in memory-sensitive areas, allowing a compact representation of repeating string values. Re-occurring string values are represented using an associated compact identifier rather than encoding the string literally again. When a string value is found in the string table (a string table hit), the value is encoded using a compact identifier. Only if a string value is not found in the associated table (a string table miss) is the string encoded as a String literal and a new compact identifier introduced.
EXI uses four different string table partitions to reflect the different uses of strings in the XML Infoset:
URI
Prefix
LocalName
Value
An EXI string table partition is optimized for more frequent use of either compact identifiers or string literals, depending on the purpose of the partition. The URI and Prefix partitions are optimized for frequent use of compact identifiers, while the LocalName and Value partitions are optimized for frequent use of string literals.
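The hit/miss behavior common to all partitions can be sketched as follows. This is a simplified model of my own; real EXI partitions additionally scope local values per element and size the compact identifiers to the current table:

```python
class Partition:
    """Toy model of one EXI string table partition."""

    def __init__(self, prepopulated=()):
        self.entries = list(prepopulated)

    def encode(self, value):
        if value in self.entries:
            # String table hit: emit only the compact identifier
            return ('id', self.entries.index(value))
        # String table miss: emit the literal and assign the next
        # compact identifier for future occurrences
        self.entries.append(value)
        return ('literal', value)

values = Partition()
events = [values.encode(v) for v in ("EXI", "shopping list", "EXI")]
print(events)  # [('literal', 'EXI'), ('literal', 'shopping list'), ('id', 0)]
```

The second occurrence of "EXI" costs only an identifier, which is the whole point of the table.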
The table below shows EXI content items used in section 2.1.2 EXI Body to describe the content of EXI events and their mapping to Built-in Datatypes. In addition relations to the string table partitions are annotated (e.g. content item prefix is assigned to the Prefix partition).
In the subsequent paragraphs more details about the different partitions are given by making use of the previously introduced Notebook example. The XML example is inserted inline once again to facilitate a better understanding.
<?xml version="1.0" encoding="UTF-8"?> <notebook date="2007-09-12"> <note date="2007-07-23" category="EXI"> XML Schemas are used (schema-informed mode) there is an additional entry that is appended to the URI partition.
Figure 2-4. URI Partition
The LocalName portion of qname content items and
prefix content items are assigned to the LocalName
or respectively Prefix partition. Both partitions are initially
pre-populated with likely entries (see figure below). The partitions are
further subdivided in sections according to the associated namespace URI. In
our notebook example no prefixes are used and the default namespace URI
(
"" [empty-string]) is in charge.
Figure 2-5. Prefix and LocalName Partition
The figure above shows in highlighted form the URI and LocalName items used
throughout the entire example document. For instance the notebook sample
assigns six local-name entries, such as
notebook and
date, to the empty URI namespace. Whenever the LocalName and/or
URI portion of a qname occur again, the compact identifier is
used instead.
The last and probably most interesting partition is the Value Partition which is initially empty and grows while processing an XML instance. Attribute and Character content-values are assigned to this partition.
Figure 2-6. Value Partition
The figure above illustrates that value content items can be referenced in two different ways, namely as "global" and "local" value. When a string value is neither found in the global nor in the local value section its string literal is encoded as String and the string value is added to both, the associated local and the global string index.
When a string value is found in the local value section EXI uses, and results in a global value hit encoded in 3
bits. Due to the different table sizes a global compact identifier is less
compact than a local compact identifier hit. Nevertheless global value hits
avoid encoding string literals again.
The number of bits needed to encode a compact identifier depends on the actual number of entries of the associated table at that time. Since all tables are growing while parsing an XML instance, the number of bits EXI specification document.
EXI can use additional computational resources to achieve higher compaction. Instead of compressing the entire stream EXI combines the knowledge of XML and the application of a standard compression algorithm. Homogeneous data is combined and fed separately to the compression engine.
The mechanism used to combine homogeneous data is simple and flexible enough so that it can be used in schema-informed and schema–less mode. Element and attribute values are grouped according to their qualified names while structure information like Event Codes is combined. To keep compression overhead at a minimum, smaller QName channels are combined while larger channels are compressed separately.
Figure 2-7. EXI Body Stream
The figure above uses grey buckets for structure information and colored buckets for content information. The color is determined by the associated QName (e.g. date, category, subject, body).
XML instances can be seen as a combination of structure and content information. The content information can be further divided in different sections according to the context (surrounding structure as indicated by a QName). EXI treats XML instances this way and uses these implied partitions, so called channels, to provide blocked input to a standard compression algorithm. This blocking of similar data increases compression efficiency.
Note:
An alignment reader is referred to the EXI specification for further details.
The notebook example falls in the first category and is encoded as a single compressed deflate stream containing first the structure channel, followed by the QName channels in the order they appear in the document (date, category, subject, body).
This section walks through the coding of the Notebook Example explaining the concepts previously introduced in a step-by-step approach.
The table below shows the notations that are used in the description of EXI encoding in subsequent sections.
We do not make use of specific encoding options like using compression or pluggable codecs to encode the body.
The table below shows the fidelity options used in the presented example throughout the document.
The fidelity options setting shown above prunes those productions EXI stream body when the sample XML document is transcoded into an EXI document in the absence of any schemas.
Steps to decode an EXI document
Decode Event Code (according to grammar-rule context)
Decode event content (if present, see EXI Event Types)
Move forward in grammar according to current grammar rules
Return to step 1 if last event was not EndDocument (ED)
[Done]
The resulting XML instance is shown below.
The WG has crafted a tutorial page EXI 1.0 Encoding Examples that explains the workings of EXI format using simple example documents. At the time of this writing, the page only shows a schema-less EXI encoding example. A schema-informed EXI encoding example is expected to be added to the page soon. | http://www.w3.org/TR/2007/WD-exi-primer-20071219/ | CC-MAIN-2014-23 | refinedweb | 2,047 | 53.31 |
I am solving the Baby Blocks problem. I have a peice of java code that I want to translate to Haskell:
Java:
for (int i = 1; i <optHeight.length ; i++) {
int maxHeightIndex = 0;
for (int j = i-1; j >=0 ; j--) {
// Need help from here
if(boxes[j].width>boxes[i-1].width && boxes[j].depth>boxes[i-1].depth) {
if(optHeight[maxHeightIndex]<optHeight[j+1]) { <-- How do I write this condition
maxHeightIndex = j+1;
}
}
}
optHeight[i]=optHeight[maxHeightIndex] + boxes[i-1].height;
}
optHeight
boxes
height, width, depth
b list = do
forM_ [1..length list] $ \i -> do
let maxHeight = 0
forM_ [0..(i-1)] $ \j -> do
if list!!j!!1 > list!!i-1!!1 && list!!j!!2 > list !!j!!2 then
maxHeight = j + 1
The way to solve this problem (I think), is to consider every rotation of every box (so you have
3n total rotations). Then, you order these based on increasing size of their base. The problem then is just to pick the longest subsequence of boxes that "fit" with each other (you don't need to worry about picking the same box twice because a box could never fit on itself). This sounds a lot like the canonical longest increasing subsequence problem, which suggests we will want a dynamic programming solution. We will have an array of length
3n+1 where the ith element represents the maximum height of the stack you can make with the ith box at the top.
maxHeight(i) = { height(i) + max[ maxHeight(j) ] such that width(j) > width(i), depth(j) > depth(i), 0 <= j < i }
Now, lets get started on the Haskell solution. I will assume your input is a list of the dimensions. Notice how closely the code mirrors the solution I described - the trick is to write things declaratively.
import Data.List (sortOn) import Data.Vector (fromList, (!), generate) import Data.Ord (Down(..)) babyBlocks :: [(Int,Int,Int)] -> Int babyBlocks blocks = maxHeights ! (3*n - 1) where -- get the number of blocks n = length blocks -- generate the `3n` blocks formed by rotating the existing blocks, -- sort them by their base size, and store them in a vector for -- fast retrieval sortedBlocks = fromList . sortOn (\(x,y,z) -> Down (x*y)) . concatMap (\(x,y,z) -> [(x,y,z),(y,z,x),(z,x,y)]) $ blocks -- we'll make ourselves a couple helper functions, just so -- our translation of the recurrence relation looks better height n = let (_,_,z) = sortedBlocks ! n in z width n = let (_,y,_) = sortedBlocks ! n in y depth n = let (x,_,_) = sortedBlocks ! n in x maxHeight n = maxHeights ! n -- state the dynamic programming maxHeights = generate (3*n) (\i -> height i + maximum ([ maxHeight j | j<-[0..(i-1)] , width j > width i , depth j > depth i ] ++ [0]))
The part you seemed to be having trouble with is the dynamic programming in the last part. Because Haskell is lazy, it is actually completely OK for me to use
maxHeight while defining
maxHeights, even if I don't know what order my vector is going to be initialized in! | https://codedump.io/share/SLhCYNXrSeGQ/1/translating-a-piece-of-java-code-to-haskell | CC-MAIN-2017-04 | refinedweb | 510 | 72.16 |
Wellone high quality factory price 100% cotton men custom logo unbranded basic overseas t shirts
US $1-5 / Piece
150 Pieces (Min. Order)
High Quality Apparel Overseas Tee Shirts 100% Cotton Short Sleeve T Shirt Printing Logo Clothes With Custom
US $4.3-8.2 / Piece
100 Pieces (Min. Order)
Personalized Blank Clothing Overseas Man Polo Shirts Casual Polo T-shirt
US $2.2-5.6 / Piece
5 Pieces (Min. Order)
wholesale overseas tee shirt personalised printing custom t shirts
US $1.5-3 / Piece
50 Pieces (Min. Order)
custom children t shirt print in china overseas diy t shirts
US $2.3-3.2 / Piece
10 Pieces (Min. Order)
2018 print your own brand logo long tee mens custom labels overseas t shirts
US $2-8 / Piece
200 Pieces (Min. Order)
OEM bulk mens combed cotton t shirt black t shirt custom wholesale overseas t shirts
US $0.99-10 / Piece
100 Pieces (Min. Order)
clothing manufacturers overseas contrast sleeve digital camo t shirts
US $6.8-8.9 / Piece
100 Pieces (Min. Order)
fashion tshirt clothing manufacturers overseas 100% cotton black longline mesh t shirt
US $4.5-12.3 / Piece
100 Pieces (Min. Order)
high quality 60% cotton 40% polyester overseas t shirts wholesale
US $2-10 / Piece
50 Pieces (Min. Order)
Wolesale Blank T shirts For Custom Print Design T shirt Overseas T shirts
US $1.9-2.5 / Piece
10 Pieces (Min. Order)
Wholesale anti-wrinkle non-fading short sleeve overseas t shirts
US $5.7-6.1 / Piece
20 Pieces (Min. Order)
t shirts made in china t shirt men overseas t shirts
US $1-5.66 / Piece
10 Pieces (Min. Order)
Personalized Bulk Clothing Manufacturers Overseas Mens Polo Shirts Men Casual Polo T-shirt
US $3.78-8.65 / Piece
100 Pieces (Min. Order)
Overseas t shirts army blank military green t shirts
US $1.5-3.8 / Piece
50 Pieces (Min. Order)
Two Tone Blue T-shirt Cheap 100% Combed Ringspun Cotton Overseas TShirts Plain
US $2.3-4 / Piece
1000 Pieces (Min. Order)
Cheap Men 100% Cotton White Plain Overseas Bulk T shirts
US $0.85-4.85 / Piece
100 Pieces (Min. Order)
2017 hot sale produce high quality factory price 100% cotton men custom logo unbranded basic overseas t shirts
US $4.0-7.0 / Pieces
10 Pieces (Min. Order)
2017 Fashion OEM custom wholesale cheap overseas t shirts
US $1.35-3 / Piece
50 Pieces (Min. Order)
Clothes factory custom overseas gym screen printing seamless t shirt oem
US $4-8 / Piece
1000 Pieces (Min. Order)
High Quality Top Sales Overseas T Shirts 95 Cotton 5 Spandex Hip Hop T Shirt
US $4.1-4.7 / Pieces
3 Pieces (Min. Order)
Oversized Blank T Shirts Overseas Wholesale V Neck T Shirts Men
US $2.99-6.99 / Piece
10 Pieces (Min. Order)
Custom T Shirt Printer White T Shirt Overseas T Shirt For Man
US $0.99-9.99 / Piece
100 Pieces (Min. Order)
Overseas printed t-shirt printing 100% cotton
US $2.99-4.99 / Piece
500 Pieces (Min. Order)
custom T shirt wholesale logo printed overseas t shirts
US $2-5 / Piece
50 Pieces (Min. Order)
import fashion overseas 2017 printing latest kids boys cotton t shirt
US $1.25-5 / Piece
1200 Pieces (Min. Order)
New arrival hot topic Specialized in t-shirt 15 years overseas t shirts for boy
US $0.98-4.5 / Piece
300 Pieces (Min. Order)
Black Fancy T-Shirts , Overseas T Shirts Wholesale , Customised T Shirts
US $3.9-9.9 / Piece
100 Pieces (Min. Order)
overseas t shirts extra long tall t-shirts wholesale OEM your charm shirt
US $1.8-3.2 / Piece
100 Pieces (Min. Order)
Bulk Blank T Shirts Wholesale Blank T Shirts Custom Cheap Overseas Tee Shirts
US $1.5-7 / Piece
100 Pieces (Min. Order)
custemers Your Own Design 2017 Men's Casual Overseas T Shirt for Younger
US $1.67-6.71 / Piece
120 Pieces (Min. Order)
Wholesale Cheap Overseas Custom Print 100% Cotton Soft Cotton T Shirt
US $2.15-5.75 / Piece
100 Pieces (Min. Order)
Fashion design couple t shirts,overseas t shirts,brands t shirts
US $2.5-3.25 / Pieces
120 Pieces (Min. Order)
Hot sale overseas blank custom logo t shirts
US $1.99-6.99 / Piece
500 Pieces (Min. Order)
- About product and suppliers:
Alibaba.com offers 81,366 overseas t shirts products. About 1% of these are men's t-shirts, 1% are women's t-shirts, and 1% are plus size shirts & blouses. A wide variety of overseas t shirts options are available to you, such as anti-pilling, plus size, and quick dry. You can also choose from 100% cotton, 100% polyester, and 100% silk. As well as from in-stock items, oem service. And whether overseas t shirts is garment dyed, plain dyed, or embroidered. There are 81,366 overseas t shirts suppliers, mainly located in Asia. The top supplying country is China (Mainland), which supply 100% of overseas t shirts respectively. Overseas t shirts products are most popular in South America, North America, and Mid East. You can ensure product safety by selecting from certified suppliers, including 8,656 with ISO9001, 7,590 with Other, and 620 with ISO14001 certification.
Buying Request Hub
Haven't found the right supplier yet ? Let matching verified suppliers find you. Get Quotation NowFREE
Do you want to show overseas t shirts or other products of your own company? Display your Products FREE now!
Related Category
Product Features
Supplier Features
Supplier Types
Recommendation for you | https://www.alibaba.com/countrysearch/CN/overseas-t-shirts.html | CC-MAIN-2018-22 | refinedweb | 928 | 77.84 |
5357 2011/09/09 21:31:03 bdeegan" """142 14374 """(value) 135 continue 136 137 elif type == TYPE_OBJECT: 138 value = node_conv(value) 139 if value: 140 result.append(value) 141 return tuple(result)144 -class PathListCache(object):145 """ 146 A class to handle caching of PathList lookups. 147 148 This class gets instantiated once and then deleted from the namespace, 149 so it's used as a Singleton (although we don't enforce that in the 150 usual Pythonic ways). We could have just made the cache a dictionary 151 in the module namespace, but putting it in this class allows us to 152 use the same Memoizer pattern that we use elsewhere to count cache 153 hits and misses, which is very valuable. 154 155 Lookup keys in the cache are computed by the _PathList_key() method. 156 Cache lookup should be quick, so we don't spend cycles canonicalizing 157 all forms of the same lookup key. For example, 'x:y' and ['x', 158 'y'] logically represent the same list, but we don't bother to 159 split string representations and treat those two equivalently. 160 (Note, however, that we do, treat lists and tuples the same.) 161 162 The main type of duplication we're trying to catch will come from 163 looking up the same path list from two different clones of the 164 same construction environment. That is, given 165 166 env2 = env1.Clone() 167 168 both env1 and env2 will have the same CPPPATH value, and we can 169 cheaply avoid re-parsing both values of CPPPATH by using the 170 common value from this cache. 171 """ 172 if SCons.Memoize.use_memoizer: 173 __metaclass__ = SCons.Memoize.Memoized_Metaclass 174 175 memoizer_counters = [] 176 179221 222 PathList = PathListCache().PathList 223 224 225 del PathListCache 226 227 # Local Variables: 228 # tab-width:4 229 # indent-tabs-mode:nil 230 # End: 231 # vim: set expandtab tabstop=4 shiftwidth=4: 232180 - def _PathList_key(self, pathlist):181 """ 182 Returns the key for memoization of PathLists. 
183 184 Note that we want this to be pretty quick, so we don't completely 185 canonicalize all forms of the same list. For example, 186 'dir1:$ROOT/dir2' and ['$ROOT/dir1', 'dir'] may logically 187 represent the same list if you're executing from $ROOT, but 188 we're not going to bother splitting strings into path elements, 189 or massaging strings into Nodes, to identify that equivalence. 190 We just want to eliminate obvious redundancy from the normal 191 case of re-using exactly the same cloned value for a path. 192 """ 193 if SCons.Util.is_Sequence(pathlist): 194 pathlist = tuple(SCons.Util.flatten(pathlist)) 195 return pathlist196 197 memoizer_counters.append(SCons.Memoize.CountDict('PathList', _PathList_key)) 198200 """ 201 Returns the cached _PathList object for the specified pathlist, 202 creating and caching a new object as necessary. 203 """ 204 pathlist = self._PathList_key(pathlist) 205 try: 206 memo_dict = self._memo['PathList'] 207 except KeyError: 208 memo_dict = {} 209 self._memo['PathList'] = memo_dict 210 else: 211 try: 212 return memo_dict[pathlist] 213 except KeyError: 214 pass 215 216 result = _PathList(pathlist) 217 218 memo_dict[pathlist] = result 219 220 return result | http://www.scons.org/doc/2.1.0/HTML/scons-api/SCons.PathList-pysrc.html | CC-MAIN-2015-06 | refinedweb | 522 | 53.51 |
Jul 14, 2009 10:12 AM by 843830
Can't import XSD (without Namespace) into WSDL
843830
Jul 14, 2009 10:12 AM
Hi
We are developing a BPEL process that takes a XML message - that adheres to the new XSD (has a namespace) - and steps the message (translates) down in to the old XML message - that adheres to the current production XSD (does not have a namespace).
Our issue is that is seems that you cant import a XSD without a namespace into a WSDL using JCAPS 6. Is there a way around this as we dont want to update the old XSD to include a namespace, knowing that there will be alot of additional dev to upgrade our exisiting 4.5.3 components and the XSD will be decommisioned in the future.
Any help would be greatly appreciated.
Regards,
This content has been marked as final.
Show 0 replies
Actions | https://community.oracle.com/message/6646878 | CC-MAIN-2016-50 | refinedweb | 152 | 75.54 |
As a continue to some responses from a previous thread (Some code optimization and thanks again, everybody, despite the criticism - no hard feelings :)) I decided to try rewrite a small piece of prel code in java, in order to get some feeling about the performance differences.
I wrote the java equivalent real quickly, and it's not so "beautiful", but I think it works OK.
The results:
perl: 20.8 seconds
java: 3.9 seconds
Here is the perl code:
use strict;
use warnings;
use List::Util qw(max min);
use Time::HiRes qw(time);
# this builds a structure that is usually retrieved from disk.
# in this example we will use this structure again and again,
# but in the real program we obviously retrieve a fresh structure
# at each iteration
my $simulation_h = {};
for ( 1 .. 70000 ) {
my $random_start = int( rand(5235641) );
my $random_length = int( rand(40000) );
push @{ $simulation_h->{$random_start} }, $random_length;
}
my $zone_o = {
_chromosome_length => 5235641,
_legal_range => [ { FROM => 100000, TO => 200000 } ]
};
my $start_time = time;
scenario();
print "total loop time: " . ( time - $start_time ) . " seconds\n";
my $temp_gene_to_legal_range;
my $gene_to;
sub scenario {
for ( my $i = 0 ; $i < 50 ; $i++ ) {
print "i=$i time=" . ( time - $start_time ) . " seconds\n";
# originally there was a retreive of $simulation_h from disk h
+ere
# iterate genes
foreach my $gene_from ( keys %{$simulation_h} ) {
foreach my $gene_length ( @{ $simulation_h->{$gene_from} }
+ ) {
### inlining gene_to_legal_range
$gene_to =
( ( $gene_from + $gene_length - 1 )
% ( $zone_o->{_chromosome_length} ) ) + 1;
if ( $gene_to < $gene_from ) {
# split
# low range first
$temp_gene_to_legal_range = [
{ FROM => 0, TO => $gene_to },
{
FROM => $gene_from,
TO => $zone_o->{_chromosome_length}
}
];
}
else {
# single
$temp_gene_to_legal_range =
[ { FROM => $gene_from, TO => $gene_to } ];
}
}
}
}
}
[download]
And here is the java code:
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.Set;
public class InLine {
public static void main(String[] args) {
int count = 0;
Random rand = new Random();
Map<Integer, List<Integer>> genes = new HashMap<Integer, List<
+Integer>>();
for (int i = 0; i < 70000; i++) {
Integer randStart = rand.nextInt(5235641);
Integer randLength = rand.nextInt(40000);
if (!genes.containsKey(randStart)) {
List<Integer> list = new ArrayList<Integer>();
genes.put(randStart, list);
}
genes.get(randStart).add(randLength);
}
int chromosomeLength = 5235641;
SimpleRange simpleRange = new SimpleRange(100000, 200000);
Set<SimpleRange> setSimpleRanges = new HashSet<SimpleRange>();
setSimpleRanges.add(simpleRange);
long startTimeMillis = System.currentTimeMillis();
for (int i = 0; i < 50; i++) {
System.out
.println("Iter = "
+ i
+ " Time from start: "
+ ((double) (System.currentTimeMillis() -
+startTimeMillis) / 1000));
for (int geneFrom : genes.keySet()) {
for (int geneLength : genes.get(geneFrom)) {
int geneTo = (geneFrom + geneLength - 1) % chromos
+omeLength
+ 1;
if (geneTo < geneFrom) {
// split
// low range first
SimpleRange lowSimpleRange = new SimpleRange(0
+, geneTo);
SimpleRange highSimpleRange = new SimpleRange(
+geneFrom,
chromosomeLength);
Set<SimpleRange> setSimpleRanges1 = new HashSe
+t<SimpleRange>();
setSimpleRanges1.add(lowSimpleRange);
setSimpleRanges1.add(highSimpleRange);
count++;
} else {
// single
SimpleRange simpleRange1 = new SimpleRange(gen
+eFrom,
geneTo);
Set<SimpleRange> setSimpleRanges1 = new HashSe
+t<SimpleRange>();
setSimpleRanges1.add(simpleRange1);
count++;
}
}
}
}
System.out
.println("Time from start: "
+ ((double) (System.currentTimeMillis() - star
+tTimeMillis) / 1000)
+ " count=" + count);
}
}
class LegalRange {
Set<SimpleRange> set;
public LegalRange(Set<SimpleRange> set) {
this.set = set;
}
}
class SimpleRange {
int from;
int to;
public SimpleRange(int from, int to) {
this.from = from;
this.to = to;
}
}
[download]
Am I missing anything? Is my perl code highly inefficient in some mysterious way?
I was quite surprised by the difference.
You can optimize your code a little bit by transforming the C-like for( my $i = 0 ; $i < 50 ; $i++ ) into for(0..50) and keys %{$simulation_h} into an array outside the for loops with all that I've reduced execution time 17% ( aprox ~ 3.5 secs ).
In general the more you try to optimize your code the uglier and less maintainable it will be. On the other hand usually strong typed languages like C/Java can be optimized more than weak typed ones.
You choose Perl not because its faster than Java or C you choose it because your productivity gets boosted in Perl and you can come up real quick with working prototypes or you use the flexibility it gives you.
If execution speed is the only thing you care about then:
I'd recommend you to read High Performance Perl it is an interesting thread
if ( ]
1. The time to create the hash to begin with is very fast(negligible). But I never see anywhere where the random access nature of a hash is used. Iterating through an array will be much faster than iterating over all the keys of a large hash. Perl allows "ragged 2D" array structures an(AoA) and something like that might be more appropriate if you are interested in speed.
2. Using and iterating over some kind of Array based structure would be considerably faster than all keys of a hash.
3. I was surprised to find how much time the "$gene_to" calculation took. I'm not sure why that is or what could easily be done about it.
4. The "$temp_gene_to_legal_range" calculation is a bit "non-real world" because this allocates a anonymous array containing one or 2 anonymous hashes (which also have to be dynamically allocated and created), But then the $temp_gene_to_legal_range array reference is "thrown away". This leaves the memory still allocated, but in a way that you will never be able to reference it again. So this part of the code is "expensive" due to all the memory structures that are being dynamically created. Krambambuli's code is faster because it eliminates the anon hash allocations.
5. So with 70,000x50x(( 1 array allocation)+(1 to 2 hash allocations-guess 1.5 avg?)), sub scenario() is going to take awhile and it does! Anyway it appears that saving this calculation is so expensive, that you'd be better of calculating it when you need it and then use it right then. Right now its not even saved so I have no idea of the eventual plan to make use of this.
6. I don't know enough about your problem to know what to recommend exactly in the way of alternative data structures, but my benchmarking indicates that this 70,000x50x~2.5 new memory structure allocations is what is taking the time - If I have my decimal point right, that is close to 9 million!. A more sophisticated 2-d hash structure or a Array of Hash or Array of Array of Array may serve the purpose better? It may be faster to extend some existing structure than create many millions of new little ones.
I don't much about Java and can't speak to the relative "apples to apples" or "apples to oranges" comparative nature of your code. I don't know if your Java code calls new memory allocation 9 million times, but your Perl code does.
Perl Cookbook
How to Cook Everything
The Anarchist Cookbook
Creative Accounting Exposed
To Serve Man
Cooking for Geeks
Star Trek Cooking Manual
Manifold Destiny
Other
Results (156 votes),
past polls | http://www.perlmonks.org/?node_id=845380 | CC-MAIN-2014-41 | refinedweb | 1,128 | 55.03 |
Go to: Synopsis. Return value. Flags. Python examples.
commandPort([bufferSize=int], [close=boolean], [echoOutput=boolean], [name=string], [noreturn=boolean], [prefix=string], [returnNumCommands=boolean], [securityWarning=boolean])
Note: Strings representing object names and arguments must be separated by commas. This is not depicted in the synopsis.
commandPort is undoable, queryable, and NOT editable.Opens or closes the Maya command port. The command port comprises a socket to which a client program may connect. An example command port client "mcp" is included in the Motion Capture developers kit. Care should be taken regarding INET domain sockets as no user identification, or authorization is required to connect to a given socket, and all commands (including "system(...)") are allowed and executed with the user id and permissions of the Maya user. The prefix flag can be used to reduce this security risk, as only the prefix command is executed. The query flag can be used to determine if a given command port exists. See examples below.
In query mode, return type is based on queried flag.
import maya.cmds as cmds # Open a command port with the default name "mayaCommand". cmds.commandPort() # Close the command port with the default name. Open client connections # are not broken. cmds.commandPort( cl=True ) # Query to see if the command command port "mayaCommand" exists. cmds.commandPort( 'mayaCommand', q=True ) | http://download.autodesk.com/us/maya/2009help/CommandsPython/commandPort.html | crawl-003 | refinedweb | 220 | 51.55 |
Hi guys!
I'm doing a little game in Java and I would like to insert a background music managed by a JButton. When the button is pressed, the music starts and then, to stop it, the button is pressed...
Hi guys!
I'm doing a little game in Java and I would like to insert a background music managed by a JButton. When the button is pressed, the music starts and then, to stop it, the button is pressed...
Thanks for the tips. Unfortunately it is not easy to understand the problem. I can say that this class is activated once the player reaches a maximum score in the GamePanel. Then you will have the...
This is the panel of the winner after the game :
import java.awt.Dimension;
import java.awt.Graphics;
import java.awt.Image;
import java.awt.Insets;
import java.awt.event.ActionEvent;...
Hello! In carrying out my little game I made a panel "Winner" in which I have two JButton plus I would like to add a blinking text to indicate the victory of the player("You win!").
Currently this...
I solved:
long totalSeconds = 150;
long minutes = totalSeconds / 60;
long seconds = totalSeconds % 60;
//minutes -> 2
//seconds -> 30
I created my countdown thread!
The manufacturer of the thread as a parameter will have the time in seconds (eg 120) and then be shown divided into minutes and seconds. In this case, doing 120/60 I...
And the switch panel who should do it?
Could you show me an example if possible?
With regard to the ActionListener I would like to add it to the JButton "Start Game" .
public String show="";
JButton startGame = new JButton(new ImageIcon("start.png");
startGame...
This is the class Game
public Game(final MainFrame mainFrame, final GameManager gameManager, ImageProvider imageProvider) {
this.setLayout(null);
dragX=0;
dragY=0;...
By studying the example in the link Swing Timers - Tutorials - Static Void Games
I noticed that the class "MyListener" implements ActionListener. In my case I will have action on the part of...
Unfortunately I could not write anything yet because I do not know where to start.
Thanks anyway for the suggestion
Hi, I'm making a small play in java in which you have a limited time to make as many moves as possible (with the aim of achieving a maximum score), like this: ...
The example is very clear, that's fine and I understand, but do you think I can / is correct that I want my object point in the red dot in the image?
And if I can, how do I determine the point (for...
I agree with what you said.
This image is loaded and displayed as a normal cursor.
If I had this same image of size 32x32 I could have the point of hotspots at the point indicated by me in the...
Now it is visible?
2730
Thanks anyway for the clarification, but I don't want the cursor has this dimension, I wrote for information, in fact, writing code on the back has been adapted to the classical size of the cursor....
Hello! I'm doing a little play and among the many things I decided to change the mouse cursor based on the theme of the game.
This is the code that I wrote for the declaration of the cursor:
...
Hi programmers! I just signed up to the site. I hope to compare with you and improve my skills as a programmer Java. :-h | http://www.javaprogrammingforums.com/search.php?s=5fd59c4f5f55d4ed25b14cf219a898e5&searchid=1814182 | CC-MAIN-2015-40 | refinedweb | 577 | 73.68 |
Created on 2010-05-15 04:10 by Paul.Davis, last changed 2018-02-05 03:27 by ncoghlan. This issue is now closed.
The docs for __getattr__ in the object model section could be more specific on the behavior when a @property raises an AttributeError and there is a custom __getattr__ defined. Specifically, it wasn't exactly clear that __getattr__ would be invoked after a @property was found and evaluated.
The attached script demonstrates the issue on OS X 10.6, Python 2.6.1
I'm thinking something along the lines of:
If the attribute search encounters an AttributeError (perhaps due to a @property raising the error) the search is considered a failure and __getattr__ is invoked.
I should mention, in example.py, it wasn't immediately clear that "print f.bing" would actually print instead of raising the AttributeError. As in, I had a property raising an unexpected error that happened to be an AttributeError. If it's not an AttributeError, the error is not swallowed. I can see the obvious argument for "AttributeError means not found", it just wasn't immediately obvious what was going on.
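A minimal sketch of the surprising case (a hypothetical stand-in for the attached example.py, which is not shown here): the AttributeError raised inside the getter is treated as "attribute not found", so __getattr__ runs, while any other exception propagates normally.

```python
class Foo(object):
    @property
    def bing(self):
        # An unrelated bug that happens to raise AttributeError...
        raise AttributeError("blarg")

    @property
    def boom(self):
        raise ValueError("not swallowed")

    def __getattr__(self, name):
        # ...ends up here, because the lookup machinery treats the
        # getter's AttributeError as "no such attribute".
        return "fallback for %s" % name

f = Foo()
print(f.bing)          # prints the fallback value; no traceback
try:
    f.boom             # a non-AttributeError is not swallowed
except ValueError as exc:
    print("propagated:", exc)
```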
The problem with changing 2.7 docs is that object access is different for old- and new-style classes. Does your example work if you remove 'object'? (i.e., can old-style classes have properties?)
For new-style classes, the example behavior is clear if you 1. know that object has a .__getattribute__ method inherited by everything when not overridden and 2. read the doc for that which says that __getattr__ is called whenever a __getattribute__ call raises AttributeError, which it does here by passing through the error raised in the property's getter.
For 3.x, I think in 3.3.2. Customizing attribute access,
object.__getattr__(self, name)
"Called when an attribute lookup has not found the attribute in the usual places (i.e. it is not an instance attribute nor is it found in the class tree for self). name is the attribute name. "
might be replaced by
"Called when self.__getattribute__(name) raise AttributeError because name is not an instance attribute, not found in the class tree for self, or is a property attribute whose .get() method raises AttributeError."
But this does not work for 2.7.
/raise/raises/
I am pretty sure that when __getattribute__ is bypassed, so is __getattr__.
Old-style classes can’t have descriptors, hence no properties, static methods, class methods or super.
Cheryl, thank you for reviving this, as it is still needed. A slightly revised example better illustrates the claim in the doc revision about when __getattr__ is called.
class Foo(object):
    def __init__(self):
        self.foo = 1
        self.data = {"bing": 4}
    def __getattr__(self, name):
        print(f'Getting {name}')
        return self.data.get(name)
    @property
    def bar(self):
        return 3
    @property
    def bing(self):
        raise AttributeError("blarg")

f = Foo()
print('foo', f.foo)
print('__str__', f.__str__)
print('bar', f.bar)
print('bing', f.bing)
f.__getattribute__('bing')
# prints
foo 1
__str__ <method-wrapper '__str__' of Foo object at 0x0000016712378128>
bar 3
Getting bing
bing 4
Traceback (most recent call last):
  File "F:\Python\a\tem2.py", line 24, in <module>
    f.__getattribute__('bing')
  File "F:\Python\a\tem2.py", line 17, in bing
    raise AttributeError("blarg")
AttributeError: blarg
Terry,
Thanks for clarifying with this example. I hadn't tried this when I was playing with the other example. I guess __getattribute__ might be defined by a class, but generally wouldn't be called directly, so the use of __getattr__ and __getattribute__ and the raising of AttributeError is more for an `attributeref` () usage?
Before testing, let alone documenting, the status quo, I would like to be sure that suppressing the exception is truly the intended behavior. Is there a way to get an annotated listing from git (given which patch, and therefore which person, is responsible for each line)? I will try asking on pydev.
Calling __getattr__ on property failure is a behavior of __getattribute__, not of the property, and I would expect object.__getattribute__ to be tested wherever object is, but I have not found such tests. If we do add a test, the best model in test_descr.py looks like `def test_module_subclasses(self):`. The test class would only need __getattr__ and the faulty property.
class Foo(object):
    def __getattr__(self, name):
        print(f'Getattr {name}')
        return True
    @property
    def bing(self):
        print('Property bing')
        raise AttributeError("blarg")

f = Foo()
print(f.bing)
#prints (which would be the log list in a test)
Property bing
Getattr bing
True
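Terry's sketch translates directly into a self-contained test. This is my own sketch, not CPython's actual test; the class layout and log follow his example above:

```python
import unittest

class GetattrFallbackTest(unittest.TestCase):
    def test_property_error_falls_back_to_getattr(self):
        log = []

        class Foo:
            def __getattr__(self, name):
                log.append(('getattr', name))
                return True

            @property
            def bing(self):
                log.append(('property', 'bing'))
                raise AttributeError('blarg')

        # The property's getter runs first; its AttributeError is swallowed
        # by __getattribute__, which then falls back to __getattr__.
        self.assertIs(Foo().bing, True)
        self.assertEqual(log, [('property', 'bing'), ('getattr', 'bing')])

# Run without the unittest CLI so the snippet can be pasted anywhere.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(GetattrFallbackTest))
```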
The behavior and doc for __setattr__ and __delattr__ should also be checked.
>> Is there a way to get an annotated listing from git (given which patch, and therefore which person, is responsible for each line)?
Which source did you want to look at? In github, if you go into any source, you can click on a line and it gives an option for 'git blame'. That shows the last commit change for each line. You can then click an icon to see a previous commit, etc. For the .rst sources, it's a little different and there is a Blame button at the top of the source that will bring up the same view (commit annotations to the left of the source) as right-clicking.
I had posted about git blame a few months ago on core mentorship and Carol Willing mentioned another tool to get all the changes by line. Here was her post:
Thanks for passing along the tip for others. You may also find the npm package `git-guilt` useful as it will display all the contributors to a particular line's history. <>
Thanks. I normally look at source in my local clone with an editor. I found 'view blame' and 'view blame prior' on github.
Nick, this is about better documenting the behavior of __get(set/del)attr__ in 3.x in relation to AttributeError in a property. I think I understand what it does and think the patch is correct. Could you either review or suggest someone else who better understands core behavior like this?
New changeset d1f318105b8781b01f3507d5cb0fd841b977d5f2 by Nick Coghlan (Cheryl Sabella) in branch 'master':
bpo-8722: Document __getattr__ behavior with AttributeError in property (GH-4754)
New changeset a8c25d1c7f0d395861cc3e10dd01989150891c95 by Nick Coghlan (Miss Islington (bot)) in branch '3.6':
[3.6] bpo-8722: Document __getattr__ behavior with AttributeError in property (GH-5542)
New changeset fea0a12f6bee4a36b2c9533003e33a12c58d2d91 by Nick Coghlan (Miss Islington (bot)) in branch '3.7':
[3.7] bpo-8722: Document __getattr__ behavior with AttributeError in property (GH-5543)
Thanks for the patch Cheryl, and for the reviews Terry! | https://bugs.python.org/issue8722 | CC-MAIN-2020-34 | refinedweb | 1,098 | 66.54 |
I am new to Java, and having problems with the use of DecimalFormat. When I compile the program it gives me an error so it doesn't work. I am not sure what I am doing wrong. I think that I am using the syntax correctly. When I take out the DecimalFormat it compiles and runs, but I would like the output with commas. What am I doing wrong here?
import javax.swing.JOptionPane;
import java.text.DecimalFormat;

public class yeargas {
    public static void main(String[] args) {
        int tank, month;
        double year, yearx, total;
        String input;
        DecimalFormat formatter = new DecimalFormat(0,000);
        //get the average cost to fill 1 tank of gas
        input = JOptionPane.showInputDialog("What is the average cost to fill 1 tank of gas? ");
        tank = Integer.parseInt(input);
        //get the x number of times you fill in a month
        input = JOptionPane.showInputDialog("What is the average number of times you fill your car in a month? ");
        month = Integer.parseInt(input);
        //calculate the cost for a month
        //multiply that by 1 year
        year = (tank * month) * 12;
        //get the x number of years to calculate
        input = JOptionPane.showInputDialog("How many years do you want to calculate for? ");
        yearx = Integer.parseInt(input);
        total = yearx * year;
        //display the aproximate total cost for x
        JOptionPane.showMessageDialog(null, "Your average cost of gas for " + yearx + " is $" + (formatter.format(total)) + ".");//total+".");
        System.exit(0);
    }
} | http://www.javaprogrammingforums.com/whats-wrong-my-code/16673-decimalformat-error-when-compiled.html | CC-MAIN-2015-11 | refinedweb | 229 | 50.23
OpenOffice.org ODF, Python and XML
My desktop distribution (SUSE 9.3) includes the packages python-doc-2.4-14 and python-doc-pdf-2.4-14. You also can get documentation from. In either case, we want the Library Reference, which contains information on the Python XML libraries; they are described in the chapter on “Structured Markup Processing Tools” (currently Chapter 13).
Several modules are listed, and I noticed one labeled lightweight: xml.dom.minidom—Lightweight Document Object Model (DOM) implementation.
Lightweight sounded good to me. The library reference gives these examples:
from xml.dom.minidom import parse, parseString

dom1 = parse('c:\\temp\\mydata.xml')   # parse an XML file by name

datasource = open('c:\\temp\\mydata.xml')
dom2 = parse(datasource)               # parse an open file
So, it looks like parse can take a filename or a file object.
Once we create a dom object, what can we do with it? One nice thing about Python is the interactive shell, which lets you try things out one at a time. Let's unpack the first example and look inside:
% mkdir TMP
% unzip -q -d TMP ex1.odt
% python
Python 2.4 (#1, Mar 22 2005, 21:42:42)
[GCC 3.3.5 20050117 (prerelease) (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import xml.dom.minidom
>>> dom=xml.dom.minidom.parse("TMP/content.xml")
>>> dir(dom)
[ --- a VERY long list, including ---
'childNodes', 'firstChild', 'nodeName', 'nodeValue', ... ]
>>> len(dom.childNodes)
1
>>> c1=dom.firstChild
>>> len(c1.childNodes)
4
>>> for c2 in c1.childNodes: print c2.nodeName
...
office:scripts
office:font-face-decls
office:automatic-styles
office:body
>>>
Notice how Python's dir function tells what fields (including methods) are in the object. The childNodes field looks interesting, and indeed, it appears that the document has a tree structure. After a little more manual exploration, I discovered that text is contained in the data field of certain nodes. So, I coded up a naive script, fix1-NAIVE.py:
#!/usr/bin/python -tt
import xml.dom.minidom
import sys

DEBUG = 1

def dprint(what):
    if DEBUG == 0:
        return
    sys.stderr.write(what + '\n')

def handle_xml_tree(aNode, depth):
    if aNode.hasChildNodes():
        for kid in aNode.childNodes:
            handle_xml_tree(kid, depth+1)
    else:
        if 'data' in dir(aNode):
            dprint(("depth=%d: " + aNode.data) % depth)

def doit(argv):
    doc = xml.dom.minidom.parse(argv[1])
    handle_xml_tree(doc, 0)
    # sys.stdout.write(doc.toxml('utf-8'))

if __name__ == "__main__":
    doit(sys.argv)
The dprint routine prints debugging information on stderr; later we'll set DEBUG=0, and it'll be silent. Okay, let's try that on the content.xml above:
% ./fix1-NAIVE.py TMP/content.xml
depth=5: Turn all these
depth=6: "straight"
Traceback (most recent call last):
  File "./fix1-NAIVE.py", line 24, in ?
    doit(sys.argv)
  File "./fix1-NAIVE.py", line 20, in doit
    handle_xml_tree(doc, 0)
  File "./fix1-NAIVE.py", line 16, in handle_xml_tree
    dprint(("depth=%d: " + aNode.data) % depth)
  File "./fix1-NAIVE.py", line 8, in dprint
    sys.stderr.write(what + '\n')
UnicodeEncodeError: 'ascii' codec can't encode character u'\u201c' in position 22: ordinal not in range(128)
%
What's that error about? When trying to print that string on stderr, we hit a non-ASCII character—probably one of those curly quotes. A quick Web search gave this possible solution:
sys.stderr.write(what.encode('ascii', 'replace') + '\n')
It says that if a non-ASCII Unicode character appears, replace it with something in ASCII—an equivalent, or at least something printable.
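In Python 3 terms (the article predates it, using Python 2.4), the common error handlers behave like this:

```python
s = 'Turn these \u201cstraight\u201d quotes'

# 'replace' substitutes ? for anything outside ASCII;
# 'ignore' silently drops the offending characters instead.
ascii_replace = s.encode('ascii', 'replace')
ascii_ignore = s.encode('ascii', 'ignore')
print(ascii_replace)  # b'Turn these ?straight? quotes'
print(ascii_ignore)   # b'Turn these straight quotes'
```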
Replacing line 8 with that yields this output:
% ./fix1.py TMP/content.xml
depth=5: Turn all these
depth=6: "straight"
depth=5: quotes into ?nice? quotes
%
So the curly quotes were replaced with ? characters, which is fine for our debugging output. Note that the text doesn't necessarily all come at the same depth in the tree.
The document's structure also can be seen by typing the full filename of the content.xml file into a Firefox window (Figure 7). That's good for displaying the data; the point, however, is to change | http://www.linuxjournal.com/article/9319?page=0,2&quicktabs_1=2 | CC-MAIN-2016-26 | refinedweb | 674 | 61.22 |
-
Declaring a NOTATION
A notation names a kind of content that the XML processor can't understand and parse; the external data itself is called an unparsed entity. Although this conjures up the idea of binary data, it can also be text that XML doesn't understand. A chunk of JavaScript, for example, could be kept in an external file and referred to through a notation.
TIP
Notations only can refer to external files. There's no way to hide information inside a document and pass it on to a user agent for special processing unless you put it inside an XML tag and instruct the agent to do something with the contents of the tag. If you want to include text data containing special characters inside the document, you should escape it inside a CDATA element as described in the later section, "Escaping a Text Block in a CDATA Section."
The problem with notations is that they require access to the DTD to use the notation. Although you might use the internal subset of the DTD to make notation information available locally to the document, non-validating parsers are not required to read the internal DTD subset.
NOTATION syntax is simple:
<!NOTATION name identifier "helper" >
The identifier is a catalog entry, as used in SGML. Many XML processors are recycled SGML processors, so they support a catalog by default. This is slightly safer than pointing to a helper application that may or may not be there, but XML requires the helper application to be referenced in any case, which can lead to anomalous behavior. You might reference Adobe Photoshop, for example, as the helper application for viewing a GIF image, but the browser is likely to know how to display GIFs on its own. The browser is also far more likely to be able to integrate the image properly into the rendered document on a display device or printer, a task that Adobe Photoshop is quite incapable of. Using both the identifier and the "name" of the helper allows you to compromise between just telling the user agent what sort of file is being passed and telling it explicitly how to display it while knowing nothing whatsoever about the environment the document is being displayed in.
Although some people may encourage you to behave as if Microsoft Windows is the center of the universe and do something like this
<!NOTATION gif SYSTEM "gif" >
which actually works on Windows systems, I'm not one of them. Use this syntax at your peril. The above example assumes the presence of the Windows system Registry to resolve the reference, which registry is not exactly universally available and short-circuits the standard system identifier completely.
A safer course is to enter the entire catalog entry and identifier sequence. This method gives a stronger hint to the eventual application that will deal with the processed XML file about what might be done with it but doesn't actually process the notation reference. The bare system identifier, "gif", works on Windows systems because Windows knows about GIFs already. But a handheld device or even a computer using another operating system may not have the knowledge of how to handle GIF images at its beck and call.
So it would be best to recast the previously shown reference as
<!NOTATION gif89a PUBLIC "-//CompuServe//NOTATION Graphics Interchange Format 89a//EN" "gif">
Notations can't be used in isolation. They have to be declared in an entity as well. A complete declaration sequence might look like this:
<!NOTATION gif89a PUBLIC "-//CompuServe//NOTATION Graphics Interchange Format 89a//EN" "gif">
<!ENTITY gif89a SYSTEM "gif89a.gif" NDATA gif89a>
<!ELEMENT image EMPTY>
<!ATTLIST image
    source CDATA #REQUIRED
    alt CDATA #IMPLIED
    type NOTATION (gif89a) #IMPLIED >
In your document, your tag would look like this
<image source="uri" alt="[image of something]"/>
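As a sanity check, even a non-validating parser reports notation and unparsed-entity declarations it finds in the internal subset. Here's a sketch using Python's xml.sax API (the entity name pic and the handler class are my own, invented for the example):

```python
import io
import xml.sax
from xml.sax.handler import DTDHandler

DOC = b"""<?xml version="1.0"?>
<!DOCTYPE doc [
<!NOTATION gif89a PUBLIC "-//CompuServe//NOTATION Graphics Interchange Format 89a//EN" "gif">
<!ENTITY pic SYSTEM "gif89a.gif" NDATA gif89a>
<!ELEMENT doc ANY>
]>
<doc/>"""

class Decls(DTDHandler):
    """Collects notation and unparsed-entity declarations as they are seen."""
    def __init__(self):
        self.seen = []
    def notationDecl(self, name, publicId, systemId):
        self.seen.append(('notation', name, systemId))
    def unparsedEntityDecl(self, name, publicId, systemId, ndata):
        self.seen.append(('unparsed', name, systemId, ndata))

handler = Decls()
parser = xml.sax.make_parser()
parser.setDTDHandler(handler)
parser.parse(io.BytesIO(DOC))
print(handler.seen)
```

Nothing here fetches gif89a.gif: an NDATA entity is, by definition, never parsed, so the parser merely hands the declaration to the application.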
You can also create an enumerated list of notation types, which uses a slightly different syntax to describe the notation type:
<!NOTATION gif87a PUBLIC "-//CompuServe//NOTATION Graphics Interchange Format 87a//EN" "gif">
<!NOTATION gif89a PUBLIC "-//CompuServe//NOTATION Graphics Interchange Format 89a//EN" "gif">
<!ENTITY gif87a SYSTEM "gif87a.gif" NDATA gif87a>
<!ENTITY gif89a SYSTEM "gif89a.gif" NDATA gif89a>
<!ELEMENT image EMPTY>
<!ATTLIST image
    source CDATA #REQUIRED
    alt CDATA #IMPLIED
    type NOTATION (gif87a | gif89a) "gif89a" >

In your document, your tag would look like this:

<image source="uri" alt="[image of something]" type="gif87a"/>
This lets the user agent know that the image is in GIF87a format, if that matters at all.
NOTATIONs Are Awkward Solutions
NOTATIONs are interesting devices because they allow you to isolate binary data as well as character-ish data, such as scripts or interpreted source code, which you don't want the XML processor to have to deal with. In that sense, they're a good thing. But the specificity required becomes quickly tiresome. XML is being used here as a dispatcher to choose among alternatives that quickly become obsolescent as new binary technologies emerge.
The NOTATION syntax is somewhat of an anachronism left over from SGML, which uses them extensively. It's possible that they had to be included for compatibility with the older standard, but on the Web it's almost inconceivable that you would know the permanent location of anything, even on your own system. File systems evolve and change; nothing is constant. NOTATION declarations tacitly assume an unchanging environment that never alters, or alters so glacially that tweaking a catalog entry isn't a chore.
The Web is different. Changes propagate overnight. Before you can wink an eye someone has a new multimedia widget out there and everybody is using it. The plug-in mechanism already used by browsers might have been a better way of doing this. Or maybe we could just declare a plug-in NOTATION. Even better would be to let the browser and the server negotiate the best format for a particular situation.
Unfortunately, the better way has yet to emerge. So until it does, using the official method is an interim solution. Hopefully, whatever new mechanism supplants this awkward hack will be able to use the older method to extrapolate from the hard-coded location to where an appropriate processor might be found. In any case, you should think long and hard before including notations in an XML document and be prepared for something better to take the place of notations in the near future.
ENTITY Declaration
The ENTITY declaration is one of the most simple, in spite of the fact that there are so many different types of entities. The list of restrictions placed on entities is confusing, however, and so tersely described in the XML 1.0 Recommendation that it takes a thorough reading or two before you get it. You'll learn more about entities and have the opportunity to compare and contrast their various uses in the "Understanding the W3C Entity Table" section later in this chapter.
If you think of an XML document as a collection of entities, which are, roughly speaking, objects in the object-oriented sense, the entity declaration is a way of pointing at one instance of a particular object. There aren't that many options and it's not that hard to learn. Following is the basic format:
<!ENTITY name value >
The name part has two options, one with no modifier as you see it here and one with a preceding percent sign (%) followed by a space that marks a parameter entity. The value part has three basic options: either some form of quoted string, an identifier that points to a catalog entry or location external to the file, or a notation reference. The following are some variations for parsed entities:
<!ENTITY name PUBLIC "catalog entry" > <!ENTITY name PUBLIC "catalog entry" "uri" > <!ENTITY name SYSTEM "uri" > <!-- External ID --> <!ENTITY name "�" > <!-- General entity mnemonic for character entity -->
The following is how you reference parameter entities:
<!ENTITY % name PUBLIC "catalog entry" > <!ENTITY % name PUBLIC "catalog entry" "uri" > <!ENTITY % name SYSTEM "uri" > <!-- External ID --> <!ENTITY % name "�" > <!-- Parameter entity mnemonic for character entity -->
Parameter entities behave differently than general entities because they were designed for different purposes. The difference in their declarations is designed to be obvious when you, or the XML processor, sees one.
The way you refer to them is different as well. A general entity is referred to like this:
&name;
Although a parameter entity is referred to like this:
%name;
This is a relatively trivial difference that masks a huge difference in usage.
Parameter Entities
The parameter entity is a bit like a C macro designed for use in the DTD itself, so that you can identify objects (collections of data) for use during the process of building the DTD. After this is done, parameter entities have no meaning anywhere else. So if your document happens to contain a string that looks like the name of a parameter entity, it will be ignored, or rather, treated as the simple text that it is.
Inside the DTD however, a parameter entity has great utility. You can use it to store chunks of markup text for later use, or point to external collections of markup text for convenience.
In the internal DTD subset, parameter entities can only be used to insert complete markup. So, the following declaration and use is legal:
<!DOCTYPE name [
<!ENTITY % myname "<!ELEMENT e1 ANY>">
%myname;
...
]>
The following example, using external references pointed to by a URI and, in the second entity, a public ID (or catalog entry), is also legal:
<!DOCTYPE name [
<!ENTITY % myname SYSTEM "uri">
%myname;
<!ENTITY % myname2 PUBLIC "catalog entry" "uri">
%myname2;
...
]>
However, this example, which tries to define parts of markup that will be resolved later, is not legal in the internal DTD subset:
<!DOCTYPE name [
<!ENTITY % mypart "ANY">
<!ENTITY % myname "<!ELEMENT e1 %mypart;>">
%myname;
...
]>
Note that this code would be legal in the external DTD subset or an external entity.
In a non-validating XML processor, the external references may or may not be fetched and incorporated into the document, but this is not an error whichever way it goes.
NOTE
The distinction between validating and non-validating XML processors may seem trivial but almost all browsers are and will be non-validating. On the Web, it will be possible for a DTD to reference dozens of locations, any of which may reference dozens more. A validating browser must read in everything, potentially the entire Web, before it displays anything. The wait can become tiresome.
This brings up a subtle point. If the internal DTD subset contains a mixture of internal and external parameter entity references, a non-validating processor must stop processing them as soon as it doesn't interpret one, which it is permitted to do. The reason for this is that the reference may become undefined:
<!DOCTYPE name [
<!ENTITY % myname1 "<!ELEMENT element1 ANY>">
%myname1;
<!ENTITY % myname2 SYSTEM "uri">
<!-- external file contains <!ELEMENT element2 ANY> -->
%myname2;
<!ENTITY % myname3 "<!ELEMENT element3 ANY>">
%myname3;
...
]>
If the non-validating XML processor reads the external parameter entities, which it is permitted to do, all three elements are declared. If it doesn't read any external parameter entities, which it is also allowed to do, then only element1 is declared. The reason is that the processor doesn't know whether the external reference contained a declaration of element3 among its text, in which case the value of element3 would have been whatever that value was, because the processor would have seen that first if it had read it. Because it doesn't know for sure, it must ignore all succeeding entity and attribute list references. To make matters even more complicated, a non-validating XML processor is permitted to skip all parameter entities, in which case none of the elements are defined.
Up to that point, however, a non-validating XML processor is required to read and process the internal subset of the DTD, if any such DTD subset exists. So any other declarations inside the internal subset, including setting the replacement text of internal general entities, setting default attribute values, and normalizing attributes must be processed and performed. Figure 3.2 shows a metaphorical representation of the difference between the views seen by validating and non-validating XML parsers.
Figure 3.2 The difference is shown between the view of an XML document seen by a validating parser and a non-validating parser.
The validating parser sees everything clearly. The non-validating parser may or may not be able to see the entities and definitely doesn't see the DTD although it knows that the DTD exists.
Even though the non-validating XML processor must read and process the internal subset of the DTD until, and if, it's required to stop processing, it can't validate the document on that basis. If it did, it would be a validating processor and would be required to read everything.
NOTE
It's surprising how many people get parameter entities wrong and it's one of the problems with the EBNF that forms a part of the XML 1.0 standard. Programmers are familiar with EBNF and think it must define everything, but a large part of the specification is actually contained in the often-obscure accompanying text. You have to read both the EBNF and the text to fully capture the meaning of the W3C Recommendation.
General Entities
A general entity can occur in the document itself, at least potentially. They're identified by a particular syntax in the declaration:
<!ENTITY name {stuff} >
The big distinction in general entities is whether they're internal, in which case stuff is a quoted string, or external, when it's a catalog entry or a URL.
<!-- General internal entity -->
<!ENTITY name "text of some sort" >
<!-- General external entities -->
<!ENTITY name PUBLIC "-//LeeAnne.com//My XML Stuff//EN" >
<!ENTITY name PUBLIC "-//LeeAnne.com//My XML Stuff//EN" "my-dtd-stuff.dtd" >
<!ENTITY name SYSTEM "uri" >
External entities may not be included in the document if the XML processor is not validating. Internal general entities may appear in the DTD, but only as the values in an attribute list or an entity. Basically this means that they have to appear with quoted strings.
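The rule that internal general entities must be processed even by a non-validating parser is easy to see from Python. A sketch (expat, which backs ElementTree, is a non-validating processor):

```python
import xml.etree.ElementTree as ET

# A non-validating parser must still read the internal DTD subset,
# so this internal general entity gets expanded in the document text.
doc = '<!DOCTYPE note [ <!ENTITY who "World"> ]><note>Hello &who;!</note>'
root = ET.fromstring(doc)
print(root.text)  # Hello World!
```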
Unparsed Entities
The unparsed entity has already been treated earlier in the explanation of NOTATIONs. Unparsed entities can only be external, but the externality of the entity is taken care of by the notation declaration referred to in an NDATA attribute. Their use, like that of notations in general, is somewhat controversial. There's no particular reason that the designer of a DTD has to know what particular sort of multimedia file is going to sit inside a document and then dispatch it to the proper handler sight unseen. Instead of being a generic document template, then, the DTD is limited by the types of unparsed files foreseen from the beginning and accounted for.
This is unlike the existing case with HTML. Within limits, you just point to a binary file and the application figures out what it is and how to display it. It's unlike the case in the UNIX environment that many of the designers of XML came from. Within limits, in UNIX you just use a file. Executable files are self-identifying and behave properly on their own. That would have seemed a much more robust approach, in my humble opinion.
Be that as it may, you're stuck with the difficult necessity of updating your DTDs whenever a new graphics or audio format is invented. Your alternative is to leave the DTD alone and fall by the wayside as video supplants still images, as interactive video supplants spectator video, and 3D virtual reality supplants mere 2D interaction.
The ENTITY declaration for an unparsed entity works hand in hand with the notation entity. The NOTATION must be declared before using it in an ENTITY.
Here's what a declaration for an unparsed entity would look like in your DTD along with the element declaration that is necessary to actually instantiate a particular example:
<!NOTATION gif89a PUBLIC "-//CompuServe//NOTATION Graphics Interchange Format 89a//EN" "gif">
<!ENTITY gif89a SYSTEM "gif89a.gif" NDATA gif89a>
<!ELEMENT image EMPTY>
<!ATTLIST image
    source CDATA #REQUIRED
    alt CDATA #IMPLIED
    type NOTATION (gif89a) #IMPLIED >
In your document, your tag would look like:
<image source="uri" alt="[image of something]"/>
You can also create an enumerated list of notation types, which uses a slightly different syntax to describe the notation type, as you learned in the discussion about NOTATION declarations:
<!NOTATION gif87a PUBLIC "-//CompuServe//NOTATION Graphics Interchange Format 87a//EN" "gif">
<!NOTATION gif89a PUBLIC "-//CompuServe//NOTATION Graphics Interchange Format 89a//EN" "gif">
<!ENTITY gif87a SYSTEM "gif87a.gif" NDATA gif87a>
<!ENTITY gif89a SYSTEM "gif89a.gif" NDATA gif89a>
<!ELEMENT image EMPTY>
<!ATTLIST image
    source CDATA #REQUIRED
    alt CDATA #IMPLIED
    type NOTATION (gif87a | gif89a) "gif89a" >

In your document, your tag would look like this if your gif was in the older gif87a format:

<image source="uri" alt="[image of something]" type="gif87a"/>
This tiny example points out the folly of this approach. How many people know offhand which of the two formats their gif files adhere to? How many care? Yet XML as it exists today makes this and many other trivialities a matter of pressing import. In the immortal words of Tim Bray, one of the XML 1.0 design team, "This is completely bogus."
ELEMENT Declaration
The element declaration is the part of XML you see most clearly in the final product. It represents the actual tags you'll use in your documents, and you have to have at least one or your document isn't valid XML. A non-validating XML processor will never see your DTD, but the tags and attributes contained in your document will describe it fairly completely anyway. Along with an associated style sheet, you can display the document correctly without any DTD at all.
If you don't care what the document looks like, you may not even need a style sheet. This might be the case for a document that was essentially a database, or was designed as a transport mechanism to transfer structured data between two applications.
The ELEMENT declaration looks like this:
<!ELEMENT name content-model >
The name is the name of your tag in use. The content model is where things start to get interesting.
The content model can contain an arbitrary mixture of terminal and non-terminal elements. Non-terminal elements are the names of other elements, while terminal elements are text or other content. This is the syntax that forms nodes in your document. There are two general content models. The first describes sequential, or ordered, content, which uses a comma-separated list to indicate that one element has to follow another to the end of the ordered list. The second uses a vertical "or" bar as a list separator to indicate a selection between alternatives. With these two mechanisms, you can construct almost anything.
Ordered Content
Element names separated by commas are sequentially ordered. The first in the list comes first, the second comes second, and so on. The list must be surrounded by parentheses; XML's grammar requires them even for a pure ordered list:
<!ELEMENT name ( sub-element1, sub-element2, sub-element3, ... ) >
Selection Content
Element names are separated by "or" bars (|), the vertical bars that should be available on your national-language-specific keyboard, often above the backslash (\). The list must be surrounded by parentheses. In use, it looks like this:
<!ELEMENT name ( sub-element1 | sub-element2 | sub-element3 | ... ) >
Repeating XML Content Elements
Content names, or groups of content names surrounded by parentheses, can be followed by a question mark (?), a plus sign (+), or an asterisk (*) to indicate a repetition factor, sometimes called an occurrence indicator. No repetition mark means that the element must appear once and once only. A question mark means that the item can appear zero or one time. A plus sign means that the element appears at least one time and repeats as needed. An asterisk means that the element repeats as needed but is optional. In other words, an asterisk means zero or more. These signs can be combined with parentheses and the previous sequence or alternation to form structures of arbitrary complexity.
Content models are so important to XML that it might pay to write these down somewhere until you have them memorized. Table 3.1 lists the symbols used to indicate repetition factors and the two types of content model.
Table 3.1 Occurrence Indicators Used in XML DTDs

Symbol    Meaning
,         Sequence: each listed element must appear, in the order given
|         Choice: exactly one of the listed alternatives appears
(none)    The element must appear exactly once
?         The element may appear zero or one time
+         The element must appear one or more times
*         The element may appear zero or more times
If you're familiar with Regular Expressions in Vi, Emacs, and other UNIX-style editors, the syntax will be fairly familiar. The following example shows several uses:
<!ELEMENT name (( sub-element1 | sub-element2)? , (sub-element3))+ >
This says that the element contains one or more substructures containing either sub-element1 or sub-element2 followed by one instance of sub-element3 or it contains one instance of sub-element3. So, the following are all valid productions:
<name><sub-element3></sub-element3></name>
<name><sub-element3></sub-element3><sub-element3></sub-element3></name>
<name><sub-element1></sub-element1><sub-element3></sub-element3></name>
<name><sub-element2></sub-element2><sub-element3></sub-element3></name>
Even this simple example can generate an infinite number of productions, although it might become boring to list them. The ways in which these simple elements can combine can become confusing quickly. One of the user-friendliest uses of parameter entities is to encapsulate subsets of these behaviors so you can think about them separately.
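Since a content model is essentially a regular expression over sequences of child elements, you can check the productions above mechanically. A sketch in Python, with the tokens 1, 2, and 3 standing in for the three sub-elements:

```python
import re

# DTD model: (( sub-element1 | sub-element2)? , (sub-element3))+
model = re.compile(r'(?:[12]?3)+')

for seq in ('3', '33', '13', '23'):
    assert model.fullmatch(seq)   # all four productions above are valid
assert not model.fullmatch('12')  # each group requires its sub-element3
print('all checks passed')
```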
TIP
Look-ahead is a term from the compiler/parser world that simply means the parser can look ahead in the input stream and backtrack to resolve ambiguities. Because this implies the ability to buffer the entire document in memory if necessary, anything more than the one-character look-ahead (so common that it's not usually dignified with the name look-ahead) needed to resolve tokens was dropped from the language definition. Avoiding arbitrary levels of look-ahead means that an XML parser can be small and lightweight, suitable for handheld and other devices with limited memory and power.
Because XML processors don't do look-ahead, you have to guarantee that your content model can be successfully parsed without backtracking before handing it over. A good strategy is to structure a content model with a lot of optional elements as an alternative between cascading models with optional elements dropped off the beginning of the model subset:
(a,b,c,d) | (b,c,d) | (c,d) | (d)
You can't drop off elements from the end or put alternatives in the middle because then the parser would have to backtrack to parse them. So content models that look anything like the following ambiguous examples probably don't do what the designer intends them to do:
(a,b,c,d) | (a,b,c) | (a,b) | (a) (a,b,c,d) | (a,b,d) | (a,d) | (a)
In both these examples, when the parser encounters element a, the first alternative, (a,b,c,d), is chosen. None of the other alternatives can be considered and may be ignored. Some XML parsers may generate an error when encountering a non-deterministic content model, however, so you're required to ensure that all content models are unambiguous.
NOTE
Technically, the ambiguous content models shown here are non-deterministic, which means it's not possible to construct a finite state automaton to process them. It may be possible to convert a non-deterministic content model to one that is deterministic algorithmically, but this is not guaranteed.
By nesting known bits of combining logic into larger ones, what might be daunting when viewed in its entirety can be broken down into component parts. Unfortunately, because of limitations on the internal DTD subset, this facility is only available in the external DTD.
Terminal content
The leaves of our document tree are represented by terminal content, of which there is one type: #PCDATA. Parsed Character Data is mixed content that can contain both text and markup. This is the most general type of leaf. When you use a mixed content model you cannot control the order or occurrence of elements, although you can constrain the elements to be of a certain type.
It would be used in an element like
<!ELEMENT name (#PCDATA | el1 | el2 | el3 | ... )* >
or like this with no control over element type:
<!ELEMENT name (#PCDATA)* >
It's fairly straightforward.
TIP
You could use this type of element content to contain XML tags from another namespace, for example, in an XML document without violating the DTD of the base document. Although the document DTD would have no idea what the inserted tags meant, the governing DTD of the namespace and the designer of the page presumably would.
EMPTY content
If the element is declared as EMPTY, there is no content. So if you use the start and end tag convention, you have to guarantee that the end tag immediately follows the start tag like
DTD declaration: <!ELEMENT anyname EMPTY> XML document instance: <anyname></anyname>
or you'll generate a validation error.
You'll also probably break any browser that runs into it if the name happens to look like an HTML empty tag. So this is unsafe although perfectly legal in XML:
<img></img>
All in all, it's probably better to use the special empty tag syntax like this:
<anyname />
Notice there is a space between the element name and the forward slash, and the slash is followed immediately by the closing angle bracket.
NOTE
Because empty elements are, by definition, empty, the only possible content they can carry is in the attributes associated with each empty element. In general, any text content can easily be included in an attribute. The only real limitation is that you can't typically extend the document itself by means of an attribute. There are two important exceptions: A notation could theoretically call a process that inserted more content, much as one can do using Dynamic HTML, and it is possible to transclude content from an external file using attributes on an XLink anchor element. See Chapter 8, "XPath," and Chapter 9, "XLink and XPointer,"for more information on XLink, XPath, and XPointer. Indirectly, it would also be possible to use XSLT to transform a document based on the value contained in an attribute, but that will have to wait until we discover XSL in Chapter 13, "Cascading Style Sheets and XML/XHTML."
ANY Content
This is the ultimate in loose declarations and means exactly what it says. An element so defined can contain anything at all:
<!ELEMENT anyname ANY>
It's the equivalent of listing every element in your DTD in any order but saves a lot of typing time.
ANY content is handy primarily for developing a DTD or for debugging a broken DTD if you have an example document but no DTD. If you have validity errors in your first cut you can try changing the content model to ANY in likely spots until the DTD is valid so you can load it into a DTD design tool. At that point, you can start tightening up your content model until it just begins to pinch. Then you have a valid DTD. With a large document model, this can be tedious but it's the DTD designer equivalent of knitting, after a while the process becomes so mechanical you hardly think about it. | http://www.informit.com/articles/article.aspx?p=24992&seqNum=5 | CC-MAIN-2017-51 | refinedweb | 4,485 | 53.61 |
#include <stdio.h>
#include <stdint.h>
#include <sys/types.h>
#include <rte_compat.h>
Go to the source code of this file.
RTE Mbuf dynamic fields and flags
Many DPDK features require to store data inside the mbuf. As the room in mbuf structure is limited, it is not possible to have a field for each feature. Also, changing fields in the mbuf structure can break the API or ABI.
This module addresses this issue, by enabling the dynamic registration of fields or flags:
The placement of the field or flag can be automatic, in this case the zones that have the smallest size and alignment constraint are selected in priority. Else, a specific field offset or flag bit number can be requested through the API.
The typical use case is when a specific offload feature requires to register a dedicated offload field in the mbuf structure, and adding a static field or flag is not justified.
Example of use:
To avoid wasting space, the dynamic fields or flags must only be reserved on demand, when an application asks for the related feature.
The registration can be done at any moment, but it is not possible to unregister fields or flags for now.
A dynamic field can be reserved and used by an application only. It can for instance be a packet mark.
To avoid namespace collisions, the dynamic mbuf field or flag names have to be chosen with care. It is advised to use the same conventions than function names in dpdk:
Definition in file rte_mbuf_dyn.h.
Maximum length of the dynamic field or flag string.
Definition at line 82 of file rte_mbuf_dyn.h.
Helper macro to access to a dynamic field.
Definition at line 235 of file rte_mbuf_dyn.h.
Register space for a dynamic field in the mbuf structure.
If the field is already registered (same name and parameters), its offset is returned.
Register space for a dynamic field in the mbuf structure at offset.
If the field is already registered (same name, parameters and offset), the offset is returned.
Lookup for a registered dynamic mbuf field.
Register a dynamic flag in the mbuf structure.
If the flag is already registered (same name and parameters), its bitnum is returned.
Register a dynamic flag in the mbuf structure specifying bitnum.
If the flag is already registered (same name, parameters and bitnum), the bitnum is returned.
Lookup for a registered dynamic mbuf flag.
Dump the status of dynamic fields and flags. | https://doc.dpdk.org/api-19.11/rte__mbuf__dyn_8h.html | CC-MAIN-2022-40 | refinedweb | 410 | 66.74 |
- Advertisement
stodgeMember
Content Count856
Joined
Last visited
Community Reputation144 Neutral
About stodge
- RankAdvanced Member
very BEST audio software?
stodge replied to DarkMortar's topic in Music and Sound FXDefine "best"? I bought Fruityloops as it fits my needs; cheap, full featured, soundfont support, fairly easy to use out of the box but with many features I'll probably never use. So FL is the best for me.
Problem with object factories / macros
stodge replied to stodge's topic in General and Gameplay ProgrammingThanks for your comments. I changed my ObjectFactory singleton from: static ObjectFactory mSingleton to static ObjectFactory* mSingleton and for some reason it works now with a shared library. Weird! Thanks again
Problem with object factories / macros
stodge replied to stodge's topic in General and Gameplay Programming1) I can create the static factory instance upon program initialisation if my library is a static library, but it seg faults in ObjectFactory::AddFactory: mFactories.insert(make_pair(name, factory)); Where mFactories is just: typedef std::map<std::string, IObjectFactory*> mFactories; I can query the size of the map at this point but I just can't insert into it. 2) If I create the library as a shared library, the static factory instance isn't created on program initialisation.
Problem with object factories / macros
stodge replied to stodge's topic in General and Gameplay ProgrammingSorry if my question was confusing. In this example, if I declare a variable of Clock: Clock clock; in my code, the static Clock factory is created, as I see the debug output: "Creating factory for object Clock" If I don't declare a variable of Clock as above, the static Clock factory isn't created and can't be used. So if I'm doing something fundamentally wrong, how should I automatically create factories in this manner? THanks
Problem with object factories / macros
stodge posted a topic in General and Gameplay ProgrammingSorry the formatting is screwed up - let me see if I can fix it. Not sure if I can describe this properly, but here goes. I have the following C++ macros; #define HB_DEFINE_FACTORY(x) class x##Factory: public ObjectFactory { public: x##Factory() { std::cout << "Creating factory for object " << #x << std::endl; ObjectFactory::GetSingleton().AddFactory(#x, this); }; virtual ~x##Factory() { std::cout << "Deleting factory for object " << #x << std::endl; }; virtual Object* CreateObject(const std::string& name) { return new x(); }; private: static x##Factory mFactory; }; #define HB_IMPLEMENT_FACTORY(x) x##Factory x##Factory::mFactory; There are used like this: class Clock: public SessionObject, public ICallable { public: HB_DECLARE_CLASS(Clock, SessionObject); Clock(); virtual ~Clock(); virtual void SerializeTo(DataStream& stream); virtual void SerializeFrom(DataStream& stream); virtual void SerializeUpdateTo(DataStream& stream); virtual void SerializeUpdateFrom(DataStream& stream); virtual void UpdateTime(ParameterListType ¶ms); virtual void Dump(); virtual std::string ToString(); protected: Timestamp mTimestamp; Timer* mTimer; }; HB_DEFINE_FACTORY(Clock); #include "Clock.h" HB_IMPLEMENT_CLASS(Clock); HB_IMPLEMENT_FACTORY(Clock); Clock::Clock(): mTimestamp() { } The problem is that I want to create a static instance of each factory, which is created at runtime without intervention by the user. This way the developer can create an instance of an object using the object factory class just by passing in the object type. The problem is that the static factory instance for say the Clock class is only created if I declare a Class object in my program. Why is that? Is there any way to guarantee the static instance is created? Sorry for the bum description of my problem but I'm not sure how else to explain it! Thanks [Edited by - stodge on December 17, 2006 7:52:29 PM]
The ultimate game engine
stodge replied to horizon981's topic in General and Gameplay ProgrammingOGRE3D - Proven rendering engine Irrlicht - unproven rendering engine with some features of a game engine (incomplete GUI, input) Torque - proven game engine Until people understand the difference between engines, this type of argument is pointless. If you want to compare apples and oranges, go to your local supermarket. ;)
Quick question on select() when sending UDP packets
stodge replied to stodge's topic in Networking and MultiplayerThis is just a simple test so all packets are less than 20bytes. Some good things to think about- thanks.
Quick question on select() when sending UDP packets
stodge posted a topic in Networking and MultiplayerA quick UDP question when using select(). I have select() working for when bytes are received on a socket. I can happily pass them off to a Connection class, which processes the packets, parses the command and data etc.. However, when sending to a server on localhost, I'm losing lots of packets. Say I send 1000 unreliable packets from the client, the server only receives say 25%. The PC is an AMD 64 of some variety (can't remember) and there's nothing else of any interest going on at the time. CPU usage is low. When using UDP, do you: 1) use select() to tell you when a socket is ready for writing and then only send UDP packets when this occurs? 2) write packets as fast as you can to the socket, and let your reliable layer sort things out? 3) do something else I haven't considered? Thanks
CodeBlocks compiler error - namespace clash perhaps?
stodge replied to stodge's topic in General and Gameplay ProgrammingWell spotted: in winbase.h there's: #define GetCurrentTime GetTickCount Doh! Thanks
CodeBlocks compiler error - namespace clash perhaps?
stodge posted a topic in General and Gameplay ProgrammingI'm getting the following error when compiling my library in CodeBlocks on Windows: <source>error class Habitat::Utils::Timestamp has no member GetTickCount.</source> The compiler is complaining about this code: <source>mTimestamp.GetCurrentTime();</source> So I don't know why it's giving me this error - of course it doesn't have a member GetTickCount as I didn't put one in! I presume it's confusing my Timestamp class with another one it's compiling in, but I don't know why. I've put it into the Habitat::Utils namespace to avoid conflicts, but it still won't compile. Any thoughts? Thanks
is there a way to get rid of clas::getSingleton().functionCall
stodge replied to JinJo's topic in General and Gameplay ProgrammingI'm not afraid to admit I use Singletons as facades. I guess the Singleton vs. no Singleton war will rage just as long as the GNOME vs KDE war.
Catmother Enigne Help
stodge replied to violationgames's topic in General and Gameplay ProgrammingI haven't used catmother engine, but there's something wrong with your build. Search Google or Gamedev for that particular error text. You should get many hits.
Relationship between ISceneNode and IRenderable
stodge replied to pauls_1979's topic in General and Gameplay Programmingclass ISceneNode { virtual std::vector<IRenderable*> getRenderables(); }; How does this sound? I don't think a scene node should care about a Render method, becasue to a scene node that doesn't make sense. At least to me. Disclaimer - I'm no scene expert!
TCP / UDP advice
stodge replied to SymLinked's topic in Networking and Multiplayer"TCP looks sexy in that I don't have to spend time implementing guaranteed transfer," If you're using the Torque Network LIbrary (TNL?) then you don't need to implement guaranteed transfer, as it's already supported. You could use TNL's UDP support for any type of connection, as it provides reliable UDP. With TNL you could possibly use one socket for talking to many different other machines? I'm not sure from memory if this is supported. I'm not sure what your problem is?
mmo question
stodge replied to abeylin's topic in Networking and MultiplayerIt's not about telling people to not create MMPOGs. It's about telling people who are obviously not capable of searching Google or the forums, learning to program or reading documentation that they shouldn't try to create an MMPOG. My 0.0000002c.
- Advertisement | https://www.gamedev.net/profile/18413-stodge/ | CC-MAIN-2018-43 | refinedweb | 1,311 | 52.39 |
20 April 2012 12:25 [Source: ICIS news]
SHANGHAI (ICIS)--?xml:namespace>
The 100,000 tonne/year unit at Assaluyeh achieved commercial production last week, said Ramezan Oladi, director of planning and development with Iran's National Petrochemical Co (NPC), at the sidelines of Chinaplas, a four-day exhibition in
"Part of the butene-1 output will be captively used in Jam's 300,000 tonne/year linear low density polyethylene (LLDPE) at the same site, while the rest is being exported," he said. The unit achieved full output this week, he added.
Butene-1 is a key ingredient for the production of LLDPE and has been in short supply in recent years.
With the start-up of the new unit, Jam has become the largest butene-1 producer in Iran, overtaking Amir Kabir Petrochemical and Shazand Petrochemical (the renamed Arak Petrochemical) which produce 20,000 tonnes/year and 7,000 tonnes/year respectively.
Jam's Assaluyeh unit uses technologies from France-based Axens.
Officials from Jam were not. | http://www.icis.com/Articles/2012/04/20/9552444/chinaplas-12-irans-jam-begins-exports-from-butene-1-unit.html | CC-MAIN-2013-20 | refinedweb | 169 | 58.32 |
Hi - With a big thanks to David Carr for the gr.complex_mag () module I have an AM radio working. There was one last issue I had to get around with something probably not optimal - the audio was offset and very large. The fm demodulator distributed with gnuradio produces a working signal with amplitude about .2 or so, centered about zero. I simply put gr.complex_mag() in place of gr.quadrature_demod_cf() and got audio with magnitude about 300 centered about 350. My solution was to simply shift and scale and now get good audio, just like fm. Q: Is there a better way ? Is there a signal level standard we need to comply with? hack follows: def build_pipeline (fg, quad_rate, audio_decimation): audio_rate = quad_rate / audio_decimation # input: complex; output: float am_demod = gr.complex_mag () #) normal = gr.add_const_ff (-350) scale = gr.multiply_const_ff (.003) fg.connect (am_demod, audio_filter) fg.connect (audio_filter, normal) fg.connect (normal, scale) return ((am_demod, 0), (scale, 0)) Anyway, that was fun ;) Uses about 18% user and 8% system cpu on an AMD 1.3Ghz. --Chuck | https://lists.gnu.org/archive/html/discuss-gnuradio/2004-08/msg00048.html | CC-MAIN-2019-30 | refinedweb | 173 | 61.73 |
Recently i switched to OpenDns with Familyshield.I have applied the dns setting on my belkin wireless router.
If i dont use internet for 3-4 hours,my ISP asks me to re-login again,then i cant access the login page or internet.
Then i have to go to dns setting on my router and change the setting to 'Automatic from ISP' and then i can see the login page.
It only happens when my ISP asks me to login again otherwise it fine.
Your ISP is apparently one of those that uses internal domain names for things that are intended to be visible only to customers attached to its network. It directs you to the WWW page at, say, in order to log in, where the domain name internal.example.net. from that URL doesn't exist outwith the view of the DNS namespace that your ISP provides through its proxy DNS servers to customers. (example.net., here, is of course a domain name owned by your ISP.)
internal.example.net.
example.net.
Obviously, OpenDNS — which, ironically, also provides customized versions of the DNS namespace — doesn't know anything about such arragements. Nor can you tell it.
So you need what is known as split-horizon DNS service. You need to ensure that all DNS lookups for example.net. and its subdomains — i.e. every domain name that the ISP uses for this internal, customers-only, stuff — are directed to your ISP's proxy DNS servers, and all DNS lookups for all other domain names are directed to the OpenDNS proxy DNS servers.
There's almost certainly no way that you can do this with a domestic-grade router. Such routers lack the capability. You won't be able to do this by adjusting the DNS clients on the individual workstations on your LAN, either. It's not a capability built into any but a very few DNS clients. (Pretty much only MacOS has this mechanism.)
What you have to do instead is run a forwarding proxy DNS server, somewhere on your LAN. You configure that server to perform conditional forwarding, splitting example.net. and its subdomains off from the rest of the DNS namespace; and you configure all of your workstations to use that DNS server for proxy DNS service. For best results, so that you don't need to manually look up the IP addresses of your your ISP's proxy DNS server, you enable that Automatic from ISP setting again, and configure the conditional forwarding for example.net. to use your router as an intermediate forwarding proxy.
Automatic from ISP
If the necessity of having to have a single machine powered on if any other machines on the LAN are powered on gives you pause, you could even run individual forwarding proxy DNS servers on each workstation, rather than one central one serving your entire LAN. Running a DNS server on each workstation isn't exactly a novel thing in the world of Internet. People were doing it as a matter of course some thirty years ago. Most current operating systems actually come with DNS server softwares provided as standard. If you have BSDs, Linuxen, Macintoshes, Solarises, and so forth, setting up a forwarding proxy DNS server is a simple matter of installing djbdns, PowerDNS, BIND, or suchlike and configuring it with the appropriate conditional forwarding rules.
The non-Server editions of Microsoft Windows are the glaring exceptions. Even Windows Server has a DNS server as standard. So it's only really if you have an all-Windows-workstations network, without any Windows Server machines or non-Windows machines, that you will suffer from not having the software capability right there in the box.
There are other ways to address this, of course: scripts that do complex dances to reconfigure things temporarily and log-in, bodges using hosts files. But they all suffer from the same problem that you have now: lack of automation. With forwarding proxy DNS servers and split-horizon DNS service, your ISP can move its DNS servers and its internal HTTP servers around to different IP addresses without your needing to touch a thing. The forwarding proxy on the router obtains the new IP addresses for DNS service via DHCP automatically, and any new HTTP server IP addresses are simply looked up in the usual way. With hosts file bodges, in contrast, every time that the ISP changes these things you'll have to re-edit your hosts files to catch up. And your ISP, thinking that you're using DHCP for this stuff, almost certainly considers itself under no obligation to inform you in some other way that is has changed internal equipment around.
hosts
By posting your answer, you agree to the privacy policy and terms of service.
asked
4 years ago
viewed
699 times
active
4 months ago | http://superuser.com/questions/385486/open-dns-on-my-router-giving-problems/385721 | CC-MAIN-2016-26 | refinedweb | 810 | 60.75 |
index
including index in java regular expression
including index in java regular expression Hi,
I am using java regular expression to merge using underscore consecutive capatalized words e.g....);
}
}
Looking forward for your help...
Thanks
YJS
java methods - Java Beginners
java methods Hello,
what is difference between a.length() and a.length;
kindly can anybody explain.
thanks for your time Hi... to find the size of array.
Thanks
abstract methods in java
abstract methods in java what is abstract methods in java.give better examples for understanding
Hi Friend,
Please visit the following link:
Thanks
factory methods in java?
factory methods in java? what are factory methods in java?
Hi Friend,
Factory methods are static methods that return an instance...(),ClassLoader.findClass(String name) etc.
Methods in Java - Java Beginners
Methods in Java Hello.
Currently i am involved in a group project... mean, could someone help me please?
Thanks!
Hi Friend,
Try... result is: "+count);
}
}
Thanks Interview Questions
java methods what are native methods? how and when they are use... C code into your Java application.
The steps to creating native methods... information.
Thanks
String Methods - Java Beginners
;
System.out.println("Average number of words:" +average);
}
} methods
Java methods What are the differences between == and .equals.../java/master-java/index.shtml
Thanks
Java arraylist index() Function
Java arrayList has index for each added element. This index starts from 0.
arrayList values can be retrieved by the get(index) method.
Example of Java Arraylist Index() Function
import
notify and notifyAll methods - Java Beginners
notify and notifyAll methods I need a program demonstrating the methods of notify and notifyAll. please can you suggest me a way out. ...) {
}
}
}
Thanks
creating class and methods - Java Beginners
of the Computers.
This class contains following methods,
- Constructor method...,make,price,quantity);
c.valid();
}
}
Thanks
using class and methods - Java Beginners
.
Thanks
Java overloaded methods
Java overloaded methods Can overloaded methods can also be overridden
methods type - Java Beginners
methods type in Java Give me an example programs of methods types in Java
java object class methods
java object class methods What are the methods in Object class? There are lots of methods in object class.
the list of some methods are as-
clone
equals
wait
finalize
getClass
hashCode
notify
notifyAll
How to get given index value from FloatBuffer in java.
How to get given index value from FloatBuffer in java.
In this tutorial, we will discuss how to get given index value from
FloatBuffer in java....
It provides the following methods:
Return type
Method
Javascript array methods
";
lang[1] = "Java";
lang[2] = ".NET";
lang[3] = "PERL";
document.write...;
Java Programming: Chapter 7 Index
| Main Index
Write a long value at given index into long buffer.
Write a long value at given index into long buffer.
In this tutorial, we will see how to write a long value at given index
into long buffer.... It
provides the following methods:
Return type
Method
Description
Thanks
Thanks This is my code.Also I need code for adding the information on the grid and the details must be inserted in the database.
Thanks in advance
list of predeined methods and use java
list of predeined methods and use java I need list of predefined methods in java like reverse,compare,tostring, etc
Need help in completing a complex program; Thanks a lot for your help
Need help in completing a complex program; Thanks a lot for your help ... to call the batch file into the Java code. Is there any other way to call it?
Thanks a ton for your help
Passing Arrays In Jsp Methods
Passing Arrays In Jsp Methods
... sized data
elements generally of the similar data type. In an array the index... arrays are most
commonly used arrays in java. JSP is a technology which enables us
Java :Thread Methods
Java :Thread Methods
This section explains methods of Thread class.
Thread Methods :
Thread class provides many method to handle different thread... and allow other
threads the temporal executing thread object.
Some other methods
Object Class Methods in Java
We are going to discus about Object Class Methods in Java. The
java.lang.Object class is the root of the class hierarchy tree in
JDE(java development...(), wait(), etc
Java object class methods are:-
finalize()
clone()
equals
An application using swings and vector methods
An application using swings and vector methods Hi,
I want an application in Java swings which uses good selection of Vectors methods
z-index always on top
z-index always on top Hi,
How to make my div always on top using the z-index property?
Thanks
Hi,
You can use the following code:
.mydiv
{
z-index:9999;
}
Thanks
RIAs Methods And Techniques
RIAs Methods And Techniques
JavaScript
It is the first major client side...-side solution. Ajax,
the advance tool of Java Script becomes more prominent... of Java and the JVM are available in the
RIA developing world. These are: Curl
$_GET[] index is not defined
$_GET[] index is not defined Hi,
What could be solution of the error:
$_GET['someparameter'] index is not defined
Thanks
Hi,
You... here
}else{
//your code here
}
Thanks
Java
checking index in prepared statement
checking index in prepared statement If we write as follows:
String... = con.prepareStatement(query);
then after query has been prepared, can we check the index...://
Thanks
Creating methods in servlets - JSP-Servlet
Creating methods in servlets I created servlet and jsp file.I Instantiated 3 objects and Defined 2 methods in my servlet, first method should write... check if my code is OK becuase it is still giving me an error?
Thanks
alter table create index mysql
alter table create index mysql Hi,
What is the query for altering table and adding index on a table field?
Thanks
Old and New Vector Methods
Java: Old and New Vector Methods
When the new Collections API was introduced in Java 2 to
provide uniform data structure classes, the Vector class
was updated to implement the List interface.
Use the List methods because
alter table create index mysql
alter table create index mysql Hi,
What is the query for altering table and adding index on a table field?
Thanks
Hi,
Query is:
ALTER TABLE account ADD INDEX (accounttype);
Thanks
Abstract class,Abstract methods and classes
Abstract methods and classes
While going through the java language programming you have learned so many
times the word abstract. In java programming language the word abstract
Static/Class methods
Java NotesStatic/Class methods
There are two types of methods.
Instance methods are associated with an object and
use the instance variables of that object.
This is the default.
Static methods use no instance | http://roseindia.net/tutorialhelp/comment/88603 | CC-MAIN-2014-15 | refinedweb | 1,103 | 56.96 |
myConsole
A simple JavaScript editor for you phone, in JavaScipt.
A live version of it is hosted here.
We're a place where coders share, stay up-to-date and grow their careers.
For further actions, you may consider blocking this person and/or reporting abuse
That's fine, you can still create an account and turn on features like 🌚 dark mode.
Tien Nguyen -
Suraj Vishwakarma -
Jainil Prajapati -
Alexandro Castro -
Once suspended, victorqribeiro will not be able to comment or publish posts until their suspension is removed.
Once unsuspended, victorqribeiro will be able to comment and publish posts again.
Once unpublished, all posts by victorqribeiro will become hidden and only accessible to themselves.
If victorqribeiro is not suspended, they can still re-publish their posts from their dashboard.
Once unpublished, this post will become invisible to the public and only accessible to Victor Ribeiro.
They can still re-publish the post if they are not suspended.
Thanks for keeping DEV Community 👩💻👨💻 safe. Here is what you can do to flag victorqribeiro:
Unflagging victorqribeiro will restore default visibility to their posts.
Top comments (11)
How do I
npm install? :D
You don't. Just clone it if you want to host your own application, or just go to the live version at victorqribeiro.com/myConsole and add it to your honescreen.
Let me rephrase. How do I import libraries like
import S from 'sanctuary';to play with in sandbox?
You can add an external script, from a CDN or something.
let script = document.createElement('script');
script.src = 'site.com/script.js';
document.body.appendChild(script);
Awesome!
it says the site can't be reached
My hosting service was having trouble with some hackers last week so they put up a region block. Where are you in the world?
i am from india
Unfortunately you are in one of the blocked regions. I will contact my hosting service first thing in the morning and see if there's something they can do about it. I had similar issues on my last project, at least two other people had open issue on my git hub.
Neat! though the link you posted isn't to the project.
Fixed! | https://dev.to/victorqribeiro/myconsole---a-javascript-editor-for-your-phone-in-javascript-2o3c | CC-MAIN-2022-40 | refinedweb | 364 | 75.1 |
Case Western Reserve University bearing failure data set official website:
Consolidated version of official data set (no need to go one by one):
Link: Extraction code: fr5r
Processed tenth class dataset (including source files and codes):
Link: Extraction code: 7tna
Description of very large dataset file:
1) Ten data files such as 100, 108 ~ are extracted from the official data set
2) Then it is processed into c10signals.mat file by matlab (classes 10 code. M)
3) Then use python (datasetSample.py) to process it into ten classes of data c10classes.mat
c10classes.mat file contains training sets out of order (900 × 2048), test set (300) × 2048)
Training set (900) × 2048): 900 samples, each containing 2048 data points.
4) The contents of the tenth class dataset are as follows:
Interpretation of Casey West reservoir data set:
Data format: the bearing fault data file is in Matlab format
Each file contains fan and drive end vibration data and motor speed. The file variables in the file are named as follows:
DE - drive end accelerometer data Drive end vibration data
FE - fan end accelerometer data Fan end vibration data
BA - base accelerometer data Base vibration data
time - time series data time series data
RPM- rpm during testing Unit rpm Divided by 60 is frequency conversion
The data acquisition frequencies are: data set A: bearing fault data at the driving end at the sampling frequency of 12Khz
Data set B: drive end bearing fault data at 48Khz sampling frequency
Dataset C: fan end bearing fault data at 12Khz sampling frequency
Data set D: and normal bearing data (the sampling frequency should be 48k)
Interpretation of data set B: the fault diameter of the drive end bearing at the sampling frequency of 48Khz is divided into three categories: 0.007 inch, 0.014 inch and 0.028 inch. The load under each fault is divided into 0 HP, 1 HP, 2 HP and 3 HP. Under each horsepower of each fault, there are bearing inner ring fault, bearing rolling element fault and bearing outer ring fault (because the position of bearing outer ring is generally fixed, the outer ring fault is divided into three categories: 3 o'clock, 6 o'clock and 12 o'clock).
The fault data of drive end bearing at 48Khz sampling frequency is shown in the figure:
Note: after each file is opened, such as IR007_1 file (under 48Khz drive end inner ring bearing fault data) will also be included after opening BA: base; DE: drive end, FE: fan end, RPM: speed four data, personal understanding should be in In case of drive end bearing failure It contains the data collected by three different position sensors BA, DE and FE.
matlab code:
clc; clear all; close all; drive_100 = load('100.mat'); drive_108 = load('108.mat'); drive_121 = load('121.mat'); drive_133 = load('133.mat'); drive_172 = load('172.mat'); drive_188 = load('188.mat'); drive_200 = load('200.mat'); drive_212 = load('212.mat'); drive_225 = load('225.mat'); drive_237 = load('237.mat'); % de_100 = drive_100.X100_DE_time(1:121048); de_100 = drive_100.X100_DE_time(1:4:484192); de_108 = drive_108.X108_DE_time(1:121048); de_121 = drive_121.X121_DE_time(1:121048); de_133 = drive_133.X133_DE_time(1:121048); de_172 = drive_172.X172_DE_time(1:121048); de_188 = drive_188.X188_DE_time(1:121048); de_200 = drive_200.X200_DE_time(1:121048); de_212 = drive_212.X212_DE_time(1:121048); de_225 = drive_225.X225_DE_time(1:121048); de_237 = drive_237.X237_DE_time(1:121048); de_signals = [de_100,de_108,de_121,de_133,de_172,de_188,de_200,de_212,de_225,de_237]; signals = de_signals.'; save('c10signals.mat','signals'); whos('-file','c10signals.mat')
python code from: Victor`Wu
Use python to sort out the bearing data of Case Western Reserve University (CWRU) and make the data set_ Victor`Wu's blog - CSDN blog
import numpy as np import scipy.io as scio from random import shuffle def normalize(data): '''(0,1)normalization :param data : the object which is a 1*2048 vector to be normalized ''' s= (data-min(data)) / (max(data)-min(data)) return s def cut_samples(org_signals): ''' get original signals to 10*120*2048 samples, meanwhile normalize these samples :param org_signals :a 10* 121048 matrix of ten original signals ''' results=np.zeros(shape=(10,120,2048)) temporary_s=np.zeros(shape=(120,2048)) for i in range(10): s=org_signals[i] for x in range(120): temporary_s[x]=s[1000*x:2048+1000*x] temporary_s[x]=normalize(temporary_s[x]) #Normalize each sample along the way results[i]=temporary_s return results def make_datasets(org_samples): '''Input 10*120*2048 Output the labeled training set(Accounting for 75%)And test sets(Account for 25%)''' train_x=np.zeros(shape=(10,90,2048)) train_y=np.zeros(shape=(10,90,10)) test_x=np.zeros(shape=(10,30,2048)) test_y=np.zeros(shape=(10,30,10)) for i in range(10): s=org_samples[i] # Disorder order index_s = [a for a in range(len(s))] shuffle(index_s) s=s[index_s] # Each type is divided into training set and test set train_x[i]=s[:90] test_x[i]=s[90:120] # Fill in the label label = np.zeros(shape=(10,)) label[i] = 1 train_y[i, :] = label test_y[i, :] = label #Ten types of training sets and test sets are merged and disrupted respectively x1 = train_x[0] y1 = train_y[0] x2 = test_x[0] y2 = test_y[0] for i in range(9): x1 = np.row_stack((x1, train_x[i + 1])) x2 = np.row_stack((x2, test_x[i + 1])) y1 = np.row_stack((y1, train_y[i + 1])) y2 = np.row_stack((y2, test_y[i + 1])) index_x1= [i for i in range(len(x1))] index_x2= [i for i in range(len(x2))] shuffle(index_x1) shuffle(index_x2) x1=x1[index_x1] y1=y1[index_x1] x2=x2[index_x2] y2=y2[index_x2] return x1, y1, x2, y2 #Respectively represent: training set sample, training set label, test set sample and test set label def get_timesteps(samples): ''' get timesteps of train_x and test_X to 
10*120*31*128 :param samples : a matrix need cut to 31*128 ''' s1 = np.zeros(shape=(31, 128)) s2 = np.zeros(shape=(len(samples), 31, 128)) for i in range(len(samples)): sample = samples[i] for a in range(31): s1[a]= sample[64*a:128+64*a] s2[i]=s1 return s2 # Read the original data and save it after processing dataFile= 'G://Study of machine learning / / deep learning / / LSTM / / datasets / / ten original signals. mat ' data=scio.loadmat(dataFile) org_signals=data['signals'] org_samples=cut_samples(org_signals) train_x, train_y, test_x, test_y=make_datasets(org_samples) train_x= get_timesteps(train_x) test_x= get_timesteps(test_x) saveFile = 'G://study of machine learing//deep learning//LSTM//datasets//datasets.mat' scio.savemat(saveFile, {'train_x':train_x, 'train_y':train_y, 'test_x':test_x, 'test_y':test_y}) | https://www.fatalerrors.org/a/19t01TE.html | CC-MAIN-2022-21 | refinedweb | 1,072 | 53.81 |
Hey guys, I'm having a hard time trying to figure this out, but I feel I'm close.
I want a data base linked to a repeater and it should sort by show my "Category" field when I click a button with that category's name.
My example has four categories. Eat & Drink // Shop // Fun // Services.
Right now I have four repeaters styled the same way and show/hide them based on the category button clicked. I really don't want four repeaters, so can you have the button show only the rows of it's category from the database?
Here's my set up.
downtownbentonville.org/explore
I'd love an easier way.
Thanks!
Is the repeater connected to a dataset? If so you can use the wix-dataset API to filter and sort the dataset. (Note: You might experience a know bug. When the repeater is reloaded after the filter or sort the background of the items may turn to white. This is being worked on and should be fixed soon.)
Hi Sam,
1. I created a collection, import all my products ( women's apparel) to the collection via CSV,
2. Added a gallery to a new page
3. Connected the dataset to my collection
4. Connected the Gallery to dataset
5. I managed to create categories as well (for example, dresses, blouses)
Is there any way that I can provide a drop down "Sorting" option above the gallery. (Newest, Price (low to high), Price (high to low), Name A-Z )
Please let me know if this is possible.
Many thanks.
Hey Sam is there a way you could send me a simple example? I’m having a hard time creating the code. Once I see it I can read it and replace the part I need too. Seriously that’d be amazing.
Let's say you have a button for sorting. You can write something like this:
Obviously, you will have to change the IDs and field keys in the example with the ones specific to your site.
Awww yeah man thats what I need. Thanks, I'll give it a shot.
Hi sam,
I have been trying to apply this code on my site for hours without succes.
I have a repeater with text linked all to a collection and i am putting my correct dataset name, the correct field key, but when i click the button nothing happen...
Even to have a simple sort i had to put : $w.onReady( function() { fort the sort to work, wich i dont understand what that code is, i just picked it somewhere when i was searching to make my code work.
I tried adding the 'on click' line under the on ready one but now nothing work.
You guessed it, i dont know how to code. lol
Is this can work on a page not dynamic but with dynamic connection that have a dataset?
At first i tought that i needed to put my page dynamic but it changed nothing.
I dont understand what is the difference between a dynamic page and a normal page with a dataset?!
Hey Guillaume,
Could you show us your full page code so we can see in what context you're adding the code.
In the meantime, a normal page with a dataset and a dynamic page are indeed very similar.
The major difference is what items the dataset contains. On a dynamic page, the items in the dataset are determined by the page's URL in addition to any filtering applied to the dataset itself. On a regular page the items are determined only by the dataset settings.
Also, a dynamic page dataset cannot be set to write-only. If you think about what we just said above about the URL, I think you will understand why.
Hi, thanks a lot for the reply, in the meantime i got it to work but not with the same code ive seen here or on the wix dataset api, i dont understand why.
I made an event in the parameter of the buttons so it gave me the code to start and ive add the rest of your code with it, its working now, im so happy i can't even!
Since wix have the DB and the coding...wow, my website was on hold for like 4 years because the options was to limited for what i wanted to acheive but now i see the light and it's looking good for me to finish it once and for all!
Anyway, sorry for the long story! lol
Another little question, is there any way to have this dynamic page im working on as part of my menu?
I redid that page on a normal page with datasets, for that reason, i needed it to show in my menu, thus why i was asking about the differences.
As of the code, it's what it gave me at the end:
// For full API documentation, including code examples, visit
$w.onReady(function () {
//TODO: write your page related code here...
});
export function line3_viewportEnter(event, $w) {
$w("#box5").hide();
}
export function line3_viewportLeave(event, $w) {
$w("#box5").show();
}
import wixData from 'wix-data';
// ...
export function button7_click(event, $w) {
$w("#IndexDataset").setSort(
wixData.sort()
.ascending("construction")
);
}
export function button26_click(event, $w) {
$w("#IndexDataset").setSort(
wixData.sort()
.ascending("nom")
);
}
export function button28_click(event, $w) {
$w("#IndexDataset").setSort(
wixData.sort()
.ascending("lieux")
);
}
export function button63_click(event, $w) {
$w("#IndexDataset").setSort(
wixData.sort()
.ascending("nom")
);
}
You can add a dynamic page to your menu by adding a link as described here:.
Ha! cool
Just did it, but there is a little glitch, the text of the button of the new menu button dont stay in the color state as "clicked".
Like, my button are gray and becomre green when they are clicked and you are on the same page, but this button stay gray.
I haven't been able to get it to work yet, but I'll try that code you posted. Thanks guys!
Up for my last question :)
Button stay gray!
I'm having some trouble but it's almost working.
This is my code. I need it to first sort by the colum name which it "Category" and then by the all rows that are called "Eat & Drink" in that colum. My dataset has no filters or sorting. It's called "exploreDataset"
Am I getting closer and can someone help? Thanks! | https://www.wix.com/corvid/forum/community-discussion/sorting-database-on-button-click | CC-MAIN-2020-10 | refinedweb | 1,074 | 82.14 |
#include <FXObjectList.h>
#include <FXObjectList.h>
Inheritance diagram for FX::FXObjectList:
Default constructor.
Copy constructor.
Construct and init with single object.
Construct and init with list of objects.
[virtual]
Destructor.
Assignment operator.
[inline]
Return number of objects.
Set number of objects.
Indexing operator.
Reimplemented in FX::FXObjectListOf< TYPE >.
Access to list.
Access to content array.
Assign object p to list.
Assign n objects to list.
Assign objects to list.
Insert object at certain position.
Insert n objects at specified position.
Insert objects at specified position.
Prepend object.
Prepend n objects.
Prepend objects.
Append object.
Append n objects.
Append objects.
Replace object at position by given object.
Replaces the m objects at pos with n objects.
Replace the m objects at pos with objects.
1
Remove object at pos.
Remove object.
0
Find object in list, searching forward; return position or -1.
2147483647
Find object in list, searching backward; return position or -1.
Remove all objects.
Save to a stream.
Load from a stream. | http://fox-toolkit.org/ref14/classFX_1_1FXObjectList.html | CC-MAIN-2021-17 | refinedweb | 163 | 56.82 |
SERVLETS
SERVLETS I have two Servlet Containers, I want to send a request from one Servlet from one container to one Servlet in the other container, How can I do
servlets
servlets hi i am doing one servlet program in which i strucked at one point.
my requirement is after entering the student id it retieves all... the student details i have to forward that to another jsp page and there i have
creating index for xml files - XML
creating index for xml files I would like to create an index file... 30-50 records. I would like to retrieve that xml files from the directory one... cases, more than one file may have same name. So, my index file would be like
What you Really Need to know about Fashion
What you Really Need to know about Fashion
You might think... know about fashion, even if you are not really
interested in following... clothes.
First of all, you should understand that no one can explain what
Servlets and
Servlets and Sir...! I want to insert or delete records form oracle based on the value of confirm box can you please give me the idea.... thanks
the servlets
what is diff between generic servlets and httpservlets what is diff between generic servlets and httpservlets
Difference between...() to be overridden. A subclass of HttpServlet must override at least one method
servlets
servlets hi i am using servlets i have a problem in doing an application.
in my application i have html form, in which i have to insert on date value, this date value is retrieved as a request parameter in my servlet
) and creates really ugly URLs.
doPost allows you to have extremely dense forms
servlets
row representing one logical group of data with a number of columns.The result set would contain this table of data and each row can be accessed one by one. we
servlets
why is http protocol called as a stateless protocol why is http protocol called as a stateless protocol
A protocol is stateless if it can remember difference between one client request and the other. HTTP
servlets
and if the menu has to change, only one file needs editing.
For more information when i am compiling the following servlet program it compiles the successfully.but when i try to run the program it gives the following...);
int i = pstm.executeUpdate();
String sql = "select
SERVLETS
SERVLETS I follow the same procedure what you send by the links.but i got the same errors
coding is:
import java.io.*;
import java.sql....);
pstm.setString(15,Howdidyouhear);
int
Drop Index
Drop Index
Drop Index is used to remove one or more indexes from the current database.
Understand with Example
The Tutorial illustrate an example from Drop Index
Introduction to Java Servlets
Introduction to Java Servlets
Java Servlets are server side Java programs that require... associated information required for creating and executing Java Servlets
Drop Index
Drop Index
Drop Index is used to remove one or more indexes from the current database... Index. In this example, we
create a table Stu_Table. The create table is used And Jsp - JDBC
Servlets And Jsp Sir,
I need a program for when i select the one of the field name of table,It has to display the table.Please anyone help me.I need this program fully
array, index, string
array, index, string how can i make dictionary using array...please help... explain me the separate examples with one method implemented
JavaScript array index of
JavaScript array index of
In this Tutorial we want to describe that makes you to easy to understand
JavaScript array index of. We are using JavaScript... are followed by break line. The for loop execute
the script till variable i
servlets - Servlet Interview Questions
servlets more than one web pages create in one frame i,t is possible
Tutorial | Jsp Tutorials
| Java Swing
Tutorials
JSP and servlets - JSP-Servlet
between end and start are greater than one hour then i need to close that session or page
Or otherwise i need to display the result to the user...JSP and servlets Hi sir,
This is vanisree in my project i need
index - Java Beginners
servlets Hi All
I have written a servlet by extending HttpServlet.I have overrriden the init() method and doGet() method .When I try to execute... is getting executed. Can Any one tell me the reason.
ThanX
Hi Friend
servlets - Servlet Interview Questions
servlets hi
i want to pass the attributes from one servlet... this..
actually i read some values into one page.. in this value is primary key...("name","rams");
rd.setAttribute("no","34");
now i want to forward this page
Mysql Date Index
Mysql Date Index
Mysql Date Index is used to create a index on specified table. Indexes in
database are similar to books in library. It is created on one or more
importing excel file and drawing multiple graphs from one excel file
and then plotting grph:) it really helped me:):) If you can please help me with one more thing... index in the line
HSSFCell cell1 = row.getCell(a);
i want to pass column ID..how...importing excel file and drawing multiple graphs from one excel file
java servlets
java servlets please help...
how to connect java servlets with mysql
i am using apache tomcat 5.5
Java Servlets - Java Interview Questions
where can i download NetBean 5.5.1? Pojo class is normal java class.but it will have a one great feature.that is the class [pojo class] can't extends...://
3)Visit the following link:
http
java servlets - Servlet Interview Questions
java servlets is it possible sending requestdispatcher from one page to onther page means here in the first servlet i am using doPost() method in this method i am using the requestdispatcher interface and forward to another
Sessions in servlets
Sessions in servlets What is the use of sessions in servlets?
... that a person's visit to a Web site is one continuous series of interactions... with no necessary connection between one request and the next Books
Servlets That's all you hear-well, in this book, at any rate. I hope this book...
Servlets Books
...
Courses
Looking for short hands-on training classes on servlets
How to index a given paragraph in alphabetical order
:
A -> a
G -> given
I -> index
is
P -> paragraph...How to index a given paragraph in alphabetical order Write a java program to index a given paragraph. Paragraph should be obtained during runtime
Servlets Program
Servlets Program Hi, I have written the following servlet:
[code...
[/code]
The problem I am facing is when I tried to compile the code, it gave me error saying that cannot find symbol:SerialBlob(); , while I have set
JavaScript array index of
JavaScript array index of
... to easy to understand
JavaScript array index of. We are using JavaScript... by break line. The for loop execute
the script till variable i is less than
database and servlets
database and servlets how can get the questions from a database and use it as questions for a form.
and present one question per page.
its for a tomcat server
Java Servlets
Java Servlets If the binary data is posted by both doGet and doPost then which one is efficient?Please give me some example by using both doGet and doPost.
Thanks in Advance
The doPost method is more efficient have only one day to visit the Jaipur..
I have only one day to visit the Jaipur.. Hi, I have only a day to travel in Jaipur ..hence, bit worried about what to see first:
I wonder - Java Beginners
I wonder Write two separate Java?s class definition where the first one is a class Health Convertor which has at least four main members:
i... of calories required by a person
iv. A method to determine body mass index (BMI
Installation, Configuration and running Servlets
Installation, Configuration and running Servlets
... to install a WebServer, configure it and finally run servlets using this server...). This Server supports Java Servlets 2.5 and Java Server Pages (JSPs) 2.1 specifications
I couldn't solve it
the brand (his index in the array of
employees). The second one is the turn...I couldn't solve it *A customer who wants to apply for getting a car... is specialized in only one brand. Thus, according to the brand of the car
i have one txt field and one button.when i entere any test in testfield then only button should be enabled.
i have one txt field and one button.when i entere any test in testfield then only button should be enabled. i have one txt field and one button.when i entere any test in testfield then only button should be enabled. i need
form based file upload using servlets
form based file upload using servlets Hai all,
I wrote a program... into database.
for that i created a html page having browse option and text box
i used commons-fileupload and commons-io APIs.
while i enterd username and upload
Hibernate One-to-many Relationships
;
<index
column="idx"
/>
<one-to-many... Hibernate One-to-many Relationships
Hibernate One-to-many Relationships file.it was stored in perticular directory
now i have to read the stored(.csv
Write a byte into byte buffer at given index.
Write a byte into byte buffer at given index.
In this tutorial, we...; index.
ByteBuffer API:
The java.nio.ByteBuffer class extends... ByteBuffer
putChar(int index, byte b)
The putChar(..) method write
change password servlets - JSP-Interview Questions
change password servlets hi all, i need the codes to create a change password servlet. Hi,
I dont have the time to write the code. But i..., one for old password and another for new password and the last one
servlets - JSP-Servlet
servlets hi,
can anybody help me as what exactly to be done to for compilation,execution of servlets. i also want to know the software required in this execution
Servlets errors in same page.
Servlets errors in same page. How do I display errors list in the same page where a form field exists using servlets...........i.e. without using JSP? Please explain with a simple username password program
servlets - JSP-Servlet
servlets thanks deepak for ur help.. but still i`m confused.. u had... the files have to be stored.?????????? Hi friend,
i am sending servlets link . you can learn more information about servlets structure
What is Index?
What is Index? What is Index
copying data from one table to another
copying data from one table to another i need to copy data from one... should i add to my code to make my work done...
thanq so much.. you r really... and so on...
i need these querys to use in my JSP code
MYSQL and SERVLETS - JDBC
MYSQL and SERVLETS I did addition ,deletion of data in mysql using servlets .I do not know that how to combine these two programs into a single... .How I can do using servlets
Hi friend,
For developing a simple
servlets - Servlet Interview Questions
servlets Hi i want to create class timetable using servlets
that will be create dynamically with rowspans and colspans
i know using html..
if suppose i create this using html
after some time i want to modify | http://roseindia.net/tutorialhelp/comment/20164 | CC-MAIN-2014-41 | refinedweb | 1,888 | 73.07 |
Hello,
I started learning C about 2 days ago, and now I have come to a problem that I can't figure out why I am getting the result I am getting. I bought a book, and I have proceeded quickly through it until now.
The problem I was given was this: "Write a program that reads input until encountering the # character and then reports the number of spaces read, the number of new line characters read, and the number of all other characters read."
Here is the source code I came up with:
#include <stdio.h> #define SPACE ' ' int main(void) { char c, prev; int space, newline; long other; space = 0; newline = 0; other = 0L; printf("Enter text to be analyzed (# to terminate):\n");? | https://www.daniweb.com/programming/software-development/threads/10128/c-programming-question | CC-MAIN-2017-34 | refinedweb | 125 | 64.04 |
Last week I posted about starting a new project here at the port. I'm entering new development territory with this, specifically, I'm 1) Doing test-driven developmet with NUnit, 2) Using Custom Entities instead of DataSets and 3) Developing my buisness tier components as WSE services. I've been having fun. I'm very into the test-driven developent thing, NUnit Rocks! I'm definitely writing much more solid code. I'm going to post about that later, but I wanted to share some thoughts on using Custom Entities vs. DataSets here.
I'm adding a whole new layer to my Common tier, consisting of my new custom entities. These are hopefully going to be the building blocks of our framework someday, and consist of our basic business entities.
DataBinding: This was one thing I was slightly worried about with Custom Entities, but it's works great. I've created strongly-typed collection classes for each entity, and I can databind these no problem. Being used to the ease of binding with DataSets, this was a relief. The one thing I do miss is having the UI component to assist with the DataBinding which you get with DataSets, but that's not a big deal.
Equality: This was somewhat of an eye opener for me. As I was putting my entites into collections, I noticed that they were going in multiple times, because the hash codes for object that I consider equal were different. Also, I noticed that if I created two entity objects with exactly the same data a test for equality would fail. To fix this, I found I needed to override Equals(), ==, != and GetHashCode().
Now, there are some suggestionsfrom MS (Guidelines for Implementing Equals and the Equality Operator (==)) that give some guidelines for doing this, and Overview of the Object Class had some good pointers too, but I discovered (via my trusty NUnit) that it's a little more complicated than it seems. First, a warning: The suggestions in this article are not correct. In fact, even the comments that attempt to correct the arcticle are incorrect! And, definitely don't do this!
So after some testing, here's a few guidelines for overriding Equals(), ==, != and GetHashCode():
If you're testing for NULL using = = comparisons within your overrides for = =, make sure you use the base class' (object) operator. This may seem obvious, but you'll get stack overflows really quickly without code like this.
public static bool operator ==(USCity x, USCity y) { if ((object) x == null) return ((object) y == null); return x.Equals(y); }
When overriding GetHashCode() make sure you test your reference types, like String, for NULL before bitwise - or'ing the hash codes. For example, code like this will break if stateCode or city are NULL:
public override int GetHashCode() { return this.stateCode.GetHashCode() ^ this.city.GetHashCode(); }This may seem obvious as well, but it took me some Unit testing to realize this. So, once I got everything straight, my simple class ended up looking like this:
public class USCity{
private string city; private string stateCode;
.. Public Properties Removed for Brevity .. public static bool operator ==(USCity x, USCity y) { if ((object) x == null) return ((object) y == null); return x.Equals(y); }
public static bool operator !=(USCity x, USCity y) { return !(x == y); } public override bool Equals(object obj) { if(obj is USCity) if(((USCity)obj).stateCode == this.stateCode && ((USCity)obj).city == this.city) return true; return false; } public override int GetHashCode() { int hashCode = 0; if(this.stateCode != null) hashCode ^= this.stateCode.GetHashCode(); if(this.city != null) hashCode ^= this.city.GetHashCode(); return hashCode; }}And here's my UnitTest that I used to sort this out this equality stuff:
[Test]public void TestCityEquality(){ USCity city = null; USCity city2 = null;
Assert.AreEqual(city, city2, "Nulls Should be Equal"); city = new USCity(); city.City = "Norfolk"; city.StateCode = "VA"; Assert.IsTrue(city != city2, "Should not be Equal");
city2 = new USCity(); city2.City = "Norfolk"; city2.StateCode = "VA"; Assert.IsTrue(city == city2, "Should be Equal"); city2.City = "Portsmouth"; Assert.IsTrue(city != city2, "Should not be Equal");}
-Brendan
[Advertisement]
This is an old thread I know, but just for the people that tend to find and read this article.
The suggestion about the hashcode required to be stable for the lifetime of the object is totally wrong.
The exact rule is: if you are using a hashtable, the key you use for that table should be immutable for the lifetime of the object. This has nothing to do with a correct GetHashCode implementation (although the hashtable will call this function to determine in which bucket the object will have to be put)
The two only rules that really apply are:
- the distribution should be random and equally distributed
- if two objects are equal, they should render the same hash
If you put the object in a hashtable, create an extra property that provides you a stable hash and use that key to store in the hashtable. Don't ever couple the implementation of GetHashCode with the Hash code to put in an HashT | http://codebetter.com/blogs/brendan.tompkins/archive/2004/07/09/18700.aspx | crawl-002 | refinedweb | 842 | 64.1 |
Here is a list of commonly asked C# interview questions, with answers prepared by C# experts. These C# developer interview questions cover topics like the difference between ref and out parameters, enums, boxing and unboxing, asynchronous methods, and more. Prepare with these basic C# interview questions and take a step toward your dream career as a C# developer.
C# is pronounced as "See Sharp".
C# is a type-safe, object-oriented programming language developed by Microsoft that runs on the .NET Framework.
C# enables developers to build a variety of secure and robust applications, such as Windows client applications, client-server applications, XML web services, distributed components, and database applications.
An argument passed as ref must be initialized before it is passed to the method. Any change made to the parameter inside the method is reflected in the calling method.
An argument passed as an out parameter need not be initialized before it is passed, but the method must assign it a value before returning.
public class RefAndOut
{
    public static void Main()
    {
        int val1 = 0; // must be initialized before being passed as ref
        int val2;     // need not be initialized before being passed as out
        CallWithRef(ref val1);
        Console.WriteLine(val1);
        CallWithOut(out val2);
        Console.WriteLine(val2);
    }

    static void CallWithRef(ref int value)
    {
        value = 1;
    }

    static void CallWithOut(out int value)
    {
        value = 2; // must be assigned before the method returns
    }
}
Output
1 2
An enumeration is a list of named integer constants called enumerator list. An enumerated type is declared using the “enum” keyword.
A C# enumeration is a value data type; it contains its own named values and cannot inherit from, or be inherited by, another type.
By default, enum has internal access but can also be declared as public.
By default, the first enumerator has the value 0 and the value of each successive enumerator is increased by 1. Example: enum Days { Sat, Sun, Mon, Tue, Wed, Thu, Fri };
In the Days enum, Sat is 0, Sun is 1, Mon is 2, and so on. You can also change the default values of the enumerators.
Use of enum is to define the static constants.
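For illustration, a small sketch showing how the default values can be overridden and how an enum converts to and from its underlying integer:

```csharp
using System;

enum Days { Sat = 1, Sun, Mon = 10, Tue }  // Sun becomes 2, Tue becomes 11

class EnumDemo
{
    static void Main()
    {
        Console.WriteLine((int)Days.Sun);  // 2
        Console.WriteLine((int)Days.Tue);  // 11
        Console.WriteLine((Days)10);       // Mon
    }
}
```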
Boxing is a process of converting a value type to reference type. This is an implicit conversion. We can directly assign a value to an object and C# will manage that conversion.
Example : int i = 10; object obj = i; // Box the integer type i into object type obj.
The reverse process of Boxing is an unboxing. Unboxing is a procedure of converting reference type to value type. This has to be done explicitly. The object type is explicitly cast to the value type.
Example : int i = 10; object obj = i; // Box the integer type i into object type obj. int j = (int) obj; // Unbox the integer value stored in object type obj to integer type j.
Object class is the ultimate base class of all the classes in the .NET Framework. It is the root of the type hierarchy.
The ?? operator is called the null-coalescing operator. This operator is used to define a default value for a nullable value type or reference type variable. If the left-hand operand is not null, the operator returns it; otherwise it returns the right-hand operand.
Example :
class NullableCheck
{
    static void Main(string[] args)
    {
        string i = null;
        string j = i ?? "Default Value";
        Console.WriteLine(j);
        Console.ReadLine();
    }
}
Output :
Default Value
A ternary operator takes three arguments.
The first argument is a comparison, the second argument is the value returned if the comparison is true, and the third argument is the value returned if the comparison is false.
This operator is used to reduce the lines in the code and increase the readability.
Example:
int i = 10; string message; message = i < 0 ? "Negative" : "Positive"; // message is "Positive"
Here is a list of the key differences between the String and StringBuilder classes:
- A String object is immutable: every modification (concatenation, Replace, etc.) creates a new string object.
- A StringBuilder (in System.Text) is mutable: methods such as Append and Insert modify the same underlying buffer.
- For a few fixed concatenations String is fine; for repeated concatenation, for example in a loop, StringBuilder performs much better.
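For example, the difference shows up clearly when concatenating in a loop:

```csharp
using System;
using System.Text;

class StringVsStringBuilder
{
    static void Main()
    {
        // String: each += allocates a brand-new string object
        string s = "";
        for (int i = 0; i < 3; i++)
            s += i;                 // intermediate strings are created and discarded

        // StringBuilder: one mutable buffer, modified in place
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 3; i++)
            sb.Append(i);

        Console.WriteLine(s);             // 012
        Console.WriteLine(sb.ToString()); // 012
    }
}
```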
C# supports five types of access modifiers, as given below:
- public: access is not restricted.
- private: access is limited to the containing type.
- protected: access is limited to the containing class and classes derived from it.
- internal: access is limited to the current assembly.
- protected internal: access is limited to the current assembly or classes derived from the containing class.
In C#, code is differentiated into safe and unsafe code. Code that runs under the control of the CLR is called safe code.

Unsafe code does not run under the management of the CLR. In other words, unsafe code is not garbage collected. Thus, it is important that all unsafe code is managed separately and carefully.
Using unsafe keyword of C#, we can mark the entire method or code block or individual statement as unsafe.
Example: public unsafe void MyUnsafeMethod() {
// write unsafe code here }
Here are some of the important things to note:
- Unsafe code must be compiled with the /unsafe compiler option.
- Unsafe code can declare and use pointers and perform pointer arithmetic.
- Unsafe code is not verifiable by the CLR, so it should be kept to a minimum and reviewed carefully.
Namespaces are containers for the classes. We will use namespaces for grouping the related classes in C#. “Using” keyword can be used for using the namespace in other namespaces.
"Convert.toString" function handles NULLS, while ".ToString()" does not handle the null value and throws an exception.
Example : string s; object o = null; s = o.ToString(); //returns a null reference exception for s.
string s; object o = null; s = Convert.ToString(o); //returns an empty string for s and does not throw an exception.
When we want to define a function with ‘n’ arguments, we can use the params keyword in front of the definition of the last parameter.
These n arguments are converted by the compiler into elements of a temporary array, and this array is what the method actually receives.
public class ParamsExample
{
    static decimal TotalSum(decimal d1, params int[] values)
    {
        decimal total = d1;
        foreach (int value in values)
        {
            total += value;
        }
        return total;
    }

    static void Main()
    {
        decimal d1 = 100;
        decimal sum1 = TotalSum(d1, 1);
        Console.WriteLine(sum1);
        decimal sum2 = TotalSum(sum1, 1, 2);
        Console.WriteLine(sum2);
        Console.Read();
    }
}
/* Output
101
104
*/
A jagged array is called an "array of arrays."
A jagged array is a special type of array whose elements are arrays of different dimensions and sizes.
Jagged array’s elements must be initialized before its use.
int[][] jaggedArray = new int[3][];
jaggedArray[0] = new int[3]; //contains 3 elements jaggedArray[1] = new int[2]; //contains 2 elements jaggedArray[2] = new int[4]; //contains 4 elements
In a jagged array, it is necessary to specify the value in the first bracket [], since it specifies the size of the jagged array (the number of rows).
When we are dealing with resources in a program, we should use the using keyword. When we use the using keyword for a file operation or a database connection, the statement obtains the resource, uses it, and calls its Dispose method to clean up the specified resource when execution of the block is completed.
Example:
using (SqlConnection con = new SqlConnection(connectionString)) { // perform sql statements etc.. }
When we have to implement a singleton class pattern, we can declare a constructor private.
A class with a private constructor typically contains only static members. Because the constructor is private, it cannot be called externally. If a class has one or more private constructors and no public constructor, other classes (except nested classes) are not allowed to create an instance of it, and it cannot be inherited.
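A minimal singleton sketch built on a private constructor (illustrative):

```csharp
public sealed class Singleton
{
    private static readonly Singleton instance = new Singleton();

    // private constructor: no external instantiation, no subclassing in practice
    private Singleton() { }

    public static Singleton Instance
    {
        get { return instance; }
    }
}

// usage: Singleton s = Singleton.Instance;
// "new Singleton()" outside the class would not compile
```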
A sealed class is a special type of class that cannot be inherited. The sealed modifier is used to prevent inheritance; if you try to inherit from a sealed class, a compile-time error occurs.
Example:

sealed class Example
{
    void Display()
    {
        Console.WriteLine("Sealed class method!");
    }
}

class MyExample : Example
{
    void Show()
    {
        Console.WriteLine("Show method!");
    }
}

This produces a compile-time error: 'MyExample' cannot derive from sealed type 'Example'.
From C# 5.0, keywords async and await are introduced.
Before C# 5.0, for writing asynchronous code, We needed to write callbacks.
As shown in the example below, we use these keywords in combination: the await operator is applied to an asynchronous operation inside an async method. While the awaited task is pending, control returns to the caller; when the task completes and its response/result is available, execution resumes normally.
Example:

public async Task<IEnumerable<Price>> GetPriceList()
{
    HttpClient client = new HttpClient();
    Uri address = new Uri("");
    client.BaseAddress = address;
    HttpResponseMessage response = await client.GetAsync("api/product/5/price/");

    if (response.IsSuccessStatusCode)
    {
        var list = await response.Content.ReadAsAsync<IEnumerable<Price>>();
        return list;
    }
    else
    {
        return null;
    }
}
Encapsulation is an object-oriented design principle that reduces coupling between objects and encourages maintainable code.
Encapsulation in C# is implemented with different levels of access to object through the access specifiers—public, private, protected, internal, and protected internal.
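For example, a class can hide its state behind a private field and expose it only through a property and validated methods (the Account example is invented for illustration):

```csharp
using System;

public class Account
{
    private decimal balance;   // hidden state: no direct outside access

    public decimal Balance     // controlled read-only access
    {
        get { return balance; }
    }

    public void Deposit(decimal amount)
    {
        if (amount <= 0)
            throw new ArgumentException("amount must be positive");
        balance += amount;     // state changes only through validated operations
    }
}
```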
There are three types of casting which are supported in C#.
Implicit conversion: The compiler itself takes care of this, and it guarantees no loss of data. It covers conversion from a smaller data type to a larger data type, and conversion of derived classes to a base class. This is a safe type of conversion.
Explicit conversion: Done by using the cast operator. It includes conversion of a larger data type to a smaller data type and conversion of a base class to derived classes. In this conversion, information might be lost or the conversion might fail for some reason. This is an unsafe type of conversion.
For example Derived d = (Derived)b;
User-defined conversion: User-defined conversion is performed by using special methods that you can define to enable explicit and implicit conversions. It includes conversion of a class to a struct or basic data type, and of a struct to a class or basic data type. All conversion operators must be declared as static.
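An illustrative sketch of user-defined conversion operators (the Celsius type is invented):

```csharp
using System;

struct Celsius
{
    public double Degrees;
    public Celsius(double d) { Degrees = d; }

    // implicit: no cast needed, the conversion is always safe
    public static implicit operator double(Celsius c)
    {
        return c.Degrees;
    }

    // explicit: the caller must cast, signalling a deliberate conversion
    public static explicit operator Celsius(double d)
    {
        return new Celsius(d);
    }
}

class Program
{
    static void Main()
    {
        Celsius c = (Celsius)36.6;  // user-defined explicit conversion
        double d = c;               // user-defined implicit conversion
        Console.WriteLine(d);       // 36.6
    }
}
```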
We can index an instance of a class or struct similar to an array by using an indexer. Unlike a property, the this keyword is used to define an indexer. Indexers can be overloaded, and they can be declared with multiple parameters, each of a different type. An indexer's modifier can be private, public, protected, or internal.
Example:
class ChildClass
{
    static void Main(string[] args)
    {
        ParentClass indexes = new ParentClass();
        indexes[0] = "0";
        indexes[1] = "1";
        for (int i = 0; i < ParentClass.size; i++)
        {
            Console.WriteLine(indexes[i]);
        }
        Console.ReadKey();
    }
}

class ParentClass
{
    public const int size = 2;
    private string[] range = new string[size];

    public string this[int indexrange]
    {
        get { return range[indexrange]; }
        set { range[indexrange] = value; }
    }
}
Output :
0
1
Delegate is a reference type which holds a reference to a class method. Any method which has the same signature as a delegate can be assigned to delegate. It is similar to function pointer but it is type safe.
A delegate is declared with the delegate keyword. To create a delegate instance, we assign to it a method that has the same signature as the delegate. Invoking the delegate is then the same as invoking the method.
Example:
//1. Declaration
public delegate int MyDelegate(int a, int b); // delegate having the same signature as the methods

public class DelegateExample
{
    // methods to be assigned and called by the delegate
    public int Sum(int a, int b)
    {
        return a + b;
    }

    public int Difference(int a, int b)
    {
        return a - b;
    }
}

class Program
{
    static void Main()
    {
        DelegateExample obj = new DelegateExample();

        // 2. Instantiation: as single cast delegates
        MyDelegate sum = new MyDelegate(obj.Sum);
        MyDelegate diff = new MyDelegate(obj.Difference);

        // 3. Invocation
        Console.WriteLine("Sum of two integers = " + sum(10, 20));
        Console.WriteLine("Difference of two integers = " + diff(20, 10));
    }
}
There are two types of delegates:
- Single cast delegate: holds the reference of only one method.
- Multicast delegate: a delegate which holds the references of multiple methods. The methods referenced by a multicast delegate should have a void return type.
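A small multicast delegate sketch (illustrative):

```csharp
using System;

public delegate void Notify(string msg);

class MulticastDemo
{
    static void ToConsole(string msg) { Console.WriteLine("console: " + msg); }
    static void ToLog(string msg)     { Console.WriteLine("log: " + msg); }

    static void Main()
    {
        // combine two methods into one multicast delegate with +=
        Notify notify = ToConsole;
        notify += ToLog;

        notify("hello");   // invokes ToConsole, then ToLog, in order

        notify -= ToLog;   // methods can be removed again with -=
        notify("bye");     // only ToConsole remains
    }
}
```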
The differences between events and delegates are given below:
The event comes with its pair of accessors, add and remove, which are assigned and unassigned with the += and -= operators.
The event has a restrictive signature and must always be of the form Event(object sender, EventArgs args)
Property:
- Identified by its name
- Can be static as well as an instance member
- Accessed through its name
- Supports shortened (auto-implemented) syntax
- A get accessor has no parameters
- A set accessor has an implicit value parameter

Indexer:
- Identified by its signature
- Must be an instance member
- Accessed through an index
- Doesn't support shortened syntax
- A get accessor has the same parameter list as the indexer
- A set accessor has the same parameter list as the indexer, plus the implicit value parameter
A destructor is used to destroy instances of a class when they are no longer needed. It releases unmanaged resources allocated by the object and is called automatically before an object is destroyed. It cannot be called explicitly, and a class can have only one destructor.
The destructor is a special type member function which has the same name as its class name preceded by a tilde (~) sign.
Example:
class DestructorExample
{
    public DestructorExample()
    {
        Console.WriteLine("Constructor called");
    }

    // destructor
    ~DestructorExample()
    {
        Console.WriteLine("Destructor called");
    }
}

class Program
{
    static void Main()
    {
        DestructorExample T = new DestructorExample();
        GC.Collect();
    }
}

Output:
Constructor called
Destructor called
IEnumerable: Best suited for querying in-memory collections (LINQ to Objects). The query executes on the client side, in memory: when you apply filters, the data is loaded first and the filtering logic runs in the application.

IQueryable: Best suited for querying out-of-memory sources such as remote databases (LINQ to SQL, Entity Framework). It builds an expression tree that the query provider translates and executes at the data source, so filtering happens on the server side.
Finalize(): Called automatically by the garbage collector before an object is reclaimed. It is non-deterministic (you cannot control when it runs) and is used as a last resort to release unmanaged resources. In C# it is implemented by writing a destructor.

Dispose(): Called explicitly from user code, either directly or via a using block, through the IDisposable interface. It is deterministic and is the preferred way to release both managed and unmanaged resources as soon as they are no longer needed.
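A minimal sketch of the dispose pattern tying the two together (illustrative, simplified from the full canonical pattern):

```csharp
using System;

class ResourceHolder : IDisposable
{
    private bool disposed = false;

    // deterministic cleanup: called by user code or a using block
    public void Dispose()
    {
        if (!disposed)
        {
            // release managed and unmanaged resources here
            disposed = true;
            GC.SuppressFinalize(this); // the finalizer is no longer needed
        }
    }

    // non-deterministic safety net: run by the GC if Dispose was skipped
    ~ResourceHolder()
    {
        // release unmanaged resources only
    }
}

class Program
{
    static void Main()
    {
        using (var r = new ResourceHolder())
        {
            // use the resource; Dispose runs automatically on exit
        }
    }
}
```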
C# is a high-level, modern, simple, general-purpose, object-oriented programming language that has grown rapidly and is widely used. C# was developed at Microsoft by a team led by Anders Hejlsberg within the .NET initiative, and it was approved by the European Computer Manufacturers Association (ECMA) and the International Organization for Standardization (ISO). C# has grown enormously with extensive support from Microsoft, helping it acquire a large following, and it is now one of the most popular programming languages in the world. You can use it to create Windows client applications, distributed components, XML Web services, database applications, etc.
Being flexible, powerful, and well-supported has turned C# into one of the most popular programming languages available. It is the 4th most popular programming language. More than 117,000 C# jobs are advertised each month with an average salary of more than $72,000. In the USA alone, there are more than 6,000 jobs advertised each month with an annual salary of $92,00. According to PayScale, the average salary of a C# developer is $66,185. Intel Corporation, Rhino Agile, TAG Employer Services, and Food Services of America are among the companies using C#.
If you are planning to start your career in C#, you need to be well prepared for all the possible C# interview questions that could be asked in a C# interview. These C# interview questions will provide you with in-depth knowledge and help you ace your C# interview. They have been designed specially to get you familiar with the nature of questions that you might come across during your C# interview.
Going through these C# programming language interview questions will help you to land your dream job. These C# basic interview questions will surely boost your confidence to face an interview and will definitely prepare you to answer the toughest of questions in the best way possible. These C# developer interview questions are suggested by experts and have proven to have great value.
Not only job aspirants but also recruiters can refer to these C# basic interview questions to know the right set of questions to assess a candidate. Treat your next C# interview as an entrance to success. Give it your best and get the job. We wish you all the luck and confidence. You can also take up the C# Certification Training to enhance your career further.
13 May 2009 05:23 [Source: ICIS news]
SINGAPORE (ICIS news)--Environmental group Greenpeace said late on Tuesday it was protesting Neste Oil’s plan to ramp up production of renewable diesel that would use up 1.5m tonnes of palm oil a year.
Boosting output at its plants in
“Neste Oil's expansion plans are a major climate threat increasing the pressure for deforestation in
The organisation held a demonstration at Neste Oil’s 170,000 tonne/year renewable diesel production facility in
"Palm oil biodiesel is not a solution to climate change. It actually makes the problem worse if rainforests are cut down to grow the palm oil to fuel our cars,” said Suomela.
Neste Oil deputy CEO Jarmo Honkamaa, meanwhile, refuted Greenpeace’s claims, saying palm oil could be produced sustainably through improving the yield in existing plantations and expanding palm plantations in non-rainforest areas.
“Greenpeace disagrees with that view. But [we] have had and will continue to have many discussions with them about this issue,” Honkamaa said.
Neste Oil is currently in the process of building the world’s largest renewable diesel production facility located at
With additional reporting by Bohan Loh
To discuss issues facing the chemical industry go to | http://www.icis.com/Articles/2009/05/13/9215592/greenpeace-protests-neste-oils-boost-to-biodiesel-output.html | CC-MAIN-2014-10 | refinedweb | 208 | 57.81 |
Return By Reference
Aug 27, 2009 at 6:20pm UTC
Jacko
(12)
I know it's not a good idea to return by reference unless you're sure the reference you're returning will still be pointing at something valid. But look at this simple program:
#include <iostream>
using namespace std;
double &GetSomeData()
{
double h = 46.50;
double &hRef = h;
return hRef;
}
int main()
{
double nonRef = GetSomeData();
double &ref = GetSomeData();
cout << "nonRef: " << nonRef << endl;
cout << "ref: " << ref << endl;
return 0;
}
which prints out:
nonRef: 46.5
Ref: 2.12217e-313
So, I'm wondering if I'm getting lucky getting good data for the non-reference variable, or if this is indeed OK to do (on any compiler). I'm thinking this is what is going on in the (a) and (b) calls to GetSomeData() returning to a non-reference and reference, respectively:
1. The GetSomeData() function is called
2. h is created on the stack and assigned the value 46.50
3. hRef is created is set to point to h
4a. For the nonRef case, h, the thing pointed to by ref (i.e., h) in GetSomeData() is copied to nonRef in main(), in other words, an implicit conversion from double& to double takes place from hRef in the function to nonRef in main() -AND-
4b. For the ref case, the address (basically) of h, stored in hRef, is copied to ref in main(), in other words, no conversion takes place, and the value (an address) of hRef is copied to ref
5. The GetSomeData() function terminates, h and hRef both go out of scope
Since main()'s nonRef contains the actual data, a copy of h, it is OK. Since main()'s ref "points" ("refers") to the now defunct h, it is not OK.
Is this what is going on? Can I always depend on it, or is this compiler-specific behavior, and therefore indeterminate or undefined behavior?
Also, I'm not really clear on the order of steps 4a/b and 5. Are values copied to the receiving variable before the function terminates and the variables go out of scope? What are the exact mechanics of this?
Aug 27, 2009 at 6:57pm UTC
helios
(11849)
You're getting lucky. The behavior of GetSomeData() is undefined.
Aug 27, 2009 at 8:01pm UTC
Jacko
(12)
Can you tell me the mechanics of how a function returns? When you're passing in a variable into a function like this:
int SomeFunc(int a)
{
int b;
b = a * 2;
return b;
}
int main()
{
int x = 3;
int y;
y = SomeFunc(x);
return 0;
}
Is this the order:
1. main() starts
2. x is created on the stack and is set to "3"
3. y is created on the stack
4. SomeFunc() is called: the value of x, 3, is copied onto the stack, used as "a" in the function
5. b is created on the stack
6. b is set to a * 2 = 6
7. y is set to 6
8. the function terminates and a and b go out of scope
9. main() terminates
In particular, are steps (7) and (8) correct? Or does the function terminate with a and b going out of scope, and b is returned some other way? Not 100% sure on the return mechanics.
Last edited on
Aug 27, 2009 at 8:02pm UTC
Aug 27, 2009 at 9:13pm UTC
helios
(11849)
IIRC, first b goes out of scope by being popped from the stack, and then its value is pushed onto the stack. Then the control flow goes back to main() and the stack is popped and the value assigned to y. I think that's how it worked, but I'm not entirely sure.
EDIT: Even if this is true, though, it doesn't really help you. The above description doesn't apply, for instance, if there's a copy constructor call involved. You should only imagine values being copied around, not the stack being pushed or popped.
Last edited on
Aug 27, 2009 at 9:16pm UTC
Aug 27, 2009 at 9:29pm UTC
Jacko
(12)
Is it possible that when main() calls SomeFunc(), it passes -- along with the address of where to return and the passed-in parameters -- the address of y to SomeFunc()'s stack frame? So then, the return statement in SomeFunc() stores the value of b in y, using the pointer, at the "return" statement right before it terminates and the stack frame becomes toast? In other words, passing IN to the function is a no-brainer: it creates a stack frame and copies all the pertinent data to it. Passing data BACK is a bit more problematic, as somehow it has to get back to the caller before the callee's stack frame and all its data, including the return value, vanishes. So I'm guessing indirection is used, i.e., the address of the receiving variable in the caller function is passed to the callee function. That's all I can guess. Anyone? I got my degree in EE, not CS! I think that's part of my C++ difficulties...
Aug 27, 2009 at 9:43pm UTC
helios
(11849)
I'm more or less certain no pointer-passing takes place unless explicitly stated in the function signature.
Aug 27, 2009 at 10:32pm UTC
Jacko
(12)
I'm thinking of things going on behind the scenes, like the creation of the stack frame and so forth, when a function is called. The executable has to copy things when making a function call that are transparent to us, as C++ developers, such as the address of where the call was made so control can return to the caller once the callee is done, and copies of function arguments. That I can picture. An area of memory is set aside for the stack frame, and all this information is pushed onto it by the caller. Once all the data necessary for the callee to operate is there, control is transferred to the callee and it operates on the contents of the stack frame, manipulating its contents, and also creating its own local data (and the return value, presumably) on that frame.
But
somehow
, the return data has to work its way
back
from the callee to the caller. And at some point the stack frame ceases to exist. So that data needs to be transferred back somehow before it ceases to exist.
The caller sending the address of the variable that will receive the return value, and the callee using this address to stuff the return value into that variable (again, behind the scenes, by the stack frame mechanics) is the only thing I can think of as a way to get the data back to the caller. It's just my speculation, theory, thoughts...
I'd like to better understand it, as I get questions on C++ interviews that have a lot to do with the inner workings, the mechanics of things. It's hard to find these answers!
Aug 27, 2009 at 11:15pm UTC
guestgulkan
(2942)
If you are really that keen:
try googling for:
C/C++ calling convention
C/C++ naming convention
Microsoft Visual C++ is brilliant for this type of investigation/learning.
You can put breakpoints in you code in debug mode and open the assembly window.
This will show you the source code and for each statement it will show the assembly code.
It will show the function prologues and epilogues etc...
Last edited on
Aug 27, 2009 at 11:19pm UTC
Aug 27, 2009 at 11:23pm UTC
helios
(11849)
It's hard to find these answers!
No, they're not. Just take a look at your compiler's output.
I'd like to better understand it
Fine, here you go. The calling procedure generated by VC++ without optimizations:
int f(int a){
00411260 push ebp
00411261 mov ebp,esp
00411263 sub esp,40h
00411266 push ebx
00411267 push esi
00411268 push edi
return a*2;
00411269 mov eax,dword ptr [a]
0041126C shl eax,1
}
0041126E pop edi
0041126F pop esi
00411270 pop ebx
00411271 mov esp,ebp
00411273 pop ebp
00411274 ret
int main(){
00411280 push ebp
00411281 mov ebp,esp
00411283 sub esp,44h
00411286 push ebx
00411287 push esi
00411288 push edi
int b=f(2);
00411289 push 2
0041128B call f (411096h)
00411290 add esp,4
00411293 mov dword ptr [b],eax
return 0;
00411296 xor eax,eax
}
00411298 pop edi
00411299 pop esi
0041129A pop ebx
0041129B mov esp,ebp
0041129D pop ebp
0041129E ret
In this case, the compiler chose to return the value through eax.
Let's try with something a bit bigger:
struct A{
    int a[50];
};

A f(){
    return A();
}

int main(){
    A a=f();
    return 0;
}
A f(){
00411660 push ebp
00411661 mov ebp,esp
00411663 sub esp,108h
00411669 push ebx
0041166A push esi
0041166B push edi
return A();
0041166C push 0C8h //<-- 0xC8 is 50*sizeof(int)
00411671 push 0 //What to set the bytes to.
00411673 lea eax,[ebp-108h] //<-|
00411679 push eax //<--- address of the buffer
0041167A call @ILT+295(_memset) (41112Ch) //memset(ebp-108,0,50*sizeof(int));
0041167F add esp,0Ch
00411682 mov ecx,32h
00411687 lea esi,[ebp-108h]
0041168D mov edi,dword ptr [ebp+8]
00411690 rep movs dword ptr es:[edi],dword ptr [esi] //This is a buffer copy, IINM
00411692 mov eax,dword ptr [ebp+8]
}
00411695 pop edi
00411696 pop esi
00411697 pop ebx
00411698 mov esp,ebp
0041169A pop ebp
0041169B ret
int main(){
004116A0 push ebp
004116A1 mov ebp,esp
004116A3 sub esp,298h
004116A9 push ebx
004116AA push esi
004116AB push edi
A a=f();
004116AC lea eax,[ebp-1D0h]
004116B2 push eax
004116B3 call f (411131h)
004116B8 add esp,4
004116BB mov ecx,32h
004116C0 mov esi,eax
004116C2 lea edi,[ebp-298h]
004116C8 rep movs dword ptr es:[edi],dword ptr [esi]
004116CA mov ecx,32h
004116CF lea esi,[ebp-298h]
004116D5 lea edi,[a]
004116DB rep movs dword ptr es:[edi],dword ptr [esi]
return 0;
004116DD xor eax,eax
If I'm getting it right, the compiler first copies the structure to an intermediate location, it returns the address of this location through eax, then copies it back to the destination structure. I'm not sure why it has to perform the copy three times.
Last edited on
Aug 27, 2009 at 11:24pm UTC
Topic archived. No new replies allowed.
On Sun, Jun 27 2010 at 5:49am -0400, Christoph Hellwig <hch lst de> wrote:
> On Sat, Jun 26, 2010 at 04:31:24PM -0400, Mike Snitzer wrote:
> > ---
> >  drivers/md/dm-linear.c        |    1 +
> >  drivers/md/dm-table.c         |   51 ++++++++++++++++++++++++++++++++++
> >  drivers/md/dm.c               |   60 ++++++++++++++++++++++++++++++--------
> >  drivers/md/dm.h               |    1 +
> >  include/linux/device-mapper.h |    1 +
> >  5 files changed, 102 insertions(+), 12 deletions(-)
> >
> > +static int device_discard_incapable(struct dm_target *ti, struct dm_dev *dev,
> > +                                    sector_t start, sector_t len, void *data)
> > +{
> > +    struct block_device *bdev = dev->bdev;
> > +    struct request_queue *q = bdev_get_queue(bdev);
> > +
> > +    WARN_ON(!q);
> > +    return (!q || !blk_queue_discard(q));
> > +}
>
> How could a NULL queue happen here?

It really cannot, I was just being defensive.
{-# LANGUAGE TemplateHaskell, FlexibleContexts #-}
-- |
-- Module      :  System.Metronome
-- Copyright   :  (c) Paolo Veronelli 2012
-- License     :  BSD-style (see the file LICENSE)
--
-- Maintainer  :  paolo.veronelli@gmail.com
-- Stability   :  unstable
-- Portability :  not portable (requires STM)
--
module System.Metronome
    ( -- * Data structures
      Track (..)
    , Thread (..)
    , Metronome (..)
      -- * Lenses
    , sync
    , frequency
    , actions
    , priority
    , muted
    , running
    , alive
    , core
    , ticks
    , schedule
      -- * Synonyms
    , Control
    , Priority
    , Frequency
    , Ticks
    , Action
    , MTime
    , TrackForker
      -- * API
    , metronome
    ) where

import Sound.OpenSoundControl (utcr, sleepThreadUntil)
import Control.Concurrent.STM (STM, TVar, TChan, atomically, newBroadcastTChan, orElse, dupTChan)
import Control.Concurrent (forkIO, myThreadId, killThread)
import Control.Monad (join, liftM, forever, when)
import Data.Ord (comparing)
import Data.List (sortBy)
import Data.Lens.Template (makeLens)
import Data.Lens.Lazy (modL)
import Control.Concurrent.STMOrIO

-- | Track effect interface. Write in STM the collective and spit out the IO
-- action to be executed when all STMs for this tick are done or retried
type Action = STM (IO ())

-- | Priority values between tracks under the same metronome.
type Priority = Double

-- | Number of metronome ticks between two track ticks
type Frequency = Integer

-- | Number of elapsed ticks
type Ticks = Integer

-- execute actions, from STM to IO ignoring retriers
execute :: [Action] -> IO ()
execute = join . liftM sequence_ . atomically . mapM (`orElse` return (return ()))

-- | State of a track.
data Track = Track {
    -- | the number of ticks elapsed from the track fork
    _sync :: Ticks,
    -- | calling frequency relative to metronome ticks frequency
    _frequency :: Frequency,
    -- | the actions left to be run
    _actions :: [Action],
    -- | priority of this track among its peers
    _priority :: Priority,
    -- | muted flag, when True, actions are not scheduled, just skipped
    _muted :: Bool
    }

$( makeLens ''Track )

-- | supporting values with 'running' and 'alive' flag
data Thread a = Thread {
    -- | stopped or running flag
    _running :: Bool,
    -- | set to false to require kill thread
    _alive :: Bool,
    -- | core data
    _core :: a
    }

$( makeLens ''Thread )

-- | A Thread value cell in STM
type Control a = TVar (Thread a)

-- | Time, in seconds
type MTime = Double

-- | State of a metronome
data Metronome = Metronome {
    _ticks :: [MTime],                -- ^ next ticking times
    _schedule :: [(Priority, Action)] -- ^ actions scheduled for the tick to come
    }

$( makeLens ''Metronome )

-- | The action to fork a new track from a track state.
type TrackForker = Control Track -> IO ()

-- helper to modify an 'Thread' fulfilling 'running' and 'alive' flags.
runThread :: (Monad m, RW m TVar) => Control a -> m () -> (a -> m a) -> m ()
runThread ko kill modify = do
    -- read the object
    Thread r al x <- rd ko
    if not al then kill else when r $ do
        -- modify as requested
        x' <- modify x
        -- write the object
        md ko $ modL core $ const x'

-- forkIO with kill thread
forkIO' :: (IO () -> IO ()) -> IO ()
forkIO' f = forkIO (myThreadId >>= f . killThread) >> return ()

-- fork a track based on a metronome and the track initial state
forkTrack :: TChan () -> Control Metronome -> Control Track -> IO ()
forkTrack kc tm tc = forkIO' $ \kill -> do
    -- make new metronome listener
    kn <- atomically $ dupTChan kc
    forever $ do
        rd kn -- wait for a tick
        runThread tc kill $ \(Track n m fss z g) -> atomically $ do
            Thread ru li (Metronome ts ss) <- rd tm
            -- check if it's time to fire
            let (ss', fs') = if null fss then (ss, fss) else
                    let f:fs'' = fss in
                    if n `mod` m == 0
                        -- fire if it's not muted
                        then if not g then ((z, f):ss, fs'')
                             -- else don't consume
                             else (ss, fs'')
                        else (ss, fss)
            wr tm $ Thread ru li (Metronome ts ss')
            -- the new Track with one more tick elapsed and the actions left to run
            return $ Track (n + 1) m fs' z g

-- | Fork a metronome from its initial state
metronome :: Control Metronome -- ^ initial state
          -> IO TrackForker
metronome km = do
    kc <- atomically newBroadcastTChan -- non leaking channel
    forkIO' $ \kill -> forever . runThread km kill $ \m@(Metronome ts _) -> do
        t <- utcr -- time now
        -- throw away the past ticking time
        case dropWhile (< t) ts of
            [] -> return m -- no ticks left to wait
            t':ts' -> do
                -- sleep until next
                sleepThreadUntil t'
                -- execute scheduled actions after ordering by priority
                Metronome _ rs <- _core `liftM` rd km
                execute . map snd . sortBy (comparing fst) $ rs
                -- broadcast tick for all track to schedule next actions
                wr kc ()
                -- the new Metronome with times in future and no actions scheduled
                return $ Metronome ts' []
    return $ forkTrack kc km
plague187x2 + 0 comments
In the C# stub, base is a keyword and can't be used as a variable.
Slaunger + 1 comment
C++: No need for floats
#include <iostream>
#include <cstdint>
using namespace std;

int main() {
    uint32_t a, b;
    cin >> b >> a;
    cout << 2 * a / b + bool((2 * a) % b);
    return 0;
}
shadabeqbal + 1 comment
C in one line
int lowestTriangle(int base, int area){ return ceil((float)(2*area)/base); }
jongod5399 + 0 comments
I am so mad. One of the test cases' answer was 2000000 and C++ std::cout gave the output as 2e+06 instead of 2000000 so it gave a wrong answer... I really need to start using std::printf everywhere.
lavijain1999 + 0 comments
#include <stdio.h>

int main() {
    int b, a, h, a1;
    scanf("%d %d", &b, &a);
    h = 2*a/b;
    a1 = b*h/2;
    if (a1 == a) {
        printf("%d", h);
    } else {
        printf("%d", h+1);
    }
    printf("\n");
    return 0;
}
prakarsha_malho1 + 0 comments
#include <iostream>
#include <cmath>
using namespace std;

int main() {
    int base;
    long int area;
    cin >> base >> area;
    long int m = ceil((float)(2*area)/base);
    cout << m;
    return 0;
}
sz_andras_91 + 1 comment
python2
def lowestTriangle(base, area): x = area*2 ans = int(x/base) return ans if x % base == 0 else ans + 1
jwelborn + 0 comments
this may help get rid of some checks you're doing
Sort 29 Discussions, By:
Please Login in order to post a comment | https://www.hackerrank.com/challenges/lowest-triangle/forum | CC-MAIN-2018-26 | refinedweb | 253 | 57.54 |
This site works best with JavaScript enabled. Please enable JavaScript to get the best experience from this site.
addMapping(TileEntityPlayerBlock.class, "PlayerBlock");
addMapping(net.minecraft.src.TileEntityPlayerBlock.class, "PlayerBlock");
Quote from Frizzil
Hmm... "getBlockEntity() instanceof TileEntityPlayerBlock" will return false when it ain't an instance, but I'm wondering if it returns false on null or crashes... do you have Eclipse set up where you can debug it? If not, just check if getBlockEntity() is returning null with a println.
Oh wait-- is getBlockEntity() the method that creates a new block entity??? That would be a major problem, haha. It would be poorly named, as such, but you definitely should only use it once to actually create the tileentity, not grab it.
private int clock = 0;
//Decay clock, determines the frequency of decay sequences, also lag control
private static ArrayList<Point3D> positions = new ArrayList<Point3D>(); //PlayerBlock listing
// this function is called routinely from the onTickIngame() function
public void runDecay()
{
System.out.println("Attempting to run decay");
clock = 0;
Iterator<Point3D> itr = positions.iterator();
Point3D testPoint;
while(itr.hasNext()){
testpoint = itr.next()
System.out.println("Player block at point: "+testpoint.x()+", "+testpoint.y()+", "+testpoint.z()+" tested!");
}
}
public static void addBlock(int i, int j, int k){
positions.add(new Point3D(i,j,k));
System.out.println("Player block at point: "+i+", "+j+", "+k+" marked!");
}
public static void deleteBlock(int i, int j, int k){
positions.remove(new Point3D(i,j,k));
System.out.println("Player block at point: "+i+", "+j+", "+k+" unmarked!");
}
//From TileEntityPlayerBlock.java
public class TileEntityPlayerBlock extends TileEntity
{
public float Age=0;
public TileEntityPlayerBlock()
{
}
/**
* Writes a tile entity to NBT.
*/
public void writeToNBT(NBTTagCompound par1NBTTagCompound)
{
nature_reclaims.addBlock(xCoord, yCoord, zCoord);
par1NBTTagCompound.setFloat("Age", Age);
super.writeToNBT(par1NBTTagCompound);
}
/**
* Reads a tile entity from NBT.
*/
public void readFromNBT(NBTTagCompound par1NBTTagCompound)
{
Age = par1NBTTagCompound.getFloat((new StringBuilder()).append("Age").toString());
nature_reclaims.deleteBlock(xCoord, yCoord, zCoord);
super.readFromNBT(par1NBTTagCompound);
}
}
//From Block.java, the getBlockEntity function (transplanted from Chunk.java)
public TileEntity getBlockEntity()
{
if( shouldMark && blockID != 0 ){
shouldMark = false;
return new TileEntityPlayerBlock();
}
return null;
};
Have tried to see if the blocks will save? I know they won't load, but that could be because they're not getting saved.
Is there a tutorial anywhere that deals with adding new tile entities? I find it hard to believe, given the number of BlockContainer blocks added in mods, that something like this hasn't been attempted before.
By commenting those out, I've managed to get the PlayerBlock entities saving and loading, with no immediately-apparent negative effects. Chests still work, signs still work, even doors are unaffected. I'm currently working in 1.1 so that's not 100% certainty for 1.2 but since there hasn't been any gigantic rewrite it should be okay.
I can then either continue using my current method or I can add an age variable to the entity and retrieve it, increment it by the tick delay of decay, and use that to determine the odds of a block decaying. If I get the watcher working, I can also use the tile entity to add and remove decaying blocks from a looping list.
The first is simply that whilst no errors are spat out when I check for the TileEntityPlayerBlock, it shuts my code down, nothing ever happens. I'm not sure if it's fiddling with the RNG or whatever, but for some reason having any of my code subject to the condition of that tile entity existing eliminates it from ever happening.
However, I can demonstrate very readily that the tile entity exists, by logging out and viewing the world in MCEdit I can see all the entities neatly. Which brings me to my second bug that I discovered whilst doing so:
The player block entities don't die.
Or at least, they don't die when their attached block does, just when the world is re-loaded. Until then, they hang around in state, which is a problem not only for memory concerns but also because if another block is placed in the now-empty space, it inherits the pre-existing entity. Signs are still fine but I can't seem to find where they kill off their entities or such in the code. I've tested this bug by having the player block entity print out the block ID of it's associated block after a certain (short) period of time after creation, and blocks placed in previously occupied spaces don't print out.
EDIT: Bug #1 seems to be that the entity isn't getting associated with a given block, adding "if(getBlockEntity() instanceof TileEntityPlayerBlock){ System.out.println("Player block found!"); }" to the updateTick code of Block.java returns nothing. This might also be the cause of bug #2.
EDIT the Sequel: Bug #2 has been resolved, I added the code from BlockContainer's onBlockAdded and onBlockRemoval code to the main Block.java and now the entities seem to be dying properly, but they're not being related to the blocks properly and my test code above still outputs nothing.
I haven't resolved the entity-referencing issue, but I've begun changing operations over to a watcher method using a list of 3D points that is updated predominantly by the player block entities, such that I won't have to reference the entity directly except to find out the age. If I get something working properly I'll post up the altered files so others in need of player-block functionality might benefit.
Oh wait-- is getBlockEntity() the method that creates a new block entity??? That would be a major problem, haha. It would be poorly named, as such, but you definitely should only use it once to actually create the tileentity, not grab it.
u need to ask the world object for the TileEntity
The first is that regardless of what I try, I can't seem to get the code to recognise Point3D (cannot find symbol), including if I write my own class for it in the nature_reclaims.java code. I'm having similar (if not identical) issues with my own DecayData class that I'm using to shift the decay information out of Block.java.
The second is interfacing with the TileEntityPlayerBlock, since getBlockEntity now returns either null (as it originally did) or a new tile entity (which has age 0, the value I'm trying to reference). I need a way to first retrieve the tile entity and second check its age value. I assume given the former that a getter function will do for that, so it's mostly the former I need to resolve. | https://www.minecraftforum.net/forums/mapping-and-modding-java-edition/minecraft-mods/modification-development/1419889-tracking-individual-blocks?page=4 | CC-MAIN-2019-13 | refinedweb | 1,107 | 56.35 |
There has been talk around the new version of Xamarin.Forms for a while now. In May 2018, Microsoft released the new major version of Xamarin Forms, and it has great new features.
Much work has gone into stabilizing the whole thing, but that didn’t stop the Xamarin team from implementing some new features. In this article I will provide you with a quick overview in the form of a cheat sheet. guide will help you have a good overview of the most important new features of Xamarin Forms 3, and a reference on how to use them!
You might know the Visual State Manager (VSM) from other XAML platforms, but with version 3.0, it is now available on Xamarin.Forms. With a VSM, you can change XAML elements based on visual states. These changes can be triggered from code. This way you could, for instance, change your form layout whenever the orientation of a device changes.
A visual state, defined in XAML could look something like this:
<Style TargetType="FlexLayout">
<Setter Property="VisualStateManager.VisualStateGroups">
<VisualStateGroupList x:
<VisualStateGroup>
<VisualState x:
<VisualState.Setters>
<Setter Property="Direction" Value="Column"/>
<Setter Property="Margin">
<OnPlatform x:
<On Platform="iOS" Value="0,30"/>
</OnPlatform>
</Setter>
</VisualState.Setters>
</VisualState>
<VisualState x:
<VisualState.Setters>
<Setter Property="Direction" Value="Row"/>
<Setter Property="Margin">
<OnPlatform x:
<On Platform="iOS" Value="30,0"/>
</OnPlatform>
</Setter>
</VisualState.Setters>
</VisualState>
</VisualStateGroup>
</VisualStateGroupList>
</Setter>
</Style>
As you can see, define a Style at the highest level and specify a target type. Inside the style, you can have different groups and a group can have states, each state defined with a different name.
These states are the key here.
In the state, you can define values for properties that refer back to the target type, in our case a FlexLayout. You can even use the OnPlatform conditions to specify different values for different platforms.
Style can be defined in the applications ResourceDictionary. Just execute this line: VisualStateManager.GoToState(Container, (width > height) ? "Horizontal" : "Portrait"); to set a certain state from code.
For example, this line of code would go to an event handler that detects if the orientation of our device has changed, and will apply the desired visual state as necessary. The “Horizontal” and “Portrait” names here refer to the names of the states we defined.
You can find more information on the Visual State Manager in the Microsoft Docs:
Another big feature added in this major release of Xamarin.Forms is the FlexLayout.
With this new layout, you can stack and wrap child views. It works very similar to the concept you might know from CSS as the Flexible Box Layout (or flex layout or flex box).
The FlexLayout is related to the StackLayout you might already know from working with Xamarin.Forms. The big advantage of the FlexLayout is that it will wrap its children where needed. When there is no more space in the row or column where the children are defined, it will move the rest of the items to the next row/column.
A FlexLayout is very easy to define, as you can see in the code block underneath.
<ContentPage xmlns=""
xmlns:x=""
xmlns:local="clr-namespace:FlexLayoutDemos"
x:
<FlexLayout Direction="Column" AlignItems="Center"
JustifyContent="SpaceEvenly">
<!—Your controls here -->
</FlexLayout>
</ContentPage>
Define it as any other regular layout element you are used to. With the properties you set on the FlexLayout , you can determine how to layout the children and how they should align and wrap.
To learn more in-depth about the FlexLayout, please refer to the Microsoft Docs:
There has been a lot of talk about implementing CSS in Xamarin.Forms. You can love or hate it, but it is available for you to use!
XAML already bears similarity with HTML so it makes sense to use CSS to apply styling .
To use CSS in your app, you need to take three steps:
A simple example of a CSS file could look like the code underneath.
^contentpage {
background-color: lightgray;
}
#listView {
background-color: lightgray;
}
stacklayout {
margin: 20;
}
font-style: bold;
font-size: medium;
}
listview image {
height: 60;
width: 60;
}
stacklayout>image {
height: 200;
width: 200;
}
If you are familiar with CSS, you can see it looks like the real deal and has the same capabilities. The ^ selector might seem strange since it’s not available in regular CSS. With this selector you can target a base type.
Here’s a quick overview of all selectors:
And of course, you can select child elements in different ways.
To consume a stylesheet, there are different ways to do that. From XAML, you can reference an external file by declaring it like this:
<ContentPage.Resources>
<StyleSheet Source="/Assets/styles.css" />
</ContentPage.Resources>
You can also define your CSS inline. To do so, do this:
<ContentPage.Resources>
<StyleSheet>
<![CDATA[
^contentpage {
background-color: white;
}
]]>
</StyleSheet>
</ContentPage.Resources>
There are also ways to load css files dynamically from code or interpret them from a string.
If you want to learn about all the nooks and crannies, head over the Microsoft Docs page:
With Xamarin.Forms 3, it is now possible to apply a FlowDirection on any VisualElement, which is basically all controls. Simply call upon Device.FlowDirection to retrieve the direction of the device your user is using.
In the ideal case, you can update your app by implementing this piece of code on all of your pages: FlowDirection="{x:Static Device.FlowDirection}".
More information about RTL support is in this great blog post :
One of my personal favorites is a tiny addition - the new MaxLength property on the Entry and Editor control.
I might be biased because I have added this to Xamarin.Forms myself. This new version of Forms has had a lot of help from the community. Since a while now, Xamarin.Forms is open-source and on Github and they are accepting any good pull-requests you open.
Since I love Forms so much and use it all the time, I thought it would be great to contribute something back. This resulted in the addition of the MaxLength property. Using it is easy.
If you have an Entry of Editor control and you want to restrict the amount of characters that the user can enter, you can apply the MaxLength property with an integer value. This could, for example, look like this: <Entry TextColor=”Red” MaxLength=”15” />. This will cause an Entry that shows its characters in red and limits the number of characters to a maximum of 15.
The MaxLength property is bindable, so you can bind a value to this to make it more dynamic.
Other properties that were added are ProgressBar.ProgressColor, Picker.FontFamily and much, much more.
The new version of Xamarin.Forms brings a lot of new goodies while also focusing on stability. Major additions like CSS, the Visual State Manager and Left-To-Right support were long due, and a lot of devs were waiting for it.
Also, community contributions are precious. If you have worked with Xamarin.Forms before, you might know there are a lot of small features and visuals you wish would just be there.
The team at Xamarin has heard you and is now taking on this challenge working together with us. A lot of great things are added already, and I am sure much more will come.
You can see all the release notes associated with this release on this page here: and if you look, you will notice that the first 3.1 prerelease versions are on there, so you get to take a peek into the future!
I hope this article/cheatsheet has given you a good overview of what is new and provided you with a quick-start on these new features.
This article was technically reviewed by Mayur Tendulk. | https://www.dotnetcurry.com/xamarin/1452/xamarin-forms-3-cheat-sheet | CC-MAIN-2018-51 | refinedweb | 1,297 | 65.12 |
The crown jewels of the Database Edition product are the SQL parsers and script generator, these two pieces form the foundation of what the database project system does internally.
The parser, parses T-SQL code and turns it in to a script fragment, where the script generator takes a script fragment and turns it in to T-SQL code. This way we can roundtrip code between the two components. The script generator has the ability to format code on the way out, however there is no mode to preserve formatting or whitespace by the script generator at this point in time.
Per CTP16 of the GDR release the parser and script generator are public classes, so if you have any need to parse some SQL code, here is a solution.
The parser and script generator live in two assemblies. The Microsoft.Data.Schema.ScriptDom contains provider agnostic classes and the Microsoft.Data.Schema.ScriptDom.Sql assembly contain classes for the parser and script generator that are SQL Server specific.
Let’s look at a very simple example on how to use the two components. The following screenshot shows the end result. A simple WinForms application that takes in a piece of T-SQL in this case the create statement of the authors table from the pubs database. The second text box shows the formatted result coming out of the script generator.
Lets digest what was needed to achieve this. I will not go in to the detail of the WinForms code since it is trivial and not that interesting, you can look at the sample code if you need to for that. I am going to focus on the usage of the parser and the script generator code.
The first step is to add the references to the required assemblies:
We first need to add two assembly references two:
- Microsoft.Data.Schema.ScriptDom.dll
- Microsoft.Data.Schema.ScriptDom.Sql.dll
Then we add two variables to the WinForms class to hold an instance of the parser and the script generator.
1: private TSql100Parser _parser;
2: private Sql100ScriptGenerator _scriptGen;
As you can see the parser and script generator are version specific implementations, there exists a TSql80Parser, TSql90Parser and TSql100Parser, the same hold true for the Sql<>ScriptGenerator classes. Normally you would want to instantiate the classes through a factory pattern, to abstract the version away, but in this simple example we will instantiate the versioned instances of the classes directly. This happens in the form Load method.
1: private void Form1_Load(object sender, EventArgs e)
2: {
3: bool fQuotedIdenfifiers = false;
4: _parser = new TSql100Parser(fQuotedIdenfifiers);
5:
6: SqlScriptGeneratorOptions options = new SqlScriptGeneratorOptions();
7: options.SqlVersion = SqlVersion.Sql100;
8: options.KeywordCasing = KeywordCasing.UpperCase;
9:
10: _scriptGen = new Sql100ScriptGenerator(options);
11: }
The only thing the parser needs to know is how to interpret string inside double quotes, are they strings or object identifiers? This is indicated when the parser class is constructed.
The script generator has many more options and takes and SqlScriptGeneratorOptions class instance as the constructor argument. The options class lets you define things like the version of the language and how to case keywords for examples. You will also find a set of indention properties that you can use to change the formatting results.
Now we are at the point we can do the hard work. When the user clicks the button the Click method will parse the SQL and script it back out to the second text box if there are no errors parsing the input code.
Lets step through the Click method:
1: private void button1_Click(object sender, EventArgs e)
2: {
3: textBox2.Text = String.Empty;
4: textBox3.Text = String.Empty;
5:
6: string inputScript = textBox1.Text;
7: IScriptFragment fragment;
8: IList<ParseError> errors;
9: using (StringReader sr = new StringReader(inputScript))
10: {
11: fragment = _parser.Parse(sr, out errors);
12: }
13:
14: if (errors != null && errors.Count > 0)
15: {
16: StringBuilder sb = new StringBuilder();
17: foreach (var error in errors)
18: {
19: sb.AppendLine(error.Message);
20: sb.AppendLine("offset " + error.Offset.ToString());
21: }
22: textBox3.Text = sb.ToString();
23: }
24: else
25: {
26: String script;
27: _scriptGen.GenerateScript(fragment, out script);
28: textBox2.Text = script;
29: }
30: }
We first define to variables to hold the output of the parse operation, the IScriptFragment will hold the script fragment, this is the most important output of the parser, and it will also output an IList of ParseError objects when the parse operations ran in to problems.
Second we need to make the input to the parse available as a StreamReader, so we are converting the input string in to a StringReader class. Line 11 shows the actual call that parses the input stream and outputs the fragments and the errors.
Then we check if any errors occurred during the parse of the input. If so we iterate through the list of error objects and print out the message and offset information.
When there are no errors, we take the resulting script fragment and feed it back to the script generator. Line 27 shows the actual call. The output will be a formatted script.
You can download the sample code from the DataDude SkyDrive: ParsingAndScriptGen.zip
Neat he? I hope this is useful!
-GertD
PingBack from
The classes may be public, but what is the redist limitations on the assemblies? I would assume that they’re not redistributable and that any app that uses them would only run on a machine with Team Database on it…
I would like to be wrong though 🙂
Data Dude – if you know about these 2 words then you know what I’m talking about. Visual Studio Team
Which regards to the redist question please see my next post.
Thanks, this is really cool and useful stuff. I would like to request that the type, TSqlFragmentVisitor, be made public too as that will make it much easier for us to write our own visitors on top of the sql expression tree.
Also, While most of the sql expression tree is serializable, it appears that one of the member variables, which is a list of SqlParserTokens is not (the SQL Parser Token isn’t marked as serializable). Could this be fixed?
The ability to serialize a SQL Expression tree would be very useful.
One thing that I can see a use for is a set of factory methods built on top of the expression tree such that statements could be more easily constructed in code (instead of parsed from a string). Very much like SqlOM (). We’d like to replace SqlOM with these libraries once RTM happens…
In our case, we prefer SqlOM as a way of generating "safe" queries from a client machine and then sending the object graph on the wire for our servers to parse and extract a safe select query (to prevent injection, etc).
Another interesting use would be a way to translate the sql expressions (or at least the DML part) into a standard .NET Expression Tree (and vise-versa).
Very cool stuff…
Kirstin Juhl on Everything old is new again Martin Hinshelwood on Heat ITSM GertD on Getting to the…
Finally the moment is there, the final version of the Visual Studio Team System 2008 Database Edition
Ca n’est pas encore Visual Studio 2010, mais pour la partie base de données, on s’en approche à grand
[ Nacsa Sándor , 2009. február 8.] Ez a Team System változat az adatbázis változások kezeléséhez és teszteléséhez
Thanks for sharing. Based on your example I created Powershell cmdlet implementations
Is there a any way to preserve comments
Hi all,
I use Oracle. Is it possible use it ?? Thanks.
Any script generator for Oracle that can I use with VS Database Edition GDR ?? thanks | https://blogs.msdn.microsoft.com/gertd/2008/08/21/getting-to-the-crown-jewels/ | CC-MAIN-2016-50 | refinedweb | 1,289 | 61.56 |
NB: This is a free tool and it is not supported by McAfee / Intel Security.
What is it?
A python script used to convict files automatically based on VirusTotal results.
How does it work?
When a file is executed on an endpoint with the TIE/DXL modules, a determination on the reputation is established. An event is generated and sent to ePO, based on that, we launch an "Automatic Response" that will execute a python script that will query VirusTotal for the SHA1 hash of the file in question. Based on the results (ie. number of vendors that found the file malicious) and do any of the major vendors find this file malicious (Trend, Symantec, Sophos, Kaspersky), the reputation of the file is changed and, because the change of reputation is sent thru the DXL, the file is removed from the endpoint. Also, if the file was running at the time, the process is killed. An "Issue" is also created in ePO with the details on the file (name, hash, percentage of vendors that found the file malicious etc.)
Things to know about VirusTotal
In order to use this script, you need to get an API key. Note that the "free" API limits you to 4 requests per minute. If you need more, you would need to purchase a Private API from them. Contact them for prices.
How to install
Things to know about Convicter
Video
Quick video to demo it.
Have fun.
Regards,
JL Denis
1.0.1 - Updated script to include variables for ePO IP address and port.
1.0.2 - Updated script to sort the last occurence of a detection and fixed the display of text in "Issues".
1.0.3 - Updated script to fix an issue with files that had a "Known Malicious", "Known Clean" and "Known Clean Updater" reputation.
Hi,
what is the easiest way to test?? Is it possible to set a file as "might be malicious"?
I´m aksing, because my question is, how can i test the configuration without executing any malware?
The script is not working in my environment.
1) i see the threat event from my endpoint.
2) Automatic Response is triggered (server task log)
3) File is listed under Tie Reputations
The virustoal query is working fine. Changing the reputation level to block a file on the endpoint works fine.
4) Changed the values in the automatic response to a minimum.
tested with several different values. The tested file is know by virustotal.
My problem is:
There is no issue generated in EPO
The Comment is not added to the file under TIE Reputation
My Test Environment
EPO 5.1.1 with HF1 and Hotfixes
Agent 5.0
DXL 1.0.1.152
TIE 1.0.1.150
Do you have any idea?
Cheers
Additional testing: Did a wireshark trace. I see no request to internet.
Cheers
You should have a "log.txt" in your Python27 directory, do you see any errors? Also, i just posted a new version (1.0.3) to fix another issue, you might want to try that one,
Also, the easiest way to safely test it is with the "Artemis-High.exe" test files located here [Green] and posted by my collegue Sven Welschen.
Typically, you will have the Artemis level set to "Medium" so it will not be triggered by VSE.
Let me know how it works out for you.
Regards,
JL
Oh noooooo, what a shame! *lol*
Wrong password in the registered executable registration!!!!! Now it works, it works great!
For testing i disabled Artemis and removed any Signature from the Repository. This means, VSE uses the DATs from its installation package. We are testing in this way, because ATD is available and configured. This shows us how powerful ATD is.
Additional to the ATD reputation we added convicter script to this EPO environment.
From my side, one of the top benefits is, any file hash is queried to TIE. If you do not use TIE, VSE does not always triggers an GTI request. If a bad file is somewhere located on the disk, GTI is not queried. If you copy this file to the system32 folder GTI will be queried. We opened a service request, because customer was wondering why a known file was not removed.
This disadvantage is resolved with TIE/DXL.
At the moment we are testing with real Malware, what happens with different McAfee products installed. HIPS, Application Control and so on. Also remote infections. We added McAfee Raptor to the testing environment.
BUT, this script, to connectivity to virustotal is real cool!
Best,
Thorsten
Great work!
Will this process supersede/replace existing enterprise reputations?
Hi,
yes, this is the main goal of the convicter script. At the moment there is only one "field" where an administrator can change the reputation manually. This is the Enterprise Reputation Field.
The script changes this value, adds an issue and an information to the file in "TIE reputations". You can change the script in which way the reputation should be changed. This means setting the value to "might be malicious" only to block a file, not to remove it.
Let´s see which 3rd party information will be added in the future.
Cheers
Hi,
sometimes it is useful or necessary to use a proxy system. I changed the script using my MWG to connect to virustotal.com
def Connect_to_VirusTotal(sha1):
proxy = urllib2.ProxyHandler({'http': '10.x.x.x:9090', 'https': '10.x.x.x:9090'})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)
url = "" # Set the URL to query VirusTotal
parameters = {"resource": sha1, "apikey": VT_API_Key}
data = urllib.urlencode(parameters)
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
report = response.read()
Results_VT = json.loads(report)
Just add the red lines. | https://community.mcafee.com/t5/Threat-Intelligence-Exchange-TIE/Convicter-Utilize-VirusTotal-with-TIE-DXL-to-convict-files/m-p/442921/highlight/true | CC-MAIN-2019-47 | refinedweb | 958 | 68.67 |
Using the eCommerce Server Controls
IMPORTANT: Ektron has discontinued new development on its eCommerce module. If you have a license to eCommerce, you will continue to receive support, but if you need to upgrade, contact your account manager for options.
Ektron provides a complete set of eCommerce server controls that let you set up an online marketplace where customers (site visitors) can purchase merchandise, services or content. A server control uses API language to interact with the CMS and the Framework UI to display its output; it can be dragged and dropped onto a Web form and then modified. Customers interact with these server controls on your site.
Most eCommerce server controls let customers move from one control to another via template properties that let you define the location of another server control. For example, if you define the path to the Cart server control in the Product server control’s TemplateCart property, customers are directed to the Cart control when they click the Product control's Add to Cart button.
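For example, a Product control might be wired to the Cart page on a Web form like this. This is a minimal sketch — the tag prefix, registration directive, control ID, and page paths are assumptions for illustration; only the TemplateCart property name comes from this documentation:

```aspx
<%-- Product detail page (Product.aspx) — hypothetical registration and IDs --%>
<%@ Register Assembly="Ektron.Cms.Controls" Namespace="Ektron.Cms.Controls" TagPrefix="CMS" %>

<%-- When the customer clicks Add to Cart, they are directed to the page named in TemplateCart --%>
<CMS:Product ID="Product1" runat="server" TemplateCart="Cart.aspx" />
```

With TemplateCart set, the control's Add to Cart button sends the customer to the Cart page rather than leaving them on the product page.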
Ektron’s server controls let you insert standard methods and properties within the Visual Studio environment. This means that you can see the effect of your changes in real time; you don’t have to modify a page and then compile to see results.
You can insert server controls using drag and drop or programmatically. You can also use databinding to retrieve and display data from Ektron.
The following list shows typical actions a customer might perform when visiting your eCommerce site and how you can facilitate these actions.
NOTE: In addition to setting the above property, a product must be in-stock, not archived (content is archived upon reaching its end date; what happens next is determined by the choice in the content's Schedule tab > Action on End Date field), and buyable. Otherwise, the Add to Cart button or link does not show for the product.
A customer typically reaches the Cart server control from a product description or list page. A product description page contains a Product server control. Product list pages contain server controls that create a list of products, such as ProductList, ProductSearch or Recommendation controls. These controls contain a button or link that allows a customer to add the product to the cart. For reference information, see Cart.
You could also create a link to the cart from a master page or menu that lets customers view their cart. This link lets a customer navigate directly to the cart when they arrive at your site.
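Such a standing link needs nothing Ektron-specific — an ordinary hyperlink in the master page pointing at the Web form that hosts the Cart control is enough. In this sketch, the Cart.aspx path and control ID are assumptions:

```aspx
<%-- In the site master page: a persistent "View Cart" link available on every page --%>
<asp:HyperLink ID="CartLink" runat="server" NavigateUrl="~/Cart.aspx" Text="View Cart" />
```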
The Cart server control consists of two major areas. The top part (My Cart) represents the cart with which the customer is currently working. The bottom (My Saved Carts) represents a visitor’s saved carts. Saved cart information appears only when a customer is logged in.
The My Cart area displays the Item, SKU, Quantity, List Price, Sale Price, Subtotal and Total. From this cart, a customer can remove items, update the quantity and subtotal information, apply coupons and check out. The customer can also choose to continue shopping or empty the cart.
The My Saved Cart area contains a list of carts the customer has saved and are awaiting checkout. This allows the customer to select products and save them for future purchase. This area contains:
Clicking a saved cart makes it the current cart, and displays its products in the Your Cart area. A customer can then proceed to check out.
This section explains how a logged-in customer would use the Cart server control. The following figure helps you locate key features of the Cart.
Items are added to a cart when a customer clicks Add to Cart in the following server controls.
When Add to Cart is clicked, the item is appended to the current cart’s list of items. If the customer does not have a cart, one is created.
A customer automatically creates a new cart the first time a product is added to it. If needed, a customer can create multiple carts while using the Cart server controla server control uses API language to interact with the CMS and Framework UI to display the output. A server control can be dragged and dropped onto a Web form and then modified.. This allows for the grouping of products being purchased. For example:
To create a new cart:
When the customer clicks OK, the cart is added to the list of Saved Carts and becomes the active cart. A customer can then click Continue Shopping to select products to add to the new cart.
In the Cart server control, a product’s title is a link. When clicked, it takes you to the product’s detail page. To navigate back to the cart, click your browser's Back button.
By default, the server control uses the product’s QuickLink information to provide a path to the product. You can override this functionality by adding a new path to the TemplateProduct property.
Assigning a name to a cart makes it easier for customers to identify a cart in their saved cart list. A customer can assign or change the name of a cart by clicking Edit (
) next to the cart’s name.
When clicked, the button and name are replaced with a text box that allows the customer to enter a new name for the cart.
A customer can continue to shop by clicking Continue Shopping. This redirects the visitor to a template defined in the cart server control’s
TemplateShopping property. For example, you might send the customer to a page containing a ProductList or ProductSearch server control. See also: ProductList and ProductSearch templated server controls.
As a developer, you need to add the path to this page to the
TemplateShopping property. If the page is in the same folder as the page that contains the Cart server control, just enter the page’s name.
If coupons are defined in the Workarea and the
EnableCoupons property is set to True, a customer can enter coupons to discount the purchase. How coupons affect the purchase is defined in the Workarea > Settings > Commerce > Catalog > Coupons. See also: Using Coupons.
To apply a coupon, a customer clicks Apply Coupon.
The button area changes to text box where the customer can enter the Coupon Code. The customer then enters a code and clicks OK. Next, the discount appears above the Subtotal, and the customer can continue to shop, check out, view other carts, or delete the coupon by clicking Remove Coupon.
To remove all products from the current cart, a customer clicks Empty Cart. When clicked, a dialog box appears and the customer can click OK or Cancel.
When a customer is satisfied that the cart contains all products they want, the quantity of each product is correct and all coupons have been applied, the customer can click Checkout. At that point, the customer is redirected to the page that hosts the Checkout server control. The path to the page is defined in the
TemplateCheckout property. If the page containing the Checkout server control is in the same folder as the Cart server control, just enter the page name.
If a cart includes an item that is out of stock, the Checkout button is disabled (hidden).
A customer can delete a saved cart by clicking Delete Cart.
Customers typically arrive at this server control by clicking Checkout on the Cart server control. They also might reach this server control from a Checkout link you create on your site. For reference information, see Checkout.
IMPORTANT: When using the Checkout server control, your website should have an SSL certificate installed. Set its
IsSSLRequired property to True. This protects your customers’ payment information during transmission. Do not use a self-signed SSL certificate—obtain one from a trusted certificate authority.
The control provides a navigational aid that indicates the customer's step in the checkout process. At any point, the customer can move forward to the next step or back to a previous one. When appropriate, some steps do not appear. For example, if merchandise is not tangible, the shipping screen does not appear.
The following chart represents the options and processes of the Checkout Server Control.
The login portion of the Checkout server control accommodates these types of customers.
If an existing customer is not logged in when the customer reaches this server control, it prompts the customer to log in. Optionally, this screen could have a link that lets an existing user to recover a password. To enable this link, add a template path to the control's
TemplateRecoverPassword property.
After a customer who previously purchased from this site logs in, billing and shipping information is retrieved and appear by default. The visitor can review and edit this information. However, Ektron does not store credit card numbers, so that information must be entered each time.
If a customer has not purchased from your site before and wants to set up an account, the customer clicks Create Profile and Checkout. On the new screen, the customer enters billing information including a password. After completing that screen, a membership user account is created, and the customer is logged in. The new user can then proceed through the checkout. The next time this visitor logs in, the customer is treated as an existing customer.
The Checkout server control provides a guest checkout feature for customers who do not want to set up an account on your site. To enable this feature, set the
AllowGuestcheckout property of the Checkout server control to
true.
If a user clicks Checkout Without Profile, the customer proceeds to a version of the billing screen that does not ask for a password. The rest of the checkout is the same.
Although the customer can proceed through the checkout like other customers, if the customer wants to checkout in the future, the customer must re-enter billing and shipping information. Customers who use the guest checkout feature can view their orders using the Order List server control if its
GuestOrderView property is set to
true. See also: Order Workflow.
The Checkout server controla server control uses API language to interact with the CMS and Framework UI to display the output. A server control can be dragged and dropped onto a Web form and then modified. has several sections, each representing a portion of the checkout process.
Prompts customers to enter or update billing information, including:
NOTE: If a customer chooses a country other than the United States, no field validation is applied to the postal code by default. To apply country-specific postal code validation, see Applying Field Validation for non-U.S. Postal Codes.
EktronCheckout_PostalField. The standard US Postal RegEx is applied to all input fields of this class.
WARNING! Be careful to remove only this class name; otherwise, the CSS styling may not be applied properly and the layout will be altered.
data-ektron-validation-regex. Then, add the desired RegEx as the value of this attribute. The RegEx expression will validate the user's input.
For the RegEx expression, you should be able to find one for the target country via a Web search. Alternatively, you can start here:.
Canadian Postal Code Solution
The following example shows how to apply field validation for Canada.
Attribute for U.S. postal code regex:
<xsl:attribute/^\d{5}(-\d{4})?$/ </xsl:attribute>
Attribute for Canadian postal code regex:
>
The following sample shows the entire postal code input block taken from billing info XSLT. Three checkout-related files that need these changes (
<input class="EktronCheckout_PostalCode EktronCheckout_Row_RightContents" > <xsl:attributeEktronCheckout_BillingInfo_PostalCode_ <xsl:value-of</xsl:attribute> <xsl:attributeEktronCheckout_BillingInfo_PostalCode_ <xsl:value-of</xsl:attribute> <xsl:attribute<xsl:value-of </xsl:attribute> > </input>
Add or edit shipping information. By default, it uses the billing address as the shipping address. From this screen, customers can:
When a logged-in customer adds a new address, it is stored with the account information in Ektron.
Select the type of shipping for products. Shipping methods that appear are defined in the Workarea > Settings > Commerce > Shipping > Methods. See also: Configuring Shipping Methods.
Displays information about products being purchased: their price, shipping charges, discounts, and taxes. At this point, if customers want to modify their cart, they click Edit your cart. For example, a customer wants to apply a coupon to the cart. For this link to work properly, add the path to the template that contains the Cart server controla server control uses API language to interact with the CMS and Framework UI to display the output. A server control can be dragged and dropped onto a Web form and then modified. to the Checkout server control’s
TemplateCart property.
On the Submit Order page, a customer enters payment information. The customer can pay by check, credit card, or PayPal, depending on what has been set up in the eCommerce Payment Options screen. When a customer enters the information, the customer clicks Submit. At that point, the charge is submitted to the payment gateway, and the order is posted in Ektron. See also: Processing Orders.
The Order Complete page displays a Thank You note, the order ID, and links to continue shopping or view an order’s history.
The Continue Shopping link appears when you add the path of a template that allows a customer to find products to the
TemplateShopping property. For example, you send them to a template containing a
ProductSearch or
ProductList server control.
The Order History link appears when you add the path of a template containing the OrderList server control to the
TemplateOrderHistory property.
This example demonstrates how to add a custom field to the Checkout control's Billing Info entry screen.
In the workarea/XSLT/Commerce/Checkout/ folder, look for the Checkout_BillingInfo.xsl file. In the file, locate
EktronCheckout_BillingInfo_Company_. Under the bulleted list item for company, add the following:
<li class="EktronCheckout_Required EktronCheckout_SerializableContainer"> <label class="EktronCheckout_Row_LeftContents"> <xsl:attribute EktronCheckout_BillingInfo_CustomField_<xsl:value-of </xsl:attribute> <span class="EktronCheckout_RequiredSymbol">*</span> A custom field </label> <input class="EktronCheckout_Company EktronCheckout_Row_RightContents"> <xsl:attribute EktronCheckout_BillingInfo_CustomField_<xsl:value-of </xsl:attribute> <xsl:attribute EktronCheckout_BillingInfo_CustomField_<xsl:value-of </xsl:attribute> <xsl:attributeABCXYZ</xsl:attribute> </input> </<li>
Important notes:
EktronCheckout_NotRequiredis also acceptable.
EktronCheckout_BillingInfois replaced with
param__internally. The last portion,
CustomField, is how the field will be referenced.
In the code-behind of the page with the Checkout control, you need the following imports:
using Ektron.Cms.Controls; using Ektron.Cms.Controls.ECommerce; using Ektron.Cms.Controls.ECommerce.Checkout;
Next, place the following into the Page_Init method:
cmsCheckout.FilterAjaxCallback += this.HandleFilterAjaxCallback;
Then implement the HandleFilterAjaxCallback method:
protected void HandleFilterAjaxCallback(object sender, FilterAjaxCallbackEventArgs e) { string callbackParameters = e.EventArgs; System.Collections.Specialized.NameValueCollection data = Ektron.Cms.Common.EkFunctions.ConvertToNameValueCollection(callbackParameters, true); string customFieldValue = data["param__customfield"]; }
Upon completing the Billing Info screen, the data is returned as part of the EventArgs, with the prefix
param__ (note the double underscore).
It can then be obtained directly by converting the string to a
NameValueCollection (or other construct). Developers are free to call their own methods from here.
The CurrencySelect server control lets a customer select the monetary type he will use to make purchases. When a currency is selected, eCommerce server controls use it for that order. If the customer closes the browser, the currency needs to be selected again the next time the site is visited.
For a currency to appear in the CurrencySelect server control, enable it in the Workarea > Settings > Commerce > Currencies screen. See also: Configuring Currencies.
<root><customXml>custom-xml-inserted-here</customXml></root>
By default, the control uses
siteroot\Workarea\Xslt\Commerce\CurrencySelect.xsl.
If the customer is not logged in and arrives at this control, one of the following happens:
RedirectUrlproperty is empty and the
web.configfile’s
<add key=”ek_RedirectToLoginURL” value=”” /> value is blank.
RedirectUrlproperty contains the path of a template that contains a Login server control or when the
web.configfile’s
<add key=”ek_RedirectToLoginURL” value =”” /> valueis set to the path of a template that contains a Login server control. If the control’s
RedirectUrlproperty and the
web.configfile contain different locations, the
RedirectUrlproperty in the control overrides the
web.configfile.
For information about the MyAccount server control properties, see MyAccount.
Typically, the information displayed in this control is collected the first time a customer goes through the checkout process. A customer’s information might appear in the Personal Information area before they have been through the checkout process if they are a current Ektron user or registered membership user.
If a logged-in customer arrives at this server control and is missing required information, it must be entered before continuing. For example, a customer needs to provide a Last Name or E-mail Address.
As a developer, you need to create a link to this control in a site location that appears only after log in. For example, you might place the link on a menu or on a master page that appears after a customer logs in.
A customer typically arrives at this control by clicking a link placed on a main page, master page, or menu. The following figure shows the flow of the MyAccount server control.
The Personal Information area lets customers view and edit some information contained in their profile. By default, a customer can edit their First Name, Last Name, email Address, and Password. By adding a list of Custom User Property IDs to the
CustomPropertyID property, you can let customers view and edit custom properties in their profile. See also: Creating Custom User Properties.
A customer can change the information in the Personal Information area by clicking Edit in the lower left corner of this area. The resulting page lets the person change any information in this area. When done, the customer clicks Save Changes to save the changes and return to the My Account page.
The Billing Address area lets customers view and edit their billing address information. A customer can edit their Name, Company, Address, City, State, Postal Code, Country, and Phone number. This information is used for billing purposes when the customer makes a purchase. Credit card companies typically want part or all of this information to match a customer’s credit card account information when making a purchase.
To edit a customer’s billing address, click Edit in the lower left corner of the Billing Address area.The resulting page let the person change any information in this area. When done, a customer clicks Save Changes to save the changes and return to the My Account page.
The Shipping Address area lets customers view, edit, delete or add a new address to their shipping address information. The fields in this area are the same as those in the Billing Address area with 2 exceptions: a Same as Billing check box, and a Default Address check box. The Same as Billing Address check box lets customers place a check mark in the box and use their billing address information as their shipping address. The Default Address check box lets customers specify the current shipping address as the default they want to use when making future purchases.
To edit a customer’s shipping address, click Edit in the lower left corner of the Shipping Address area. The resulting page let the person change any information in this area. Note that a customer must uncheck Same as Billing to be able to edit the shipping address. Otherwise, it is locked to the billing address. When done, a customers clicks Save Changes to save the changes and return to the My Account page.
To add a new shipping address, the customer clicks Add New Address located in the bottom center of the Shipping Address area.The resulting page lets the person enter new shipping address information. Note that the Same as Billing checkbox is not available when adding a new address. When done, a customer clicks Save Changes to save the changes and return to the My Account page.
To delete a shipping address, the customer clicks Delete Address at the bottom of the Shipping Address area. The visitor is asked to confirm. If the customer does and there are no other shipping addresses, the billing address is used as the shipping address. If the shipping address is the same as the billing address, you cannot delete it.
Customers also can reach this control through a link at the end of the Checkout process. The link appears when the Checkout server control’s
TemplateOrderHistory property contains the path to the template containing the OrderList server control. For information about the OrderList server control properties, see OrderList.
The following image depicts the flow of the OrderList Server Control.
When customers arrive at this server control, they see the View All Orders portion of the server control.
This part of the server control displays a list of submitted orders. Customers can view each order's date, confirmation number, and status. When the number of orders exceeds the number defined in the
MaxResults property, the list is paged and a user can navigate among pages with the links provided.
A customer can click the confirmation number to view the Order Details screen.
To navigate back to the list of all orders, customers click View All Orders.
TemplateProductproperty, the product’s title becomes a hyperlink.
This control handles each class of product Ektron provides such as Products, Kits, Bundles or Subscriptions without having to make any adjustments to the control. For information about the Product server control properties, see Product.
Customers typically reach this server control when they click a product from either the ProductSearch or ProductList server control. When customers clicks a product, title or image in either of these controls, they are forwarded to the Product server control.
In addition, customers can reach this control from the Cart server control. In that control, customers click a product’s title and are taken to the Product server control.
When a customer has viewed the product and decided to purchase it, they click Add to Cart in the control.
A product is an item that has no kit, bundle or subscription information associated with it.
When displaying a simple product, the Product server control displays the following information:
Templatecartproperty
ShowAddToCartproperty is set to True
When customers click this button, the product is added to their cart and they are sent to a template containing the Cart server control.
You can hide this button by setting the
ShowAddToCart property to false. This lets you show details of a product, but not offer it for sale. For example, you have a product that is no longer for sale, but you want to allow people who purchased the product to view its details.
Also, by using code-behind to dynamically set the property, you could create code that looks at your inventory system and hides the button depending on whether a product is in stock.
When a product has an alias path associated with it:
TemplateCartproperty’s path is relative to the site root. For example:
TemplateCart="Developer/Commerce/CartDemo.aspx"
protected void Page_Init(object sender, EventArgs e) { Utilities.RegisterBaseUrl(this.Page); }
A bundled product is made up of multiple products that have been grouped together for sale as one product.
See also: ActiveTopics.
When displaying a bundled product, the Product server control displays all information displayed in a Product as well as information about the individual components of the bundle.
The This Bundle Includes area includes products listed on the Items tab for a Product Bundle in the Workarea. A content editor adds existing products to this tab when creating the bundle.
Any products on the tab are displayed with the image, title and description for each product. A link to additional information about each product is also displayed.
A Complex Product lets the customer choose between variations of a product. For example, if your site sells books, variant selections might be Paperback or Electronic.
See also: ActiveTopics.
When displaying a Complex Product, the Product server control displays all of the information displayed in a Product in addition to information on product variants. The Variants area includes products listed on the Items tab. A content editor adds products to this tab when creating content.
Products on the Items tab are displayed with a radio button, image, title and description. A link to additional information about each product is also displayed. The radio buttons are used to select which product will be added to the cart.
A kit lets the customer select product options, which can affect the product’s price. There is no limit to the types of options you can add, nor to the number of items in an option. For example, a customer purchasing a computer can add RAM, a hard drive, and a larger monitor.
See also: ActiveTopics.
When displaying a kit, the Product server control displays all information displayed in a product in addition to options and subtotal information.
The Options area displays product options based on the Item tab for a kit in the Workarea. Options are divided into groups. A radio button, name and price appears for each item. The radio button lets you select one item from each group.
The Subtotal area shows the updated cost of the product with all options.
The ProductList server controla server control uses API language to interact with the CMS and Framework UI to display the output. A server control can be dragged and dropped onto a Web form and then modified. displays a list of products on a Web page. For information about the Product server control properties, see Product.
NOTE: Private catalog entries appear in display of the Product List server control only if the user is logged in and has at least Read-Only permissions for its catalog folder. See also: Making Content Private
You decide which products appear by selecting a
SourceType and populating either the
SourceId or the
IdList property, depending on the source type. You can choose from these source types.
SourceTypeproperty to Catalog, and enter the ID of the catalog in the
SourceIdproperty.
SourceTypeproperty to CatalogList, and enter a comma-separated list of catalog IDs in the
IdListproperty.
SourceTypeproperty to Taxonomy, and enter the ID of the Taxonomy in the
SourceIdproperty.
SourceTypeproperty to TaxonomyList, and enter a comma-separated list of taxonomy IDs in the
IdListproperty.
SourceTypeproperty to Collection, and enter the ID of the Collection in the
SourceIdproperty.
SourceTypeproperty to IdList, and enter a comma-separated list of product IDs in the
IdListproperty.
There are several ways customers might arrive at the ProductList server control, such as
The ProductList server control lets a customer sort by:
You can set the default sort order by setting the
SortMode property.
For the Highest Rated, Lowest Rated and Most Ratings sorting options to work as intended, a ContentReview server control should be associated with each product. This lets customers rate your products.
For example, place a ContentReview control on the Master page of the template that display products, and set its
DynamicParameter property to
ID. Then, when customers view the product, they can rate and comment on it.
Prerequisite
Cross sell or upsell products have been assigned to a catalog entry.
Typically, this control appears on a page along with a Product server control. By using this control in conjunction with the Product control, a customer can view the details of a product and also receive suggestions on additional purchases. A customer can click the title to view the suggested product to view its details. For information about the Recommendation server control properties, see Recommendation.
For example, if your site is selling a hat, mitten and scarf set, you might use this server control to cross-sell winter jackets. You could also use the control to up-sell a more expensive hat, mitten and scarf set, or a set that includes additional items.
See also: Recommending Related Products to a Customer.
The
RecommendationType property determines whether the Recommendation server control is used for up-selling or cross-selling.
Customers can add a product to their cart directly from the Recommendation server control by clicking Add to cart next to a product. This link appears below the price and allows them to skip the product’s information page and add the product directly to their cart.
By default, this button appears when the following conditions are met:
TemplateCartproperty has a cart’s template location defined.
In addition, if a product has an alias path associated with it, you need to:
TemplateCartproperty’s path is relative to the site root. For example:
TemplateCart="Developer/Commerce/CartDemo.aspx"
protected void Page_Init(object sender, EventArgs e) { Utilities.RegisterBaseUrl(this.Page); } | https://webhelp.episerver.com/Ektron/documentation/documentation/wwwroot/cms400/v9.10/Reference/Web/eCommerce/sc_eCommerce.htm | CC-MAIN-2020-34 | refinedweb | 4,842 | 55.54 |
Component Style
I think styling is one of the most interesting parts of the frontend stack lately. In trying to clearly outline a set of opinions about web development recently, I wanted to know if there was a “right” way to style components? Or, at least, given all the innovation in other parts of the stack (React, Angular2, Flux, module bundlers, etc. etc.), were there better tools and techniques for styling web pages? Turns out, the answer is, depending on who you ask, a resounding yes and a resounding no! But after surveying the landscape, I was drawn to one new approach in particular, called CSS Modules. Certain people seem to get extremely angry about this stuff, so I’m going to spend some time trying to explain myself.
First, some background on the component style debate.
If you wanted to identify an inflection point in the recent development of CSS thinking, you’d probably pick Christopher Chedeau’s “CSS in JS” talk from NationJS in November 2014. — Glen Maddern
Chedeau’s presentation struck a nerve. He articulated 7 core problems with CSS, 5 of which required extensive tooling to solve, and 2 of which could not be solved with the language as it exists now. We’ve always known these things about CSS — the global namespace, the tendency towards bloat, the lack of any kind of dependency system — but as other parts of the frontend stack move forward, these problems have started to stand out. CSS is, as Mark Dalgleish points out, “a model clearly designed in the age of documents, now struggling to offer a sane working environment for today’s modern web applications.”
There has been a strong backlash to these ideas, and specifically to the solution proposed by Chedeau: inline styles in JavaScript. Some argue that there’s nothing wrong with CSS; you’re just doing it wrong — learn to CSS, essentially.
These critics also say there are more problems with the inline style approach than with properly written CSS. They point to naming schemes like OOCSS, SUIT, BEM, and SMACSS — developers love acronyms — as a completely sufficient solution to the global namespace problem. Others argue that these problems with CSS are specific to the type of codebase found at Facebook, “a codebase with hundreds of developers that are committing code everyday and where most of them are not front-end developers.”
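As a sketch of what these naming conventions buy you, here is what a BEM-style rule set might look like (the class names are illustrative, not taken from any of these libraries). The convention simulates scoping by encoding the component hierarchy into the class name itself:

```css
/* Block: the standalone component */
.todo-list {
  margin: 0;
}

/* Element: a part of the block, joined with "__" */
.todo-list__item {
  padding: 5px;
}

/* Modifier: a variant or state, joined with "--" */
.todo-list__item--completed {
  color: gray;
  text-decoration: line-through;
}
```

Because every selector is a single flat class, nothing leaks into the global namespace by accident — but, as discussed below, nothing enforces the convention either.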
The Modern Separation of Concerns
I think the real fight here is over separation of concerns, and what that actually means. We’ve fought for a long time, and for good reason, to define the concerns of a web page as:
- Content / Semantics / Markup
- Style / Presentation
- Behavior / Interaction
These concerns naturally map to the three languages of the browser: HTML, CSS, and JavaScript.
However, React has accelerated us down a path of collapsing Content and Behavior into one concern, a unified concern that Keith J. Grant has explained eloquently.
“Inliners” tend to argue that Style is also a part of that overarching concern, which can maybe be called Interface. The thinking is that component state is often tightly coupled to style when dealing with interaction. When you take input from the user and update the DOM in some way, you are almost always changing the appearance to reflect that input. For example, in a TodoList application, when the user clicks a Todo and marks it as completed, you must update the style in some way to indicate that that Todo has that completed state, whether it is grayed or crossed out. In this way, style can reflect state just as much as markup.
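The coupling described above can be made concrete in a few lines of plain JavaScript (the function and class names here are illustrative): when the completed flag in component state flips, the class list — and therefore the style — must change with it.

```javascript
// Derive a class string from component state: presentation tracks state directly.
// "todo" / "todo--completed" are hypothetical BEM-style class names.
function todoClassName(todo) {
  return todo.completed ? "todo todo--completed" : "todo";
}

// Marking a todo complete changes state...
const todo = { text: "Buy milk", completed: false };
todo.completed = true;

// ...and the rendered class list must update to reflect it.
console.log(todoClassName(todo)); // → "todo todo--completed"
```

In a React component this string would feed straight into a `className` prop, which is exactly why inliners see state and style as one concern.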
I think there is actually a second set of concerns being talked about here. This second set of concerns is more from the application architecture point of view, and it would be something like:
- Business Logic
- Data Management
- Presentation
On the web platform, the first two are handled by JavaScript, and a lot of the impulse towards inline styles is a move to get the whole presentation concern in one language too. There’s something nice about that idea.
The debate, though, is about whether or not style is a separate concern. This is, essentially, a debate over how tightly coupled style, markup, and interaction are:
Far too often, developers write their CSS and HTML together, with selectors that mimic the structure of the DOM. If one changes, then the other must change with it: a tight coupling.
…
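To make the coupling concrete (these selectors are hypothetical), compare a rule that mirrors the DOM with one that does not. The first silently breaks the moment the markup is restructured; the second travels with the element:

```css
/* Tightly coupled: the selector encodes the DOM structure.
   Move the link out of the list and the style stops applying. */
.sidebar ul li a {
  color: #336699;
}

/* Loosely coupled: a single class, applied wherever the
   element ends up in the markup. */
.nav-link {
  color: #336699;
}
```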
Clear Fix
At this point, you should have a decent understanding of the different arguments over component style. I’m now going to outline my own opinions and highlight the approach that has emerged from them.
The foundational issue with CSS is fear of change
Recall the definition for tight coupling Grant offered:
You must have an intimate knowledge of both in order to safely make any substantive changes to either.
I think, ultimately, there are ways to organize and structure your CSS to minimize the coupling between markup and style. I think smart designers and developers have come up with approaches to styling — naming schemes, libraries like Atomic and Tachyons, and style guides — that empower them to make substantive changes to style and content in separation. And I respect the unease with the seemingly knee-jerk “Everything should be JavaScript!” reaction that seems to appear in every conversation lately.
That said, I like to minimize cognitive effort whenever possible. Things like naming strategies require a lot of discipline and “are hard to enforce and easy to break.” I have repeatedly seen that CSS codebases tend to grow and become unmanageable over time:
Smart people on great teams cede to the fact that they are afraid of their own CSS. You can’t just delete things as it’s so hard to know if it’s absolutely safe to do that. So, they don’t, they only add. I’ve seen a graph charting the size of production CSS over five years show that size grow steadily, despite the company’s focus on performance.
~ Chris Coyier
The biggest problem I see with styling is our inability to “make changes to our CSS with confidence that we’re not accidentally affecting elements elsewhere in the page.” Jonathan Snook, creator of the aforementioned SMACSS, has described exactly that isolation — being able to change a component’s styles without worrying about the rest of the page — as the goal of this component-based approach.
I want to get rid of that fear of change that seems to inevitably emerge over time as a codebase matures.
JavaScript is not a substitute for CSS
There are a lot of very good arguments for sticking with CSS. It’s well-supported by browsers; it caches on the client; you can hire for it; many people know it really well; and it has things like media queries and hover states built in so you’re not re-inventing the wheel.
JavaScript is not going to replace CSS on the web — it is and will remain a niche option for a special use case. Adopting a styles-in-JavaScript solution limits the pool of potential contributors to your application. If you are working on an all-developer team on a project with massive scalability requirements (a.k.a. Facebook), it might make sense. But, in general, I am uncomfortable leaving CSS behind.
A Stylish Compromise
I’ve found CSS Modules to be an elegant solution that solves the problems I’ve seen with component styling while keeping the good parts of CSS. I think it brings just enough styling into JavaScript to address that fear of change, while allowing you to write familiar CSS the rest of the time. It does this by tightly coupling CSS classes to a component. Instead of keeping everything in CSS, e.g.
/* component.css */
.component-default {
padding: 5px;
}
/* widget.css */
.widget-component-default {
color: black;
}
// widget.js
<div className="component-default widget-component-default"></div>
or putting everything in JavaScript, e.g.
// widget.js
var styles = {
color: 'black',
padding: 5,
};
<div style={styles}></div>
it puts the CSS classes in JavaScript, e.g.
/* component.css */
.default {
padding: 5px;
}
/* widget.css */
.default {
composes: default from 'component.css';
color: black;
}
// widget.js
import classNames from 'widget.css';
<div className={classNames.default}></div>
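For the import above to work, your module bundler has to be taught to treat .css files as modules. A hypothetical webpack setup is sketched below; `css-loader` with `modules: true` is its documented way of enabling CSS Modules, but the exact loader chain and option shape here are assumptions, not something prescribed by the article:

```javascript
// Hypothetical webpack rule enabling CSS Modules.
// css-loader's "modules" option is real; the surrounding setup is an assumption.
const cssModulesRule = {
  test: /\.css$/,
  use: [
    'style-loader', // injects the generated CSS into the page at runtime
    {
      loader: 'css-loader',
      options: { modules: true }, // rewrites class names into locally scoped ones
    },
  ],
};

module.exports = { module: { rules: [cssModulesRule] } };
```

With a rule like this in place, `import classNames from 'widget.css'` hands your component an object mapping local class names to their generated, globally unique equivalents.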
Moving CSS classes into JavaScript like this gets you a lot of nice things that might not be immediately apparent:
- You only include CSS classes that are actually used because your build system / module bundler can statically analyze your style dependencies.
- You don’t worry about the global namespace. The class “default” in the example above is turned into something guaranteed to be globally unique. It treats CSS files as isolated namespaces.
- You can still create re-usable CSS via the composition feature. Composition seems to be the preferred way to think about combining components, objects, and functions lately — it makes a lot of sense for combining styles too, and encourages modular but readable and maintainable CSS.
- You are more confident of exactly where given styles apply. With this approach, CSS classes are tightly coupled to components. By making CSS classes effortless to come up with, it discourages more roundabout selectors (e.g. .profile-list > ul ~ h1 or whatever). All of this makes it easier to search for and identify the places in your code that use a given set of styles.
- You are still just writing CSS most of the time! The only two unfamiliar things to CSS people are 1) the syntax for applying classes to a given element, which is pretty simple and intuitive, and 2) the composition feature, which will probably feel familiar to people who’ve been using a preprocessor like SASS anyway.
I’ve found that I can focus more on the specific problem I’m solving — styling a component — and I don’t have to worry about unrelated things like whether a CSS class conflicts with another one somewhere in another part of the app that I’m not thinking about right now. My CSS is effortlessly simple, readable, and concise.
CSS Modules is still a nascent technology, so you should probably exercise all the usual caution you would around things like that. For example, I found this behavior/bug to be particularly frustrating, and I expect I’ll run into more things like that. But I see them as rough edges on a very sound and promising model for styling components.
A lot of the most exciting stuff with CSS Modules directly relates to module bundling and the modern approach to builds — a topic for a future post perhaps.
I obviously read, linked, and quoted articles and talks by many other folks here. Thanks to Glen Maddern, Christopher Chedeau, Mark Dalgleish, Keith J. Grant, Michael Chan, Chris Coyier, Joey Baker, Jeremy Keith, Eric Elliott, Jonathan Snook, Harry Roberts, and everyone else who is thinking about this stuff in thoughtful and caring ways!