text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Details
- Type:
Improvement
- Status: Resolved
- Priority:
Major
- Resolution: Fixed
- Affects Version/s: None
-
- Component/s: Qpid Dispatch
- Labels:None
Description
Currently the package name is qpid, which conflicts with the public qpid Python package library. A better idea would be to rename it qpiddx.
Activity
- All
- Work Log
- History
- Activity
- Transitions
Commit 1541589 from Darryl L. Pierce in branch 'dispatch/trunk'
[ ]
QPID-5338: Added ChangeLog entry
Commit 1541931 from Darryl L. Pierce in branch 'dispatch/trunk'
[ ]
QPID-5338: Removed the dispatch package from under qpiddx
The package was redundant.
The C code is under include/qpid/dispatch/. Wouldn't the python analog be "qpid_dispatch"? For me, that also more sensibly matches the pattern with qpid_messaging.
By the way, you left a bunch of empty directories in the subversion tree.
The package naming convention discourages using underscores [1]. That said, the name for the package IMO so much about the directory tree for the sources as it is a namespace to avoid collisions with other Python code.
Sorry about the empty directories, it's a git->svn glitch that I'll clean that up shortly.
[1]
Commit 1542364 from Darryl L. Pierce in branch 'dispatch/trunk'
[ ]
QPID-5338: Removing empty directories left behind by git.
Yes, I know about that naming convention. I mentioned it to you before you chose qpid*_*messaging for the python binding. At any rate, the guideline does in the end allow for underscores in the name. We should stick to the pattern we have.
It's not just a namespace. We could have qpidbananas and qpidclowns if we just wanted to avoid a collision. Let's save our users having to remember one more needless difference.
You need to remove qpiddx/dispatch as well.
I'm fine then with whatever's decided.
Commit 1541585 from Darryl L. Pierce in branch 'dispatch/trunk'
[ ]
QPID-5338: Renamed the top Python package to qpiddx.
This fixes the collision between the embedded library's name and the
public Qpid Python libraries. | https://issues.apache.org/jira/browse/QPID-5338 | CC-MAIN-2015-27 | refinedweb | 332 | 67.96 |
Revision history for Perl extension JSON::XS TODO: maybe detetc and croak on more invalid inputs (e.g. +-inf/nan) TODO: maybe avoid the reblessing and better support readonly objects. TODO: compression 3.01 Tue Oct 29 16:55:15 CET 2013 - backport to perls < 5.18 (reported by Paul Howarth). 3.0 Tue Oct 29 01:35:37 CET 2013 -.. 2.32 Thu Aug 11 19:06:38 CEST 2011 - fix a bug in the initial whitespace accumulation. 2.31 Wed Jul 27 17:53:05 CEST 2011 - (reported and analyzed by Goro Fuji). 2.3 Wed Aug 18 01:26:47 CEST 2010 -. 2.29 Wed Mar 17 02:39:12 CET 2010 -. 2.24 Sat May 30 08:25:45 CEST 2009 - the incremental parser did not update its parse offset pointer correctly when parsing utf8-strings (nicely debugged by Martin Evans). - appending a non-utf8-string to the incremental parser in utf8 mode failed to upgrade the string. - wording of parse error messages has been improved. 2.232 Sun Feb 22 11:12:25 CET 2009 - use an exponential algorithm to extend strings, to help platforms with bad or abysmal==windows memory allocater performance, at the expense of some memory wastage (use shrink to recover this extra memory). (nicely analysed by Dmitry Karasik). 2.2311 Thu Feb 19 02:12:54 CET 2009 - add a section "JSON and ECMAscript" to explain some incompatibilities between the two (problem was noted by various people). - add t/20_faihu.t. 2.231 Thu Nov 20 04:59:08 CET 2008 - work around 5.10.0 magic bugs where manipulating magic values (such as $1) would permanently damage them as perl would ignore the magicalness, by making a full copy of the string, reported by Dmitry Karasik. - work around spurious warnings under older perl 5.8's. 2.23 Mon Sep 29 05:08:29 CEST 2008 - fix a compilation problem when perl is not using char * as, well, char *. - use PL_hexdigit in favour of rolling our own. 2.2222 Sun Jul 20 18:49:00 CEST 2008 - same game again, broken 5.10 finds yet another assertion failure, and the workaround causes additional runtime warnings. Work around the next assertion AND the warning. 5.10 seriously needs to adjust it's attitude against working code. 2.222 Sat Jul 19 06:15:34 CEST 2008 - you work around one -DDEBUGGING assertion bug in perl 5.10 just to hit the next one. work around this one, too. 2.22 Tue Jul 15 13:26:51 CEST 2008 - allow higher nesting levels in incremental parser. - error out earlier in some cases in the incremental parser (as suggested by Yuval Kogman). - improve incr-parser test (Yuval Kogman). 2.21 Tue Jun 3 08:43:23 CEST 2008 - (hopefully) work around a perl 5.10 bug with -DDEBUGGING. - remove the experimental status of the incremental parser interface. - move =encoding around again, to avoid bugs with search.cpan.org. when can we finally have utf-8 in pod??? - add ->incr_reset method. 2.2 Wed Apr 16 20:37:25 CEST 2008 - lifted the log2 rounding restriction of max_depth and max_size. - make booleans mutable by creating a copy instead of handing out the same scalar (reported by pasha sadri). - added support for incremental json parsing (still EXPERIMENTAL). - implemented and added a json_xs command line utility that can convert from/to a number of serialisation formats - tell me if you need more. - implement allow_unknown/get_allow_unknown methods. - fixed documentation of max_depth w.r.t. higher and equal. - moved down =encoding directive a bit, too much breaks if it's the first pod directive :/. - removed documentation section on other modules, it became somewhat outdated and is nowadays mostly of historical interest. 2.1 Wed Mar 19 23:23:18 CET 2008 - update documentation here and there: add a large section about utf8/latin1/ascii flags, add a security consideration and extend and clarify the JSON and YAML section. - medium speed enhancements when encoding/decoding non-ascii chars. - minor speedup in number encoding case. - extend and clarify the section on incompatibilities between YAML and JSON. - switch to static inline from just inline when using gcc. - add =encoding utf-8 to the manpage, now that perl 5.10 supports it. - fix some issues with UV to JSON conversion of unknown impact. - published the yahoo locals search result used in benchmarks as the original url changes so comparison is impossible. 2.01 Wed Dec 5 11:40:28 CET 2007 - INCOMPATIBLE API CHANGE: to_json and from_json have been renamed to encode_json/decode_json for JSON.pm compatibility. The old functions croak and might be replaced by JSON.pm comaptible versions in some later release. 2.0 Tue Dec 4 11:30:46 CET 2007 - this is supposed to be the first version of JSON::XS compatible with version 2.0+ of the JSON module. Using the JSON module as frontend to JSON::XS should be as fast as using JSON::XS directly, so consider using it instead. - added get_* methods for all "simple" options. - make JSON::XS subclassable. 1.53 Tue Nov 13 23:58:33 CET 2007 - minor doc clarifications. - fixed many doc typos (patch by Thomas L. Shinnick). 1.52 Mon Oct 15 03:22:06 CEST 2007 - remove =encoding pod directive again, it confuses too many pod parsers :/. 1.51 Sat Oct 13 03:55:56 CEST 2007 - encode empty arrays/hashes in a compact way when pretty is enabled. - apparently JSON::XS was used to find some bugs in the JSON_checker testsuite, so add (the corrected) JSON_checker tests to the testsuite. - quite a bit of doc updates/extension. - require 5.8.2, as this seems to be the first unicode-stable version. 1.5 Tue Aug 28 04:05:38 CEST 2007 - add support for tied hashes, based on ideas and testcase by Marcus Holland-Moritz. - implemented relaxed parsing mode where some extensions are being accepted. generation is still JSON-only. 1.44 Wed Aug 22 01:02:44 CEST 2007 - very experimental process-emulation support, slowing everything down. the horribly broken perl threads are still not supported - YMMV. 1.43 Thu Jul 26 13:26:37 CEST 2007 - convert big json numbers exclusively consisting of digits to NV only when there is no loss of precision, otherwise to string. 1.42 Tue Jul 24 00:51:18 CEST 2007 - fix a crash caused by not handling missing array elements (report and testcase by Jay Kuri). 1.41 Tue Jul 10 18:21:44 CEST 2007 - fix compilation with NDEBUG (assert side-effect), affects convert_blessed only. - fix a bug in decode filters calling ENTER; SAVETMPS; one time too often. - catch a typical error in TO_JSON methods. - antique-ised XS.xs again to work with outdated C compilers (windows...). 1.4 Mon Jul 2 10:06:30 CEST 2007 - add convert_blessed setting. - encode did not catch all blessed objects, encoding their contents in most cases. This has been fixed by introducing the allow_blessed setting. - added filter_json_object and filter_json_single_key_object settings that specify a callback to be called when all/specific json objects are encountered. - assume that most object keys are simple ascii words and optimise this case, penalising the general case. This can speed up decoding by 30% in typical cases and gives a smaller and faster perl hash. - implemented simpleminded, optional resource size checking in decode_json. - remove objToJson/jsonToObj aliases, as the next version of JSON will not have them either. - bit the bullet and converted the very simple json object into a more complex one. - work around a bug where perl wrongly claims an integer is not an integer. - unbundle JSON::XS::Boolean into own pm file so Storable and similar modules can resolve the overloading when thawing. 1.3 Sun Jun 24 01:55:02 CEST 2007 - make JSON::XS::true and false special overloaded objects and return those instead of 1 and 0 for those json atoms (JSON::PP compatibility is NOT achieved yet). - add JSON::XS::is_bool predicate to test for those special values. - add a reference to. - removed require 5.8.8 again, it is just not very expert-friendly. Also try to be more compatible with slightly older versions, which are not recommended (because they are buggy). 1.24 Mon Jun 11 05:40:49 CEST 2007 - added informative section on JSON-as-YAML. - get rid of some c99-isms again. - localise dec->cur in decode_str, speeding up string decoding considerably (>15% on my amd64 + gcc). - increased SHORT_STRING_LEN to 16kb: stack space is usually plenty, and this actually saves memory when !shrinking as short strings will fit perfectly. 1.23 Wed Jun 6 20:13:06 CEST 2007 - greatly improved small integer encoding and decoding speed. - implement a number of µ-optimisations. - updated benchmarks. 1.22 Thu May 24 00:07:25 CEST 2007 - require 5.8.8 explicitly as older perls do not seem to offer the required macros. - possibly made it compile on so-called C compilers by microsoft. 1.21 Wed May 9 18:40:32 CEST 2007 - character offset reported for trailing garbage was random. 1.2 Wed May 9 18:35:01 CEST 2007 - decode did not work with magical scalars (doh!). - added latin1 flag to produce JSON texts in the latin1 subset of unicode. - flag trailing garbage as error. - new decode_prefix method that returns the number of characters consumed by a decode. - max octets/char in perls UTF-X is actually 13, not 11, as pointed out by Glenn Linderman. - fixed typoe reported by YAMASHINA Hio. 1.11 Mon Apr 9 07:05:49 CEST 2007 - properly 0-terminate sv's returned by encode to help C libraries that expect that 0 to be there. - partially "port" JSON from C to microsofts fucking broken pseudo-C. They should be burned to the ground for pissing on standards. And I should be stoned for even trying to support this filthy excuse for a c compiler. 1.1 Wed Apr 4 01:45:00 CEST 2007 - clarify documentation (pointed out by Quinn Weaver). - decode_utf8 sometimes did not correctly flag errors, leading to segfaults. - further reduced default nesting depth to 512 due to the test failure by that anonymous "chris" whose e-mail address seems to be impossible to get. Tests on other freebsd systems indicate that this is likely a problem in his/her configuration and not this module. - renamed json => JSON in error messages. - corrected the character offset in some error messages. 1.01 Sat Mar 31 16:15:40 CEST 2007 - do not segfault when from_json/decode gets passed a non-string object (reported by Florian Ragwitz). This has no effect on normal operation. 1.0 Thu Mar 29 04:43:34 CEST 2007 - the long awaited (by me) 1.0 version. - add \0 (JSON::XS::false) and \1 (JSON::XS::true) mappings to JSON true and false. - add some more notes to shrink, as suggested by Alex Efros. - improve testsuite. - halve the default nesting depth limit, to hopefully make it work on Freebsd (unfortunately, the cpan tester did not send me his report, so I cannot ask about the stack limit on fbsd). 0.8 Mon Mar 26 00:10:48 CEST 2007 - fix a memleak when decoding hashes. - export jsonToBj and objToJson as aliases to to_json and from_json, to reduce incompatibilities between JSON/JSON::PC and JSON::XS. (experimental). - implement a maximum nesting depth for both en- and de-coding. - added a security considerations sections. 0.7 Sun Mar 25 01:46:30 CET 2007 - code cleanup. - fix a memory overflow bug when indenting. - pretty-printing now up to 15% faster. - improve decoding speed of strings by up to 50% by specialcasing short strings. - further decoding speedups for strings using lots of \u escapes. - improve utf8 decoding speed for U+80 .. U+7FF. 0.5 Sat Mar 24 20:41:51 CET 2007 - added the UTF-16 encoding example hinted at in previous versions. - minor documentation fixes. - fix a bug in and optimise canonicalising fastpath (reported by Craig Manley). - remove a subtest that breaks with bleadperl (reported by Andreas König). 0.31 Sat Mar 24 02:14:34 CET 2007 - documentation updates. - do some casting to hopefully fix Andreas' problem. - nuke bogus json rpc stuff. 0.3 Fri Mar 23 19:33:21 CET 2007 - remove spurious PApp::Util reference (John McNamara). - adapted lots of tests from other json modules (idea by Chris Carline). - documented mapping from json to perl and vice versa. - improved the documentation by adding more examples. - added short escaping forms, reducing the created json texts a bit. - added shrink flag. - when flag methods are called without enable argument they will by default enable their flag. - considerably improved string encoding speed (at least with gcc 4). - added a test that covers lots of different characters. - clarified some error messages. - error messages now use correct character offset with F_UTF8. - improve the "no bytes" and "no warnings" hacks in case the called functions do... stuff. - croak when encoding to ascii and an out-of-range (non-unicode) codepoint is encountered. 0.2 Fri Mar 23 00:23:34 CET 2007 - the "could not sleep without debugging release". it should basically work now, with many bugs as no production tests have been run yet. - added more testcases. - the expected shitload of bugfixes. - handle utf8 flag correctly in decode. - fix segfault in decoder. - utf8n_to_uvuni sets retlen to -1, but retlen is an unsigned types (argh). - fix decoding of utf-8 strings. - improved error diagnostics. - fix decoding of 'null'. - fix parsing of empty array/hashes - silence warnings when we prepare the croak message. 0.1 Thu Mar 22 22:13:43 CET 2007 - first release, very untested, basically just to claim the namespace. 0.01 Thu Mar 22 06:08:12 CET 2007 - original version; cloned from Convert-Scalar | https://metacpan.org/changes/distribution/JSON-XS | CC-MAIN-2015-18 | refinedweb | 2,302 | 67.96 |
Detecting traffic
In order to have a chance at winning a race the YetiBorgs need to be able to do more than simply follow the track.
What they have to try and do is overtake the competition.
Actually doing the overtake itself is fairly simple, drive to the left or right of the robot in front.
The tricky part is realising there is a robot in front of us to overtake.
Say we have some robots ahead of us, like this:
The processing we have for identifying the lanes sees them, but they are black like the walls are:
One way to solve this would be to determine the shape of the white areas above, but this has problems:
- Typically this kind of processing takes a fair amount of CPU time
- Difficult to identify two robots obscuring each other
- Needs to be able to see robots facing in different directions
- Identifying robots which are only partially in shot is tricky
So what happens if we just do our normal line matching from this image:
There is actually a fair amount of confusion going on.
First there are points for the "inside wall" marked between the robot tyre and the green lane.
Second there are a large number of grey points in the image.
The grey points are actually quite interesting.
These are the points in the image where we see a colour boundary, but it is not one we expect to find.
For example a blue lane against a wall, or a wall to the right of a red lane.
We get these "error" points in almost all images:
converted to lanes:
converted to points:
These "error" points occur for a number of reasons:
- When the track is nearly horizontal in the image
- At the edge of the image where the next lane is not visible
- Slight imperfections in the track itself
- Lighting differences causing mistakes
Having these is not a problem, usually we simply ignore them when doing the processing.
The difference is how many of these "error" points we see with a robot in front.
What we can do is count how many of these points we see in each image.
If there are enough then there is a robot / obstacle that needs to be driven around.
Not enough then we can keep going as we are.
We can then throw the unneeded points away and continue with the normal processing code.
The only question remaining is do we move to the left, or to the right.
Looking at the points again we can see that there are more on a closer robot:
What we can do is take the average of where all the "error" points are along the X axis (left ↔ right) of the image.
If they are generally on the left we move to the right, otherwise we move to the left instead.
We can work this out using
numpy to do the averaging for us:
import numpy imageCentreX = imageWidth / 2.0 # At some point 'others' is loaded with the error points # in the format [[X, Y], [X, Y], ..., [X, Y]] if not overtaking: # Check if we need to overtake a robot in front errorPointCount = len(others) if errorPointCount > errorPointThreshold # Robot detected, decide if we should overtake to the left or right overtaking = True errorPointAverageX = numpy.array(others)[:,0].mean() if errorPointAverageX < imageCentreX: # Robot to the left, overtake to the right # ... else: # Robot to the right, overtake to the left # ... else: # Keep overtaking until we think we passed the robot # ...
Now our YetiBorg knows there is one or more robots ahead and which side it should try and overtake on.
Add new comment | https://www.formulapi.com/blog/detect-traffic | CC-MAIN-2020-29 | refinedweb | 608 | 65.76 |
On 8/16/21 4:59 PM, Tomasz Kramkowski via Grub-devel wrote:
20def1a3c introduced support for file modification times to allow comparison of file ages on EFI systems. This patch used grub_datetime2unixtime which uses a 32 bit unix timestamp and as a result did not allow the full range of times that FAT timestamps do. In some situations a file with a timestamp of 1970-01-01 gets transferred to a FAT partition, the timestamp ends up as 2098-01-01 because of FAT's use of the 1980-01-01 DOS epoch and lack of negative timestamps. Since 2098 is after 2038, this date cannot fit in a 32 bit timestamp. Ideally grub should use 64 bit timestamps but I have not investigated what kind of work would be required to support this.
Field mtime of struct grub_dirhook_info is already 64bit. See commit 81f1962393f4 ("fs: Use 64-bit type for filesystem timestamp"). Function grub_datetime2unixtime needs to be fixed: The 2037 check should be removed and the code has to be adjusted to correctly treat year 2100. grub_unixtime2datetime already handles 64bit timestamps but ignores that year 2100 is not a leap year: grub_unixtime2datetime(4107585600) = 2100-02-29 12:00:00
This fixes bug #60565. Reported-by: Naïm Favier <n+grub@monade.li> Tested-by: Naïm Favier <n+grub@monade.li> Signed-off-by: Tomasz Kramkowski <tk@the-tk.com> --- grub-core/fs/fat.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/grub-core/fs/fat.c b/grub-core/fs/fat.c index dd82e4ee3..34589d7db 100644 --- a/grub-core/fs/fat.c +++ b/grub-core/fs/fat.c @@ -1020,16 +1020,17 @@ grub_fat_dir (grub_device_t device, const char *path, grub_fs_dir_hook_t hook, info.mtimeset = grub_exfat_timestamp (grub_le_to_cpu32 (ctxt.entry.type_specific.file.m_time), ctxt.entry.type_specific.file.m_time_tenth, &info.mtime); + if (info.mtimeset == 0) + grub_dprintf("exfat", "invalid modification timestamp for %s\n", path);
According to the commit message there is nothing invalid about the modification timestamp. It is a GRUB deficiency that it cannot handle the same date range as FAT and exFAT. So the message could be: "GRUB cannot handle the modification timestamp for %s".
#else if (ctxt.dir.attr & GRUB_FAT_ATTR_VOLUME_ID) continue; info.mtimeset = grub_fat_timestamp (grub_le_to_cpu16 (ctxt.dir.w_time), grub_le_to_cpu16 (ctxt.dir.w_date), &info.mtime); -#endif if (info.mtimeset == 0) - grub_error (GRUB_ERR_OUT_OF_RANGE, - "invalid modification timestamp for %s", path); + grub_dprintf("fat", "invalid modification timestamp for %s\n", path);
I suggest to use grub_error() for both messages. Best regards Heinrich
+#endif if (hook (ctxt.filename, &info, hook_data)) break; | https://lists.gnu.org/archive/html/grub-devel/2021-08/msg00108.html | CC-MAIN-2022-27 | refinedweb | 418 | 51.24 |
This as many of my blogs, is another DBA task and typically isn’t required in a user released reporting situation. I have used it a few times for very specialized reports that the user community runs, so it’s possible that you may also be able to use it there. One thing I stress is the use of unsafe assembly in this write and the examples I just put together in order to write the blog for everyone. Security on assemblys might be a good follow up blog.
Two things I’m going to show you today. First is how to scan the network for instances using a SQLCLR UDF. The second is how to use that listing result set in a report so you can quickly run one report like performance monitoring reports for several instances and databases. This basically by making use of dynamic data sources by means of parameters in the connection strings.
This does two major things for you right away.
1) It removes uneccessary clutter in creating folders of identical reports pointing to different data sources.
2) It makes backing these reports up much easier and much easier to maintain changes
So the first taks is scanning the network for SQL Server instances
This can be done with the following SQLCLR UDF.
using System; using System.Data; using System.Data.SqlClient; using System.Data.SqlTypes; using Microsoft.SqlServer.Server; using System.Collections; public partial class UserDefinedFunctions { [SqlFunction(FillRowMethodName = "FillRow",TableDefinition = "InstanceName nvarchar(500)", DataAccess=DataAccessKind.Read)] public static IEnumerable InstanceFinder() { System.Data.Sql.SqlDataSourceEnumerator instance = System.Data.Sql.SqlDataSourceEnumerator.Instance; System.Data.DataTable dt = instance.GetDataSources(); return dt.Rows; } public static void FillRow(Object obj, out SqlString InstanceName) { DataRow r = (DataRow)obj; InstanceName = new SqlString(r[0].ToString()); } };
Create this and deploy it to your DBA restricted database. As most of my readers know that is always a secured database named DBA (go figure!).
Now call the UDF to search the network for SQL Servers.
SELECT * FROM InstanceFinder();
Now to use that in SSRS we have to do a few things. Get a project ready and name it “DBAs Rule”. Add a shared data source named DBA. This points to the UDF we’re going to use. Next add a blank report named, “SQL DB Check.rdl”
Create a new dataset named “ServerListing” like below
Save that by hitting OK and now let’s create a parameter that our dataset will populate. Name the parameter “ServerName”. Select “From Query” for available values and find ServerListing in the list. select the only value available for “Value field” and “Label field”.
Now back in the Data tab create another dataset. Name this one DBListing. We need a new data source now sense our goal here is to connect to whatever SQL Server the UDF finds. So click the drop down for Data Source and hit create new. Name this DS NoInitialCatalog. That’s as meaningful of a name as we can get as that is the key to how we do this. To create this type of connection string all we do is specify the Data Source itself without a initial catalog. For the connection string we need to use our parameter for the server as well. All we do is add it in there as an expression to accomplish that.
like so…
=“Data Source=” & Parameters!ServerName.Value
and should appear like this in the prompts…
Save all of this and then in the text for the dataset use
Select [Name] From sys.databases
This step is import so don’t forget to do it. In order for SSRS to know what you are returning from the query on NoIntialCatalog, you need to define it sense validation is out of it’s control here and the column will not prefill for you. So in the Fields tab go ahead and enter a field name, “Name” with a type of databases field and value of Name.
Now to actually show this in action we need something on the report so drag a table over and remove the extra columns but the first. Drag over the “Name” from the dataset, “DBListing” and preview the report.
The report is obviously going to be slow loading. You’re scanning the network and we all know how annoying hitting that drop down in SSMS and selecting browse network is. To speed this up I actually modified it in most of my DBA related reports. I used my scans that I talk about here and use a simple query over the real-time scan. The scan on load is handy and in a few reports that I do searches I still use it but for speed and security, I use the results from my scan SSIS to run most of them.
So after that runs you should end up with the list, select a instance and hit View Report. After that your database listing should return.
To go farther, you linked the results of the database listing to another parameter like the instance listing. Then run your analysis off of that per database. | https://blogs.lessthandot.com/index.php/datamgmt/datadesign/one-report-for-many-instances-dyanmicall/ | CC-MAIN-2019-30 | refinedweb | 851 | 66.03 |
Timing computations
From HaskellWiki
Revision as of 03:14, 27 January 2010 by DonStewart (Talk | contribs)
Timing an IO computation -- very basic approach. For a full featured, statistically sound benchmarking system, see the criterion package. main = do putStrLn "Starting..." time $ product [1..10000] `seq` return () putStrLn "Done."
And running this.
$ runhaskell A.hs Starting... Computation time: 1.141 sec Done.
See also Timing out computations and Timing computation in cycles.
Timing a pure computation:
import Text.Printf import Control.Exception import System.CPUTime import Control.Parallel.Strategies import Control.Monad import System.Environment lim :: Int lim = 10^6 time :: (Num t, NFData t) => t -> IO () time y = do start <- getCPUTime replicateM_ lim $ do x <- evaluate $ 1 + y rnf x `seq` return () end <- getCPUTime let diff = (fromIntegral (end - start)) / (10^12) printf "Computation time: %0.9f sec\n" (diff :: Double) printf "Individual time: %0.9f sec\n" (diff / fromIntegral lim :: Double) return () main = do [n] <- getArgs let y = read n putStrLn "Starting..." time (y :: Int) putStrLn "Done." | https://wiki.haskell.org/index.php?title=Timing_computations&oldid=33329 | CC-MAIN-2017-17 | refinedweb | 166 | 55.1 |
Help:Toolforge/Developing successful tools
This page provides tips to help you develop successful tools on Toolforge. If you have useful advice, please share.
Pick a license
All code in the ‘tools’ project must be published under an OSI approved open source license. Please add a license at the beginning!
A clear license is very important. It explains the rights that you are willing to grant to others who want to use or modify the software you built. Based on the general principles of the Wikimedia movement.
To learn more about choosing a license for Wikimedia see Wikimedia movement.
The two easiest options for your licenses. is a good explanation of differences between many Free and Open Source licenses. Be aware the some of the licenses described there are not OSI approved however, so make sure to check against the OSI list before using a license for your project.
How to add your license to your source code
License your source code and document that with a LICENSE or COPYING file in the tool's home directory and header comments in the source code.
Publish the code)
Have co-maintainers
Find co-maintainers for your tools who can help out at least with starting/stopping jobs when needed.
Write some docs
Document your tool. This is essential for others to know how it functions and to help maintain it into the future.
Create a page in the
Tool: namespace documenting the basics of what your tool does and how to start and stop it.
Going beyond
Operating in the open is essential to the success of Toolforge projects.
Planning for success
There are many things to think about when you are planning to build a tool.
Have the accounts you need
Follow the Toolforge quickstart guide to make sure that you have all of the accounts and logins you need to begin developing tools with Toolforge.
Secure passwords and other credentials
Keep passwords and other credentials (OAuth secrets, etc) separated from the main application code so that they are not exposed publicly in your version control system of choice.
Stay small
Make many small tools that each do one specific task rather than a catch-all tool that does many different tasks.
Pick the right development environment
If you will be doing heavy processing (e.g., compiles or tool test runs), please use the development environment (dev.toolforge.org) instead of the primary login host (login.toolforge.org) so as to help maintain the interactive performance of the primary login host.
The dev.toolforge.org host is functionally identical to login.toolforge.org
Even when running on dev.toolforge.org, your processes may be killed without notice by system administrators or automated watch processes if performance of the shared server is severely impacted. Using the job grid is recommended for any heavy processing.
Determine which public version control you will use
You'll want to use public version control for your tool. You can learn more about how to use version control with Toolforge here: Help:Toolforge/Version Control in Toolforge.
Pick a programming language | https://wikitech.wikimedia.org/wiki/Help:Toolforge/Developing_successful_tools | CC-MAIN-2022-33 | refinedweb | 515 | 64.3 |
Hello. In this series, I used Ruby and Sinatra multiple times.
--Send the ISBN code on the back of the book to search and display the image of the book --You can record the book and refer to it later.
I would like to make a LINE Bot called "Honmemo!"
In this article, we will create a LINE Bot that returns parrots.
The flow of the program is as follows. (** Service ** is the program created this time)
First, let's create a template for the project. If you already have a template or want to incorporate it into your existing code, proceed to the next chapter.
Terminal
$ bundle init
Generate a Gemfile with the
bundle init command.
And let's add the following gem.
Gemfile
#Omission gem "sinatra" gem "sinatra-contrib" gem "dotenv" gem "line-bot-api"
After completing the description, install it with the
bundle command.
Terminal
$ bundle
Next, we will create the core file. This time, the file name is ʻapp.rb`.
app.rb
require 'bundler/setup' Bundler.require require 'sinatra/reloader' if development? get '/' do "Hello world!" end
Now, let's run it and test its operation.
Terminal
$ ruby app.rb -o 0.0.0.0
Enter the command to start the program. Sinatra's default port number is 4567, so let's go to http: // localhost: 4567 /. If "Hello world!" Is displayed, Sinatra's environment construction is complete! (The port numbers in the screenshots are different because you are biting Docker, so don't worry too much> <) Go to and log in. If you normally use LINE, you can log in with a QR code.
When you log in, you should see a screen like the one below. For the time being, you can change the language from the button at the bottom right, so it's easier to use Japanese.
Whenever you develop a LINE Bot, the words "provider" and "channel" come up. These can be roughly explained
--Provider: Developer account --Channel: Bot account
It means that · · · Providers are created in units such as companies, individuals, and groups of developers, and channels belong to one provider.
So let's create a provider first. When you click "Create new provider", a screen for entering the provider name will appear, so enter the provider name you like.
The provider name you enter here will be ** published ** as the author of the bot. You should write down your real name. Press the create button to create the provider.
Next, let's create a channel that will be a bot account. The channel used by the bot is "Messaging API", so click the Messaging API.
The setting screen for the newly created channel will appear. Let's fill in the necessary contents.
After agreeing to the required terms, press the create button to create the channel.
Now that you have the channels you need for your bot, let's write the keys and secrets to the .env file. You can write it directly in the program, but it's not so good considering security.
.env
LINE_CHANNEL_ID=xxxxxxxxxx LINE_CHANNEL_SECRET=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx LINE_CHANNEL_TOKEN=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Channel ID and channel secret are "Channel basic settings" The channel token is listed in the "Messaging API Settings" under the name "Channel Access Token (Long Term)". If it is not displayed, press the publish button to publish it.
Write Dotenv.load under require so that the variables described in .env can be used in the program.
app.rb
require 'bundler/setup' Bundler.require require 'sinatra/reloader' if development? Dotenv.load # <-Postscript get '/' do "Hello world!" end
Webhook is a mechanism to have the content of the event POSTed to the URL set in advance when the event occurs in the service of the other party (LINE in this article).
For example, if you set the URL
/ callback of your service to the LINE channel, you can have
/ callback POST the content of the message when a message arrives on the channel.
However, since it needs to be a URL that can be accessed from the LINE server, it will not work properly when the service is not actually published. For example, localhost: 4567 under development is a URL that can only be accessed from your own PC, so even if a message comes with
localhost: 4567 / callback set,
localhost: 4567 / callback will not be called.
Because of this specification, you basically need to deploy every time you develop a LINE Bot. If you open the port, you can save the trouble of deploying each time, but there is a security risk, so we will not introduce it here.
It's basically the same as the README on GitHub, but the image processing is omitted and the code is simple.
app.rb
require 'bundler/setup' Bundler.require require 'sinatra/reloader' if development? Dotenv.load # ======Postscript from here====== def client @client ||= Line::Bot::Client.new { |config| config.channel_id = ENV["LINE_CHANNEL_ID"] config.channel_secret = ENV["LINE_CHANNEL_SECRET"] config.channel_token = ENV["LINE_CHANNEL_TOKEN"] } end post '/callback' do body = request.body.read signature = request.env['HTTP_X_LINE_SIGNATURE'] unless client.validate_signature(body, signature) error 400 do 'Bad Request' end end events = client.parse_events_from(body) events.each do |event| if event.is_a?(Line::Bot::Event::Message) if event.type === Line::Bot::Event::MessageType::Text message = { type: 'text', text: event.message['text'] } client.reply_message(event['replyToken'], message) end end end "OK" end # ======Postscript up to here====== get '/' do "Hello wolrd!" end
I will explain the code below.
def client @client ||= Line::Bot::Client.new { |config| config.channel_id = ENV["LINE_CHANNEL_ID"] config.channel_secret = ENV["LINE_CHANNEL_SECRET"] config.channel_token = ENV["LINE_CHANNEL_TOKEN"] } end
This is the code to enable the "client" to operate the LINE Bot (this is a function of
line-bot-api).
You can create a client with
Line :: Bot :: Client.new, but it is implemented like this because one client is enough for one service.
||=By using [email protected] the client is empty
Line::Bot::Client.newAnd pass it [email protected] the client is already in, pass it. The processing is realized.
post '/callback' do body = request.body.read signature = request.env['HTTP_X_LINE_SIGNATURE'] unless client.validate_signature(body, signature) error 400 do 'Bad Request' end end
The
post'/ callback'do block is a bit long, so I'll explain it separately.
body = request.body.read just assigns the sent data to the body variable.
After the signature, we check whether the data sent is really from the LINE server.
The data sent from the LINE server always contains something called "HTTP_X_LINE_SIGNATURE", and you can check whether it is the data sent from the LINE server by looking at the contents.
Checking if it is a LINE server is implemented in
line-bot-api and can be used through the client created earlier.
The verification process is executed in the part called
client.validate_signature (body, signature).
This is an important code that checks if a malicious person is spoofing the LINE server and sending a message.
events = client.parse_events_from(body) events.each do |event| if event.is_a?(Line::Bot::Event::Message) if event.type === Line::Bot::Event::MessageType::Text message = { type: 'text', text: event.message['text'] } client.reply_message(event['replyToken'], message) end end end "OK" end
In ʻevents = client.parse_events_from (body)`, the sent data is converted into a form that is easy to handle with ruby. The result of the conversion is an array of events, as you can see from the name events.
events.each do |event|Is processing multiple events one by one. This is because multiple events may be sent at the same time.
ʻIf event.is_a? (Line :: Bot :: Event :: Message)` checks if the event type is Message. Non-message events include "Add Friend" and "Unblock".
ʻIf event.type === Line :: Bot :: Event :: MessageType :: Text` confirms that the message type is text. Non-text message types include images, videos, and stamps.
In other words, the code from the top 4 lines is used to "analyze the transmitted data and narrow down only the text message".
Next, let's look at the code inside the if statement.
message = { type: 'text', text: event.message['text'] } client.reply_message(event['replyToken'], message)
The top four lines assemble the message to send to the LINE server, and the last one line sends the reply. event ['replyToken'] is the reply token included in the event.
At the end, I wrote
" OK ", but it follows the rule of LINE Bot API that it is necessary to return the correct response if the process succeeds normally. It's okay to return anything.
Now that the code is complete, let's run it! Unfortunately, as I explained earlier, it does not work when executed locally. So, this time I will deploy to Heroku.
I will omit the details of deploying to heroku, but let's create only Procfile.
Procfile
web: bundle exec ruby app.rb -o 0.0.0.0 -p $PORT
Terminal
$ git init $ git add -A $ git commit -m "first commit" $ heroku create $ git push heroku master
After deploying, let's open the application. If "Hello wolrd!" Is displayed, the deployment is complete!
Go to the LINE Developers site and go to the channel settings screen.
Open Messaging API Settings and click Edit Webhook URL.
A box for entering the URL will appear, so enter
URL + / callback you just deployed and press the update button.
For example, the deployed URL
If so`
It will be.
After that, let's check the use of Webhook.
`You can check the operation of the server by pressing the verification button. If it is displayed as successful, there is no problem.
Also, Bot messages cannot be sent at this time because auto-answer messages are enabled. So disable auto attendant.
Click the Edit Response Message button in the Messaging API settings. Then, a page called response settings will be displayed, so you can see the detailed settings below.
--Turn response message ** off ** --Webhook ** on **
Set.
Now you are ready to use the webhook.
Well, everything is ready! Add your bot to your friends and send a message! There is a QR code in "Messaging API settings", so let's read it with LINE. You should be able to make friends with the bot you made.
If you can make friends, send a message. If you get the same message you sent, you're successful! Thank you for your support!
In this article, I made a bot that returns a parrot of the sent message. It's very simple, but it contains a lot of important code that is essential for making bots, so it's a good idea to familiarize yourself with it!
Recommended Posts | https://linuxtut.com/let's-make-a-line-bot-with-ruby-+-sinatra-part-1-c73c4/ | CC-MAIN-2020-50 | refinedweb | 1,753 | 68.26 |
- Advertisement
EnigmaMember
Content count2998
Joined
Last visited
Community Reputation1410 Excellent
About Enigma
- RankContributor
Spigots
Enigma posted a blog entry in The Enigma Code textures, decorations etc. An added complication exists in that imports are referenced via the archive filename, minus extension. So no two unreal archives can share the same base filename as they would conflict and one would be inaccessible. We have a good texture artist on the team who has produced (and continues to produce) a number of texture packages. Unfortunately some of those texture packages have ended up needing to be renamed. Worse, some of our maps already depend on the texture packages in question. Up until now the only way to fix this has been for the mapper to load the map in UnrealEd and manually switch all the textures involved, then close and reopen UnrealEd and reload the map to check that the dependency on the old texture package had been removed. Any decorations that used old packages would need to be rebuilt completely. Since this is obviously undesirable, and since I already had a lot of the base code written from other utilities, I decided to put together a small utility to automate the replacement of packages. The first version went together really quickly but unfortunately I'd not accounted for one thing. If the map uses a texture from the package being replaced for which there is no equivalent texture with the same name in the replacement package you end up with the default texture. The mapper would then have to go through the map after replacement looking for default textures and manually replacing them, which would be no better and maybe even worse than replacing all the textures manually to start with. So I've been working to identify such situations and require a replacement texture to be specified. It's taking longer than I'd hoped but I'm getting there. By the time Battle for Na Pali is finished we shall probably have quite an extensive set of tools available to us. In other news, deque is now officially pronounced "de-queue" in the UK and not "deck". I was talking about deques at work and, thinking to err on the side of caution used the (apparently) more common pronunciation - "deck". Nobody had a clue what I was talking about until I switched to my normal pronunciation - "de-queue". So there you have it. I do love working for a British company. Colour and normalise are spelt correctly and now even deque is pronounced correctly. On the flip side, if I ever work for an American company I'm going to lose a good few percent productivity just through all the misspellings I'll make! ?nigma
Swings & Roundabouts
Enigma posted a blog entry in The Enigma CodeSorry you didn't get a journal entry last week, I've been pretty busy recently and struggling to get back in the swing of writing weekly journal entries since the big GDNet downtime (It's all their fault really, not just me being bone-idle). Not really much I can talk about at the moment. I'm seriously looking forward to reaching the point where I can actually talk about the stuff I'm doing at work on for the Team UnrealSP mod team, but for now it's all very hush-hush, which makes for rather boring journal entries. I looked into the nVidia instrumented graphics drivers this past week. Getting them installed and hooking an application up to read the counters was pretty easy, but unfortunately by graphics card doesn't have many counters available, only one on the GPU, the rest in the drivers. I still haven't gotten around to the PC upgrade I've been planning since the beginning of January. Although on the positive side my procrastination has resulted in the components dropping in price by around GBP60 total. As a result I shall probably get both XP and Vista for my new machine. The only question is whether to go 32bit XP and 64bit Vista or 64bit for both? My only concern about the latter is compatibility and drivers. Does anyone have any tales of woe/joy to steer me one way or the other? Work continues as normal. We had one amusing incident after getting some crash report code written. One of the artists left the game running overnight. It crashed and, due to a small bug in the crash report code, proceeded to write out around eighty gigabytes of crash dump data! Less amusing and more satisfying we managed to get to the bottom of an obscure vtable size mismatch warning in one of our dlls. Turns out we were compiling one of our static libs with RTTI enabled (accidentally) and everything else (deliberately) without. I'm due to finish my probation period at work this coming week, so hopefully that will all go smoothly. ?nigma
New version!
Enigma commented on OrangyTang's blog entry in Triangular PixelsSnowman Village is awesome. It should also come with a government health warning. Lots of great improvements in the new version, although I did prefer the cloud-style buttons. I came across a number of bugs in the last version. At least one of them still exists in the new version: There were no hazards visible, I rolled off the edge of the ledge to land on the ground below and never made it. Lost a good 4m+ snowball :( Likely other bug reports to come (I'll wait until they reappear in the latest version before reporting them). Keep up the good work! Σnigma
If You Can't Join 'em, Beat 'em
Enigma posted a blog entry in The Enigma CodeThis journal entry was written a fortnight ago, but couldn't be posted then due to GameDevs downtime. Not much of interest has happened between then and now so I'm posting this old entry tonight and will get back on track next week. Saving vertices in the problem map turned out not to be possible. Various possible workarounds were mooted, but before taking any potentially drastic decisions I decided to have one last go at fixing things. As I said before, 128 000 seemed a rather arbitrary limit. The UT public headers include a templated dynamic array class, so my first thought was that the 128 000 vertex limit must have been a static array. The problematic map was failing to load by hitting an assert, so I started by searching the UnrealTournament exe and dlls for the text string in the assert message. That narrowed me down to one highly probable dll. Next I pulled out my downloaded PE format document (including covenant not-to-sue with Microsoft) and started parsing through the headers. The data segments weren't large enough to contain a 128 000 vertex static array, which left either a stack array (unlikely), a dynamically allocated array or I was looking in the wrong file. If it was a dynamically allocated array then odds were the allocation size would be stored in either the data segment or as an immediate operand. I therefore tried scanning the file for any four consecutive bytes which could be interpreted as a non-zero multiple of 128 000. The results were very promising - although there were a good fifty or so matches, most of them were clearly irrelevant. Only six or seven of the results seemed plausible. From earlier testing I knew that one of the 128 000 entries was from the test which triggered the assert (I'd tried suppressing the assert previously on the off chance, but unsurprisingly that led to a crash). With so few possibilities to choose from I decided to use educated guess work to find the values I needed. I patched the file by doubling selected multiples of 128 000 and tried running the map. After a few false starts I hit pay dirt. Although there was some significant rendering corruption the map was loading and rendering. I tried a few more similar combinations and quickly found one which fixed the remaining issues. Vertex limit? What vertex limit? I'm not sure if I've mentioned it before but at work our coding standards for our current project disallow exceptions. I don't know the reasons for this although I can think of several reasonable possibilities and the decision is a slightly contentious one. Anyway, as a result we have our own exception-free implementations of some parts of the standard library. One such implementation is the standard list class. Unfortunately I ran across a slight problem with it a couple of weeks ago. The end iterator was implemented using a null pointer, which meant that you couldn't use --end() to get an iterator to the last element. I decided to fix this and add sentinel nodes to the list implementation. Now every competent programmer should be able to write a linked list implementation. I've done it myself several times. It turns out modifying somebody else's implementation is a bit harder. Add to this the fact that all this was taking place while my computer was out of action (see my previous journal previous entry), leaving me working on a tight time limit to be checked in before the end of the day because I was working on somebody else's box as they were away for the day. And on top of that our distributed build system wasn't set-up on that machine for me and every change to list required a rebuild of practically the entire project. I worked as quickly as possible and got my changes checked in at the end of day. I knew there were a couple of issues remaining, but I thought they were minor. Turns out I was wrong. I came in the following Monday to find that I'd basically broken half the project and spent half a day fixing bugs in at least half the list member functions. Moral of the story? If at all possible use an existing standard library implementation. Don't write your own! A few days later I found a curious problem with some usage of our list template. Compilation of one function was failing with an error that the compiler couldn't convert from pointer to reference. Fair enough I thought, except that it shouldn't have been trying to convert to reference. I played around with it a bit and managed to boil it down the roughly the following snippet: typedef list< Type * >::const_reverse_iterator iterator; typedef iterator::reference reference; Type * p = 0; reference r = p; reference (iterator::* f)() const = &iterator::operator*;list< Type * >::const_reverse_iterator was a typedef of std::reverse_iterator< list< Type * >::const_iterator >, of which the relevant bits of implementation are: class reverse_iterator : public _Iterator_base_secure { // wrap iterator to run it backwards /* snip */ typedef typename iterator_traits::reference reference; /* snip */ reference __CLR_OR_THIS_CALL operator*() const { // return designated value _RanIt _Tmp = current; return (*--_Tmp); } /* snip */ };list< Type * >::const_iterator::reference was Type * &. The confusing thing was that the test code snippet was compiling the line reference r = p; fine, thus proving that Type * was convertible to reference, but was choking on the following line, complaining that it could not convert type Type & (iterator::*)() const to Type * (iterator::*)(). I don't understand how iterator::reference can be Type & in the iterator class scope and Type * outside it. The only possibility I can think of is that this is another ODR violation error, but wasn't able to find any reason why the ODR might have been violated. I'm going to have another look when I have some time to try and figure out what's going on, but for now this one has me baffled. I anyone has any ideas please let me know. ?nigma
One Step Forwards, Two Steps Back
Enigma posted a blog entry in The Enigma CodeI spent my free time this week modifying my old Unreal map reader so that it could rebuild the file after parsing it into memory. I then went about investigating whether those vertices I thought were unused really were redundant. Unfortunately it turns out they aren't. I'd forgotten about the completely brain-dead manner in which Unreal handles its texture coordinates. For every polygon Unreal stores the texture coordinates by storing the world-space origin of the repeat-textured infinite plane which coincides with the polygon, plus x and y vectors within that infinite plane to represent the texture axes. Like I said, brain-dead. So the vertices I though were unused were actually the texture coordinate origins. I'm now searching for alternative ways to save precious vertices in the map. I had some "fun" with Visual Studio at work this week too. Due to reasons I won't go into our network is not as good as it might be. Having made a few changes to a utility class I hit recompile only for IncrediBuild to decide it was only going to build on my machine. Since this change meant recompiling practically the entire project this was going to take a while. One of my colleagues suggested rebooting my machine just to see if I could get IncrediBuild into a more cooperative mood, so I did. I stopped the build, closed Visual Studio, restarted and hit compile. Immediately I got a C1902 error (Program database mismatch: please check your installation). I couldn't build anything. We tried just about everything to try and fix it, including reinstalling Visual Studio. Finally, just as we were waiting for tech. support to show up to completely rebuild the machine I thought to Google the error. Some of the hits were talking about mspdb80.dll, so I tried replacing it. Lo and behold everything started working again. Why on Earth a full uninstall and reinstall of Visual Studio didn't fix the problem I can't begin to guess. ?nigma
Magic Constants
Enigma posted a blog entry in The Enigma CodeOne of Team UnrealSPs mappers came across an interesting problem this weekend. We already knew about UTs zone limit (64, because a zone mask is stored in a 64 bit integer) and bsp node limit (65535, because bsp node indices are stored in unsigned shorts) but now for the first time we've hit the vertex limit. The limit is 128 000, which seems a little arbitrary. I've been looking into the issue and it looks to me like UnrealEd isn't cleaning up after itself very well. As near as I can tell there are a good 50 thousand unreferenced vertices in the map data, so I'm hoping I'll be able to write a small utility to clear those unused vertices out of the map file this coming week and bring the map back under the limit. We changed source control systems at work this week, which was great fun. We're now using Perforce, or at least trying to - we're still finding our feet a little bit. The diff viewer and merge tool certainly look funky, with their multicoloured displays and variable speed scrolls. ?nigma
Jpeg 2000
Enigma commented on Ysaneya's blog entry in Journal of YsaneyaQuote:Original post by Ysaneya Quote:Original post by Jotaf Looks like Enigma here is working on a Jpeg2000 loader, and even at a very early stage he claims it's quite fast. I wouldn't be surprised, the libraries you mentioned are full of bloat :)Pretty cool :) I don't think it's faster than J2K. He mentions 1.5 seconds to load a 2048x2048 image. J2K loads a 1024x1024 image in 253 ms. Assuming linear scaling, a 2048x2048 would take about 1 second to load in J2K.I don't expect to be faster than J2K-Codec yet, but then I have a list of optimisations still to implement. Eventually I expect/hope to be pretty competative speed-wise with J2K-Codec, with the following advantages/disadvantages: AdvantagesFree (as in beer)Free (as in speech)Portable static linking DisadvantagesNot a complete Jpeg2000 implementationNo technical supportNaff nameAlso, I don't see anywhere that mentions whether or not J2K-Codec offers a multi-threaded soultion, but the Jpeg2000 decoding algorithm should be heavily parallelizable and I hope to take advantage of that. Anyway, keep up the good work on Infinity and the interesting journal entries, Σnigma
Like a Hot Knif through Butter
Enigma posted a blog entry in The Enigma CodeI've finished (the first pass of) my Jpeg2000 loader. I think I'm going to opt to call it Jackknif. That's Jackknife, the only English word I could find which contains a 'J' followed by two 'k's (J2K, get it?), with the e knocked off to indicate that it's not a complete implementation. Yes, there is method to my madness (or should that be madness to my method?). It turns out I did manage to get a finished version of the code down to less than a thousand lines, which shows that it really isn't that complicated an algorithm. Speed wise I was competing against two open source reference implementation - JasPer (written in C) and JJ2000 (written in Java). My reference image (2048x2048 rgb) took approximately nine seconds to load under Jasper and approximately six seconds to load under JJ2000. The first complete version of Jackknif was taking around fourteen seconds. I thought this was pretty reasonable and whipped out a profiler only to be rather confused by the results. The hotspot was showing up as 120 million calls to fill_n, but I only used fill_n in a couple of places. One place which should only have amounted to a few thousand calls and another, in static initialisation, which should only have involved about twenty or so calls. I took a careful look through the source and spotted a minor bug in my static initialisation code. It looked something like: static int array[size]; static bool initialised = false; if (!initialised) { function_which_initialises_array(array); } // code which uses arrayI'd forgotten to set the boolean flag to true, so my array was being repeatedly initialised, to the tune of ~6 million times. Fixing that minor bug, along with a couple of very minor optimisations (changing arrays of ints to arrays of shorts) brought Jackknif down to just under six seconds. I was very pleased with this. My fairly naieve implementation was outperforming even the "optimised" JJ2000 implementation. Next bottleneck was the filtering. The way it was implemented wasn't very cache friendly. I looped through every component and for each component looped through every row and then every column. To demonstrate, a 4 pixel square image would have been processed something like: Image: (components): r11 r12 r12 r14 r21 r22 r23 r24 r31 r32 r33 r34 r41 r42 r43 r44 r11 r21 r31 r41 r12 r22 r32 r42 r13 r23 r33 r43 r14 r24 r34 r44 g11 g12 g12 g14 g21 g22 g23 g24 g31 g32 g33 g34 g41 g42 g43 g44 g11 g21 g31 g41 g12 g22 g32 g42 g13 g23 g33 g43 g14 g24 g34 g44 b11 b12 b12 b14 b21 b22 b23 b24 b31 b32 b33 b34 b41 b42 b43 b44 b11 b21 b31 b41 b12 b22 b32 b42 b13 b23 b33 b43 b14 b24 b34 b44 Visitation order (array indices): 0 3 6 9 12 15 18 21 24 27 30 33 36 39 42 45 0 12 24 36 3 15 27 39 6 18 30 42 9 21 33 45 1 4 7 10 13 16 19 22 25 28 31 34 37 40 43 46 1 13 25 37 4 16 28 40 7 19 31 43 10 22 34 46 2 5 8 11 14 17 20 23 26 29 32 35 38 41 44 47 2 14 26 38 5 17 29 41 8 20 32 44 11 23 35 47I switched the order to loop though components of each pixel one after another and processed the first pixel of each column in order before processing the next column: Visitation order (components): (array indices): 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47I expected that might bring the execution time down to around four and a half seconds, maybe four if I was lucky. I underestimated. With that simple optimisation the execution time plummeted to around 2.2 seconds. I still have a few more optimisations to apply. I'm not sure when that will happen since I shall probably be working on something else this next week for a bit of a break. My target though is to bring that execution time down to no more than 1.5 seconds for my reference image. I intend to release the final source code, both a cleaned up unoptimised version so people can see how the algorithm works, plus the final optimised version, under a permissive open source license when I'm done. The only thing I intend to disallow is patenting of techniques used in derivative works. I'm sure there exists an open source license with this kind of restriction. If anyone knows of a license with this restriction, please let me know - it'll save me a few minutes searching. Finally, the obligatory screenshot, actually four screenshots in one: The top left shows the fully decoded image, minus horizontal and vertical filtering, resized from 2048x2048 to 256x256. The top right shows the same image, but only the top left corner of it, at normal size. The bottom left shows the fully decoded image with horizontal and vertical filtering, again, resized from 2048x2048 to 256x256. The bottom right shows the same image, but again only the top left corner of it, at normal size. ?nigma
Graphic Violence
Enigma posted a blog entry in The Enigma CodeI've broken the back of the Jpeg2000 decoding algorithm. What really rankles is that all the publicly available implementations are at least a hundred thousand lines of code and my nearly complete implementation (admittedly with only a subset of the functionality) is only just about the hit a thousand lines. Here's what I have so far, the first code block, which equates to the red channel of the image reduced by a factor of 32 in each dimension: All that's left now is to add the loops and two additional lookup table to allow me to decode the remaining 3071 code blocks for that image, add filtering code to recombine the code blocks into the finished image, optimise and then clean-up and resolve hard-coded values to the appropriate variables. I have a few ideas how I can optimise which, if they work, should result in a significant speed-up. I came across yet another interesting code issue at work this week. I had some code roughly like this: class Base1 { public: Base1() { // code } void function() { // code } protected: int variable1_; int variable2_; bool variable_; }; class Base2 { protected: bool variable_; }; class Derived1 : public Base1 { public: Derived1() { function(); // code } }; Where Base1 and Base2 were bases of classes with similar interfaces, used for similar purposes (think static polymorphism). The code in the Derived1 constructor, after the call to function was failing with a very odd error (invalid Windows error message). Stepping through the code we discovered that although execution correctly stepped through the Base1 constructor and Base1::function, the debugger seemed to think that Derived1 was inherited from Base2, not Base1. It wasn't just a debugger fault either. The error was occurring because access to variable_ was actually accessing variable1_ which happened to be where variable_ would have been if the base class really was Base2, not Base1. Something obviously got very confused somewhere. Eventually I resorted to getting a completely clean version of the entire project from source control, which fixed the issue. I still don't know what was wrong. ?nigma
Happy New Year
Enigma posted a blog entry in The Enigma CodeNot really much to talk about what with Christmas and the new year. I've spent a little free time looking further into Jpeg2000 over the last couple of weeks. I'm now trying to get my head round entropy decoding and the MQ arithmetic decoder. I printed out 22 pages of source code to take away with me over Christmas. I think I understand enough about the arithmetic decoder, which was about three of those 22 pages. The entropy decoder which took up the remainder of the space appears to be a very complicated "optimised" implementation of a relatively simple algorithm. I put the word optimised in quotes because I'm pretty confident that it was a bad choice of optimisation strategy. I shall find out if I'm right in the new year. I thought I'd leave you with a couple of snippets from the JJ2000 source which made me laugh, when they didn't make me cry. Check the JavaDoc comments: /** * Returns the reversibility of the filter. A filter is considered * reversible if it is suitable for lossless coding. * * @return true since the 9x7 is reversible, provided the appropriate * rounding is performed. * */ public boolean isReversible() { return false; }Tricky stuff, that dyadic decomposition: for (rl=0; rl maxrl-rl) { hpd -= maxrl-rl; } else { hpd = 1; } // Determine max and min subband index minb = 1
Obscure, Incomprehensible and just plain Broken
Enigma posted a blog entry in The Enigma CodeI was off work this week. Tomorrow could be interesting since I think the code I checked in just before I left might have broken the build. It should only be a small break, but unfortunately build success is a binary state - the build is either broken or it's not. I did email them about it with steps to fix so hopefully it won't have been much of a problem. I spent my week working on the Jpeg2000 loader again, working through the new source code I talked about last month. Also Christmas shopping, playing DHTML Lemmings and various other random activities, so not actually as much time on the loader as I'd been intending. The new source code is still pretty awful: int i,k1,k2,k3,k4,l; // counters int tmp1,tmp2,tmp3,tmp4; // temporary storage for sample values int mv1,mv2,mv3,mv4; // max value for each component int ls1,ls2,ls3,ls4; // level shift for each component int fb1,fb2,fb3,fb4; // fractional bits for each component int[] data1,data2,data3,data4; // references to data buffers final ImageConsumer[] cons; // image consumers cache int hints; // hints to image consumers int height; // image height int width; // image width int pixbuf[]; // line buffer for pixel data DataBlkInt db1,db2,db3,db4; // data-blocks to request data from src int tOffx, tOffy; // Active tile offset boolean prog; // Flag for progressive data Coord nT = src.getNumTiles(null); // 38 lines which don't modify nT or src // getNumTiles is a non-modifying getter nT = src.getNumTiles(null);Not to mention the seemingly everpresent "what, you mean some people don't use the same size tabs as me" interchange of tabs and spaces for indentation. I really ought to find a beautifier. Still, it's easier to work through that the jasper source. Feels a bit strange to be working with Java again though. Next weeks installment will either be a day early or won't get written, since I'm away for Christmas as of next Sunday. ?nigma
Nooks & Crannies
Enigma posted a blog entry in The Enigma CodeThe observant amongst you will have noticed that it's not Sunday. The even more observant amongst you will have noticed that it's not Sunday and I'm posting a journal entry. The really observant amongst you will notice something odd about this. There is a reason for this. Drum roll please... I wasn't feeling too good last night and had an early night instead of writing this. So it's a day late. I had another interesting compiler incident at work last week. I had a piece of code performing a number of floating-point operations including some basic trigonometry. It was all working fine until I made a slight modification. After said modification the code worked fine in debug mode but failed with a floating-point stack check error in release mode. Investigations led to much confusion since doing anything differently seemed to result in the code working fine. Even just reading the floating-point operating environment at the start of the function caused the code to stop failing. I hunted through the source code and the generated assembly to see what could be wrong and while the source code looked OK the assembly looked a bit odd. Eventually our lead programmer took a look and after a bit of poking said he'd seen something similar before and it was probably an optimiser bug involving inline assembly (we have our own trig function implementations since our base library is portable across PC and console(s)). If he's right then I'm beginning to lose faith in compilers. That would be two genuine bugs in less than a month! Outside of work I've been poking around some more obscure parts of the C++ standard. Such knowledge sometimes comes in useful, like when a co-worker was trying to suppress a lint error in a macro and wondering why he couldn't get it to work. Lint errors can be suppressed by adding comments of the form //lint -eXXX but adding that to a macro won't do anything since comments are replaced with a single space before preprocessing. In the course of my poking I came across the macro examples in Section 16.3.5, Paragraphs 5 & 6, beautifully obscure examples intended to demonstrate as many macro combinations and effects as possible with the minimum quantity of code: Quote:C++ Standard, Section 16.3.5, Paragraph 5); Quote:C++ Standard, Section 16.3.5, Paragraph 6 To illustrate the rules for creating character string literals and concatenating tokens, the sequence #define str(s) # s #define xstr(s) str(s) #define debug(s, t) printf("x" # s "= %d, x" # t "= %s, \ x ## s, x ## t) #define INCFILE(n) vers ## n /* from previous #include example */ .Just trying to follow through the expansions to verify the result took me a good five minutes. Any errors in transcribing the above excerpts are my own. I'm also trying to figure out if the following code is valid C++: int main() { int @ = 1; return @; } I've not found a compiler that will accept it, but '@' is not in the basic source character set (Section 2.2, Paragraph 1) and the first phase of translation includes: Quote:C++ Standard, Section 2.1, Paragraph 1, excerpt Any source file character not in the basic source character set (2.2) is replaced by the universal-character-name that designates that character. And an identifier is defined as: Quote:C++ Standard, Section 2.10 identifier: nondigit identifier nondigit identifier digit nondigit: one of universal-characterAnyone wanting to argue for or against the validity of @ as a C++ identifier, speak now or forever hold your peas. (Yes, that was a terrible pun. You should be used to them by now). Finally, unless anyone has any other recommendations, I'm planning on adding Journal of EasilyConfused to my list of regularly-read GDNet journals, to replace EDI's journal. (Always three there are, a master, an apprentice, and a very talented indie team). I almost forgot, this week I found myself writing two oddly named functions: consume_hash and consume_carrot. The latter was a typo indirectly caused by the former (No, not for the reasons you're thinking). Who needs drugs when your brain is capable of such nonsense unaided? ?nigma
Things that go tweet in the night
Enigma posted a blog entry in The Enigma CodeSo, it's been three months already since I started my journal. I had hoped that this would become a fairly regular record of my work for Team UnrealSP, with occasional interludes of randomness. Instead it's turning out kind of the opposite. It's been another slow week. I seem to have acquired a sparrow or similar small bird that seems to like landing outside my window at half six in the morning and wake me up by chattering away for ten minutes before flying off again. As a result, due to tiredness, I haven't managed to do any work on the mod this week. Not much of interest going on at work that I can tell you about either. We had a couple of new programmers start last week which means I'm now officially not the most junior programmer on the team! I've also booked all my holiday now since our leave year runs from January to December. As a result I'll only be in the office for another seven and a half days this year. Even better, half a day will probably be spent doing "research" - a couple of the guys are getting Wiis on Friday and bringing them into the office. I just hope we don't break the company's large flat screen TV. I found some more Visual C++ weirdness at work this past week, though not nearly as bad as the last one: struct Object { Object(int i) : i(i) { } int i; }; struct Array { Object & operator[](unsigned int index) { return array[index]; } Object array[1]; }; Array objects = {Object(2001)}; int main() { return objects[0].i; } When compiled under Visual C++ 8.0 at warning level 4 produces the following warnings: spuriouswarning.cpp(18) : warning C4510: 'Array' : default constructor could not be generated spuriouswarning.cpp(12) : see declaration of 'Array' spuriouswarning.cpp(18) : warning C4610: struct 'Array' can never be instantiated - user defined constructor requiredIt appears somebody forgot about aggregate initialisation when writing that second warning. In other news, I feel I need a new GDNet journal to read since EDI stopped updating regularly. Anybody have any suggestions? Finally, since it's Advent and I feel bad about not giving you anything interesting to read about I'm going to let you in on a little secret. I have another project slowly ongoing. It's not directly game related but hopefully it will be of interest to some people here. It's a long term project, years most likely, especially at the rate I'm going. That's all I'm going to tell you for now. Still, I'm told the first step is admitting you have a problem. Err, I mean secret project. Let the rampant speculation commence. ?nigma
The Source Code Challenge
Enigma posted a blog entry in The Enigma CodeI managed to find a bit of free time to get back to my Jpeg2000 decoder this week. Unfortunately it's so long since I last worked on it I've lost track of where I was. I did have lots of notes, but the source code is so impenetrable it would still take me a while to get back up to speed. I say "would". Instead I found another open source Jpeg2000 codec, this time written in Java. Hopefully between the two sources I've be able to get a more solid grasp on the format and accelerate my progress. I keep wondering whether I ought to just buy a copy of the spec. Because two major concurrent projects isn't enough I also keep getting distracted by other random issues. This week I decided to look into parsing. I've written parsers before, even a very simple parser generator. I've also used Boost.Spirit a few times (and I'd love to learn how to use it better, maybe something to get distracted by some other month). This time however I decided to forget everything I new and research from scratch. It's funny what you can learn when you do this. I didn't research in too much depth, but it didn't take me too long to come across Parsing Expression Grammars (PEGs) and Packrat parsers, neither of which I remember coming across before. I'm not completely sold on Packrat parsing - it looks great for simple parsing but not so good for more complex parsing due to the complexity of changing state - but Parsing Expression Grammars seem really useful. I quickly hacked together a simple recursive descent parser for mathematical expressions, along with a generator using an equivalent grammar. After getting the parser working I spent a bit of time working on error handling, for which some of the details mentioned in the Packrat parser paper I was reading were very useful. All-in-all it was an interesting diversion and next time I need a parser I'll have a slightly better foundation to start from. Finally, I present you with The Source Code Challenge(TM). If you remember (assuming anyone actually reads this drivel regularly) a few weeks ago I was freeing up hard drive space to install Medieval II: Total War. I vaguely wondered at the time how much of my hard disk was filled with source code. This week I decided to find out. The Source Code Challenge(TM) is for you to do the same. The target to beat is 27 505 files (.c, .cpp, .h & .hpp) or 537 175 479 bytes (512MB!). Admittedly a large proportion of that come from six compilers with attendant include folders, plus boost, but it's still an awful lot of source code! ?nigma
Iterative Ranting
Enigma commented on Enigma's blog entry in The Enigma CodeNot a bad guess. My guess before encountering this would have been one of: On first reading, expect the code to print the output 2. On second reading, expect the code to fail to compile with an error that Base2 is not a valid identifier (you can only use a template name without template parameters within the definition or a specialisation of that template class). Borland 5.8.2 and GCC 3.3.1 agree with my second reading: Error E2102 example.cpp 37: Cannot use template 'Base2<Type>' without specifying specialization parameters in function Derived::function() Error E2379 example.cpp 37: Statement missing ; in function Derived::function() example.cpp: In member function `void Derived::function()': example.cpp:37: error: use of class template `template<class Type> struct Base2' as expression example.cpp:37: error: syntax error before `;' token Visual C++ 8.0 on the other hand goes for option three. Literally. It compiles with no errors and produces the output 3. It appears that there is a compiler bug which accepts the incorrect explicit scoping and then generates an incorrect this * offset. As you can imagine, this was quite an interesting bug to track down in real code. Σnigma
- Advertisement | https://www.gamedev.net/profile/20511-enigma/?tab=clubs | CC-MAIN-2018-26 | refinedweb | 6,552 | 59.84 |
The API Gateway exposes global settings that enable you to configure which versions of the SOAP and WSSE specifications it supports. You can also specify which attribute is used to identify the XML Signature referenced in a SOAP message.
To configure the namespace settings, in the Policy Studio tree, select the Settings node, and click the Namespace tab at the bottom of the screen. Alternatively, in the Policy Studio main menu, select Tasks -> Manage Settings -> Namespace.
The SOAP Namespace tab can be used to configure the SOAP namespaces that are supported by the API Gateway. In a similar manner to the way in which the API Gateway handles WSSE namespaces, the API Gateway will attempt to identify SOAP messages belonging to the listed namespaces in the order given in the table.
The default behavior is to attempt to identify SOAP 1.1 messages first, and for this reason, the SOAP 1.1 namespace is listed first in the table. The API Gateway will only attempt to identify the message as a SOAP 1.2 message if it can't be categorized as a SOAP 1.1 message first.
The Signature ID Attribute tab allows you to list the supported attributes that can be used by the API Gateway to identify a Signature reference within an XML message.
An XML-signature
<signedInfo> section may
reference signed data via the
URI attribute. The
URI value may contain an id that identifies data in
the message. The referenced data will hold the "URI" field value in one
of its attributes.
By default, the server uses the
Id attribute for each of
the WSSE namespaces listed above to locate referenced signed data. The
following sample XML Signature illustrates the use of the
Id
attribute:
<soap:Envelope xmlns: <soap:Header> <dsig:Signature <dsig:SignedInfo> ... <dsig:Reference ... </dsig:Reference> </dsig:SignedInfo> .... </dsig:Signature> </soap:Header> <soap:Body> <getProduct wsu: <Name>SOA Test Client</Name> <Company>Company</Company> </getProduct> </soap:Body> </soap:Envelope>
It is clear from this example that the Signature reference identified by the
URI attribute of the
<Reference>
element refers to the nodeset identified with the
Id attribute
(the
<getProduct> block).
Because different toolkits and implementations of the XML-Signature
specification can use attributes other than the
Id
attribute, the API Gateway allows the user to specify other attributes that
should be supported in this manner. By default, the API Gateway supports the
Id,
ID, and
AssertionID attributes for the purposes of
identifying the signed content within an XML Signature.
However you can add more attributes by clicking the Add button and
adding the attribute in the interface provided. The priorities of attributes can be
altered by clicking the Up and Down buttons. For
example, if most of the XML Signatures processed by the API Gateway use the
ID attribute, this attribute should be given the highest priority.
The WSSE Namespace tab is used to specify the WSSE (and corresponding WSSU) namespaces that are supported by the API Gateway.
The API Gateway attempts to identify WS Security blocks belonging to the WSSE namespaces listed in this table. It first attempts to locate Security blocks belonging to the first listed namespace, followed by the second, then the third, and so on until all namespaces have been utilized. If no Security blocks can be found for any of the listed namespaces, the message will be rejected on the grounds that the API Gateway does not support the namespace specified in the message. To add a new namespace, click the add button.
First, enter the WSSE namespace in the Name field. Then enter the corresponding WSSU namespace in the WSSU Namespace field. | https://docs.oracle.com/cd/E39820_01/doc.11121/gateway_docs/content/general_namespaces.html | CC-MAIN-2020-50 | refinedweb | 605 | 52.39 |
Textures are basically a chuck of memory, often using R,G,B(,A) values with 8 bits per channel. Usually textures contain image data, but it is data, you can do with it whatever you want. In GLSL a texture is specified as a uniform variable. Textures have their own type, which is one of the following:
2D texture
Table: Texture Data Types in GLSL
There are texture lookup functions to access the image data. Texture lookup functions can be called in the vertex and fragment shader. When looking up a texture in the vertex shader, level of detail is not yet computed, however there are some special lookup functions for that (function names end with "Lod").
The parameter "bias" is only available in the fragment shader It is an optional parameter you can use to add to the current level of detail.
Function names ending with "Proj" are the projective versions, the texture coordinate is divided by the last component of the texture coordinate vector.
Table: Texture Lookup Functions in GLSL
This part doesn't have anything to do with GLSL, but to have some code to load textures is important. There are many OpenSource libraries specialized in loading image formats. One of them is DevIL, another one is FreeImage. I am going to use FreeImage to load images for this tutorial.
I wrote a little wrapper around FreeImage to load and create textures with minimal code overhead. I also included a simple implementation of a smart pointer, which makes it much easier if you have several objects using the same texture, but you don't have to use it to load textures.
#include "texture.h"
#include "smartptr.h"
cwc::SmartPtr<cwc::TextureBase> pTexture;
void loadtexture()
{
pTexture = cwc::TextureFactory::CreateTextureFromFile("texture.jpg");
}
void draw()
{
if (pTexture) pTexture->bind(0); // bind texture to texture unit 0
}
An simple GLSL example is to swap the red and blue channel of the displayed texture..
varying vec2 vTexCoord;
void main(void)
{
vTexCoord = gl_MultiTexCoord0;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
the texture coordinate is passed down to the fragment shader as varying variable.
uniform sampler2D myTexture;
varying vec2 vTexCoord;
void main(void)
{
gl_FragColor = texture2D(myTexture, vTexCoord).bgra;
}
You probably noticed the ".xy" and ".bgra". The order of components in GLSL can be changed. This can be done by appending the component names in the order you want. You can even repeat components. In this example ".bgra" is used. This technique can also be used to convert vectors, for example vec4 to vec2.
vec4 TestVec = vec4(1.0, 2.0, 3.0, 4.0);
vec4 a = TestVec.xyzw; // (1.0, 2.0, 3.0, 4.0)
vec4 b = TestVec.wxyz; // (4.0, 1.0, 2.0, 3.0)
vec4 c = TestVec.xxyy; // (1.0, 1.0, 2.0, 2.0)
vec2 d = TextVec.zx; // (3.0, 1.0)
You may also wonder why "rgba" is used and not "xyzw". GLSL allows using the following names for vector component lookup:
Table: Vector Component Names
Mutlitexturing is very easy: In the GLSL program you simply specify several samplers and in the C++ program you bind textures to the appropriate texture units. The uniform sampler variables must be set to the appropriate texture unit number.
This source code only contains the simple example of swapping red and blue channel and is meant as base code for your texturing experiments. It contains everything required to compile it under Visual Studio 8 (GLEW, Freeglut, FreeImage).
Download:
GLSL_Texture.zip (Visual Studio 8 Project)
| https://www.opengl.org/sdk/docs/tutorials/ClockworkCoders/texturing.php | CC-MAIN-2016-26 | refinedweb | 579 | 56.76 |
PDFxStream for .NET
PDFxStream.NET is produced by translating the PDFxStream for Java binary into a managed .NET assembly. This translation process is complete, preserving PDFxStream’s API, architecture, functionality, and performance characteristics.
This kind of translation is possible because the Java Virtual Machine (JVM) and the .NET Common Language Runtime (CLR) are very similar architecturally, and the Java and .NET object models are conceptually analogous. The actual translation is performed by IKVM's static compilation process. IKVM is an open source toolkit that makes it possible to run Java applications and libraries within the .NET environment.
IKVM and the included OpenJDK library both use a liberal open-source license that makes it possible to redistribute them with commercial products without constraining such products' own licenses.
Requirements
PDFxStream.NET requires v2.0 SP2 or higher of the .NET or Mono runtime.
All DLLs for a given PDFxStream release are found in the lib directory of
the PDFxStream.NET distribution. This includes a number of
IKVM.*.dll files (e.g.
IKVM.Runtime.dll), as well
as two PDFxStream DLLs, only one of which you will use, depending on the
.NET language you are using:
PDFxStreamVB.dll, for use only in VB.NET projects
PDFxStream.dll, for use with any language other than VB.NET
As indicated above, you should choose only one of the PDFxStream DLLs, based
on which .NET language you are using: VB.NET projects should use
PDFxStreamVB.dll, while all other languages should use
PDFxStream.dll.
The IKVM DLL files are PDFxStream.NET's only dependencies. They provide the implementation of Java's standard library in .NET, as well as some runtime components that are required by any Java JAR that has been translated into a .NET assembly. No configuration or special initialization of these DLL files are necessary.
Why are there different PDFxStream DLLs for different .NET languages?
Symbols in VB are case-insensitive, which causes a collision between the
com.snowtide.pdf namespace and our primary entry point, the
com.snowtide.PDF class.
In the
PDFxStreamVB.dll library for use with VB.NET, the
com.snowtide.PDF class is renamed to
com.snowtide.PDFxStream, eliminating any ambiguity. No other
changes to the API documented here or in our API reference is affected, so
you can continue to use these resources while programming PDFxStream via
VB.NET.
All other .NET languages (including C#, F#, and others) do support case-sensitivity in namespace and class symbols, so they can use the standard PDFxStream API as-is
Installation
Using PDFxStream.NET within your .NET project is as simple as adding
references to each of the DLL files indicated in the previous section: all
of the
IKVM.*.dlls, and one of either
PDFxStream.dll or
PDFxStreamVB.dll, depending on the
.NET language your project uses.
Typical Usage
Using PDFxStream.NET is very straightforward, and mirrors typical PDFxStream for Java usage. Here's a sample text extraction function in C#:
using com.snowtide; using com.snowtide.pdf; using java.io; class ExtractTextAllPages { public static void Main(string[] args) { string pdfFilePath = args[0]; StringWriter text = new StringWriter(1024); using (Document doc = PDF.open(pdfFilePath)) { doc.pipe(new OutputTarget(text)); } System.Console.WriteLine("The text extracted from {0} is:", pdfFilePath); System.Console.WriteLine(text.toString()); } }
Without exception, all of the PDFxStream API is available in .NET. Because of this, the PDFxStream javadoc is the authoritative API reference for PDFxStream, whether it is used in Java or .NET.
Notes and Limitations
The sole minor difference between the documented PDFxStream API and its usage in .NET is how one obtains bitmap objects from extracted PDF image data. See this note for details.
Aside from this minor irregularity, PDFxStream.NET carries no limitations; it is a pure .NET assembly, through and through, and it acts like it.
For example, you can freely write
com.snowtide.pdf.Appendable sb) : base(sb) { } public override void textUnit (com.snowtide.pdf.layout.TextUnit tu) { base.textUnit(tu); cnt++; } public int getCount () { int _cnt = cnt; cnt=0; return cnt; } } }
An
OutputHandler
(or
com.snowtide.pdf.OutputTarget, in this case)
subclass like this can be used in conjunction with
any
pipe(OutputHandler) method, found on instances
of
com.snowtide.pdf.Document,
com.snowtide.pdf.Page,
and
com.snowtide.pdf.layout.Block.
Snowtide Collection Method Extensions
The
com.snowtide namespace provides a couple of extension
methods to make it easier to use parts of the PDFxStream API in .NET.
Consuming collections as
IEnumerable
Java collections all implement the
java.util.Iterable
interface, which is analogous to
.NET's
IEnumerable
interface. Unfortunately, the IKVM compilation process does not expose Java
collections as
IEnumerables; without an appropriate method
extension, this would mean that iterating through any collection returned by
PDFxStream could not be traversed with e.g.
foreach or passed
to any method that requires an
IEnumerable.
Using the
com.snowtide namespace will bring an extension method
into scope that makes it easy to treat any collection returned by PDFxStream
as an
IEnumerable, e.g. here used to easily iterate through the
keys of the document metadata in a PDF document:
using com.snowtide; using com.snowtide.pdf; class ExtractMetadata { public static void Main(string[] args) { string pdfFilePath = args[0]; System.Console.WriteLine("All document metadata from {0}:", pdfFilePath); using (Document doc = PDF.open(pdfFilePath)) { foreach (string attrKey in doc.getAttributeKeys().toIEnumerable<string>()) { System.Console.WriteLine("{0}: {1}", attrKey, doc.getAttribute(attrKey)); } } } }
Using
StringBuffer and
StringBuilder
as
Appendables
Many implementations
of
OutputHandler provided by
PDFxStream accept
java.lang.Appendable objects as their principal
constructor argument. This interface is implemented by a number of useful
sinks for textual output,
including
java.lang.StringBuffer,
java.lang.StringBuilder,
java.nio.CharBuffer,
any subclass of
java.io.Writer, etc.
The one wrinkle to this is that
StringBuffer
and
StringBuilder implement
Appendable via a shared
package-private superclass, the methods and implemented interfaces of which
are not visible to code using
StringBuffer
or
StringBuilder in .NET. This means that this C# code will not
compile:
using com.snowtide; using com.snowtide.pdf; // ... StringBuilder sb = new java.lang.StringBuilder(); OutputTarget tgt = new OutputTarget(sb);
The simple solution to this is to simply not use
java.lang.StringBuilder
or
java.lang.StringBuffer from .NET. Any usage of them in
conjunction with PDFxStream can be replaced with
e.g.
java.io.StringWriter; all PDFxStream code samples
demonstrate and recommend using
StringWriter
with
OutputHandler
implementations.
The other option is to use the
.toAppendable() extension method provided
by the
com.snowtide namespace:
using com.snowtide; using com.snowtide.pdf; // ... StringBuilder sb = new java.lang.StringBuilder(); OutputTarget tgt = new OutputTarget(sb.toAppendable()); | http://www.snowtide.com/help/pdfxstream.net | CC-MAIN-2017-43 | refinedweb | 1,096 | 53.27 |
Hi, I'm a little confused and was hoping someone could help me tidy up this problem. I'm trying to write a program which allows the user to input an address, which then prints it on the screen, and finally sends it to a label printer. The problem I've got is that I've just discovered that variables aren't passed between functions.
I need addressenter() to pass the address to the other functions like menu(), and printlabel()
How do I do this? I've misunderstood the tutorial I think, so I'd grateful if you could clear it up for me.
Thanks
Sorry this is a real shambles, but I can sort it out once I know what to do.
Code:#include <iostream> using namespace std; // prototype address enter function int menu(); void printlabel(); void addressenter(); int main() { cout<<"This program prints single address labels input by the user.\n\n"; addressenter(); print(); cin.get(); } // post-addressenter menu int menu() { x = 0; cout<<"\n\nYour address was: \n\n"; while (x<y) { //Address output here cout<<address[x]<<endl; x++; }; cout<<"\nThe address has to be truncated to 22 chars, is it correct?\n1. Yes\n2. No\n\n"; int m; cin>>m; if ( m == 1 ) { cout<<"Print label..."; } else if ( m == 2 ) { cout<<"Reenter address"; addressenter(1); } else { cout<<"Invalid selection, please try again.\n"; menu(); }; return 0; } int addressenter() { // address enter function // str length is 23 as 22+terminating.char "\0" string address[7]; string addressin; cout<<"Enter the address to be printed.\n"; int x = 0; int y = 0; while (x<7 && y!=1) { //Address input here cout<<"Line "<< x + 1 <<": "; //cin.getline ( address[x], 23, '\n' ); getline(cin, addressin); if (addressin!="") { address[x] = addressin.substr(0, 22); x++; } else { y=1; }; }; y = x; x = 0; cout<<"\n\nYour address was: \n\n"; while (x<y) { //Address output here cout<<address[x]<<endl; x++; }; menu(); return 0; } | https://cboard.cprogramming.com/cplusplus-programming/121954-variables-between-functions.html | CC-MAIN-2017-13 | refinedweb | 323 | 72.26 |
#include <berryIDisposable.h>
The interface that should be implemented by services that make themselves available through the
IAdaptable mechanism. This is the interface that drives the majority of services provided at the workbench level.
A service has life-cycle. When the constructor completes, the service must be fully functional. When it comes time for the service to go away, then the service will receive a Dispose call. At this point, the service must release all resources and detach all listeners. A service can only be disposed once; it cannot be reused.
This interface has nothing to do with OSGi services.
This interface can be extended or implemented by clients.
Definition at line 45 of file berryIDisposable.h.
Disposes of this service. All resources must be freed. All listeners must be detached. Dispose will only be called once during the life cycle of a service. | https://docs.mitk.org/nightly/structberry_1_1IDisposable.html | CC-MAIN-2022-21 | refinedweb | 144 | 69.68 |
$349.00.
Premium members get this course for $389.00.
Premium members get this course for $151.20.
VC will add "_" prefix begin the function's name.So you should check it.
_YourFunName
2. You can add a DEF file to get the pure function name.
LIBRARY XXX
DESCRIPTION 'fsdafsadfsaf'
EXPORTS
YourFunName
YourFunName2
3. Of couse, you should add extern "C", or the exported name will in C++ style. maybe _Xxxx@4 ro ?Xxxx@8
@4, @8 is the stack size which the function need, is the total of all parameter's size.
=========================
#if !defined(_YOURDLL__INCLUDE
#define _YOURDLL__INCLUDED_
#ifdef __cplusplus
extern "C" {
#endif
__declspec( dllexport ) int YourCoolExport();
#ifdef __cplusplus
}
#endif
#endif // _YOURDLL__INCLUDED_
=========================
your_dll.c
=========================
#include "your_dll.h"
int YourCoolExport()
{
return 0;
}
=========================
.def file is not necessity
It’s our mission to create a product that solves the huge challenges you face at work every day. In case you missed it, here are 7 delightful things we've added recently to monday to make it even more awesome.
One does not link to a DLL (exception: runtime linking via LoadLibrary/GetProcAddress
#pragma comment(lib,"mylib.lib")
into a source file that calls your exported fn.
That does not explain why QuickView can't see the exports (unless it is an old 16-bit version of QuickView)
Also, dump the EXPORT section of the DEF file. It just confuses the issue. The__declspec( dllexport ) places all of the needed info into the LIB file.
-- Dan
because with visual c++ in release mode, Quickview hide exported functions !
Your answer is completely correct (except you omitted the declspec thingy) . I managed to get it working as an API DLL by using:
-extern "C" around prototypes
-__declspec(_dllexport) as a prefix to the function defn.
-a DEF file with the EXPORTS section, etc.
Also - the other response is right. One still cannot view the exported functions in QuickView. (Although they are exported in pure API form). However, The Dependency viewer in MSVC++ does show the exported functions properly.
Thanks for your help | https://www.experts-exchange.com/questions/20173228/Functions-are-not-exported-from-Visual-C.html | CC-MAIN-2018-13 | refinedweb | 337 | 68.47 |
Please note: this guide specifically covers the Java Edition version of Minecraft. Bedrock Edition does not use data packs, but provides customization through add-ons.
The data packs built in this series can be found in the unicorn-utterances/mc-datapacks-tutorial repository. Feel free to use it for reference as you read through these articles!
What is a data pack?
Minecraft's data pack system allows players to fundamentally modify existing behavior of the game by "replacing" or adding to its data files. Data packs typically use
.mcfunction files to specify their functionality as a list of commands for the game to run, and
.json files for writing advancements or loot tables.
One thing to note: While data packs are simple to use and enable a huge amount of functionality, they do have a couple drawbacks. One is that, while data packs allow most game features to be changed, they do not allow players to add new features into the game (although some can convincingly create that illusion with a few tricks).
If you want to add new controls to the game, integrate with external services, or provide a complex user interface, a Minecraft modding framework such as Fabric or Spigot might be better for you.
Advantages of Minecraft mods
Can communicate with external servicesMods can perform HTTP requests, talk to other applications, or use any library that is compatible with Minecraft's Java runtime.
Able to modify the user interface and settings menusSome data packs have used innovative (and highly complex) workarounds to this using modified item textures, but in general, Minecraft's controls and user interface cannot be fundamentally changed without the use of a mod.
Can add entirely new functionality to the gameWhile data packs can add things like custom mobs or items through a couple workarounds, there are always some limitations. Mods can add any code to the game with no restrictions on their behavior.
More performant than data packs when running large operationsThis obviously depends on how well their functionality is written, but mods can provide much better performance with multithreading, asynchronous code, and generally faster access to the data they need. In comparison, data packs are limited by the performance of the commands available to them.
Advantages of data packs
Easy to install on any Minecraft (Java Edition) versionData packs are widely supported in almost any Minecraft launcher, mod loader, and hosting provider. In comparison, mods will require players to set up a specific Minecraft installation (such as Fabric or Forge) before they can be used.
Generally simpler to test and writeWhile some modding tools can provide fairly seamless testing & debugging, they all require programming knowledge in Java and/or Kotlin, and it can be tedious to set up a development environment for that if you don't have one already. Most data pack behavior can be written in any text editor and tested right in the text chat of your game!
Safer to make mistakes withSince data packs are restricted to interacting with the commands Minecraft provides, it typically isn't possible to do anything that will entirely break your game. Mods can run any arbitrary code on your system, however — which means there's a higher chance that things can go wrong.
Typically better update compatibilityWhile some commands do change in new Minecraft updates, I have (anecdotally) found the changes to be less impactful than the work required to bring mods up to date with new versions. Since mods often use mixins and directly interact with Minecraft's internal code, they can be affected by under-the-hood changes that wouldn't make any difference to a data pack.
Summary
I usually prefer to write data packs for most things I work on, as I find them to be more useful to a wider audience because of their easier installation process. Some players simply don't want the trouble of setting up another installation folder or using a different Minecraft loader to play with a specific mod, and data packs can work with almost any combination of other mods and server technology.
With that said, data packs can certainly be tedious to write at times — while they are easier to build for simple functionality that can be directly invoked through commands, more complex behavior might be better off as a mod if those advantages are more appealing. Nothing is without its drawbacks, and any choice here is a valid one.
Writing our first Minecraft function
Data packs make frequent use of
.mcfunction files, which are text files that contain a list of commands for Minecraft to run. But how do we know which commands to write? We can actually test them in Minecraft first!
Let's try making a new Minecraft world; I'll name mine "testing" so I can find it easily. Make sure that the "Allow Cheats" option is set to "ON", then press "Create World".
If you press "t" to bring up the text chat, then type "/s", a list of commands should appear! This list can be navigated with the "up" and "down" arrow keys, and includes every command in the game. If you start typing one out, it should prompt you for any additional syntax it requires. If the command turns red, that means the syntax is invalid.
Let's try making a list of commands that can spawn some animals. The below commands should all work when typed into the text chat, and will summon the entity at the same location as the player.
shell
/summon cow/summon sheep/summon pig/summon goat/summon llama
Now let's see if we can put these into a function!
Building a data pack folder structure
We'll need to make a new folder to build our data pack in — I'll name mine "1-introduction" to reflect the name of this article. We then need to place a "pack.mcmeta" file inside this folder to describe our pack.
json
{"pack": {"pack_format": 10,"description": "Spawns a bunch of animals around the player"}}
The
"pack_format": 10 in this file corresponds to Minecraft 1.19; typically, the format changes with each major update, so for newer versions you might need to increase this number...
We then need to create a series of folders next to this file, which should be nested inside each other as follows:
data/fennifith/functions/animals/
In this path, the
fennifith/ folder can be called a namespace — this should be unique to avoid potential clashes if someone tries to use multiple data packs at once; if two data packs use exactly the same function name, at least one of them won't work as expected.
The namespace and the
animals/ folder can be renamed as you like, but the
data/ and
functions/ folders must stay the same for the data pack to work. Additionally, it is important that the "functions" folder is exactly one level below the "data" folder. For example,
data/functions/ or
data/a/b/functions/ would not be valid structures.
Finally, we should make our
.mcfunction file in this folder. I'm going to name mine
spawn.mcfunction:
shell
summon cowsummon sheepsummon pigsummon goatsummon llama
Note that, while a preceding
/ is needed to type these commands into the text chat, it should not be included in the
.mcfunction file.
We should now have our data pack organized as follows:
shell
1-introduction/pack.mcmetadata/fennifith/functions/animals/spawn.mcfunction
Installing & testing the data pack
To turn this folder into a data pack, we simply need to convert the "1-introduction" folder into a zip file.
- Windows
- MacOS
- Linux
This can be done by holding down the Shift key and selecting both the
pack.mcmeta and
data/ files in the file explorer. Then, right click and choose "Send to > Compressed (zipped) folder".
This should create a zip file in the same location — you might want to rename this to the name of your data pack. Right click & copy it so we can move it to the Minecraft world!
To find the location of your world save, open Minecraft and find the "testing" world that we created earlier. Click on it, then choose the "Edit" option, and "Open World Folder".
In the Explorer window that opens, enter the "datapacks" folder. Right click and paste the zip file here.
Now that we've installed the data pack, you should be able to enter the world save again (or use the
/reload command if you still have it open). But nothing happens!
That's because, while our function exists, it isn't connected to any game events — we still need to type a command to actually run it. Here's what the command should look like for my function:
shell
/function fennifith:animals/spawn
If you didn't use the same folder names, autocomplete should help you figure out what your function is named. After running this command, if you see all your animals spawn, you have a working data pack!
Specifying a function tag
In order to run a function automatically, Minecraft provides two built-in function tags that run during specific events:
load (when the world is opened) and
tick (every game tick).
Using the "load" event
We'll start with
load — for which we'll need to create two new files in our folder structure! Below, I'm creating a new
load.mcfunction next to our previous function, and a
minecraft/tags/functions/load.json file for the
load tag.
shell
1-introduction/pack.mcmetadata/minecraft/tags/functions/load.jsonfennifith/functions/animals/load.mcfunctionspawn.mcfunction
Note that, while I'm using the
fennifith/ namespace for my functions, the tag file lives under the
minecraft/ namespace. This helps to keep some data isolated from the rest of the game — any files in the
minecraft/ folder are modifying Minecraft's functionality, while anything in a different namespace is creating something that belongs to my data pack.
Inside
load.json, we can add a JSON array that contains the name of our load function as follows:
json
{"values": ["fennifith:animals/load"]}
In
load.mcfunction, I'll just write one command for testing:
shell
say Hello, world!
Testing the "load" event
If you repeat the steps to install the data pack now, you should see a "Hello, world" message appear in the chat window! You could modify this message to display information about your data pack or explain how to use it.
To invoke the "load" tag manually, you can either use the
/reload command, or type
/function #minecraft:load (note the
# symbol used to specify the tag).
And the "tick" event...
Be aware that when using the tick event, it is very easy to do things that cause humongous amounts of lag in your game. For example, connecting this to our
spawn.mcfunctionfrom earlier might have some adverse consequences when summoning approximately 100 animals per second.
Now, what if we try adding a file for the
tick event with the same contents? We could add a
tick.json file pointing to a
fennifith:animals/tick function — and write a
tick.mcfunction file for it to run.
The chat window fills up with "Hello, world" messages! Every time the
tick function tag is invoked (the game typically runs 20 ticks per second) it adds a new message! This is probably not something we want to do.
Could there be a way to check some kind of condition before running our commands? For example, if we wanted to run our
say command when the player stands on a specific block...
Try experimenting! See if you can find a command that does this — and check out the next post in this series for the solution!
Conclusion
If your data pack hasn't worked first try — don't worry! There are a lot of steps here, and the slightest typo or misplacement will cause Minecraft to completely ignore your code altogether. If you're ever stuck and can't find the issue, the Unicorn Utterances discord is a great place to ask for help!
So far, we've covered the basics of data packs and how to write them — but there's a lot more to get into. Next, we'll start writing conditional behavior using block positions and entity selectors! | https://unicorn-utterances.com/posts/minecraft-data-packs-introduction | CC-MAIN-2022-33 | refinedweb | 2,040 | 60.85 |
January 2019
Volume 34 Number 1
[The Working Programmer]
Coding Naked
By Ted Neward | January 2019
Readers who missed the last column may be surprised that I’m not still talking about the MEAN (Mongo, Express, Angular, Node) stack; that’s because I wrapped up the series last time, and now it’s time to turn the attention toward something a little different.
In the MEAN series I systematically deconstructed the entire stack and examined each of its constituent parts in detail. Now I’m going to take a look at a stack (well, technology, singular, really) that’s designed to blend all of its parts into a single, seamless whole, one that’s intended to be used to hide much of the low-level detail work required. In other words, sometimes developers “just want to work with objects,” without having to worry about building out a UI, database or intermediate middleware. In many ways, this is the ultimate expression of what Alan Kay had in mind when he invented objects, back in the days of Smalltalk (which was Kay’s original object-oriented language), and it’s a natural, logical extension of the Domain-Driven Design (DDD) concept (or, perhaps more accurately, the other way around).
Smalltalk, when run, always executes inside a larger environment called a browser. In essence, the browser is a runtime execution environment, IDE and UI host all wrapped into one. (By comparison, an HTML browser is usually just a UI host, although several vendors, including Microsoft, are trying like mad to make the HTML browser into both an IDE and execution environment.) When a developer defines a new class in Smalltalk, the browser knows how to build a generic UI around a given object, and (depending on which vendor’s Smalltalk) knows how to persist that to a relational store or to its own object database. The developer doesn’t need to define “views” or “controllers,” and “models” aren’t anemic data-only classes that essentially just define a database schema. In many respects, this is object-oriented the way it was meant to be: Developers interact with customers, find the objects, define properties on those objects (and the relationships between them), define behaviors on those objects and ship the project. It was for this reason that Kay once said, “I invented the term object-oriented, and I can tell you that C++ was not what I had in mind.” All jokes at the C++ language’s expense aside, his disapproval was with all the thousands of objects that stood between the user and the actual domain object being manipulated. (Well, that and the whole directly manipulating-memory-through-pointers thing, but let’s stay focused here.) Which means I can presume he’d be equally disappointed in C# and Java. (And Node.js, to boot.)
Naturally, developers have sought to recreate this Holy Grail of object programming, and I’m going to examine one of these attempts, which goes by the name of Naked Objects. (No, I’m serious, that’s its name, which derives from the idea that developers should be focusing solely on the business domain and users should be able to work with the objects directly “without additional decoration,” if you will. Or, put another way, users should work with objects in their natural, “naked” state.)
In essence, what’s happening in this approach is equally intriguing and intimidating: Based on metadata gathered from an object at run time (via Reflection calls, usually), you generate a UI that knows how to display and edit the properties on the object, validate the edits based on additional metadata specified on the object (usually via custom attributes) and, if necessary, reject those changes that don’t meet validation. From there, you query for and store the object to the database (typically through an object/relational mapping layer), along with any additional objects that might require updates, such as objects linked by ownership or some other relation.
Of course, part of the attraction to using the Naked Objects Framework is all the puns to be made. Ready to shuck it all and dive in?
Getting Naked
Fire up a browser, point it at nakedobjects.org and notice that the Web site is a simple redirect site that lets you choose where to go, depending on which platform holds your interest: There’s a .NET flavor (which will be my focus) and a Java flavor, which also goes by the name Apache Isis. (Again, not kidding—the folks in charge of Isis are thinking about changing the name, but to be fair, they chose the name long before the folks in the Middle East did.)
When redirected to the .NET flavor, you end up on the GitHub project page for the NakedObjects Framework project, which as of this writing is at version 9. The project page has two notable links on the README homepage: One is to the Developer’s Guide, which is a must-have when working with the framework (and a great example of documentation done well), and the other is a .ZIP file containing a template solution to use as a starting point. While the NakedObjects Framework (abbreviated NOF) assemblies are available through NuGet, it’s generally easier to use the .ZIP template to begin your new NOF project, for reasons that will become more apparent later. For now, grab the .ZIP template, explode it into a subdirectory for code, and open up the Solution file in your favorite instance of Visual Studio.
When Visual Studio finishes loading, you’ll notice that the solution is made up of several different projects, five to be exact. For the most part, their names are self-explanatory: Template.Client will be a Web client, Template.Server is the Web server, and Template.DataBase and Template.SeedData represent the layers for talking to the database. (In essence, the last two are pretty straightforward Entity Framework projects, so if you already know EF, you’ve got the persistence part of NOF down, as well.)
The last project in the solution, Template.Model, is where most, if not all, of the developer work will take place. This is the collection of classes that represent the domain model of the project and, therefore, the bulk of the work that a developer will need to do should—and usually will—be done here. Fortunately, the NOF template already has a bit of sample code in it—a Student type, representing one of those creatures who loves to study—so let’s just fire it up and watch it go. Make sure that Template.Server is set as the startup project (which it should be already), punch F5 and relax.
Running Naked
First, Visual Studio will fire up the server component and, owing to the EF default configuration, will take a few moments to build the database out of nothing to start. A few seconds after start, some JSON will appear in the browser window—this is because the Template.Server is actually a RESTful server, which means not only does it operate over HTTP, it serves back JSON that describes the entire collection of options that a user can take advantage of if they want. Notice the JSON consists of what basically look like hyperlinks: “rel” for “relation,” “href” for the URL to use, “type” to describe what’s expected and so on. This is so that developers who don’t want to use the generic UI (which I’ll examine next) can create their own UI that knows how to work from the JSON being handed back.
Let’s look at the UI that NOF builds for you. In a new browser tab, navigate to. The result that comes back is … stark. It’s clearly not built to be pretty, and to the unfamiliar eye, it looks like there’s no real starting point. However, remember that REST (as Fielding originally intended it) and Smalltalk had similar aims: a universal UI, so that regardless of the domain, a user would know how to operate it. NOF is essentially building the UI collectively off of a number of static methods of classes, and these will be displayed in the top-level Menu from that homepage. Clicking it reveals three options: “All Students,” “Create New Student” and “Find Student by Name.” Pretty clearly, these are simple CRUD methods (well, C and R, anyway), so let’s see who’s already in the system. Click All Students, and three students (that are fed to EF on startup from the Template.SeedData project) will appear, along with a curious icon in the upper-right of the list returned. Clicking this icon will display the results as a table instead of just a list, but because Students only have a FullName property and nothing else, this won’t seem all that interesting yet.
Selecting a student from that list will occupy the full page. Let’s assume “James Java” has had a change of heart and wants a legal name change to go with it, so select him from the list, and when he comes up, select Edit. Notice the UI will change to make his name editable, so let’s change it to “James Clr.” Click Save, and you’re back to a read-only UI. Let’s go back and see the list of students again, so select the Home icon (the house in the lower-left corner), and you’re back to that starting menu again. You could go looking for James by selecting Menu | All Students again, but the clipboard icon shows you a list of all the objects you’ve used recently. Select it and, sure enough, “James Clr” is there. (The “Student” there is the type of object James happens to be. That’s not important in a demo that only has one kind of object, but as the system grows, so will the chances that objects named James might be both a Student and a RegistrationHistory, for example.)
Students and Studying
The Student model seems pretty anemic—students are more than just a name! They’re also a subject, so let’s add that to the model. Stop Visual Studio, and open up the Student.cs file in the Template.Model project. Student currently looks like the code shown in Figure 1.
Figure 1 The Basic Student Type
using NakedObjects; namespace Template.Model { public class Student { // All persisted properties on a domain object must be 'virtual' NakedObjectsIgnore] // Indicates that this property // will never be seen in the UI public virtual int Id { get; set; } [Title]// This property will be used for the object's title at // the top of the view and in a link public virtual string FullName { get; set; } } }
Add a new property to Student, say “Subject,” also a string, and also public and virtual, and then punch F5 again. Thanks to the development-default settings in EF, James Java will lose his name change, but notice what you’ve gained: Throughout the UI and the database, you now have students who have subjects. All of them are currently empty, mind you, but then again, lots of college students have no idea what they’re studying, either. The larger point is that with that single property, you’ve gained complete UI support, as well as database support, without having to write any code to create the UI, validate the input, or persist the data.
Wrapping Up
There’s obviously a lot here that I’m just skipping over, and it would be teasing to not explore it further, so I’ll spend a few articles examining NOF and determining what its limitations are. The important point is that particularly for applications that aren’t consumer-facing, not everything has to be a hand-crafted artisanal UI and database; sometimes the speed with which an application can be generated is far more important than how pretty it looks. NOF and other DDD-inspired kinds of tools can be exactly what the doctor ordered in those kinds of situations, and you even have some room to make things “prettier” by building a more customized Angular (or other SPA framework) front end. Next time I’ll explore some of the options and capabilities of domain classes in NOF, and we’ll see just how well NOF can handle common UI concerns. In the meantime ….
Discuss this article in the MSDN Magazine forum | https://docs.microsoft.com/en-us/archive/msdn-magazine/2019/january/the-working-programmer-coding-naked | CC-MAIN-2020-10 | refinedweb | 2,074 | 57.3 |
We should rename plugin processes on Mac OS X.
I get 'Minefield Plugin Process' under Activity Monitor. Do we still want a patch for this?
I think we still want to fix this but we won't block on it. I'd approve a patch.
Isn't this already fixed?
I should have been more specific - I meant for this bug to be about making the process name reflect the particular plugin that the process is for.
Created attachment 466209 [details] [diff] [review]
SetProcessName v0.9
Uses the method linked by Josh. I just need to know what format we want to use and how to support localization?
Would really like to see this fix finished. Willing to mentor someone fix up this patch. Otherwise I may look at it myself.
I really don't think we need to support localization for process names.
I have this working renaming the process to the plugin's name, such as 'Shockwave Flash'. Do we have any preference for the exact name?
'Plugin Container: Shockwave Flash'?
I'd like to get this landed soon.
Created attachment 539824 [details] [diff] [review]
SetProcessName v1.0
Let's get the patch reviewed even I'm still waiting on feedback for the process title.
Comment on attachment 539824 [details] [diff] [review]
SetProcessName v1.0
This is basically fine with me, but I think it could be improved.
1) Why not make getASNFunc and setInformationItemFunc static, so they
only have to be initialized once?
2) What about the case where aProcessName is NULL or (more likely) an
empty string? This is probably unusual, but I've seen it myself in
the last 2-3 days: On one of my partititions, the QuickTime
plugin's nsPluginInfo.fName somehow got set to an empty string (I
saw this in Tools : Add Ons).
3) Unlike the original Chromium code (from SetProcessName() in
mac_util.mm) your patch doesn't explain why it uses
GetCurrentProcess(). With no explanation, the call to
GetCurrentProcess() seems completely unnecessary.
Finally, for the record, I found where this code exists in WebKit --
in the non-open-source WebKitLibraries, under the following thin
wrapper (defined in WebKitSystemInterface.h):
void WKSetVisibleApplicationName(CFStringRef);
> 1) Why not make getASNFunc and setInformationItemFunc static, so
> they only have to be initialized once?
Now that I think more about it, this code will probably only be called
once. If so, this change isn't needed.
(In reply to comment #11)
> > 1) Why not make getASNFunc and setInformationItemFunc static, so
> > they only have to be initialized once?
>
> Now that I think more about it, this code will probably only be called
> once. If so, this change isn't needed.
I'm going to make the change anyways since this code will probably get moved and reused when we have content processes by default.
I haven't heard any suggestion about the name format. Should I post to dev.platform?
Created attachment 540462 [details] [diff] [review]
SetProcessName v2.0
Thanks for the review Steven. Just waiting on a decision for the process name and I will upload a final version for review.
Created attachment 541723 [details] [diff] [review]
SetProcessName v3.0
I fixed 1, 2. I tried removing 3 but the rename failed so I added a comment to indicate that this is required. I'm not sure why it is required.
I couldn't get the appName from the info service in the plugin process so I opted to use cocoa to get the current process name.
The format for the process name is '<PROCESSNAME> (<PLUGIN NAME>)".
Comment on attachment 541723 [details] [diff] [review]
SetProcessName v3.0
This looks fine to me.
One very small nit: formatedName should be formattedName :-)
Green try run, ready for checkin.
Created attachment 541933 [details] [diff] [review]
SetProcessName v4.0 (Fixed typo)
Conflicts with bug 587370.
With this patch, I get a compile error using clang:
/Users/ehsanakhgari/bin/clang/bin/clang++ -o PluginUtilsOSX.o -c -fvisibility=hidden -DMOZILLA_INTERNAL_API -D_IMPL_NS_COM -DEXPORT_XPT_API -DEXPORT_XPTC_API -D_IMPL_NS_GFX -D_IMPL_NS_WIDGET -DIMPL_XREAPI -DIMPL_NS_NET -DIMPL_THEBES -DSTATIC_EXPORTABLE_JS_API -DOSTYPE=\"Darwin10.8.0\" -DOSARCH=Darwin -DEXCLUDE_SKIA_DEPENDENCIES -DCHROMIUM_MOZILLA_BUILD -DOS_MACOSX=1 -DOS_POSIX=1 -DFORCE_PR_LOG -I/Users/ehsanakhgari/moz/inbound/dom/plugins/ipc/../base -I/Users/ehsanakhgari/moz/inbound/xpcom/base/ -I/Users/ehsanakhgari/moz/inbound/ipc/chromium/src -I/Users/ehsanakhgari/moz/inbound/ipc/glue -I../../../ipc/ipdl/_ipdlheaders -I/Users/ehsanakhgari/moz/inbound/dom/plugins/ipc -I. -I../../../dist/include -I../../../dist/include/nsprpub -I/Users/ehsanakhgari/moz/inbound/obj-ff-dbg/dist/include/nspr -I/Users/ehsanakhgari/moz/inbound/obj-ff-dbg/dist/include/nss -fPIC -fno-rtti -fno-exceptions -Wall -Wpointer-arith -Woverloaded-virtual -Wsynth -Wno-ctor-dtor-privacy -Wno-non-virtual-dtor -Wno-invalid-offsetof -Wno-variadic-macros -Werror=return-type -fno-strict-aliasing -fno-common -fshort-wchar -pthread -DNO_X11 -pipe -DDEBUG -D_DEBUG -DTRACING -g -DNO_X11 -DMOZILLA_CLIENT -include ../../../mozilla-config.h -MD -MF .deps/PluginUtilsOSX.pp -fobjc-exceptions /Users/ehsanakhgari/moz/inbound/dom/plugins/ipc/PluginUtilsOSX.mm
/Users/ehsanakhgari/moz/inbound/dom/plugins/ipc/PluginUtilsOSX.mm:177:48: error: no member named 'sApplicationASN' in namespace 'mozilla::plugins::PluginUtilsOSX'
static void *mozilla::plugins::PluginUtilsOSX::sApplicationASN = NULL;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
/Users/ehsanakhgari/moz/inbound/dom/plugins/ipc/PluginUtilsOSX.mm:178:48: error: no member named 'sApplicationInfoItem' in namespace 'mozilla::plugins::PluginUtilsOSX'
static void *mozilla::plugins::PluginUtilsOSX::sApplicationInfoItem = NULL;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
2 errors generated.
I think clang is right here, these two variables have not been declared *anywhere* <>. I have no idea why gcc doesn't choke on this.
Benoit, can you please fix this? This is currently blocking me from building, which kind of sucks. ;-)
Created attachment 543003 [details] [diff] [review]
Remove static vars for clang
I'm not entirely sure why the previous were not proper declaration of global variables with static linkage. If someone cares to explain I'd love to know?
I don't think you cared much for having it static so I figured I would take the simplest approach and revert to what I previous had since it has no impact. We're trading off storage size to cache something that wont be called again.
You're trying to define a variable which was never declared. It has to be declared in the class, e.g.
class PluginUtilsOSX
{
static void* mozilla::plugins::PluginUtilsOSX::sApplicationASN;
};
And then you define it in the .cpp file without a "static".
Created attachment 543313 [details] [diff] [review]
proposed patch
Comment on attachment 543313 [details] [diff] [review]
proposed patch
Landed on inbound.
QA tracking to verify in Firefox 7.
Setting resolution to Verified Fixed on MacOS X 10.6 and 10.7.
STR
To verify that the process from the description is renamed, I have loaded and opened Activity Monitor. The 'Firefox','Aurora','Nightly' Plugin Process (Shockwave Flash) process is present and renamed.
Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:9.0a1) Gecko/20110919 Firefox/9.0a1
Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:8.0a2) Gecko/20110921 Firefox/8.0a2
Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:7.0) Gecko/20100101 Firefox/7.0 | https://bugzilla.mozilla.org/show_bug.cgi?id=557226 | CC-MAIN-2016-36 | refinedweb | 1,169 | 51.85 |
Good evening, I am designing an RPG and attempting to teach myself all the relevant skills involved in producing the game.
Before you all laugh, I am not attempting to complete the game as a one man army, I just want to learn a decent amount of art and design so that when I hire an artist, I can more accurately describe what I want in some of their own language.
And so that when I hire a programmer, I can more accurately describe what I want in some of their own language. Besides, the more I learn, the more it all affects my concept (as in, what is realistic, what is workable, what is standard, and so on). Basically I want to be the best team leader I can by having at least some knowledge of all the relevant areas.
Also I just love learning stuff.
Anyway, this is the code I have so far, using Visual C++ 2010 with SDL.
#pragma comment(lib, "SDL.lib") #pragma comment(lib, "SDLmain.lib") #pragma comment(lib, "SDL_TTF.lib") #include "SDL.h" #include "SDL_image.h" #include <iostream> #include <stack> #include <string> #include "SDL_TTF.h" #include "Defines.h" using namespace std; int main(int argc, char *argv[]) { const int SCREEN_WIDTH = 644; const int SCREEN_HEIGHT = 400; const int SCREEN_BPP = 32; SDL_Surface *logo = NULL; SDL_Surface *border = NULL; SDL_Surface *screen = NULL; SDL_Init(SDL_INIT_EVERYTHING); SDL_WM_SetCaption("Necromancer", NULL); screen = SDL_SetVideoMode( SCREEN_WIDTH, SCREEN_HEIGHT, SCREEN_BPP, SDL_SWSURFACE); border = IMG_Load("Graphics/Title/Border.png"); logo = IMG_Load("Graphics/Title/Logo.png"); SDL_BlitSurface(logo,NULL,screen,NULL); SDL_BlitSurface(border,NULL,screen,NULL); SDL_Flip(screen); SDL_Delay( 5000 ); SDL_FreeSurface(logo); SDL_FreeSurface(border); SDL_Quit(); return 0; }
I have copied most of this stuff from websites, so most of it I genuinely have no idea what it does. But after hours of furious tweaking, it creates a screen in the size I want and displays two images I want for 5 seconds. A pretty big win for someone fumbling along.
My question is, where do I insert code to tell the program to 'fade in' the logo, then fade it out again (over a total period of 3 seconds), and what what would that code look like?
Then I would like it to fade in and fade in a second image over 5 seconds, what would that code look like?
Thank you for your time and putting up with my awkward starting point.
Kind regards,
Dan | https://www.gamedev.net/topic/637315-beginning-to-code-in-c-some-rookie-questions/ | CC-MAIN-2017-04 | refinedweb | 398 | 72.26 |
Learn how to make a remotely viewable pan and tilt security camera with a Raspberry Pi. This project can be completed in a morning with only the simplest of parts. Here’s the end result:
What you Need
- Raspberry Pi 2 or 3 with Micro SD card
- Arduino UNO or similar
- 2 x micro or mini hobby servos
- USB webcam
- Male to male hookup wires
- Male to female hookup wires
- Assorted zip ties
Building the Security Camera
Attach a servo horn (the little plastic “shapes”) to each servo using the provided screw. The particular shape does not really matter, although the larger the better. Do not over-tighten the screw.
Now use zip ties to attach one servo to the other at a right angle. One of these will be pan (left to right), whilst the other will be tilt (up and down). It does not matter which one does what, it can be adjusted in the code.
Finally, attach your webcam to one of the servos. You could use zip-ties for this, although my webcam came with a clip screwed to the bottom — I removed this and used the screw to hold it to the horn. For stability, you may want to mount the whole rig to a case or box. A simple cardboard box does the trick quite nicely. You could cut a neat square hole and mount one servo flush to the surface, however a zip tie will be sufficient.
A Word About Webcams
Not all USB webcams are created equally. Connect your webcam to the USB port of your Pi and run this command:
lsusb
This command displays information about all USB devices connected to the Pi. If your webcam is not listed here, you may want to try a powered USB hub and repeating the command. If the webcam is still not recognised you may have to purchase a compatible webcam.
Servo Setup
Whilst servos may seem scary and complex, they are really quite simple to connect. Servos operate on Pulse Width Modulation (PWM), which is a way for digital systems to imitate analog signals. PWM signals are essentially a rapid ON – OFF signal. A signal that is ON or HIGH is described using duty cycle. Duty cycle is expressed as a percentage, and describes how long the signal is ON for. A PWM signal of 25% duty cycle will be ON for 25% of the time, and OFF for the remaining 75%. The signal is not ON at the start and then OFF forever, it is pulsed regularly over a very short period of time.
Servos listen for these pulses and act accordingly. Using a duty cycle of 100% would be the same as “regular” 5v, and 0% would be the same as ground. Don’t worry if you do not fully understand how PWM works, you can still control servos (Extreme Electronics is a good place to learn more).
There are two main ways to use PWM — hardware or software. Hardware PWM often provides lower latency (how long between the servo receiving the command and moving) than software PWM, however the Pi only has one hardware PWM capable pin. External circuits are available to provide multiple channels of hardware PWM, however a simple Arduino can also handle the task, as they have multiple hardware PWM pins.
Here is the circuit:
Double-check the pinout for your Pi, they vary slightly between models. You need to figure out how your servos are wired. Servos require three wires to control them, however the colours vary slightly:
- Red is positive, connect this to Pi +5v
- Brown or black is negative, connect this to GND on the Pi
- Orange or white is signal, connect this to Arduino pins 9 and 10
Arduino Setup
New to Arduino? .
Once the servos are connected, open the Arduino IDE on your computer and upload this test code. Don’t forget to select the correct board and port from the Tools > Board and Tools > Port menus
#include <Servo.h> // Import the library Servo servoPan, servoTilt; // Create servo objects int servoMin = 20, servoMax = 160; // Define limits of servos void setup() { // Setup servos on PWM capable pins servoPan.attach(9); servoTilt.attach(10); } void loop() { for(int i = servoMin; i < servoMax; ++i) { 1 // Move servos from minimum to maximum servoPan.write(i); servoTilt.write(i); delay(100); // Wait 100ms } for(int i = servoMax; i > servoMin; --i) { // Move servos from maximum to minimum servoPan.write(i); servoTilt.write(i); delay(100); // Wait 100ms } }
All being well you should see both servos slowly move back and forth. Notice how “servoMin” and servoMax” are defined as 20 and 160 degrees (instead of 0 and 180). This is partially because these cheap servos are unable to accurately move the full 180 degrees, and also because of the physical size of the webcam prevents the full range being used. You may need to adjust these for your setup.
If they are not working at all double-check the circuit is wired correctly. Breadboards can sometimes vary in quality as well, so consider investing in a multimeter to verify.
The servos are almost too powerful for the Arduino to power, so they will be powered by the Pi. The 5v rail on the Pi is limited to 750mA provided to the whole Pi, and the Pi draws approximately 500mA, leaving 250mA for the servos. These micro servos draw approximately 80mA, meaning the Pi should be able to handle two of them. If you wish to use more servos or larger, higher powered models you may need to use an external power supply.
Now upload the following code to the Arduino. This will listen to incoming serial data (serial as in Universal Serial Bus, or USB). The Pi will send this data over USB to the Arduino, telling it where to move the servos.
#include <Servo.h> // Import the library Servo servoPan, servoTilt; // Create servo object String data = ""; // Store incoming commands (buffer) void setup() { // Setup servos on PWM capable pins servoPan.attach(9); servoTilt.attach(10); Serial.begin(9600); // Start serial at 9600 bps (speed) } void loop() { while (Serial.available() > 0) { // If there is data char singleChar = Serial.read(); // Read each character if (singleChar == 'P') { // Move pan servo servoPan.write(data.toInt()); data = ""; // Clear buffer } else if (singleChar == 'T') { // Move tilt servo servoTilt.write(data.toInt()); data = ""; // Clear buffer } else { data += singleChar; // Append new data } } }
You can test this code by opening the serial monitor (top right > Serial Monitor) and sending some test data:
- 90P
- 0P
- 20T
- 100T
Notice the format of the commands — a value and then a letter. The value is the position of the servo, and the letter (in caps) specifies the pan or tilt servo. As this data is transmitted from the Pi serially, each character comes through one at a time. The Arduino has to “store” these until the whole command has been transmitted. The final letter not only specifies the servo, it also lets the Arduino know there is no more data in this command.
Finally, disconnect your Arduino from the computer, and plug it into the Raspberry Pi via the usual USB port connection.
Pi Setup
Now it’s time to setup the Pi. First, install an operating system How to Install an Operating System on a Raspberry Pi How to Install an Operating System on a Raspberry Pi Here's how to install an OS on your Raspberry Pi and how to clone your perfect setup for quick disaster recovery. Read More . Connect the webcam and the Arduino to the Pi USB.
Update the Pi:
sudo apt-get update sudo apt-get upgrade
Install motion:
sudo apt-get install motion
Motion is a program made to handle webcam streaming. It handles all the heavy lifting, and can even perform recording and motion detection (try building ). Open the Motion configuration file:
sudo nano /etc/motion/motion.conf
This file provides lots of options to configure Motion. Setup as follows:
- daemon on — Run the program
- framerate: 100 — How many frames or images/second to stream
- stream_localhost off — Allow access across the network
- width 640 — Width of video, adjust for your webcam
- height 320 — Height of video, adjust for your webcam
- stream_port 8081 — The port to output video to
- output_picture off — Don’t save any images
This is quite a big file, so you may want to use CTRL + W to search for lines. Once finished, press CTRL + X and then confirm to save and exit.
Now edit one more file:
sudo nano /etc/default/motion
Set “start_motion_daemon=yes”. This is needed to ensure Motion runs.
Now find out your IP Address:
ifconfig
This command will show the network connection details for the Pi. Look at the second line, inet addr. You may want to set a static IP address (what is a static IP? ), but for now make a note of this number.
Now start Motion:
sudo service motion start
You can stop or restart Motion by changing “start” to “stop” or “restart”.
Switch over to your computer and navigate to the Pi from a web browser:
Where xxx.xxx.x.xx is the Pi IP address. The colon followed by a number is the port that was setup earlier. All being well you should see the stream from your webcam! Try moving around and see how things look. You may need to adjust brightness and contrast settings in the config file. You may need to focus the webcam — some models have a small focus ring around the lens. Turn this until the image is the sharpest.
Back on the Pi, create a folder and navigate into it:
mkdir security-cam cd security-cam/
Now install Twisted:
sudo apt-get install python-twisted
Twisted is a webserver written in Python, which will listen for commands and then act accordingly.
Once installed, create a Python script to execute commands (move the servos).
sudo nano servos.rpy
Notice how the file extension is “.rpy” instead of “py”. Here is the code:
# Import necessary files import serial from twisted.web.resource import Resource # Setup Arduino at correct speed try: arduino = serial.Serial('/dev/ttyUSB0', 9600) except: arduino = serial.Serial('/dev/ttyUSB1', 9600) class MoveServo(Resource): isLeaf = True def render_GET(self,request): try: # Send value over serial to the Arduino arduino.write(request.args['value'][0]) return 'Success' except: return 'Failure' resource = MoveServo()
Now start the webserver:
sudo twistd -n web -p 80 --path /home/pi/security-cam/
Lets break it down — “-p 80” specifies the port (80). This is the default port for webpages. “–path /home/pi/security-cam/” tells Twisted to start the server in the specified directory. If you make any changes to the scripts inside the “security-cam” folder you will need to restart the server (CTRL + X to close, then run the command again).
Now create the webpage:
sudo nano index.html
Here’s the webpage code:
<!doctype html> <html> <head> <title>Make Use Of DIY Security Camera</title> <style type="text/css"> #container { /* center the content */ margin: 0 auto; text-align: center; } </style> </head> <body> <div id="container"> <img src="" /> <script src=""></script><br /> <button onclick="servos.move('P', 10)">Left</button> <button onclick="servos.move('P', -10)">Right</button> <button onclick="servos.move('T', -10)">Up</button> <button onclick="servos.move('T', 10)">Down<os $.get('' + value); }, } } </script> </html>
Change “PI_IP_ADDRESS” (used twice) to the real IP address of your Pi (raspberrypi.local should also work if you’re running the latest Raspian). Restart the webserver and then navigate to the Pi from your computer, no need to specify the port. You should be able to pan left and right, and see the video stream:
There you have it. Your very own Pan and Tilt Network Camera. If you want to expose your webcam to the internet, remember to consider the dangers – then look into , so your router knows where to send incoming requests. You could add an external power supply 3 Raspberry Pi Battery Packs for Portable Projects 3 Raspberry Pi Battery Packs for Portable Projects A Raspberry Pi battery can make a regular Pi into a portable computer. You'll need one of these battery solutions to get started. Read More and Wi-Fi adaptor for a really portable rig.
Have you made something cool with a webcam and a Pi? Let me know in the comments, I’d love to see!
Explore more about: Home Security, Raspberry Pi, Webcam.
This is wonderful work and I am using it and very excited about it, however, there is ONE problem that is killing all of the joy....
When I click on one of the buttons, it works flawlessly, but I get THIS error in Chromes Inspect/Console window:
XMLHttpRequest cannot load. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin '' is therefore not allowed access.
This means I cannot use it outside of my LAN. When I try to do that, the buttons do not work at all. Do you happen to have a solution for CORS violations in Python Twisted? The port has to be something other than 80 to gain access to the outside. Changing the rpy so it reflects :81 does not work.
25.7.2017
I installed the motion only.
Sometimes I get no picture over the Firefox Browser and sometimes it works for a while then stops.
I want a real time video fom the camera display on a browser in shich I can change the Pan/Tilt over the screen with a mouse. I am wondering id Motion is the right tool to use.
Any help/tips would be gratefully received.
I am following your great instructions, and everything is fine except the frame rate is terrible. I imagine it is about 2 frames per second. I do not understand why it is so slow. I did everything according to your instruction even listening to the comment about changing the motion detection threshold to 204800 (640x320). Nothing seems to make the frame rate livable. ANY ideas? Thanks again!
You can adjust the framerate and quality in the motion config -- having done another project recently, I was able to achieve 100fps (likely capped to 24fps due to the camera), but only at lower resolutions. 1080p was horribly laggy.
In researching this, this seems to be a real problem but only for some. There is nothing in motion.config that will change the at least 1 second per frame (more like 2 seconds per frame that I am getting (that I can figure out). I spent an entire day fidgeting with the motion.conf with no results. I am beginning to wonder if Motion does not like the Logitech C270. I run OctoPrint on my 3D printers which uses MPGStreamer/Pi with those same cameras with great result. Something isnt right.
Hi Joe,
Outstanding tutorial, worked outside my firewall after I changed "$.get('' + value);" to "$.get('/servos.rpy?value=' + value);" on line 56 of index.html.
On pressing either a pan or tilt button on the first visit to the webpage, the servo rapidly slews to a limit, then responds properly to button pushes. I would like to read the present values of both the pan and tilt positions, and show them in the page, perhaps to the right of the buttons. When a button is pushed for the first time upon navigating to the page, rather than slew to a limit, the servo command will add or subtract 10 from the value read from the Arduino. Can you suggest a method from index.html to query the Arduino for the present pan and tilt positions? I know how to get the Arduino to send the positions via serial commands, but I need an example of how to request those values from the index.html page, and how to display the present pan and tilt positions. Learning how to do this one simple addition to your example opens up all kinds of possibilities for moving solar panels, to moving 2.4GHz Yagi arrays, and reporting RSSI using wavemon. Thanks for the great instruction, and please consider showing how to request and display a value from the Arduino. Thank you.
Hello Joe,
At first thank you for this project.
I tried to do all steps same like you explained.
But I have a problem, i wrote the webpage code like you explained. But I dont know which address I have to put into the webbrowser to be on the website with the buttons and video.
If i put 'my-Pi-address:8081' I only get the videostream without the buttons. If i write only 'my-Pi-address" or 'my-Pi-address/servos.rpy?value=' into the webbrowser I get only the Information 'website is not avaible' .
I would be happy if you could help me.
Thanks,
Hi Joe, Nice project, however, some of the lines of webpage code are truncated on the far right side due to indentation. Since I am not an HTML coder it is hard for me to try to imagine what is not being displayed. Is there anyway to repost the code so all of it can be seen? Or perhaps provide the code for download, perhaps via GitHub? Thanks
Thanks for stopping by!
There is a scrollbar at the bottom of the code segment, which will let you scroll horizontally to see the rest of the code.
Joe
Hi Joe, No scrollbar was displayed at the bottom of the code segment, but, I did manage to figure it out by placing the cursor on the code segment then pressing the left/right cursor control buttons. Thanks for your feedback.
Just curious on why a Pi, and Arduino is needed. Couldn't this be done with one or the other?
Hi
Resolved the issue with the servo not working.
the problem was the addressing of the port of the Arduino Uno in the PI.
Changed the lines in the Servio.rpy from
# Setup Arduino at correct speed
try:
arduino = serial.Serial('/dev/ttyUSB0', 9600)
except:
arduino = serial.Serial('/dev/ttyUSB1', 9600)
TO
# Setup Arduino at correct speed
try:
arduino = serial.Serial('/dev/ttyACM0', 9600)
except:
arduino = serial.Serial('/dev/ttyACM1', 9600)
Still working on the reason for the video dropping out after a few seconds.
Glad you got it working!
You might want to try updating the Pi ("sudo apt-get update").
You could also try using a powered USB hub for the webcam.
Hi. Your problem with the video is due to the fact that Motion was originally designed to be a motion detection software. This means that the video is cutting out whenever your webcam is detecting a change in a certain number of pixels. To remedy this open the Motion config file and under motion detection change the pixel change tolerance to a number bigger than the video feed's height multiplied by the video feed's width. You could also increase the audio tolerance to be on the safe side. This should prevent the video from cutting out (it worked for me).
HI
Tried this set up as per above. The servo's move when give a command xxP or xxT from the Arduino serial monitor but not from the web server of the PI.
Also after about 15 to 20 seconds the web cam picture goes off the screen and i am left only with the buttons for servo movement that again does not work.
Any idea where the problem could be?
Thanks in advance
V
Hello there,
Can it record on a disc or send it to a server for recording?
Thanks
Gonçalo Ferreir
Yes, Motion can record to disc -- Have a look at "output_pictures" and other options here:
I am wondering how smooth something like this would be for sports type application where you are following
I'm not too sure - I would think you will want a fluid head video tripod, or at the very least precise stepper motors. These servos are too cheap!
How do you connect the RPI to the Arduino physically? Could you post a picture?
Hey Jonas,
Connect the two using USB -- sorry that was not very clear!
And, why the arduino? This could be done from the rPI alone.
Hi Ray,
You are correct, you can do this 100% from the Pi.
The reason for the Arduino is that the Pi only has one (hardware) Pulse Width Modulation (PWM) capable pin. This means using more than one servo requires software PWM, which is not great!
Arduinos have several PWM pins.
Thanks for stopping by | https://www.makeuseof.com/tag/diy-pan-and-tilt-network-security-cam-raspberry-pi/ | CC-MAIN-2019-30 | refinedweb | 3,428 | 64.2 |
The .
MQTT makes it fairly straightforward to set up programs on a Linux machine that harvest information and publish that info on the network for small, resource-constrained microcontrollers to see and process. The recent availability of very cheap WiFi-enabled microcontrollers such as the ESP8266 makes this an exciting time to be tinkering with IoT.
The advantage of using messages is that devices can listen for interesting things and can send any information that they think is important. Every device doesn’t need to know about the other devices on the network for this to happen. For example, a weather station can just publish the temperature, humidity, wind speed, and direction and the rest of your “things” can subscribe to take advantage of that information. Although there are many ways to send messages on a Linux desktop, MQTT should let you sent messages to and from your Arduino or mbed smart devices, too. If you are interested in buying an IoT or “smart” device, you might want to investigate whether the messaging used by it is an open standard, such as MQTT.
MQTT is published as an open standard by OASIS. Many implementations of MQTT are available, including the one I’ll focus on here: Mosquitto. Mosquitto can be installed on a Fedora 23 machine using the first command below and started with the second command.
# dnf install mosquitto-devel
# systemctl start mosquitto
Programs subscribe to messages that they are interested in, and programs can publish informative messages for clients to see. To make all this work, MQTT uses a broker process, which is a central server that keeps track of who wants to hear what and sends messages to clients accordingly.
To work out which clients are interested in which messages, each message has a topic, for example, /sensors/weather/temperature. A client can request to know just that temperature message or can use wildcards to subscribe to a collection of related messages. For example, /sensors/weather/# will listen to all messages that start with /sensors/weather/. The slash character is used much like the files and folders in a file system.
The two commands below give an introduction to how easy using MQTT can be. The two commands should be run in different terminal windows, with the mosquitto_sub executed first. When the mosquitto_pub command is run you should see abc appear on the terminal that is running mosquitto_sub. The -t option specifies the topic, and the -m option to mosquitto_pub gives the message to send.
$ mosquitto_sub -t /linux.com/test1
$ mosquitto_pub -t /linux.com/test1 -m abc
A relevant question here involves the timing of these commands. What if you run the mosquitto_pub command first? Nothing bad, but the mosquitto_sub command might not see the “abc” message at all. Now, if the topic was about the current temperature, and the topic was only published every hour, you might not want the client to have to wait that long to know the current temperature. You could have your weather station publish the temperature more frequently, for example, every 5 minutes or every 5 seconds. But, the trade-off is that you are sending messages very frequently for a value that changes infrequently in order for clients to have access to data right away.
To get around these timing issues, MQTT has the retain option. This is set when you publish a message using the -r option and tells the broker to keep that value and report it right away to any new clients that subscribe to messages on the topic. Using retain, you can run the publish command shown below first. Then, as soon as mosquitto_sub is executed, you should see def right away.
$ mosquitto_pub -t /linux.com/test2 -r -m def
$ mosquitto_sub -t /linux.com/+
def
In the preceding command, I’ve used the + in the topic used by the mosquitto_sub command. This lets you subscribe to all messages at that level. So, you will see /linux.com/test2 and /linux.com/test3 if it is sent, but not /linux.com/test2/something, because that is one level deeper in the hierarchy. The # ending will subscribe to an entire tree from a prefix regardless of how deep the topic gets. So, /linux.com/# would see /linux.com/test2/something and /linux.com/test2/a/b/c/d.
Another question to consider is what happens to messages that are sent when your program is not running. For example, a program might like to know if and how many times the refrigerator door has been opened in order to graph the efficiency of the refrigerator over time. The –disable-clean-session option to mosquitto_sub tells the broker that the program is interested in hearing those messages even if the program is not running at the moment.
Because the mosquitto_sub process might exit and might be running on a different computer to the broker, it needs to identify itself to the broker so that the broker knows who it is storing messages for and when that program has started up again. The –id option provides an identifier that is used to help the broker know who is the door watching client. Note that the open2, open3, and open4 messages might be sent when the mosquitto_sub is not running in the below example.
$ mosquitto_sub -t /linux.com/fridge/door --disable-clean-session --id doorwatcher -q 1
open1
^C
$ mosquitto_sub -t /linux.com/fridge/door --disable-clean-session --id doorwatcher -q 1
open2
open3
open4
$ mosquitto_pub -t /linux.com/fridge/door -m open1 -q 1
$ mosquitto_pub -t /linux.com/fridge/door -m open2 -q 1
$ mosquitto_pub -t /linux.com/fridge/door -m open3 -q 1
$ mosquitto_pub -t /linux.com/fridge/door -m open4 -q 1
The -q options that were used above tell MQTT what quality of service (QoS) we want for these messages. MQTT offers three levels of QoS. A QoS of 0 means that the message might be delivered, QoS of 1 makes sure the message is delivered, but that might happen more than once. A QoS of 2 means that the message will be delivered, and delivered only once. For messages to be stored for a client, the QoS must be 1 or more.
The message passing shown above is not limited to working without security. Both mosquitto_pub and mosquitto_sub sessions can use username and passwords or certificates to authenticate with the broker and TLS to protect communication. This can be a trade-off; as a protocol aimed at IoT, you might be more interested in knowing that a message is from a known good source and has not been altered than that the message has been encrypted. You might not care to keep it secret that the wind is at 20 miles an hour, but you do want to know that the message came from your weather station. So, you might want a valid message authentication code but the message itself can be sent in plain text format or using only a very rudimentary cipher.
ESP8266: Mixing in Small Microcontrollers over WiFi
The ESP8266 is a small, very inexpensive, microcontroller with WiFi support (Figure 1 above). Depending on the board, you might get it for under $10. Recent versions of the Arduino environment can be set up to compile and upload code to the ESP8266 board, making it easy to get up and running.
I used an ESP-201 board and had to set the following pin connections to run the microcontroller. Pin io0 is set to ground before applying power to the ESP8266 when you want to flash a new firmware to the board. Otherwise, leave io0 not connected and the ESP8266 will boot into your firmware right away. Note that the ESP8266 is a 3.3 volt machine and connecting it to 5 volts will likely damage the hardware.
ESP-201 Connection
------------------------
3.3v 3.3v regulated voltage
io0 pulled low to flash, not connected for normal operation
io5 pulled low
chip_en pulled high (to 3.3v of supply to esp)
rx tx on UART board
tx rx on UART board
gnd gnd
The code for the ESP8266 shown below is based on an example from the Adafruit MQTT Library ESP8266. You will need to replace the WiFi SSID and PASSWORD with your local settings and update the MQTT_SERVER to the IP address of the local Linux machine on which you are running your MQTT server. You should be able to upload the program from a recent version of the Arduino IDE using a USB to TTL serial converter.
/***************************************************
#include "Adafruit_MQTT.h"
#include "Adafruit_MQTT_Client.h"
// the on off button feed turns this LED on/off
#define LED 14
/************************* WiFi Access Point *********************************/
#define WLAN_SSID "...FIXME..."
#define WLAN_PASS "...FIXME..."
// Create an ESP8266 WiFiClient class to connect to the MQTT server.
WiFiClient client;
const char MQTT_SERVER[] PROGMEM = "192.168.0.FIXME";
const char MQTT_USERNAME[] PROGMEM = "ben";
const char MQTT_PASSWORD[] PROGMEM = "secret";
// Setup the MQTT client class by passing in the WiFi client and MQTT server and login details.
Adafruit_MQTT_Client mqtt(&client, MQTT_SERVER, AIO_SERVERPORT, MQTT_USERNAME, MQTT_PASSWORD);
const char ONOFF_FEED[] PROGMEM = "/sensor/espled";
Adafruit_MQTT_Subscribe onoffbutton = Adafruit_MQTT_Subscribe(&mqtt, ONOFF_FEED);
void setup() {
...
WiFi.begin(WLAN_SSID, WLAN_PASS);
while (WiFi.status() != WL_CONNECTED) {
delay(500);
Serial.print(".");
}
Serial.println();
Serial.println("WiFi connected");
Serial.println("IP address: "); Serial.println(WiFi.localIP());
// Setup MQTT subscription for onoff & slider feed.
mqtt.subscribe(&onoffbutton);
}
The main loop in this example reconnects to the MQTT broker if the connection was lost or has not yet been made. The readSubscription() call checks for any incoming data for subscriptions from MQTT and acts on the only subscription that the program has, turning an LED on and off depending on the message. The full example from which this code was taken is available on GitHub.
void loop() {
MQTT_connect();, "ON") == 0) {
digitalWrite(LED, HIGH);
}
if (strcmp((char *)onoffbutton.lastread, "OFF") == 0) {
digitalWrite(LED, LOW);
}
}
}
if(! mqtt.ping()) {
mqtt.disconnect();
}
}
Regardless of which MQTT implementation(s) you choose to run, by selecting an open standard, you are not limited in how your IoT devices can interact. | https://www.linux.com/news/mqtt-building-open-internet-things/ | CC-MAIN-2020-24 | refinedweb | 1,671 | 63.09 |
Agenda
See also: IRC log
<trackbot> Date: 22 September 2011
<scribe> Scribenick: vhardy
ed: Topic: pre-TPAC meeting
... are we settled to have an SVG WG meeting Oct 27/28.
... I do not think we have a location yet.
... I do not think that anybody took an action to host the meeting.
cl: I think vh had an action.
vhardy: I do not recall that, but I could look into it.
<ed>
ed: Patrick said he could host at Microsoft.
<scribe> ACTION: ed to confirm with Patrick Dengler that he can host the Oct. 27/28 meeting in Santa Clara (or close by) [recorded in]
<trackbot> Created ACTION-3114 - Confirm with Patrick Dengler that he can host the Oct. 27/28 meeting in Santa Clara (or close by) [on Erik Dahlström - due 2011-09-29].
<ed>
ed: Doug sent a reminder to
everybody to register for the TPAC.
... we are meeting on Thursday/Friday.
cl: CSS WG is meeting on Sunday.
ed: vhardy asked if we should have an FX meeting during TPAC.
cl: they are meeting Sunday/Monday/Tuesday.
ed: we usually schedule a half
day.
... do we want to have an FX meeting as one of the SVG WG days?
cl: we could do the meeting during the SVG group meeting days.
<scribe> ACTION: ed to coordinate with the CSS WG chairs on joint FX meeting during TPAC. Agree on meeting date/time. [recorded in]
<trackbot> Created ACTION-3115 - Coordinate with the CSS WG chairs on joint FX meeting during TPAC. Agree on meeting date/time. [on Erik Dahlström - due 2011-09-29].
ed: I will try to work on the use
cases and requirements document next week, first working on the
wiki, then on a document.
... heycam is away. I am not sure if he will be able to help when he comes back.
cl: last week, at the typography conference, people were interested in SVG for glyph definitions. I have good examples of what people want. There was interest from Adobe and Microsoft during the conference.
<scribe> ACTION: Chris to document typographic requirements for SVG gathered during the recent typography conference he attended. [recorded in]
<trackbot> Created ACTION-3116 - Document typographic requirements for SVG gathered during the recent typography conference he attended. [on Chris Lilley - due 2011-09-29].
ed: I have see there is a bit more emails on the mailing list, request for new features. Has anybody adding this to the requirements page.
cl: there was a comment about
device RGB on the mailing list. Did I miss a discussion on
that?
... I do not want to head backwards. It is not going to help us long term.
ed: no, I do no think you missed a conversation.
cl: there was also feedback from Rik.
vhardy: I think Rik will have a more crystalized position for Adobe by TPAC.
ed: there was also a requirement for stroke position (inside, outside path).
cl: I thought it was above or below fill. The inside/outside is a little harder to implement. But it is a good one to have.
vhardy: I think it is a great requirement (both above/below and inside/outside).
cl: implementations may be able to do it in different ways. Would be good to have.
ed: anybody could go over the recent email and make sure that we have all the input on the wiki page?
<scribe> ACTION: Chris to go over recent email threads on requirements (strokes for example) and add to the requirements wiki page. [recorded in]
<trackbot> Created ACTION-3117 - Go over recent email threads on requirements (strokes for example) and add to the requirements wiki page. [on Chris Lilley - due 2011-09-29].
ed: the list is long. We need to decide on what we will have in SVG 2.0. This is part of the document I'll start writing up soon.
<ed>
ed: everybody. It would be great
if everybody could add their comments on the features.
... there is a template for doing that.
cl: could we group the features that are the same or very similar.
ed: some features do not have a
lot of description, and it makes it harder to judge their
relevance/importance.
... one of the big items is the namespace requirements clean-up.
... is that something we can resolve today.
cl: we have a resolution to drop
the xlink:href attribute and replace it with href
attribute.
... for SVG if it is XML then you need to declare the SVG namespace, but not if it is in HTML5.
... for XML Events namespace which we used in SVG Tiny 1.2. We wont need that. We will reference DOM Events and not XML Events in SVG 2.0.
... it might be as simple as importing things in our own namespace.
... the last thing is that for custom data, in XML you do that with things in your own namespace. In HTML5, it is data-*. We want to define that in SVG so that it is clear that for SVG elements, then the data-* should be reserved and mean the same thing as in HTML.
ed: what about XML id and XML base?
cl: HTML5 only uses base and id instead of xml:base and xml:id.
ed: SVG Tiny 1.2 requires both id and xml:id.
cl: there were questions about using both.
ed: I would like to drop xml:base
and xml:id. There is not that much content that would break if
we change that.
... if we go with that, we should come up with a way to make legacy content work.
cl: for SVG 2.0, we should put the id and base attributes in the per-element partition and say they mean the same thing as in the HTML5 spec.
<rect id="rect_1" base="" ... />
cl: is base the same in HTML5 as in xml:base?
ed: I think they are the same,
but I am not 100% sure.
... it is the same feature, it should behave the same. There may be minor differences.
<scribe> ACTION: Chris to document the changes around namespace handling in SVG 2.0 (id, base, attributes, etc..). [recorded in]
<trackbot> Created ACTION-3118 - Document the changes around namespace handling in SVG 2.0 (id, base, attributes, etc..). [on Chris Lilley - due 2011-09-29].
ed: for xlink, I wonder if we drop the xlink prefix, we may get conflicts with some SVG attributes. e.g., xlink:title. There may be other clashes, but I have not checked. The ones used are xlink:title and xlink:href.
cl: yes. In 1.1, for the ones other than xlink:href we said they are for documentation, nothing else.
<cyril> or maybe type vs. xlink:type ?
ed: ok, we can resume the
discussion when we have the wiki page done.
... there is a big thing with improving the SVG DOM. I want to see if there is anyone here who wants to make changes to the DOM that are backwards incompatible. Or do we agree on backward compatibility?
<pdengler> what do you mean by partially backward compatible
vhardy: I would be worried about not being backward compatible because there is content using the DOM.
ed: by partially backward compatible, I would like to change the things that are not implemented, not well implemented or not interoperable. But otherwise, be backward compatible.
vhardy: sounds good.
<pdengler> I agree that we need to maintain backward compatability; if there are areas that are maybe really not travelled we could explore
ed: there are some minor parts of the DOM that are not clear or cannot work (e.g., issues with % values).
<pdengler> Agreed, interop is key and if we are not interoperable, these are areas that we could consider
cl: do you have a list?
<ed>
<scribe> ACTION: Erik to raise issues on things that are currently broken in the DOM design. [recorded in]
<trackbot> Created ACTION-3119 - Raise issues on things that are currently broken in the DOM design. [on Erik Dahlström - due 2011-09-29].
<krit> What about using CSS Units instead of SVG units where possible. If we want to add CSS animations and transitions, this could help the process a lot.
<krit> This could affect SVG DOM
ed: how do CSS units differ from SVG units?
cl: they are the same, except that we allow things to be in user units, and CSS does not.
<krit> SVGLength has more untis IIRC
<krit> just as an example
cl: but I think there is equivalence all the time.
<krit> SVGTransform is different to CSSTransform
cl: for angle, SVG also always requires a unit.
ed: when we put SVG in HTML, that would be a problem about specifying the units.
<krit> SVG DOM might not notice any difference if baseVal points to the CSS property value and animVal points to the computed value
cl: when you use the CSS syntax, then you use the CSS syntax. The different parsing rules are only on the presentation attributes.
ed: there are new units added to
the CSS spec. that would be nice to support in SVG. That should
just work.
... for example the 'vh' units (viewport height).
... there are some others.
... they are units that we do not require at the moment.
cl: we have to decide the spec. we depend on. In a f2f meeting, we decided the CSS spec. we should depend on and the CSS Values and Units is one of them. So this is taken care of.
ed: ok then, let's move on.
... for the SVG transform v.s., CSS transforms?
vhardy: there is an FX task force action on Dean, Simon and myself to deliver a consolidated spec. that works on CSS and SVG in a unified way. This problem will be addressed there.
ed: there was a proposal for allowing alternative transforms such as mesh, cone, etc.. from David Dailey?
<krit> We have a problem with SVGLengthList..
cl: this collides with the mapping work. We should ask the mapping task force. They are all non-linear transforms.
<krit> This is also not covered in the discussion on CSS animations of x-Attribute
cl: we can split this between the mapping related transforms and the other ones.
<krit> Is there something similar in CSS?
cl: there was another request to
do the polar coordinate systems and also a request about
perspective transforms.
... these are 3 separate discussions.
<krit> I tried to implement CSS transforms beside SVG transforms. It would work for us in WebKit
<krit> one would get multiplied after the other
<krit> I still think it is a good idea to combine them like we do for other CSS properties (style attributes)
<pdengler> i think we should treat transforms just like any other SVG properity. That is one overrides the other.
ed: Going over the list, there is
nothing more that had a lot of comments on it.
... Everybody please go and update the wiki with your comments.
... Moving on to the list of current resolutions.
ed: we had a resolution to
publish an SVG integration spec.
... we need Doug for this. I am not sure if he has made any changes to it.
... on the resolutions page, I noted that there is no rationale nor actions for most of them. We should have both for resolutions, especially for difficult resolutions. Actions may not always be needed, but often they would be needed.
... for example on constructive geometry.
cl: there is no action required for this because it is already in the vector effects spec.
vhardy: may be we could go over the list during the f2f and agree on Action Items where needed or add rationale if not clear.
ed:
... is the proposal up to date?
tbah: yes.
<tbah>
<scribe> ACTION: tbah to update the proposal to reflect [recorded in]
<trackbot> Created ACTION-3120 - Update the proposal to reflect [on Tavmjong Bah - due 2011-09-29].
ed: we have a resolutaion on adding the mesh gradient.
<krit> Is there a easy way to implement mesh gradients with usual graphic libraries and basic drawing operations?
cl: do we have an agreement on how to proceed with the spec.?
ed: we have the spec. in place. We need to get jwatt to describe how to go about we start editing the spec.
tbah: once we have that, I can add the gradient mesh feature.
<scribe> ACTION: tbah to edit the SVG 2.0 spec. and add the gradient mesh specification. [recorded in]
<trackbot> Created ACTION-3121 - Edit the SVG 2.0 spec. and add the gradient mesh specification. [on Tavmjong Bah - due 2011-09-29].
ed:
... this seems like a fairly large change.
cl: the action should be to look at the changes that need to happen and then propose before including.
ed: heycam seems to be the best person to look into this?
cl: yes.
<scribe> ACTION: heycam to assess required changes to SVG 2.0 text section to complete resolution [recorded in]
<trackbot> Created ACTION-3122 - Assess required changes to SVG 2.0 text section to complete resolution [on Cameron McCormack - due 2011-09-29].
ed: next is.
... do we add this to the reference section.
<scribe> ACTION: cl to add CSS spec. dependencies in the reference section and add proper usage from within the specification text. [recorded in]
<trackbot> Created ACTION-3123 - Add CSS spec. dependencies in the reference section and add proper usage from within the specification text. [on Chris Lilley - due 2011-09-29].
ed:
<scribe> ACTION: vhardy to propose API for [recorded in]
<trackbot> Created ACTION-3124 - Propose API for [on Vincent Hardy - due 2011-09-29].
ed:
<scribe> ACTION: heycam to make a proposal and add to spec. [recorded in]
<trackbot> Created ACTION-3125 - Make a proposal and add to spec. [on Cameron McCormack - due 2011-09-29].
ed: Resolutions
...
vhardy: describes how people use 3d transforms to turn on buffered rendering. I think the attribute is needed.
ed: I think the discussion was
that it is possible to detect what needs to be
rasterized.
... but it seems true that people use 3d transforms to do buffers today.
vhardy: ok. I am happy to proceed and raise this again based on implementaiton or usage feedback.
ed:
... there was a lot of feedback on that.
<scribe> ACTION: ed to rip out the font chapter from the SVG spec. and replace it with SVG tiny fonts from SVG Tiny 1.2. Then move the SVG Full fonts to a separate module.. [recorded in]
<trackbot> Created ACTION-3126 - Rip out the font chapter from the SVG spec. and replace it with SVG tiny fonts from SVG Tiny 1.2. Then move the SVG Full fonts to a separate module.. [on Erik Dahlström - due 2011-09-29].
<krit> SVG Tiny would not be a module?
<pdengler> I thought we were going to completely moduarlize SVG ?
cl: I don't mind taking over the SVG Full Fonts module.
<krit> How much of SVG 1.1 fonts are implemented in other svg viewers? I'm more interessted in the differences. Do some viewers dupport more than SVG Fonts tiny?
cl: we would drop the general font mechanism we use and then only use the bits needed for integration in open type.
<ed> krit: batik does AFAIK, and maybe ASV
<scribe> ACTION: chris to rework the SVG Full font module as an open type glyph table. [recorded in]
<krit> Ok, so we should keep SVG Fonts 1.1 completely?
<trackbot> Created ACTION-3127 - Rework the SVG Full font module as an open type glyph table. [on Chris Lilley - due 2011-09-29].
<krit> .. as a module
<pdengler> I think we should keep the SVG Fonts as a separate module
<pdengler> Then "other" fonts should be handled in Text?
<pdengler> We should "make" the SVG Fonts
cl: eventually, this would be an opentype module. Through the woff support, you would get to the module.
pdengler: ok. This is not svg specific then?
cl: yes, the svg render does not see it directly, it only sees it as a woff font. It could be used in printing for example.
pdengler: yes, SVG fonts are
useful in some contexts.
... I just wanted to support the modularization of the SVG spec. and support what the industry and browsers need.
cl: I don't think there is a problem there.
ed:
<scribe> ACTION: cl to make the changes to the SVG 2.0 spec. for [recorded in]
<trackbot> Created ACTION-3128 - Make the changes to the SVG 2.0 spec. for [on Chris Lilley - due 2011-09-29].
ed:
<scribe> ACTION: cyril to edit SVG 2.0 for [recorded in]
<trackbot> Created ACTION-3129 - Edit SVG 2.0 for [on Cyril Concolato - due 2011-09-29].
ed:
cl: this seems to be working now. It is a bit annoying to not see the reference and the test at the same time, but it does work.
ed:
cl: are there in the spec?
ed: the resolution was to
consider a proposal if there was one.
... there was an action that I'll add to the table.
<ed> ACTION-2681?
<trackbot> ACTION-2681 -- Doug Schepers to write up the connector proposal -- due 2009-10-08 -- OPEN
<trackbot>
ed:
<scribe> ACTION: heycam to edit the spec. for. [recorded in]
<trackbot> Created ACTION-3130 - Edit the spec. for. [on Cameron McCormack - due 2011-09-29].
<scribe> ACTION: heycam to edit the spec. to broaden the references in SVG (any use of href, e.g., a gradient from another SVG fragment). [recorded in]
<trackbot> Created ACTION-3131 - Edit the spec. to broaden the references in SVG (any use of href, e.g., a gradient from another SVG fragment). [on Cameron McCormack - due 2011-09-29].
<ed> trackbot, end telcon
This is scribe.perl Revision: 1.136 of Date: 2011/05/12 12:01:43 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Succeeded: s/Frida/Friday/ Succeeded: s/in the SVG namespace/in the per-element partition/ Succeeded: s/resolution/discussion/ Found ScribeNick: vhardy Inferring Scribes: vhardy Default Present: ed, +33.9.53.77.aaaa, tbah, ChrisL, +1.425.868.aabb, +1.425.868.aacc Present: ed +33.9.53.77.aaaa tbah ChrisL +1.425.868.aabb +1.425.868.aacc Regrets: DS CC Agenda: WARNING: No meeting chair found! You should specify the meeting chair like this: <dbooth> Chair: dbooth Found Date: 22 Sep 2011 Guessing minutes URL: People with action items: chris cl cyril ed erik heycam tbah vhardy WARNING: Input appears to use implicit continuation lines. You may need the "-implicitContinuations" option.[End of scribe.perl diagnostic output] | http://www.w3.org/2011/09/22-svg-minutes.html | CC-MAIN-2016-50 | refinedweb | 3,102 | 76.82 |
RealmContent: Add Real-time Updates to your iOS App in 5 Min
Introduction
RealmContent coupled with the Realm Mobile Platform quickly gives the developers the ability to add new content into an iOS App by adding data directly into the Realm Browser app.
RealmContent automatically adds the Realm objects it needs into the app’s own schema and allows the developer to use a number of pre-configured View Controllers to list the available content and to render each of the content “pages”.
The component works mostly automatically while it allows for a lot of customization. It saves developers time and drudgery of implementing the underlying plumbing themselves or even having to roll out a completely new version of the app to change in-app content.
The library is the easiest and fastest way to push content in real-time to your complete user base.
How does it work? The technical crash-course
There are five easy steps to add a dynamic content management system to an iOS app.
1) Import
RealmContent
Add RealmContent via CocoaPods or include the source files directly into your project. Then import both
RealmSwift and
RealmContent in a view controller:
import RealmSwift import RealmContent
Your app would normally have a list of objects it syncs from the Realm Object Server. Once you import
RealmContent it will expose two new models which Realm will add to your default schema:
ContentPage and
ContentElement.
If you’re using multiple Realm files, add
ContentPage and
ContentElement to the desired object schema.
2) Create a content list data source
To display a list of the available content in your app you can use the
ContentListDataSource class that
RealmContent provides you with:
let items = ContentListDataSource(style: .sectionsByTag)
Use
.plain for a plain list or
.sectionsByTag for a list with sections having the pages split by their
tag property.
3) Initialize the data source
Call
loadContent(from:) to set which realm file to use as the content source:
items.loadContent(from: try! Realm())
You can also have the data source automatically tell your table or collection view to reload the data whenever changes come in from the Realm Object Server (best option):
items.updating(view: tableView)
4) Implement your table view or collection view data source methods like usual, but fetch data from the content data source. The class offers few methods to do that like:
numberOfSections,
numberOfItemsIn(section:),
titleForSection(section:), and
itemAt(indexPath:).
This way you can implement your own table view or collection view data source methods and do any UI setup or other logic that you need. (For more detailed code sample check out the demo app in the
RealmContent repo.)
5) Presenting a content “page”
Present a
ContentViewController instance to display content. You can do this from a table/collection view tap delegate method,
prepareForSegue(_:) method, or from an arbitrary piece of code.
Here’s an example of how would it work after a tap on a table cell:
func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) { tableView.deselectRow(at: indexPath, animated: true) let item = items.itemAt(indexPath: indexPath) let vc = ContentViewController(page: item) navigationController!.pushViewController(vc, animated: true) }
The presented view controller uses a table view to dynamically show the content from the given
ContentPage element. In case you change the content remotely, the changes will be reflected in real-time on screen.
Dynamically add, edit, or remove content
The best aspect of using
RealmContent.
Try RealmContent now
The component is open source and you can find the source code and a demo app here:.
The quickest way to try
RealmContent in your app is via CocoaPods; add
pod 'RealmContent' to your Podfile.
If you have an idea for a feature, want to report a bug, or would like to help us with feedback reach us on Twitter or create an issue on GitHub.
We’re excited to see how far are you going to push this component and the awesome dynamic apps you’re going to create with it! | https://realm.io/blog/marin-todorov-realmcontent-open-source-cms/ | CC-MAIN-2020-45 | refinedweb | 664 | 51.38 |
Lesson 10 - Serialization and deserialization in C# .NET
In the previous lesson, LINQ to XML in C# .NET, we introduced the LINQ to XML technology. In today's tutorial, we're going to talk about serialization and deserialization.
This article was written by Jan Vargovsky.
Serialization means preserving an object's state. A bit more scientifically, it could be described as converting an object to a stream of bytes and then storing it somewhere in memory, database, or a file. Deserialization is the opposite of serialization. One could say it's a conversion of the byte stream back to the object copy.
What is it good for?
Serialization allows us to save the object state and then, thanks to deserialization, recreate it once more. Serialization is used to do things like send data through a network or save program settings.
Sample application
Let's create a new project of the Windows Forms Application
type. Then, we'll add a class whose instances we'd like to preserve and restore
to the state in which they were when the application was closed. We'll name the
class
User and add the FirstName, LastName, and BirthDate
properties to it. The class might look something like this (note that it's
public):
public class User { public string FirstName { get; set; } public string LastName { get; set; } public DateTime BirthDate { get; set; } }
On the main form, let's add two TextBoxes for the first and last names. We'll also add a DateTimePicker control so we could specify the user's birth date. Next, we'll add a button somewhere which we'll use to add users to our application. We'll add a ListBox control for displaying users. Finally, we'll rename the controls from their default names to something that will help us differentiate them:
Next, we'll create a collection of the List<User> type so we have something to store our users into. Let's move to the main form (Form1) and add a private collection to its class.
private List<User> users = new List<User>();
Move back to the form designer and add a method to the Click event on the addButton control. We'll add the code for adding a user to our collection. Furthermore, we also have to show the users in the usersListBox. We'll use the DataSource property to do so.
private void addButton_Click(object sender, EventArgs e) { // Creates a new user using the data from the controls User user = new User { FirstName = firstNameTextBox.Text, LastName = lastNameTextBox.Text, BirthDate = birthDateDateTimePicker.Value }; // Adds the user to our collection users.Add(user); // Refreshes the data source of our usersListBox usersListBox.DataSource = null; usersListBox.DataSource = users; }
Now we have a fairly functional application. When you launch it and try to add a user, you'll see that it actually adds a "ProjectName.User" item. Let's move on to the User source file and override the ToString() method. Here's what you'll need to change it to:
public override string ToString() { return "First name: " + FirstName + " Last name: " + LastName + " Birth Date: " + BirthDate.ToShortDateString(); }
Try to add several users and see just how much more human-readable they are.
Serialization
Now, we can finally move to data serialization. Let's create a Serialize() method in the form's code-behind.
private void Serialize() { try { // Creates XmlSerializer of the List<User> type XmlSerializer serializer = new XmlSerializer(users.GetType()); // An alternative syntax could also be: //XmlSerializer serializer = new XmlSerializer(typeof(List<User>)); // Creates a stream using which we'll serialize using (StreamWriter sw = new StreamWriter("users.xml")) { // We call the Serialize() method and pass the stream created above as the first parameter // The second parameter is the object which we want to serialize serializer.Serialize(sw, users); } } catch (Exception ex) { MessageBox.Show(ex.Message); } }
We used the serializer for the XML format. There are several types of serializers including the binary one which .NET framework provides. In fact, there's no need to worry about anything since instances are serialized automatically. Let's go back to the form designer and find the OnClosing event, where we'll double click and call our Serialize() method in the handler code.
private void Form1_FormClosing(object sender, FormClosingEventArgs e) { Serialize(); }
Note: This method's name may vary depending on the name of the main form. If you didn't modify it, it should all look the same.
If we start the program, add several users and close it, a collection of the
users will be serialized and stored in
ProjectName/bin/Debug/users.xml. When we open the file, it should
be readable. For me, the file looks like this:
<?xml version="1.0" encoding="utf-8"?> <ArrayOfUser xmlns: <User> <FirstName>John</FirstName> <LastName>Smith</LastName> <BirthDate>2013-07-11T17:27:19</BirthDate> </User> <User> <FirstName>James</FirstName> <LastName>Brown</LastName> <BirthDate>2013-07-11T17:27:19</BirthDate> </User> </ArrayOfUser>
Deserialization
The serializing part is now done, so let's move on to deserialization. From a
coding point of view, it's a bit more complex, so we'll go through the entire
process again. We'll start by creating a Deserialize() method in the main form's
code. Then, we'll need to determine whether the XML file with the data even
exists. The File class and its Exists(string path) bool method
will help us with that. In the condition's body, we'll create an
XmlSerializer instance which will be of the same type as our
users List. Once that's done, we'll create the
StreamReader instance with a path to our file and then simply
call the Deserialize() method on the XmlSerializer. However,
there is a small catch, the Deserialize() method returns an
object. Meaning that we'll have to cast it
before we assign our saved users to our existing ones. The whole method, with
all of this in mind, looks is as follows:
private void Deserialize() { try { if (File.Exists("users.xml")) { XmlSerializer serializer = new XmlSerializer(users.GetType()); using (StreamReader sr = new StreamReader("users.xml")) { users = (List<User>)serializer.Deserialize(sr); } } else throw new FileNotFoundException("File not found"); } catch (Exception ex) { MessageBox.Show(ex.Message); } }
We'll call this method in the form's Load event. Now, let's move to the designer, find the Load event (in the form's properties), and create its handler method. We'll call our Deserialize() method in it and load the users to our ListBox. The entire method looks like this:
private void Form1_Load(object sender, EventArgs e) { Deserialize(); usersListBox.DataSource = users; }
Note: This method's name may also vary just like the previous handler method.*
When you open the application, add data, close it, and open it again. It should include all of the users that you added before.
The conclusion
In conclusion, I'd like to mention a few things which you'd probably find out soon or later when serializing objects for your applications.
- The class being serialized must have a parameterless constructor (or a non-parametric constructor). This is because the first thing the deserializer does is create an empty instance, and then gradually assigns its properties as it reads them from the file (or from any other stream).
- You can't serialize controls, not the .NET ones nor your own (User Controls). There's no need to serialize everything. Only save the things that you really need.
- Serialization includes several attributes, such as:
- [XmlIgnore] - Keeps it from serializing the property.
- [Serializable()] - Marks the class clearly so that it will be serialized. (Implements the ISerializable interface). We can place this one right above the class declaration.
- [XmlAttribute("Name")] changes the XML element from paired to unpaired, and puts the value of the property in the attribute. For example,
<User FirstName="John">rather than
<FirstName>John</FirstName>.
- If you've ever wanted to serialize the Color class, it can be serialized. However, it will have no value in the file. You may serialize it as a hexadecimal number, and when you deserialize it you can convert it back to a color.
In the next lesson, we'll work with binary files.
Download
Downloaded 18x (59.9 kB)
Application includes source codes in language C#
No one has commented yet - be the first! | https://www.ict.social/csharp/files/serialization-and-deserialization-in-csharp-net | CC-MAIN-2020-10 | refinedweb | 1,373 | 57.87 |
.
Spock is an old friend
We came to know Spock as one of our favorite testing and specification frameworks around. Around the time of Grails 1 we could use Spock 0.7 by including the Grails Spock plugin manually. Looking back through the Grails upgrade guides, as of 2.3 Spock became the default. Testing became definately more fun again. 🙂
Optional will return a default answer of Optional.empty()
Here’s one of the newer features you can now use when testing your Grails application.
You know that when there’s no interaction defined for a method call, mocks will return a default value based on their return type? If the return type is a
boolean the mock will return
false, for a number it will return
0 etcetera.
The — in Java 8 introduced —
java.util.Optional had no support yet: when called on a
Mock it would return
null (leading to NPEs further down the road) and called on a
Stub it would trigger a
CannotCreateMockExcdeption.
Well, with Spock 1.1 not anymore. When there would be no interaction defined for a method returning an
Optional, now by default an empty
Optional is returned.
import spock.lang.Specification class OptionalSpec extends Specification { def "default answer for Optional should be Optional.empty()"() { given: TestService service = Stub() when: Optional<String> result = service.value then: !result.present } } interface TestService { Optional<String> getValue() }
For a comprehensive overview of everything in Spock release 1.1 see the release notes.
One thought on “Grails 3.3 has Spock 1.1” | https://tedvinke.wordpress.com/2017/05/12/grails-3-3-has-spock-1-1/ | CC-MAIN-2018-05 | refinedweb | 255 | 60.01 |
ObjectData Spline Priority
On 15/11/2015 at 08:52, xxxxxxxx wrote:
User Information:
Cinema 4D Version: 16
Platform: Windows ; Mac OSX ;
Language(s) : C++ ; XPRESSO ;
---------
Hello Forum,
In the linked scene file there is the following setup:
1. A mesh being deformed by a Skin object.
2. A Spline object with its points being positioned by control Null objects through an Xpresso tag.
3. One of the control Nulls is being clamped to a mesh point by a Constraint tag.
If the Spline object is the active object and the editor is in points mode, the scene works as expected. If the Editor is in object mode, the Spline lags during animation playback. My first goal is to get rid of the lag regardless of editor mode or object selection.
I have tried to create an ObjectData plugin that generates a Spline where I included Texpression in the .res file. I have tried many different Priority settings and cannot get rid of the Spline's lag.
Is there a way to configure an ObjectData plugin that generates a spline where I can control the Priority?
My end goal is to use the Spline Deformer in a character rig where I'm implementing a cage and a Mesh Deformer. The Spline's point positions will be driven by the cage. The Spline Deformer will deform another mesh being deformed by the Mesh Deformer.
I'm open to any suggestions except point caching because that will not work for the client/animator.
Thank you,
Joe Buck
On 16/11/2015 at 07:24, xxxxxxxx wrote:
Hi Joe,
while I have not been able to solve your problem with playing around with the priorities in your scene (it seems as if there's a refresh missing and I can't eliminate the possibility of a bug in Xpresso), I think I have a solution for you using a Python tag.
In your scene simply replace the Xpresso with a Python tag.
Create two User Data on the Python tag. First (ID 1) an integer, which will define the spline point to be influenced. Secondly a Link box (ID 2), where the Null object (or any other object) gets thrown in, which defines the position of the spline point.
Then use the following code for the Spline tag:
import c4d def main() : sp = op.GetObject() if not sp.CheckType(c4d.Ospline) : print "Error: Attach to spline" return idxPoint = op[c4d.ID_USERDATA, 1] # User Data ID 1 is an integer ref = op[c4d.ID_USERDATA, 2] # User Data ID 2 is a link if ref is None: print "Error: No refernce linked" return if idxPoint < 0: idxPoint = 0 lastSplineIndex = sp.GetPointCount() - 1 if idxPoint > lastSplineIndex: idxPoint = lastSplineIndex op[c4d.ID_USERDATA, 1] = lastSplineIndex newPos = ref.GetMg().off #print ref.GetName(), " influences spline point: %d" % idxPoint #print " New pos:", newPos sp.SetPoint(idxPoint, newPos) sp.Message(c4d.MSG_UPDATE) # make spline aware of point change c4d.EventAdd(c4d.EVENT_NOEXPRESSION) # NOTE: EVENT_NOEXPRESSION is needed, # otherwise EventAdd causes reevaluation of scene # (and tag being executed endlessly)!!!
On 16/11/2015 at 08:57, xxxxxxxx wrote:
Hi Andreas,
Thanks for checking this out for me.
Revised scene file:
The revision to the scene does not appear to fix the problem for me.
In my humble opinion, it has something to do with a priority issue between a skin object and how a spline is generated.
Also, it is curious that the spline is generated without lag when the editor is in points mode and the spline is the active object.
Does the priority change for a spline object when its active representation is drawn in the editor? It appears that an inactive spline is drawn with less detail than an active spline. L.O.D. calculations?
Here is a link to a scene where the spline's control null is parented to the joints tip with a constraint tag:
The spline is generated without lag. This setup will not work for a character rig as I need to get point locations from a cage mesh.
Please let me know if you have any other suggestions.
Thanks,
Joe Buck
On 16/11/2015 at 09:40, xxxxxxxx wrote:
Hi Joe,
I checked your splinelag_01.c4d.zip scene, and it seems to work for me. I agree, it might look as if there's some drag, while moving the power slider (the slider below the viewport), but as soon as I release the power slider, the spline hits exactly the tip of the cone. And so it does, if I jump to an arbitrary frame or when the scene is being rendered.
On the other hand, I agree, that the lag during usage of the Power Slider is strange. And so is the difference between object mode and point mode with the spline selected. I'm looking into it...
On 16/11/2015 at 09:45, xxxxxxxx wrote:
Forgot one thing:
I don't think, it is related to the skin and the spline object. Instead, I think, the Xpresso or Python tag, which in your scenario needs to be run below Generator priority (higher prio value, meaning, it is actually handled after Generators), is leading to this strange behavior. But that's still guess work, as I said, I'm looking into it.
On 16/11/2015 at 09:45, xxxxxxxx wrote:
Cool. Thanks Andreas.
On 16/11/2015 at 10:51, xxxxxxxx wrote:
Hi Andreas,
I have tried many priorities and stacking orders with no success. Perhaps I have not stumbled across the magic combination. I was hoping you could take some of the magic out of it for me.
Since we are on this topic:
Does object manager order take precedence over priority settings in the attribute manager?
Much Thanks,
Joe Buck
On 20/11/2015 at 09:42, xxxxxxxx wrote:
Hi,
no, priorities precede Object Manager order. Only if two entities have the same priority, the order in the Object Manager comes into play. Actually the priority settings are there, so you are not dependent on Object Manager hierarchy alone.
By now, I have talked to several people about this issue and I'm afraid there is no real solution.
The reason is, that the Skin Object (despite its priority setting) is a Generator like all other deformers and thus runs on Generator/0 priority. The Spline Object is a generator as well, equally running on Generator/0 priority. In your setup, you need to read values from the Skin Object (so you would want to run on Generator/1 or higher to read the latest generated values) and on the other hand you need to feed these values into the Spline Object, which of course should happen before it is generated, latest on Generator/-1. So you basically need data earlier than you can get it... Unfortunately this situation is not really solvable in the current system.
I still think, my Python tag version may be a workaround, but of course it heavily slows down a scene (due to double evaluation) and it still has the drawback of the "virtual" viewport lag. Virtual, because I think, it is only noticeable, while user interaction (dragging a slider), but it should actually work during rendering.
On 20/11/2015 at 20:49, xxxxxxxx wrote:
Hi Andreas,
I think the real issue is when ObjectData::GetContour() is called. After further testing, I think using PriorityList::Add() in ObjectData::AddToExecution() does not change when GetContour() gets called. It only changes when ObjectData::Execute() is called. So having a higher priority than 0 makes Execute() get called after GetContour(). Having 0 or lower makes Execute() get called before GetContour(). Of course this is only another guess on my part.
There should probably be one more level( EXECUTIONPRIORITY_FUBAR ) added to the pipeline that only plugin developers can use to pick up the pieces.
After some research I found out that Cactus Dan had this figured out years ago. It appears that his CD Skin tag is called early in the pipeline and you can use it to rig with greater flexibility.
Thanks for your time Andreas.
Joe Buck
On 23/11/2015 at 12:19, xxxxxxxx wrote:
Hi Joe,
yes, that's exactly the point. With the priority you can only influence, when Execute() is called, GetContour(), GetVirtualObjects() and ModifyObject() are not influenced by the priority.
Unfortunately another priority level would be no solution either. As in your case you would need to get in between ModifyObject() of the Skin deformer and GetContour() of the Spline object.
Perhaps Cactus Dan reads this thread and could shed some light. I'd be interested, how he pulled it of.
On 23/11/2015 at 12:51, xxxxxxxx wrote:
not sure if this will work, did you try correction deformer?
edit:
you have a cage object, which got a rig, this cage object is used inside a mesh deformer.
old Hierarchy:
+your_cage
++skin_deformer (or any deformer)
+your_spline
++mesh_deformer(using your_cage)
in a typical case, this may lag at some point "happened a lot here"
what I usually do:
+your_cage
++skin_deformer
++correction_cage
+your_spline
++mesh_deformer(using your correction_cage)
++correction_spline
use correction_spline deformer as your input, it shouldn't lag.
edit 2:
just tested your scene and it didn't work with the clamp constraint, so it didn't work.
On 23/11/2015 at 13:17, xxxxxxxx wrote:
Hi Andreas,
Since I'm obviously not grasping the concept of EXECUTIONPRIORITY, I have a few more questions if you have time to answer:
1. Why does a deformer object need to be called at EXECUTIONPRIORITY_GENERATOR? Why cant it be called at EXECUTIONPRIORITY_EXPRESSION?
2. If I want to change the position of an object's points with a plugin, does that plugin have to derived from ObjectData?
Thanks again for your time and patience.
Joe Buck
On 23/11/2015 at 13:21, xxxxxxxx wrote:
@ MohamedSakr
Thanks for taking a shot at it for me.
Joe | https://plugincafe.maxon.net/topic/9208/12235_objectdata-spline-priority | CC-MAIN-2019-39 | refinedweb | 1,655 | 62.78 |
There are a few ways to build modern applications, two of the most common applications include single-page applications and server-rendered applications.
Single-page applications can be pretty useful for applications that will require better performance. Although Google has made some updates on how their crawler processes single-page applications, we still have a lack of SEO results. Server-side rendered applications can achieve better SEO results in search engines and still have a pretty decent performance.
The release of some awesome JavaScript frameworks, such as Next and Gatsby, has lead to more server-side applications being made. Let’s see a few reasons why single-page applications are not the best choice for some cases, especially for applications that are going to depend heavily on SEO.
The problem with single-page applications (SPA)
Something that you should consider before choosing to build single-page applications or server-side rendered applications is the content that you want to show.
A single-page application (SPA), is an application that is not served from a new HTML page every time new content should be rendered, but it’s dynamically generated by JavaScript manipulating the DOM. Since there’s no need to load a new HTML page every time something needs to change, what’s the problem with SEO in SPA?
The problem with SEO in SPA is that the application cannot be properly indexed by search engines, which is different from server-side rendered applications. A SPA serves only an initial HTML file, so the search engines cannot index the content because in a single-page application you have the JavaScript generating new HTML every time something changes. Although SPAs have a lot of other advantages such as performance, saving time and bandwidth, better responsiveness on mobile devices, increased performance in slower internet connections, etc.
With server-side rendered applications, especially with Next.js, you can create a performant application and have good SEO at the same time.
SEO (search engine optimization)
SEO stands for search engine optimization, which is the activity of optimizing your website to get more organic traffic from search engines. SEO involves a lot of different techniques and aspects that we should pay attention to to make our website more attractive and accessible to a search engine.
Next.js
Next.js is a React framework for building statically generated and server-rendered React applications. It comes with a lot of benefits to help us create and scale our applications, such as zero-configuration, automatic code-splitting, ready for production, static exporting, etc.
With Next.js you can achieve a nice SEO result with simple steps, by just creating a new application. This is not a specific feature from Next.js, but from server-side rendered applications.
Let’s go through an example with Next.js and see how it works.
You can create a new Next.js application in one command, by using the setup “Create Next App“:
npx create-next-app
After creating your project, you may notice that it has some differences from other famous boilerplates such as Create React App. Each page of your applications will be hosted inside the
pages folder, and each page is defined as a React component.
To create a new route inside your application, all you have to do is create a new file inside the
pages folder and create a new React component for it:
// pages/about.js const About = () => ( <div> <h1>About page</h1> </div> ); export default About;
Note: As you begin to build your application, you can do some SEO reports, Lighthouse is helpful for this.
Creating a new application using Next.js is pretty easy. Let’s look at some ways to improve SEO with Next.js and improve our organic results in search.
Improve SEO with Next.js
Using Next.js will improve your SEO result a lot, but you still need to pay attention to other aspects of your app. Here are some things that you should pay attention to in order to get a nice SEO result:
Meta tags
Meta tags provide data about your page to search engines and website visitors, they can affect the way users see your site in the search results, and can make a difference whether they will access your site or not. They’re only visible in the code, but they are a very important part of applications that want to prioritize SEO results.
A meta tag basically tells the search engines what the content of that specific page is, what exactly that page is about, and how the search engine should show it.
Next.js has a built-in component for appending meta tags to the head of the page:
import Head from 'next/head'
To insert a meta tag on a specific page, use the
Head built-in component and add the specific meta tag:
import Head from 'next/head' const Example = () => { return ( <div> <Head> <title>Example</title> <meta name="viewport" content="initial-scale=1.0, width=device-width" /> </Head> <p>Hi Next.js!</p> </div> ) } export default Example
A nice thing about the
Head built-in component is that when you are adding a new meta tag, and you want to make sure that this meta tag will not be duplicated, you can use the key property and it will be rendered only once:
<meta name="viewport" content="initial-scale=1.0, width=device-width" key="viewport" />
A good SEO result can be achieved by simply starting to use some meta tags in your application. Here is a list of some important meta tags that you should be using to improve your SEO results.
Do a review on your application right now, and check if you’re making use of meta tags (and the right ones). It can totally make a huge difference in your SEO result and improve your organic traffic.
Performance
Visitors don’t want to wait an eternity for your page to load. Performance should be the main concern when building an app. Performance is actually a crucial factor for SEO.
Search engines, especially Google, use the First Contentful Paint (FCP) of your page as a crucial performance metric. FCP metric measures the time from when the page starts loading to when any part of the page’s content is rendered on the screen. A page with a poor First Contentful Paint performance will result in a bad SEO result.
You can use Next.js to measure some metrics such as FCP or LCP (Largest Contentful Paint). All you have to do is create a custom App component and define a function called
reportWebVitals:
// pages/_app.js export function reportWebVitals(metric) { console.log(metric) }
The
reportWebVitals function will be triggered when the final values of any of the metrics have finished on the page.
You can find out more about measuring performance in Next.js applications here. Here you can find a list of points that you should improve in order to get a nice FCP result.
SSL certificate
In August 2014, Google declared HTTPS as a ranking signal. The Hypertext Transfer Protocol Secure (HTTPS) gives your users an extra layer of protection when they share information with you.
In order to use HTTPS, you should have an SSL (secure socket layers) certificate. A very nice SSL certificate can be expensive sometimes. How can you have an SSL certificate in your Next.js application for free?
You can use a cloud platform like Vercel for deploying. Vercel is also the company that created Next.js, so the integration is pretty smooth. To deploy a Next.js application using Vercel, just install the Vercel CLI:
yarn global add vercel
And inside your project, give it a command:
vercel
Your project will be deployed to Vercel by default using an SSL certificate.
Content is important
Showing your content to your clients the right way makes a big difference. Giving your clients a refined experience is the main concern and priority of every developer.
The whole point of deciding to use a single-page application instead of server-side rendered, or vice-versa, should be about the content that you want to show and the principal goal that you want to achieve with your clients.
The goal of Next.js is to provide a React server-side rendered application, and with that, we achieve nice levels of SEO, UX, performance, etc. It can help companies and developers to improve their websites and projects, and gain more organic traffic from search engines.
Now is a good time to start using Next.js and unlock the power of server-side rendered applications, they are really amazing and can help you and your company a lot. You will be surprised, guaranteed.
Conclusion
In this article, we learned more about Next.js and how it can help achieve nice SEO results in modern applications. We also learned about SEO in general and the important points that we should pay attention to such as meta tags, performance, SSL certificate, etc.. | http://blog.logrocket.com/how-next-js-can-help-improve-seo/ | CC-MAIN-2020-40 | refinedweb | 1,502 | 62.48 |
Well, I don't use dev-cpp for windows console apps and I hadn't tried it. (I use command line cygwin gcc for most things.)Well, I don't use dev-cpp for windows console apps and I hadn't tried it. (I use command line cygwin gcc for most things.)
Originally Posted by swoopyOriginally Posted by swoopy
Here's what I found about printing 64-bit ints with dev-cpp:
No need to "update". I doubt that it would change things. Format type specifiers can be found (the "right way") in <inttypes.h> (which includes <stdint.h>)
Here's something that runs on dev-cpp and other gnu gcc on my Windows XP box:
output isoutput isCode:#include <stdio.h> #include <inttypes.h> uint64_t make_longlong(uint32_t, uint32_t); int main () { uint64_t start, end, diff; uint32_t hi, lo; hi = 0x12345678; lo = 0x9abcdef0; start = make_longlong(hi, lo); hi = 0x55555555; lo = 0x11111111; end = ((uint64_t)hi << 32) | lo; diff = end - start; printf("end = %"PRIu64" (%"PRIx64" hex)\n",end, end); printf("start = %"PRIu64" (%"PRIx64" hex)\n",start, start); printf("diff = %"PRIu64" (%"PRIx64" hex)\n",diff, diff); getchar(); return 0; } /* if you want to make a function to hide the bit shifting * here's a possibility */ uint64_t make_longlong(uint32_t hi, uint32_t lo) { return ((uint64_t)hi << 32) | lo; }
Regards,Regards,Code:end = 6148914690091192593 (5555555511111111 hex) start = 1311768467463790320 (123456789abcdef0 hex) diff = 4837146222627402273 (4320fedc76543221 hex)
Dave | http://cboard.cprogramming.com/c-programming/61648-converting-unsigned-int-unsigned-long-long-64-bit-2.html | CC-MAIN-2016-07 | refinedweb | 228 | 62.17 |
Can a Static Field Be Initialized Multiple Times?
The Plot
Take a look at the following code:
public static class StaticClass { private static readonly Lazy<string> Lazy = new Lazy<string>(() => DateTime.Now.ToString()); public static string Singleton => Lazy.Value; }
Under what circumstances can the static field
Lazy be initialized multiple times, causing the
Singleton property to return different values? C# language specification makes it pretty clear that static fields are initialized only once, before the class is first used:
The static field variable initializers of a class.
I already knew that when I was recently involved in troubleshooting an issue caused by the lazy factory function being invoked multiple times. This meant, it was time for some creative thinking. Below are the potential causes we thought off in the process of resolving the issue.
Multiple Application Domains
Code running in different application domains is isolated from each other, hence the same static class will live independently in each application domain. It will need to get initialized in each application domain separately. This could be a reason for the factory method to be run multiple times. However, it couldn't have happened in our case. We were developing a universal application which doesn't support multiple application domains.
Reflection
Although the static field is marked as
readonly, this doesn't completely prevent its value from changing. The following code utilizing reflection could be used to reset the field value:
public void ResetField() { var lazyField = typeof(StaticClass).GetTypeInfo().DeclaredFields .First(field => field.Name == "Lazy"); lazyField.SetValue(null, new Lazy<string>(() => DateTime.Now.ToString())); }
Well, there was no such rogue code in our application. We had to look for another reason.
Shared Files Across Projects
In the end it turned out, there were actually two different static classes involved, all along. We just didn't notice it. How could that happen?
The static class was located in a portable class library and referenced from multiple assemblies. We were in the middle of porting a Universal Windows 8 application to Windows 10. Of course, we didn't convert all the projects involved at the same time. Windows 10 universal projects can reference Universal Windows 8 assemblies, which allowed us to first convert the application and work our way down the dependencies.
Since we will keep maintaining the Windows 8 application, we couldn't just upgrade the class library containing our static class. Instead, we decided to create a new Windows 10 library, which could take advantage of the new APIs and share the existing files with the old portable class library. We ended up with the following dependencies:
The static class was present both in
SharedLibrary.PCL and
SharedLibrary.UWP assemblies. It even had the same namespace since the code files were shared. Still, the class used directly from
App.UWP was a different class from the one used indirectly via
Component.Win8. Each one of them was initialized separately, therefore the factory method was of course called twice.
Since the code file was shared, there was only one document tab opened in Visual Studio. Even during debugging, this made it difficult to notice that two different classes were involved. The same breakpoints worked for both classes. Even the context switcher in the top left corner of the editor didn't show the correct value, when the execution stopped at a breakpoint.
Once we finally determined that there were actually two classes, not one, fixing the issue wasn't a problem any more. | http://www.damirscorner.com/blog/posts/20151212-CanAStaticFieldInitializeMultipleTimes.html | CC-MAIN-2018-51 | refinedweb | 579 | 57.06 |
Just like last year, I’ll be blogging from XML Conference 2007. Rather than imposing some editorial structure, this’ll simply be a serialization of the things I hear from various speakers in various sessions.
Some random notes:
- By looking at the slide up when we walked in (though no one said anything), it seems that next year the conference will be
in Arlington, VAon December 8-10, 2008. [WARNING: Others have suggested that I was probably reading that wrong and that it’ll still be in Boston.]
- 300 people are attending this year (less than last year).
- The weather here is nasty, so the smallish turnout for the opening plenary isn’t surprising.
Does XML have a future on the web?
The conference started with a 3 panelists (Douglas Crockford, Michael Day, & C. Michael Sperberg-McQueen) discussing XML’s future with regard to the web.
Michael Day
Michael has spent the last few years working on bringing together print and web rendering technologies for XML (and HTML) with Prince.
Has XML really ever been on the web? Well, not really as an alternative to HTML, which it’s never really replaced for websites (cue discussion of invalid XHTML). If we think of “the web” simply as documents served over HTTP, XML springs to much greater importance. It’s been far easier to get XML integrated into the server-side of the (human-facing) web than getting it properly used by the clients. That said, there have been some surprising jumps of server-side technology into the client (Java and XSLT). [Oops, I got this backward, see Michael’s comment]
Douglas Crockford
Does XML have a future on the web? Yes, see COBOL’s continued existence, for an example of the inability of a technology to ever actually leave the enterprise. Rather than guessing if XML will continue on the web, Douglas thinks it’s more important to look at the trends and he sees a downward trend for XML and an upward one for JSON. He believes that JSON is far superior as a data format than XML [surprise!]. Aiding that move away from XML, the web community has never really fully adopted well-formedness, which was perhaps a mistake from the start.
A more pressing question is perhaps the future of the web itself, which seems to be tremendously endangered by security concerns and a lack of forward movement.
C. Michael Sperberg-McQueen
XML will have a future on the web, in part because it should have a future on the web. However, Michael is thinking of the “web” as something larger than web browsers or documents served over HTTP. Instead, consider the “web” simply a way of getting at a variety of addressable resources. Some parts of this web may need fancy UIs but other parts may care more about reliability and data integrity. Others may be very interested in internationalization and localization. XML wins as a technology simply by its
standardization (as an alternative to the cost of implementing a new notation), by promoting loose coupling (despite some loss of speed), and because of its support of rich information (all browsers supporting XSLT for lossless presentation of XML rather than lossy HTML, for example). “Any notation that has acquired so many enemies… has got to be doing something right.”
Audience Questions
- Elliotte Harold: Programmers (especially non-XML specialists) do indeed hate working with XML, but they only hate it because they’re only given access to the DOM. Shouldn’t we kill the DOM instead?
- Douglas: The difference between JSON and XML isn’t notation, it’s the (data) structure, which is much closer to what programmers need. While this structure could be imposed over XML with a better API, that complexity isn’t necessary.
- D Peters: People are talking about security, “do-overs”, etc. How does this impact the coming “Software as a Service” infrastructure?
- Douglas: The current browser implementations have problems because they share all of the information between the current sessions (problems with cookie stealing, replay attacks, and chrome changes). That’s the dangerous web 1.0. Now, we’re trying to intentionally mashing stuff up (which we’d always tried to prevent when unintentional). Developing an engine in this environment that prevents against the evaluation of the “evil” scripts is tremendously hard. Fixing this problem for the web will be very difficult (especially with threats from Adobe and Microsoft, among others). The web started as a document delivery system that morphed into an application delivery system. That’s the part of the web that other (closed) technologies are trying to steal.
- Michael S-McQ: OK, security is important, but how can you have specified a format [JSON] that is most easily implemented using
eval?
- JSON is no worse that HTML, so if you trust your server you are OK. More safely, there are libraries on json.org that load JSON securely.
- Simon St. Laurent: We’ve abandoned the initial goals of XML with some long sidetracks. Microformats are now the most promising bits of XML, where we specify the bits we actually care about.
- Michael D: While it might be nice to ignore the past, more attempts should be made at reconciling the split between document presentation and the web.
- Michael S-McQ: Microformats were suggested at the start of the XML specification but were rejected. [I don’t really follow his explanation of why.]
- Douglas: Be pragmatic: use whatever works/fits.
- Tony L: “Aside from the march of the paired delimiters”, how is JSON different than XML? Aren’t they just serialized trees? XML has problems with entities,
- Douglas: Yes, they’re both serializations of trees. The major difference (as before) is that the basic structures map onto what programmers use for data structures whereas XML structure map onto document structure. JSON was “standardized” because people couldn’t use JSON without it being standardized. “I am a standards body.” “Specialization in tools tends to make workers more
productive.”
- Michael S-McQ: There is isomorphism between XML and JSON, but it only goes so far. JSON has a
sweet spot around a subset of XML. The part of XML that lives outside that subset is the stuff that historically had problems fitting into relational models (mixed markup, “variation”). Perhaps that’s what Douglas means when talks about XML being good for “document structure.”
- Michael D: JSON works well with current programming languages. XML doesn’t fit well with C and Java’s data structures or relational databases. Perhaps in another 10 years it’ll fit better.
- Michael Debinko?: Before XML, people who needed to interoperate had to specify syntax and vocabulary. XML made the need to specify syntax uniform. HTML5 seems to be specifying both again. Why?
- Michael D: HTML4 should have specified itself long ago.
- Douglas: Browsers are standardized. The HTML group is trying to specify browser technology.
- Michael S-McQ: The separation of syntax and vocabulary does help in a lot of cases. The problem with HTML5 is defining parsing behavior, which has nothing for authors. If you want consistent handling, write valid HTML4!
- ?: When do we have to use XML? For what kind of data?
- Douglas: I don’t understand data in XML. JSON databases are evolving.
- Robin L: Java and XSLT surprisingly ended up on the browser. What are the coming surprises?
- Michael D: Dreaming, I would love to see CSS as a stylesheet language for printing high-quality documents.
- Douglas: The industry will “discover” horrendous security problems in the current browsers.
- Michael S-McQ: Soon the world will rediscover ASCII terminals. IBM will reintroduce the 3270.
Melissa Utzinger (MITRE): Microformats: Catching on like wildfire
What are microformats?
Embed semantics into web pages. Melissa will focus on (X)HTML. They’ve only formally been around since 2005, but now have interest from both small (cork’d, Satisfaction) and large (Yahoo, Google) companies. The first book on microformats was published this year. Web 2.0 is pushing the Semantic Web, but the Semantic Web itself is very hard to learn and complex. Microformats try to squeeze utility out of some limited semantics. Microformats don’t try to solve the problem of the Semantic Web.
Early semantic extensions of HTML just tacked on values into allowed attributes on many HTML elements.
One example
hCard is a microformat to replace vCard. It uses attributes on HTML to define contact information. When using hCard, point to the hcard profile in your
<head> element.
Viewing microformats
There are some browser tools for viewing microformats embedded in the regular web (like Yahoo Local). There’s also a wonderful degradation strategy for microformats, as the underlying HTML can be rendered normally.
Plugins are available for all the major browsers, but two browsers are planning native support Mozilla Firefox 3.0 (native for developers) & Internet Explorer 8.0. Operator is a common plugin for Firefox.
What about marking up information other than contacts? There are currently 20 microformats with 9 in draft form, among them calendar information, . No one standards body controls microformats, with work happening in both the W3C and IETF.
Adopting microformats for the XML community
Two communities want to speak the same language but use different markup:
<Battalion>54th infrantry</Battalion> <Units>96th infantry</Units>
Add an attribute to help them interoperate:
<Battalion class="org">54th infrantry</Battalion> <Units class="org">96th infantry</Units>
This seem simple? Well, that’s one of the goals of microformats.
Challenges
- Users don’t want to install add-ons, but perhaps they wouldn’t care if the UI was seamless.
- Development tools are not there, especially for validation.
Conclusion
Microformats are new and will hopefully continue to grow rapidly.
To learn more, visit microformats.org. To see websites using microformats internally, visit Google Maps, Yahoo Local/Tech/Flickr/Upcoming, and Technorati Kitchen.
[Surprise, the conference schedule uses the hCard microformat!]
Taylor Cowan: TripBlox: creating travel standards on the web
[Yay, a pure researcher not worried too much about business applicability!] How can ideas of the Semantic Web be applied to travel? Travel writers should be able to allow their content to be aggregatable and discoverable. In particular, blog postings about travel can be broken into interesting pieces (people, places, etc). After breaking these travel descriptions into consistent pieces, everyone’s posts about Colorado Springs can be found. After aggregation, users can be pointed to more specialized sites for more information (mapping, for example). If they share the same activity, maybe they share some feelings and would love to have some relationship in a social network.
How to bring microformats and RDF/OWL together? Designers love microformats, and they do indeed provide semantics. In the back-end, the underlying graphs are stored as RDF triples. RDF isn’t good for humans, but is nice for computers. Microformats have some issues for computers [crazy example of XHTML2vCard XPath].
So, what about travel? People want to have their “wish trip”, travel agents want to promote their Top 10, and others want to publish travel blogs. “On my trip, I want to…” drink wine, lie at the beach… The microformats community is very resistant to developing totally new microformats (want existing use cases live on the web). Travel microformats don’t have existing examples on the web. Happily, Atom already supports much of what is necessed (title, summary, categories, licenses, dates, names). There is already a microformat for Atom (hAtom).
How do you go from hAtom into RDF? Put together a bunch of other tools, first Tidy, then XSLT, ROME, and the Jena API for RDF. Taylor wrote JenaBean to make the Jena API more attractive.
How to get started? microformats.org (suggestions, discussion), w3c.org (working on OWL and RDF), planetrdf.com (other people’s work) and geonames.org (location help).
His ontology is available here.
Mark Jacobson, Charlton Barreto, Jeff Deskins, Laurens van den Oever: Where are XML authoring tools today, where are they going, and what do we want?
What do authors, editors, and copy-editors actually need to do their work? The panel presents their products: XMetal is made for technical publishing and will continue to be, Adobe is adding XML support to help automation and allow reuse (supporting RelaxNG in the future [Woot!]), xOpus only focuses on authoring XML, but it is browser-based and intended for non-technical users.
Two non-technical challenges for XML editors (from Mark J.): many people simply want to use Word and have no interest in adding structure.
Questions
- How should authoring tools be effectively (and cheaply) deployed across a large, nationwide organization?
- xOpus is browser-based tool, so would make quite a bit of sense if you’ve already got a CMS (they don’t provide a CMS). It costs ~€180/user (with volume discounts). Adobe is providing hosted services (this?).
- “RelaxNG gives the little guy a chance.” [nodding from panel] Both DocBook and TEI have both gone to RelaxNG (probably for customization). How important is customization (available to a shop with a solitary developer) to the authoring tool? What about XML-back-ended wiki systems (with “upconversion” [ha!])?
- Customization is important to xOpus (via XSLT); everyone else says they care about it too. RelaxNG support is coming.
- How can we make XML editors UIs less confusing (b, i, u buttons) for people who don’t know XML? Many authors find the behavior of XML tools broken.
- [muddled response]
Bob DuCharme: XHTML 2 for Publishers: New opportunities for storing interoperable content and metadata
Many small web designers are proud of the fact that they create valid XHTML1. They may not even know what “well-formed” is, but they like the idea of passing some validation checker. They also understand the value of separating content and structure using CSS (to save them work). With the modularization of XHTML1.1, subsets of the markup can stand alone and individual modules can be updated/customized.
XHTML2 is trying to solve a number of problems. It targets a lot of “thou shalt not” guidelines around XHTML1 (metadata, accessiblity, etc). XHTML2 also provides a lot more opportunity to encode semantics. It “hits a sweet spot” between the flat dumbness of XHTML1 and the complexity of DocBook. XHTML2 may not be your content master, though it might make sense for your first dip into XML, but it might fill bigger shoes than just sending content to the browser.
XHTML1 has no nesting, only flat siblings. XHTML2 has nesting
<section> elements (similar to
div but with more sematic meaning). This helps promoting/demoting sections inside documents (especially when copy+pasting). Another example of better semantics is
hr => separator.
XHTML and the Semantic Web? XHTML has elements like
address and
kbd, but no one was using them. This is why the huge number of requests for adding semantic elements for XHTML2 was punted in favor of user-extensibility using RDFa. In addition to the broader semantics, RDFa brings along the ability to add rich metadata as well.
So, what’s the takeaway? XHTML2 is more appropriate for your workflow and may be more familiar to a wider variety of content authors.
Norman Walsh: XProc: An XML Pipeline Language
W3C recently said that after XML and XSLT, XProc was the most important standard.
XProc Development
XProc is a W3C working group starting in 2005 with two goals: produce a XML pipeline language and a processing model which describes a default processing model for XML documents.
One year ago, Norm though they’d be finished by now (before their charter ran out on 31 Oct 2007), but they are at Last Call (but there will be another one). The last working draft was published 29 November 2007. Today there are both open source and commercial implementations of XProc in the works.
What’s New
Some new stuff since last year:
- A defaulting story has been developed for syntactic simplicity.
- Parameters handling has been revised
- There is now a mechanism for handling complex namespaces
- XPath 1.0 + 2.0
Common Features
- Start with a document or documents
- Apply one or more processes, perhaps conditionally, perhaps iteratively
- Catch and recover from errors, if they occur
- Produce a document or documents
Details
XProc tries to be: amenable to streaming, fairly declarative, and be as simple as possible. It is based on a pipeline, which is based on steps. Each step performs a specific process. Each step is glued together with some help via XPath. “Most steps are atomic, black boxes that perform a task” (XInclude, Load from uri, XSLT 1.0, Render-to-PDF, Compare [XSLT2
deep-equal]). Documents flow through pipelines (not random subtrees). The non-atomic steps are wrappers around sub-pipelines, and are the basic control structures of XProc: Grouping, Conditional evaluation, Exception handling…. Pipelines themselves can become atomic steps for other pipelines.
Steps always have both a name (”db2html”) and a type (”XSLT”). They have ports, which are fixed by the step type. An XSLT step would have “source” and “stylesheet” ports for input and output to “result” and “secondary”. Steps are encoded like so
<p:xslt. Ports for steps are defined by
p:declare-step, which encodes input and output ports and also options for a step.
Many pipelines are linear or mostly linear with obvious “primary” input and “primary” output (like XInclude). These two observations led XProc to specify some default syntax for chaining together sequential steps.
Inputs come from a URI, an inline document, another step’s port, or [one I didn’t get]. Options can be computed from XPath expressions or from literal markup (untypedAtomic). Steps must specify the options they accept. Parameters are the final bit, with messiness coming from XSLT’s parameters. Unlike options, the names of parameters are not known in advance.
Conditional processing is available via the p:choose syntax. It looks very similar to XSLT’s xsl:choose. Iteration is handled by p:for-each, with a p:iteration-source. Exception handling comes from a simple try-catch model.
Stewart Taylor: XML and XPath in the Wild
This talk focuses on three studies, all centered on scraping XML and XPath from various sources and then looking at the statistics. The motivation was to develop high-performance XML parsing tools. Collecting qualitative and quantitative information on “real” XML was tremendously helpful for testing.
Scraping the documents was done through Google’s API. They collected hundreds to thousands of XHTML, RSS, Voice XML, SVG, SAML, and SMIL files. These documents may have been non-representative if they were tutorial material, but who knows. Simplistic statistical analysis on document and word length was done, but they also used Principal Component Analysis for more rigorous study. Now that they had some statistical analysis from example documents, they fed that to their trained XML generator, which produced the XML for the parser to test. [Stewart discusses the specifics of the XMLRand syntax, which sorta follows XML Schema, with some other interesting bits like (continuous)
<rand> and (discrete)
<set>.]
Results? Shakespeare’s plays have 4 scenes per act, but can be meaningfully modeled with their techniques. More interestingly, RSS 2.0 off of the web had 3.5 paragraphs per description, for example.
[He shows an example of the randomly generated XHTML. It looks pretty funny due to the gobbledygook (random characters), but does look cromulent if you squint… The random SVG is even more hilarious.]
The results on the XPath analysis were more interesting: 97% of XPaths within XSLT were single-step [questions from the audience on what exactly they were looking at]. That’s clearly a design choice, but still fairly striking. Finding XPath in other languages source code can be difficult, but thankfully there are searching tools (from Google and others). Some random bits: 18% of studied XPath expressions used predicates, of which half tested a attribute against a static value, and the next most common was
[1]. 51% used functions.
Conclusion: They feel happy with their statistical-model-backed XML generator. Many thousands of open source projects use XML (and DOM is more common than XPath).
“Any notation that has acquired so many enemies has got to be doing something right.” MSMcQ said, there have been some surprising jumps of server-side technology into the client (Java and XSLT)."
That quote is actually backwards; my point was that Java and XSLT were designed as client-side technologies, but ended up relegated to the server instead, much like XML (and RDF for that matter).
Which appears to be the trend overall, Michael. Wherever the client-side languages don't move to the server, they are being quickly replaced by ones that do. This is the core of the standards battle in virtual worlds and I suspect elsewhere given the failure of peer-to-peer technologies to get more traction while the giant-data-center-web-on-a-chip market investments are soaring..
"How can we make XML editors UIs less confusing (b, i, u buttons) for people who don’t know XML? Many authors find the behavior of XML tools broken."
That is _exactly_ what Xopus does; it allows you to edit any XML format as intuitively as editing in Word, or the like. Just specify what elements should behave as paragraphs, emphasis, sections, lists, tables, etcetera; and all the shortcut keys and toolbar buttons you know from Word come to life in your browser. | http://www.oreillynet.com/xml/blog/2007/12/xml_conf_2007_first_day.html | crawl-001 | refinedweb | 3,553 | 57.16 |
I was given the task with the merge-insertion sort described as(paraphrased):
Starting off with merge sort, once a threshold S(small positive integer) is reached, the algorithm will then sort the sub arrays with insertion sort.
We are tasked to find the optimal S value for varying length of inputs to achieve minimum key comparisons. I implemented the code by modifying what was available online to get:
def mergeSort(arr, l, r, cutoff): if l < r: m = l+(r-l)//2 if len(arr[l:r+1]) > cutoff: return mergeSort(arr, l, m, cutoff)+mergeSort(arr, m+1, r, cutoff)+merge(arr, l, m, r) else: return insertionSort(arr, l, r+1) return 0 def merge(arr, l, m, r): comp = 0 n1 = m - l + 1 n2 = r - m L = [0] * (n1) R = [0] * (n2) for i in range(0, n1): L[i] = arr[l + i] for j in range(0, n2): R[j] = arr[m + 1 + j] i = 0 j = 0 k = l while i < n1 and j < n2: if L[i] <= R[j]: arr[k] = L[i] i += 1 else: arr[k] = R[j] j += 1 k += 1 comp +=1 while i < n1: arr[k] = L[i] i += 1 k += 1 while j < n2: arr[k] = R[j] j += 1 k += 1 return comp def insertionSort(arr, l, r): comp = 0 for i in range(l+1, r): key = arr[i] j = i-1 while j >= l: if key >= arr[j]: comp += 1 break arr[j + 1] = arr[j] j -= 1 comp += 1 arr[j + 1] = key return comp
However the graph I get for the minimum value of S against length is:
This means that a near-pure mergesort is almost always preferred over the hybrid. Which is against what is available online, saying that insertion sort will perform faster than mergesort at low values of S(~10-25). I can’t seem to find any error with my code, so is hybrid sort really better than merge sort?
Answer
IMO the question is flawed.
Mergesort always performs N Lg(N) key comparisons, while Insertionsort takes N²/2 of them. Hence as of N=2, the comparison count favors Mergesort in all cases. (This is only approximate, as N does not always divide evenly).
But the number of moves as well as the overhead will tend to favor Insertionsort. So a more relevant metric is the actual running time which, unfortunately, will depend on the key length and type. | https://www.tutorialguruji.com/python/key-comparisons-in-a-merge-insertion-hybrid-sort/ | CC-MAIN-2021-43 | refinedweb | 414 | 55 |
In this tutorial about pointers, we will discuss pointers to pointers or double pointers in C/C++. In the previous guides, we have learned some basics about pointers and how to handle them. Now, let us move a bit further today and look at a situation where pointers are a bit challenging to understand. Our aim will be to make you understand how to work with pointers to pointers in a fairly easier manner.
Pointer Basics (Memory Allocation)
Below you can see a logical horizontal view of the system’s memory. As you already know, each division allocates 1 byte of memory. The diagram shows the address associated with each byte as well. These addresses are for example purposes and do not depict the actual values. We are incrementing the address as we move from left to right.
Now, from the diagram above we can see that the first byte is at address 199, followed by 200, 201, 202 and so on.
For example, if we declare an integer variable called ‘x’ and store the value 10 in it.
int x = 10;
As integer data types use 4 bytes of memory space in a typical machine and each memory location can save only one byte. Hence, integer variable x is stored in four consecutive memory locations. Considering these four bytes are stored at starting address 200. This block of 4 bytes is allocated for ‘x’ and the value stored in this block is ’10.’
Now, we will declare a pointer variable that will store the address of ‘x.’ To do that we will have to declare a pointer to integer as shown below. An asterisk sign is used with the pointer variable declaration.
int *ptr;
This will cause some amount of memory to be reserved for the pointer variable. A pointer is also stored in 4 bytes. To store the address of ‘x’ in the variable ‘ptr’ we will use the following line of code. This way ‘ptr’ will point to ‘x’.
ptr = &x;
Considering these four bytes are stored at starting address 205. This block of 4 bytes is allocated for ‘ptr’ and the value stored in this block is the address of ‘x’. In our case it is ‘200.’
Remember we are able to store the address of ‘x’ in ‘ptr’ as ‘ptr’ is a pointer to an integer variable. If ‘ptr’ was a pointer to some other variable example float or string then the above line of code would not work. Both variable types should match.
Dereferencing the address
Additionally, the pointer variable is not only used to store the address of the variable but also used to dereference the address and store any value there. An example of this can be seen below.
*ptr = 6;
This will store the value ‘6’ in the variable ‘x’ instead of ’10.’ This way we dereferenced the address and stored a different in the variable.
Declare a Double Pointer or Pointer to a Pointer in C/C++
In the previous section, we saw how to create a pointer variable to an integer variable. Now let us learn how to create a pointer to a pointer variable or double pointer. The ‘ptr’ variable is a pointer to an integer and we want to create a pointer to ‘ptr.’ Suppose we create a variable called ‘q.’ This will store the address of ‘ptr.’ The important thing to note here is determining the data type of this variable ‘q.’ As discussed previously, we need a pointer of a particular type to store the address of a particular type of variable.
To store a pointer to integer we want a pointer to pointers. We will put two asterisks signs in front of the variable name. In our case **q. This way we will create a pointer to a pointer variable.
int **q;
To store the address of ‘ptr’ in the variable ‘q’ we will use the following line of code. This way ‘q’ will point to ‘ptr’.
int **q; q = &ptr;
Now as we know a pointer is stored in 4 bytes hence considering these four bytes are stored at starting address 211. This block of 4 bytes is allocated for ‘q’ and the value stored in this block is the address of ‘ptr’. In our case it is ‘205.’
The diagram below shows the relationships between the three variables, x, ptr and q.
Creating Triple Pointers
Moreover, you can also create a pointer to pointer to pointer. This will be done by adding three asterisks in front of the variable name or at the end of the variable type e.g. int *** r or int ***r. Note that this variable ‘r’ will only store the address of a variable type int** thus in our case ‘q’.
int ***r; r = &q;
To view it in the system’s memory, as an example ‘r’ got allotted at starting address 211. This block of 4 bytes is allocated for ‘r’ and the value stored in this block is the address of ‘q’. In our case it is ‘211.’ As you can see, ‘r’ points to ‘q’, ‘q’ points to ‘ptr’ and ‘ptr’ points to ‘x’
POINTER TO POINTER BASIC EXAMPLE IN C / C++
The below example code is the complete demonstration of all the concepts that we have learnt about pointer to pointer till now. We have complied all the individual lines of code together that we explained individually before.
#include <stdio.h> int main() { int x = 10; int *ptr = &x; *ptr = 6; int **q = &ptr; int ***r = &q; }
Double Pointers Application
In this example, we allocate memory on a heap through a function without returning the address of allocated memory through a double pointer as an argument to a function. In other words, we want to indirectly assign the address of allocated memory to a pointer. This pointer will define inside the main function.
The function allocate(int **a) allocates memory for 10 integer elements on the heap and assigns the address of the first element to local pointer p.
; }
Double Pointers Applications
- Link list data structures
- Dynamic memory allocation through a function without returing address of allocated memory
- Dynamic arrays
Points to Note:
- The type of variable ‘x’ is integer. Hence, to store the address of ‘x’ we will require a pointer of type int *. We will require one asterisk to denote that this is a pointer to that particular variable type.
- Similarly, to store the address of ‘ptr’ we will require a pointer to int **. We will require two asterisks to denote that this is a pointer to a pointer variable.
- Likewise, if we can also declare a pointer to pointer to pointer by adding three asterisk in front of the variable name. For example int ***y. This will store the address of int ** Also, r=&q will be the only valid statement in the example set we provided.
You may also like to read: | https://csgeekshub.com/c-programming/double-pointers-or-pointers-to-pointers/ | CC-MAIN-2021-49 | refinedweb | 1,152 | 72.05 |
the scalable persistence tier for Java
Using siena is very easy. You just create your model classes and start to use them. The only configuration is the annotation in your clases and a simple properties file. The model classes must follow some constraints:
siena.Model(not mandatory but most pratical). See the examples bellow for more information.
all()method like in the example. Nevertheless this is an optional step.
@Tableannotation. The meaning of this annotation will depend on the underlying siena implementation. This will be the table name in a relational database, the entity name in the Google App Engine datastore... The annotation is just named Table because is an easy to recognize nomenclature.
@Columnannotations to the fields. This is very similar to the
@Tableannotation. The meaning of the annotation value will depend on the underlying siena implementation.
@Idannotation. Some keys can be generated automatically by the application or the database. Manual keys are also allowed. See the example.
import siena.*; import static siena.Json.*; @Table("employees") public class Employee extends Model { @Id(Generator.AUTO_INCREMENT) public Long id; @Column("first_name") @Max(200) @NotNull public String firstName; @Column("last_name") @Max(200) @NotNull public String lastName; public Long age; @Column("boss") @Index("boss_index") public Employee boss; @Filter("boss") public Query<Employee> employees; @Column("contact_info") public Json contactInfo; @EmbeddedMap public static class Contact { public String name; public List
tags; } @Embedded public Map contacts; @Embedded public List otherContacts; public byte[] photo; public static enum ServiceEnum { ALPHA, BETA, DELTA, GAMMA, EPSILON; } public ServiceEnum service; public static Query<Employee> all() { return Model.all(Employee.class); } public static Batch<Employee> batch() { return Model.batch(Employee.class); } }
This example shows a class with:
bossfield representing a reference to another entity,
employeesfield being an automatic query to manage a one2many owned relation,
contactInfothat may contain a complex data structure stored JSON serialized into the DB,
contacts&
otherContactscontaining embedded data structures stored JSON serialized in the DB,
Now you just need to write a simple configuration file. In siena all the classes in the same package
are configured with the same
PersistenceManager. The configuration file must be
called
siena.properties and must be placed in the same package than the model classes.
The parameters in this configuration file depend on the siena implementation. The only shared configuration parameter is the implementation parameter. With that parameter you set what siena implementation will be used for the classes in that pacakge. Example:
implementation = siena.jdbc.JdbcPersistenceManager driver = com.mysql.jdbc.Driver url = jdbc:mysql://localhost/siena-example user = root password = 1234
The example has configured the siena-jdbc implementation. As you can see the siena-jdbc implementation requires some other configuration parameters. You can learn more about each siena implementation in the specific documentation for each implementation.
Now that your model is created and configured you can start using siena.
First of all an example:
List<Employee> employees = Employee.all() .filter("firstName", "Mark") .order("-lastName") .fetch(10);
The
all() method is a good starting point for executing queries.
That method returns a
Query object that is a representation of
a query. The
Query interface has four main methods:
filter(),
order(),
fetch() and
get().
filter()method
This method puts restrictions to the query.
It requires two parameters: the field name (optionally
with an operator) and the restricted value. If no opeartor is specified
then "=" is assumed. You can use other operators: <, >, <= or >=. You
can call
filter() several times. The query will only return
those objects that match all restrictions. There is no way to specify
that the query will return objects that match some restrictions or others.
Comparing to SQL the
filter() method is like an
AND
operator in a
WHERE clause and there is no support for
a
OR operator.
order()method
This is the method you will use for sorting. It requires one parameter that is the name of the field that will be used for sorting. You can concatenate a "-" before the field name for descending sort. You can call this method several times.
fetch()method
This method will return a list of objects that match the given
constraints sorted by the given fields. There are three versions
of this method to implement pagination. If you pass no arguments
all the objects that match the constraints will be returned.
Be careful with that because if you have several objects stored.
You can limit the maximum size of returned objects with the first
argument. And optionally you can define an
offset
as second argument.
get()method
If you just want the first result of the query you can use
get(). This method will return
null
if there query returns no objects.
It is important to note that nothing is really queried until you call
fetch() or
get(). The methods
all(),
filter() and
order()
just prepare the query but they don't execute it.
The Siena API is very easy. If your model classes inherith the methods
of the
siena.Model class. These methods will let
you insert, update, delete or load single objects. If you don't want
to use inheritance because you can't or you want to follow the POJO
philosophy you can use the methods of the
PersistenceManager
class instead of using the methods of the
siena.Model class.
For example:
Employee e = getSomeEmployee(); // retrieve the configured PersistenceManager: PersistenceManager pm = PersistenceManagerFactory.getPersistenceManager(Employee.class); // the following two lines are equivalent: pm.update(e); // you will use it if you don't extend siena.Model e.update(); // if Employee extends siena.Model
Note: all the examples bellow use the
siena.Model methods but you will
find equivalent methods in the
PersistenceManager class.
If you have the primary keys of an object and you want to load
all its fields you just need to create an empty object,
put the primary key field values and call
get().
Employee e = new Employee(); e.id = 123; // we know the primary keys e.get(); // this loads the object System.out.println(e.firstName);
If no object is found with those primary keys then a
SienaException
will be thrown.
If you don't want to handle exceptions you can use the
Query interface.
Example:
Employee e = Employee.all().filter("id", 123).get(); System.out.println(e.firstName);
This way you will get
null instead of an exception if there is no such object.
You can even create your own static method:
public static Employee get(long id) { return Employee.all().filter("id", 123).get(); }
Inserting an object is very simple. Just load the fields with the appropiate values
and then call
insert(). If your object has some generated primary keys
you will be able to get them just after inserting the object.
Employee e = new Employee(); e.firstName = "John"; e.lastName = "Smith"; e.insert(); System.out.println(e.id); // the generated key is available
The
update() method lets you update an object in the persistence storage.
Employee e = Employee.get(123); e.firstName = "Mark"; e.update();
Finally you can delete an object using the
delete() method.
Employee e = Employee.get(123); e.delete();
The
delete() method only needs the primary keys to be loaded. So if you know
them you don't even need to execute a query to previously load the object.
In order to create a one-to-many relationship you just need to create a reference from the child class to the parent class. Example:
public class Pet extends Model { @Column("owner") public Person owner; // each pet has an owner // more fields }
It is strongly recommended to use the
@Column annotation when declaring relationships
if you are using sinea-jdbc.
To fetch all the pets of a given person you can just query filtering by that field.
Person p = somePerson(); Query<Pet> pets = Pet.all().filter("owner", p); List<Pet> somePets = pets.fetch(10);
Or you can also create an automatic-query in the
Person class.
public class Person extends Model { @Filter("owner") public Query<Pet> pets; // this is called an "automatic-query" // more fields }
Person p = somePerson(); List<Pet> somePets = p.pets.fetch(10);
As you can see the
@Filter annotation tells which field must be used to query against the
Pet class.
In the first example at this page you can see that a
Json field has been used to store complex
data structures into one field. Well, there is another way to store complex data strctures into
one field: using embedded objects. Suppose you have a web page whose users have "profile images". Each image
has a filename, a title and a counter that counts how many times the image has been displayed.
public class User extends Model { @Embedded public List<Image> profileImages; // more fields }
You just need to put the
@Embedded annotation to the field that will store the embedded data.
This field can be an object, a
java.util.List or a
java.util.Map.
However the sotored objects must be of a class properly annotated. Let's see how:
@EmbeddedMap public class Image { public String filename; public String title; public int views; }
You have annotate your embeddec classes either with
@EmbedMap or
@EmbedList.
The embedded object will be serialized into JSON when inserted in the database. An example of
how the
profileImages field could be serialized:
[{"title": "Example 1", "views": 2, "filename": "1.jpg"}, {"title": "Example 2", "views": 20, "filename": "2.jpg"}]
When using
@EmbedList the object will be serialized into a JSON list. In this case
you must annotate the fields using
@At. This is an example:
@EmbeddedList public class Image { @At(0) public String filename; @At(1) public String title; @At(2) public int views; }
The fields must be in order as well. This is how the object would be serialized:
[["1.jpg", "Example 1", 2], ["2.jpg", "Example 2", 20]]
The JSON result is shorter but harder to understand.
The "embedded objects" feature is very powerful because you can nest embedded objects into other embedded
objects. So for example the
Image class could have other objects nested or other lists or maps
inside it. This feature also has de avantaje of knowing the structure of the data at compile time. | http://www.sienaproject.com/documentation-getting-started.html | CC-MAIN-2014-35 | refinedweb | 1,678 | 50.02 |
Getting Straight to the Point with Scraping and Natural Language Processing
Nowadays, the internet has become the main source of information for most of us. When we need to learn about something or master something, we typically go online, using a web search engine like Google to obtain necessary information. Reviewing the retrieved results however, may take considerable time, requiring you to look into each link to see whether the information it contains really suits you. You can significantly shorten your research time when you know exactly what you want to find and can narrow down your search accordingly.
The problem is, though, that sometimes it’s too hard to explain all your requirements to the search engine. For example, you may need to obtain only the latest information about a business entity, thus obtaining only those resources that were published, say, within the last week. To address this problem, you can conduct an advanced search with a search engine. To accomplish this programmatically, you may take advantage of a web scraping API, allowing you to specify the necessary parameters of the search being conducted from within a script.
The results of an advanced search conducted via a scraping API may still contain a lot of links that will be not helpful. To choose the most useful links automatically, you can apply some NLP techniques to the snippets assigned to each link, trying to find only those that contain certain types of phrases, such as expressions of monetary values, percentages, etc.
Getting the Most Relevant Articles with Scraping
Let’s start with how you can scrap google search results. There are several Python libraries that allow you to conduct a web search programmatically. Some of them can be used for free while the others providing richer result sets are paid alternatives. The code snippet below illustrates how you can conduct a web search from within your Python script using SerpApi ():
phrase = 'Tesla stock'
…
from serpapi.google_search_results import GoogleSearchResults
GoogleSearchResults.SERP_API_KEY = "your_serp_api_key_here"
client = GoogleSearchResults({"q": phrase})
rslt = client.get_dict()
If you now print out the rslt dictionary variable, you’ll see that it contains a JSON document. Looking through it, you may notice that the organic_results list contains the list of retrieved links. Each link in the list is assigned to a dictionary that includes the following fields: the url, date, and snippet, among others.
{
… "organic_results": [
{
"position": 1,
"title": "Judge rules Musk's 'Tesla stock too high imo' tweet …",
"link": "",
"displayed_link": "thenextweb.com › Hard Fork",
"thumbnail": null,
"date": "21 hours ago",
"snippet": "Remember when Tesla shares tanked by 10% moments after its CEO Elon Musk tweeted: “Tesla stock price is too high imo? Turns out it was …",
"cached_page_link": ""
},
{
…
]
…
}
As mentioned, snippets can be of the most interest when you want to use NLP to further narrow down your search result set. As for now, let’s just look at how you can get to each snippet:
for article in rslt['organic_results']:
print(article['snippet'])
Another API that you might use to conduct an advanced search programmatically is News API (). Below is the code snippet that can give you an idea of how this API works:
phrase = 'Tesla stock'
…
from newsapi import NewsApiClient
from datetime import date, timedelta
newsapi = NewsApiClient(api_key='your_news_api_key_here')
my_date = date.today() — timedelta(days = 7)
articles = newsapi.get_everything(q=phrase,
from_param = my_date.isoformat(),
language="en",
sort_by="relevancy",
page_size = 100)
The structure of a result set returned by News API differs from the structure of a serpapi result set. You can obtain the description (snippet) of each article as follows:
for article in articles['articles']:
print(article['description'])
Using NLP to Narrow Down Your Search Results
Perhaps the most interesting part is using NLP techniques to filter out the links in your result set based on their descriptions (snippets). The following code illustrates how this concept might be implemented with the help of spaCy, a leading Python natural language processing library:
phrase = ‘Tesla stock’
…
import spacy
nlp = spacy.load('en')
nlp.add_pipe(nlp.create_pipe('merge_noun_chunks'))
answers[]
for article in articles['articles']:
flg = 1
article_content = str(article['description'])
doc = nlp(article_content)
for sent in doc.sents:
for token in sent:
if phrase.lower() in token.text.lower():
doc2 = nlp(sent.text)
for ent in doc2.ents:
if (ent.label_ == 'MONEY'):
answers.append(sent.text.strip() + '| '+ article['publishedAt'] + '| '+ article['url'])
flg = 0
break
break
if flg == 0:
breakprint(answers)
Where to See It Working
The idea discussed in this article has been implemented in the stocknewstip bot, which is available at. The bot can find and bring to you the latest information about a company’s stock, as well as other interesting information related to the company. All you needs to do is to type in the name of a company, say, Apple, Google or Tesla, or Gold or Bitcoin:
The same results will go to the stocknewstip channel available at. You can preview the channel without having a Telegram account at:
| https://medium.com/swlh/getting-straight-to-the-point-with-scraping-and-natural-language-processing-1a62aba65586?source=post_internal_links---------6---------------------------- | CC-MAIN-2020-50 | refinedweb | 820 | 51.99 |
09 March 2009 15:01 [Source: ICIS news]
LONDON (ICIS news)--Polyethylene (PE) producers have cut back output to such an extent that they are able to push through heavy price increases, in spite of the current weak global economic outlook, sources said on Monday.
“We have been offered increases up to €120/tonne ($152/tonne) by all our suppliers and it doesn’t look as though we will be able to avoid a big part of it at least,” said one buyer.
The sentiment was echoed throughout ?xml:namespace>
“We will have to pay more, that’s clear. But these increases come at a time when our demand is down and the future is very uncertain,” said another buyer.
Sources estimated that PE production rates were running at 70-80%.
PE producers had cut back capacity in a move to secure margin in the polymer business. They had also exported big volumes in January and February, when arbitrage opportunities were high.
“Stocks are at an all-time low,” said a PE producer. “It’s true, some sectors of the market are showing no signs of recovery, but product is tight. The output of ethylene across
Ethylene contract monomer prices had risen by €85/tonne in March, improving ethylene producers’ margins. This increase upstream had exerted more pressure to downstream PE players, however.
Some PE buyers questioned the validity of the new higher monomer price in March.
“I don’t see how they can justify this increase, said one puzzled European PE buyer. “How is this ethylene price settled? Who decides it? Oil has been stable for some weeks now, and there’s been no big change in naphtha between February and March.”
Demand for PE was weaker than in early 2008, but selling sources reported surprisingly good volumes in March.
One producer was even considering further increases in April, in spite of fundamentally poor demand and the start-up of new capacities, mainly in the Middle East, which would inevitably affect
“Even if we get €120/tonne for March, that won’t be enough to get us back to profitability” said the producer.
“We have to cut back further so we don’t end up making non-profitable business in April and May. We will have a tight supply/demand balance and crazily weak demand.”
Buyers were looking for salvation in the new capacities which were now coming on stream, but they had been long in coming, and they were still not offered huge volumes of imported material.
The success of April hikes would depend much on the demand pull from Asia, and continued cutbacks in
New capacities in the Middle East had been planned with
PE producers in
( | http://www.icis.com/Articles/2009/03/09/9198660/europe-pe-producer-cutbacks-curb-availability-to-regain-margin.html | CC-MAIN-2014-52 | refinedweb | 450 | 60.95 |
Home » Support » Index of All Documentation » How-Tos » How-Tos for Rendering and Compositing Systems »
Wing IDE is an integrated development environment that can be used to write, test, and debug Python code that is written for The Foundry's NUKE and NUKEX digital compositing tool..
Project Configuration
First, launch Wing IDE and create a new project from the Project menu and save it to disk. Files can be added to the project with the Project menu. This is not a requirement for working with NUKE but recommended so that Wing IDE's source analysis, search, and revision control features know which files are part of the project.
Next, make sure Wing IDE is using NUKE's Python installation, or a Python that matches NUKE's Python version.
Configuring for Licensed NUKE/NUKEX
If you have NUKE or NUKEX licensed and are not using the Personal Learning Edition, then you can create a script to run NUKE's Python in terminal mode and use that as the Python Executable in Wing's Project Properties. For example on OS X create a script like this:
#!/bin/sh /Applications/Nuke6.3v8/Nuke6.3v8.app/Nuke6.3v8 -t -i "$@"
Then perform chmod +x on this script to make it executable. On Windows, you can create a batch file like this:
@echo off "c:\Program Files\Nuke7.0v9\Nuke7.0.exe" -t -i %*
Next, you will make the following changes in Project Properties, from the Project menu in Wing:
- Set Python Executable to point to this script
- Change Python Options under the Debug tab to Custom with a blank entry area (no options instead of -u)
Apply these changes and Wing will use NUKE's Python in its Python Shell (after restarting from its Options menu), for debugging, and for source analysis.
Configuring for Personal Learning Edition of NUKE
The above will not work in the Personal Learning Edition of NUKE because it does not support terminal mode. In that case, install a Python version that matches NUKE's Python and use that instead. You can determine the correct version to use by by looking at sys.version in NUKE's Script Editor. Then point Wing to that Python with Python Executable in Project Properties. Using a matching Python version is a good idea to avoid confusion caused by differences in Python versions, but is not critical for Wing to function. However, Wing must be able to find some Python version or many of its features will be disabled.
Additional Project Configuration
When using Personal Learning Edition, and possibly in other cases, some additional configuration is needed to obtain auto-completion on the NUKE API also when the debugger is not connected or not paused. The API is located inside the NUKE installation, in the plugins directory. The plugins directory (parent directory of the nuke package directory) should be added to the Python Path configured in Wing's Project Properties (as accessed from the Project menu). On OS X this directory is within the NUKE application bundle, for example /Applications/Nuke6.3v8/Nuke6.3v8.app/Contents/MacOS/plugins.
Replacing the NUKE Script Editor with Wing IDE Pro
Wing IDE Pro can be used as a full-featured Python IDE to replace NUKE's Script Editor component. This is done by downloading and configuring NukeExternalControl.
First set up and test the client/server connection as described in the documentation for NukeExternalControl. Once this works, create a Python source file that contains the necessary client-side setup code and save this to disk.
Next, set a breakpoint in the code after the NUKE connection has been made, by clicking on the breakpoint margin on the left in Wing's editor or by clicking on the line and using Add Breakpoint in the Debug menu or the breakpoint icon in the toolbar.
Then debug the file in Wing IDE Pro by pressing the green run icon in the toolbar or with Start/Continue in the Debug menu. After reaching the breakpoint, use the Debug Probe in Wing to work interactively in that context.
You can also work on a source file in Wing's editor and evaluate selections within the file in the Debug Probe, by right-clicking on the editor.
Both the Debug Probe and Wing's editor should offer auto-completion on the NUKE API, at least while the debugger is active and paused in code that is being edited. The Source Assistant in Wing IDE Pro provides additional information for symbols in the auto-completer, editor, and other tools in Wing.
This technique will not work in Wing IDE Personal because it lacks the Debug Probe feature. However, debugging is still possible using the alternate method described in the next section.
Debugging Python Running Under NUKE
Another way to work with Wing IDE and NUKE is to connect Wing IDE directly to the Python instance running under NUKE. In order to do this, you need to import a special module in your code, as follows:
import wingdbstub
You will need to copy wingdbstub.py out of your Wing IDE installation and may need to set WINGHOME inside wingdbstub.py to the location where Wing IDE is installed if this value is not already set by the Wing IDE installer. On OS X, WINGHOME should be set to the Contents/MacOS directory within Wing's .app folder.
Before debugging will work within NUKE, you must also set the kEmbedded flag inside wingdbstub.py to 1.
Next click on the bug icon in the lower left of Wing IDE's main window and make sure that Accept Debug Connections is checked.
Then execute the code that imports the debugger. For example, right click on one of NUKE's tool tabs and select Script Editor. Then in the bottom panel of the Script Editor enter import wingstub and press the Run button in NUKE's Script Editor tool area. You should see the bug icon in the lower left of Wing IDE's window turn green, indicating that the debugger is connected.
If the import fails to find the module, you may need to add to the Python Path as follows:
import sys sys.path.append("/path/to/wingdbstub") import wingdbstub
After that, any breakpoints set in Python code should be reached and Wing IDE's debugger can be used to inspect, step through code, and try out new code in the live runtime.
For example, place the following code in a module named testnuke.py that is located in the same directory as wingdbstub.py or anywhere on the sys.path used by NUKE:
def wingtest(): import nuke nuke.createNode('Blur')
Then set a breakpoint on the line import nuke by clicking in the breakpoint margin to the left, in Wing's editor.
Next enter the following and press the Run button in NUKE's Script Editor (just as you did when importing wingdbstub above):
import testnuke testnuke.wingtest()
As soon as the second line is executed, Wing should reach the breakpoint. Then try looking around with the Stack Data and Debug Probe (in Wing Pro only).
Debugger Configuration Detail
If the debugger import is placed into a script file, you may also want to call Ensure on the debugger, which will make sure that the debugger is active and connected:
import wingdbstub wingdbstub.Ensure()
This way it will work even after the Stop icon has been pressed in Wing, or if Wing is restarted or the debugger connection is lost for any other reason.
For additional details on configuring the debugger see Debugging Externally Launched Code.
Limitations and Notes
When Wing's debugger is connected directly to NUKE and at a breakpoint or exception, NUKE's GUI will become unresponsive because NUKE scripts are run in a way that prevents the main GUI loop from continuing while the script is paused by the debugger. To regain access to the GUI, continue the paused script or disconnect from the debug process with the Stop icon in Wing's toolbar.
NUKE will also not update its UI to reflect changes made when stepping through a script or otherwise executing code line by line. For example, typing import nuke; nuke.createNode('Blur') in the Debug Probe will cause creation of a node but NUKE's GUI will not update until the script is continued.
When the NUKE debug process is connected to the IDE but not paused, setting a breakpoint in Wing will display the breakpoint as a red line rather than a red dot during the time where it has not yet been confirmed by the debugger. This can be any length of time, if NUKE is not executing any Python code. Once Python code is executed, the breakpoint should be confirmed and will be reached. This delay in confirming the breakpoint does not occur if the breakpoint is set while the debug process is already paused, or before the debug connection is made.
These problems should only occur when Wing IDE's debugger is attached directly to NUKE, and can be avoided by working through NukeExternalControl instead, as described in the first part of this document.
Related Documents
Wing IDE provides many other options and tools. For more information:
- Wing IDE Reference Manual, which describes Wing IDE in detail.
- NUKE/NUKEX home page, which provides links to documentation.
- Wing IDE Quickstart Guide which contains additional basic information about getting started with Wing IDE. | http://www.wingware.com/doc/howtos/nuke | CC-MAIN-2014-10 | refinedweb | 1,567 | 58.92 |
Revisiting Castigliano with SciPy
January 6, 2013 at 2:16 PM by Dr. Drang
On Friday, a colleague asked me if I had a quick solution for determining the spring stiffness of a tapered leaf spring. Yup. This may be the first time I’ve been able to use an old blog post directly for work.
But as I read through the solution, I realized I’d done the numerical integration in Octave, whereas now I’d prefer to do it in Python and SciPy. The dilemma: do the problem immediately in Octave, where I had a complete solution, or do it in SciPy, where I’d have to search to find and learn the equivalent functionality? I decided to go the latter route because:
- My colleague need a quick answer, but not an instantaneous one.
- I figured numerical integration would have to be one of the most prominent SciPy features—easy to find and use.
- It’s in doing practical problems like this that I really learn.
I’m not going to rewrite the logic of that older post. In a nutshell, the problem of a tapered leaf spring is tailor-made for Castigliano’s Second Theorem, which states that the deflection of point upon which a load is acting is equal to the derivative of the complementary strain energy of the structure with respect to that load.
The tapered leaf spring can be reduced to a cantilever beam that looks like this:
The complementary strain energy is[U^* = \int_0^L \frac{M^2}{2EI} dx]
where [M = Fx] is the bending moment function, [E] is Young’s modulus for the material, and [I] is the moment of inertia for the cross-section,[I = \frac{b (t_0 + \alpha x)^3}{12}]
where [b] is the (constant) width of the beam and [\alpha = (t_1 - t_0)/L] is the rate at which the thickness increases.
Putting these expressions together and taking the derivative, we get[\Delta = \frac{\,dU^*}{dF} = \frac{F}{E} \int_0^L \frac{\,x^2}{I}dx = \frac{12F}{Eb} \int_0^L \frac{x^2}{(t_0 + \alpha x)^3} dx]
and the equivalent spring stiffness is[k = \frac{F}{\Delta} = \frac{E}{\int_0^L \frac{\,x^2}{I}dx} = \frac{Eb}{12 \int_0^L \frac{x^2}{(t_0 + \alpha x)^3} dx}]
OK, now it’s time to move onto SciPy to handle that integral. I could, of course do the integration analytically, but that would involve integration by parts and I’d almost certainly end up with a big algebraic mess. And I need a number for an answer, so I might as well go numerical now as later.
SciPy’s integration functions are described here. There are basically two sets: one set for which the functional form of the integrand is given, and another set for which the integrand is known only as a set of [(x, y)] pairs. We’re going to use the function
quad from the former set. In its simplest form,
quad takes three arguments: the function to be integrated, the upper bound, and the lower bound. Here’s a short Python script that calculates the stiffness for a steel leaf spring with [L = 25\;\mathrm{in}], [b = 3\;\mathrm{in}], [t_0 = 0.5\;\mathrm{in}], and [t_1 = 1\;\mathrm{in}].
python: 1: #!/usr/bin/python 2: 3: from scipy import integrate 4: 5: # Given parameters 6: E = 29e6 # Young's modulus, psi 7: L = 25 # length, in 8: b = 3 # width, in 9: t0 = .5 # tip thickness, in 10: t1 = 1 # end thickness, in 11: 12: # Derived parameter 13: alpha = (t1 - t0)/L 14: 15: # Integrand 16: def integrand(x): 17: return x**2/(t0 + alpha*x)**3 18: 19: # Stiffness 20: integral = integrate.quad(integrand, 0, L) 21: print integral 22: print E*b/12/integral[0]
The
quad function returns a tuple with the answer and the error, so I have that printed out on a line by itself so we can check the accuracy. The results are
(8518.397569993163, 9.457321115117394e-11) 851.099040686
If you look back at the older post, you’ll see that this is the same answer we got using Octave, which isn’t surprising—this isn’t an especially tricky integral to perform numerically.
You’ll note that the first argument to
quad is itself a function that takes only one argument. What if you wanted to define the moment of inertia parametrically, like this:
def I(x, b=3, t0=.5, t1=1): alpha = (t1 - t0)/L return b*(t0 + alpha*x)**3/12
You’d do this if, for example, you had several different leaf spring designs to consider, and you didn’t want to keep rewriting the integrand function over and over again.
The solution is to use the
partial function from Python’s standard functools library, which allows you to define functions in terms of other functions. Here’s an example:
python: 1: #!/usr/bin/python 2: 3: from scipy import integrate 4: from functools import partial 5: 6: # Design parameters 7: E = 29e6 # Always steel 8: L = 25 # Always this long 9: params = [] 10: params.append(dict(b = 3, t0 = .5, t1 = 1)) 11: params.append(dict(b = 3, t0 = .5, t1 = 1.25)) 12: params.append(dict(b = 3, t0 = .75, t1 = 1)) 13: 14: # Moment of inertia function 15: def I(x, b=3, t0=.5, t1=1): 16: alpha = (t1 - t0)/L 17: return b*(t0 + alpha*x)**3/12 18: 19: # Integral 20: def integral(p): 21: myI = partial(I, **p) 22: def integrand(x): 23: return x**2/myI(x) 24: return integrate.quad(integrand, 0, L) 25: 26: # Stiffnesses 27: for i,p in enumerate(params): 28: print "k%d = %f" % (i, E/integral(p)[0])
In this example, I’ve assumed that the leaf spring will always be made of steel and that its length is set. The design parameters that can be altered are the cross-sectional dimensions.
Lines 9-12 set up a list of dictionaries of design parameters. The
integral definition in Lines 20-24 takes a dictionary as its argument, defines a one-argument moment of inertia function based on that dictionary, and performs the integration for that moment of inertia function. The loop in Lines 27-28 then goes through the list of parameters and prints out the equivalent spring stiffness for each design. This is not only more compact than writing a new
integrand function for each design, it’s also more flexible when the time comes to investigate new designs. This flexibility is where Python and SciPy have a distinct advantage over Octave.
By the way, the output of this script is
k0 = 851.099041 k1 = 1436.267876 k2 = 1127.163920
which shows that if you’re going to add a little thickness to the beam, you’re better off adding it to the fixed end than to the free end. Which you probably would’ve guessed anyway, but here you get to see how much more effective it is.
One last thing: the
quad function gets its name from quadrature, a commonly used synonym for integration, especially numerical integration. The quad in quadrature refers not to the number four (not directly, anyway), but to a square. In classic compass-and-straightedge geometry, quadrature meant squaring, the process of constructing a square of the same area as some other shape. That definition morphed into calculating an area without constructing a square, and from there it was a short jump to integration.
If you follow Gus Mueller’s blog, you might remember a short post he did a few months ago about quadratic and cubic Bézier curves. That’s another case where quad comes from square rather than four. Of course, squares are associated with quad because squares have four sides, so ultimately quadrature and quadratic equations get their names from the number four. But it’s a pretty roundabout trip. | http://leancrew.com/all-this/2013/01/revisiting-castigliano-with-scipy/ | CC-MAIN-2017-26 | refinedweb | 1,325 | 59.94 |
Agenda
See also: IRC log
SW: Minutes from last week:
... Approved as circulated
... Agenda for today:
... Approved as circulated
... Next meeting 19 June, DanC to scribe
... Regrets from NM for 19 June, likely regrets from DO
SW: I've added some links, propose to approve with a typo correction as noted by JR
RESOLUTION: F2F Minutes (, 20-minutes, 21-minutes) approved
SW: A new standing agenda topic to allow for late-breaking news
HST: Going to publish it Real Soon Now
DO: Haven't yet solicited external reviewers' comments on the most recent draft:
SW: Some positive feedback from MEZ
<DanC> "Every scenario that involves possibly transmitting passwords in the clear can be redesigned for the desired functionality without a cleartext password transmission."
DC: I explored the W3C's
situation wrt passwords in the clear
... Spellchecker tool, for example, uses a form to collect name/pwd for access to a member-only document
... We have never found a way to provide this functionality w/o pwd in the clear
DO: Rather than hold up/change the doc't, we should query the Sec'ty Context WG folks on how to get this functionality w/o pwd in the clear
SW: DanC, can you bring that up on our list, as a direct question to Security Context WG?
DC: Will do
<DanC> yes, jar, I suppose capabilities are about the only thing I've seen that could work
<jar> Since Dan brings this up, the reference for capabilities would be erights.org.
<jar> well... yes, I brought up capabilities. Let me turn that into a hyperlink:
<DanC> nifty stuff, in theory, though it involves a whole new operating system etc.
<jar> right.
<jar> well, it can all be done in user mode of course, but it is a sort of mini OS kernel to manage the object-capabilities.
TVR: Deep questions behind
specific integration issues, e.g. SVG, : vs. -, MathML
... HTML4 had problems, we said we would move to XML, which gave us XHTML, HTML5 is a temporary blip but XML is the long-term goal
TVR: that's one extreme
TVR: The other extreme is that
XML has no future on the Web, all we need is HTML, HTML5 is the
future
... Maybe there's a middle way, as suggested by Tim's statements 18 months ago and at the Beijing AC meeting
TVR: The hard question, as at the end of the last weeks call, is how can XML change a bit, HTML change a bit, to foster a convergance
scribe: My concern is that the
parts of the two communities willing to consider change is so
small that it doesn't matter whether we can find a technical
solution or not
... So maybe we have to just accept that we are going to have two parallel tracks indefinitely
<noah> I'm afraid I agree with Raman, at least on the XML side. I think that in practice the XML community values its base of installed code to such a significant degree that changes will be very hard to deploy in practice.
<Stuart> noah... is that not true of both worlds: install base restricts flexibility, both way round.
<noah> Probably. I just don't feel that I am as well informed regarding the HTML community.
AM: But what's your opinion?
TVR: I'm reserving my position to avoid prejudicing the discussion
TVR: Where I am isn't the question, the question is whether there is any possiblity of a critical mass forming to support convergence
SW: So you want us to say whether we think convergence is possible
TVR: No, that's not the question,
we have technical solutions, the question is about
willingness to adopt those solutions
... Attribute quotation, for instance, is not the issue that matters. What matters is things like
document.write
... The HTML world is not waiting for unquoted attributes and then they'll say Yes, we're ready to converge
TVR: The major issue is social, not technical
HT: A lot of the issues are
social
... I think it's none-the-less worth getting clear on what the substantive technical issues are
... because they are the hooks the social dynamic is going to swing from
... mime-type-based ns-declarations seems viable, and would probably be accepted by the core XML community because it wouldn't affect them
... What else?
TVR: There's lots of 'real' namespace use in business/commerce
TVR: Well-formedness -- a real problem from the HTML side
<raman> xml community: clarity around namespaces, especially null namespace vs no-namespace
TVR: ns decls from mime type, yes, although that doesn't get us all the way to distributed extensiblility, where someone designs there own mini-language
HT: That needs a change on the HTML side
TVR: Right
NM: A lot of sympathy with TVR's
concerns, we've gone too far in the past ignoring
implementation/deployment difficulties
... But not sure separating issues in two piles is helpful
... For many of our users, what you call it matters: XML means very high expectations of interoperability
... Unlike, for example, C
... So it's really hard to get change through on the XML side, and that's right at the boundary between technical and social
... Asking "Will the community accept media-type-ns-defaulting, or unquoted attrs?" I don't know -- maybe more likely for the first, but possibly hard for both
... People worry about any change destabilising the interop guarantee
DO: Simple use case I hit -- The
BEA Aqualogic Portal project, has remote portlets, you can drag
and drop XML on them
... The engineers said: We can't mix HTML and XML easily, what do we do? The XML guy won, and enforced strict well-formedness
... But the product managers were upset, because the said customers' expectations would not be met
... Can we rev XML? Suppose we relaxed a bunch of constraints, maybe that would make a bunch of the HTML folks happy.
... The core issue for some of the HTML WG is namespaces/distributed extensiblity -- they don't want it in any form
... Because the two worlds are so different, it's easy to reject any kind of convergence
... but if we relaxed some of the XML constraints, that might make a change
<Zakim> Stuart, you wanted to introduce a question arising from Steve Pemberton's message
<Stuart> With these things in mind, we feel the best course of action is to declare that all documents using the xhtml namespace are capable of being interpreted to produce RDF triples.
SW: This is an ambition for all documents -- doesn't that mean there's a need for liaison between the two developers of languages in the namespace, i.e. XHTML2 and HTML5
TVR: There is a lot of opposition to RDFa from people in the HTML WG, partly because of its use of namespaces, partly because of an antipathy to RDF itself
SW: HTML5 WG is positioning itself as the successor to both HTML4 and XHTML1
DC: The WGs were chartered to compete
TVR: From a TAG perspective, the
question is, is the community which is commited to finding a
convergent path large and significant enough to make a
difference
... Bearing in mind the TimBL counts for a lot
... Alternatively, if we are resigned to the two tracks running in parallel, can we see any route towards peaceful co-existence
... Both these technologies have a place on the Web, and will survive, with or w/o the W3C
DO: Helping to reconcile the XML and HTML communities should get a lot of our attention
SW: HST, can you summarize activity on relevant email threads?
HST: No, sorry, have not had time to give the threads the attention they need
NM: There have been concerns
expressed about how well the TAG coordinated in the lead-up to
our note to the community
... I'm happy with what we did on the technical front, but our care in ensuring that people aren't blindsided could have been improved
... We should take note of the coordination concern
NM: and try to have a "no surprises" approach to communication
AM: What response is now appropriate?
HST notes that Stuart and Tim are working on an official response
SW: Should we encourage any kind of dialogue? I think we should
<noah> NM: I would like to do what we can moving forward to ensure that we can work cooperatively with the Oasis community to find whatever is the right answer in the long term.
SW: On the basis that we will try to understand their requirements, and to help them understand our concerns
<DanC> +1 invite XRI folk to a telecon
DO: The idea that XRI was surprised that the TAG should speak out against XRI is itself surprising
.
DO: HST and DO engaged with them
over URNsAndRegistries-50 two years ago, and it was clear at
that time that they were not going to be convinced of our
position wrt the potential utility of http: to meet their
needs
... So they can't have been under any illusions that we weren't happy
<raman> the technical community is always guilty of doing the type of marketing that Dave is describing, e.g. use SOAP, you can get through firewalls is one bogus argument I remember from the past;-)
DO: I doubt the utility of engaging in a huge amount of effort to end up where we started
<raman> I dont think what DO is saying here is material to making positive progress
DO: I don't think we have
anything to apologise for
... I have finally heard an interesting use-case in the area of synonym identifiers, but it's still not clear that that is worth the cost of creating a whole parallel naming authority mechanism
... Going forward, the XRI TC are already looking towards the question of when they can go to ballot again
... Does that mean they've heard our message?
SW: XRI TC have been receiving comments that they should engage in more dialog with the TAG.
TVR: Allowing a confusion between the TAG speaking technically and the W3C speaking hurts both sides
<Zakim> DanC, you wanted to sympathize with the concern that we didn't close the loop with the XRI TC.
<DanC> 29 Feb msg
DC: We still owe the XRI TC a response to their email of 29 Feb
SW: You happy with the goals I suggested above for a call?
DC: Yes, and asking people to a telcon sounds good
HST: We have to be carefully not to suggest to them that there are things they can do to 'fix' XRIs
SW: If we don't talk to them, the likely outcome is that our concerns will be lost sight of, because we will appear to be uncooperative
<Zakim> noah, you wanted to talk about better coordination vs. we own the Web
NM: There is a real positive inclination on the part of the XRI TC to talk to us, and we should meet them on that basis
<DanC> (re "doing nothing is a good answer" ... well, I think doing something on top of http/dns is the sweet spot.)
NM: They need to change how they think about this, so that their job is to prove that there is enough value to overcome the very real costs
<raman> dropping off
<Zakim> jar, you wanted to float the idea of eventually putting application of http: to naming problems on rec track
JR: A durable solution needs more than a TAG finding - there needs to be a manual or something that helps groups like XRI when they have a need for a naming scheme
<DanC> not obvious? really? everybody and his brother makes namespaces out of http/dns. It might be worth writing up/down, but LOTS of people figure it out by themselves.
<noah> NM: Maybe time to tilt at the Scheme/Protocols finding again?
<DanC> e.g. flickr tags, wikipedia pages, and zillions of others | http://www.w3.org/2008/06/12-tagmem-minutes | CC-MAIN-2014-49 | refinedweb | 2,000 | 61.4 |
Yesterday I released an update on Pypi for a Django reusable app I wrote: django-geoportail. Actually, I released two updates because the first one wasn't fully operational and it makes me think all the previous releases weren't either. First lesson learned: check the packages you upload to pypi actually work.
So, I had to learn the hard way how to package a Django app. The biggest difference with a standard python package is that you may want to include non-python files, such as media files and templates. Let's have a look at a basic setup.py:
# -*- coding: utf-8 -*- from distutils.core import setup setup( name='django-geoportail', version='0.3.1', author=u'Bruno Renié', author_email='bruno.renie.fr', packages=['geoportal'], url='', license='BSD licence, see LICENCE.txt', description='Add maps and photos from the French National Geographic' + \ ' Institute to GeoDjango', long_description=open('README.txt').read(), zip_safe=False, )
And here is the structure of what I want to package:
geoportal/ |-- admin.py |-- forms | |-- fields.py | |-- __init__.py | `-- widgets.py |-- __init__.py |-- models.py |-- templates | |-- geoportal | | |-- map.html | | `-- widget.html | `-- gis | `-- admin | |-- geoportal.html | `-- geoportal.js |-- templatetags | |-- geoportal_tags.py | `-- __init__.py |-- tests.py `-- utils.py
I was naively thinking that having packages=['geoportal'] in setup.py plus including the static files in a MANIFEST.in file would magically include everything. Well, the template files are included in the source distribution, but they are skipped at the time of the installation. Even worse, sub-packages are skipped as well, leaving me without the forms and the templatetags directories.
The solution for including sub-packages is to use find_packages:
from distutils.core import setup from setuptools import find_packages setup( name='django-geoportail', # ... packages=find_packages(), )
find_packages will look for every directory containing a __init__.py file and include it.
Next, the templates. Two things are needed to include them:
add include_package_data=True in the setup parameters
create a MANIFEST.in file to declare the files to include:
include AUTHORS CHANGES README INSTALL LICENCE recursive-include geoportal *.py *.html *.js include docs/Makefile recursive-include docs/source *
Here, I include all the HTML and javascript file under the geoportal directory, the standard authors / changes / readme files and the documentation. During installation, only the files in a python package will be copied to the site-packages. The docs and text files at the root level are just useful for people who manually download the tarball: one unzipped, it provides all the documentation and information about the package.
Finally, I like the zip_safe=False option. It prevents the package manager to install a python egg, instead you'll get a real directory with files in it. I find it very convenient for debugging, when some information can be found by looking at the code.
If you're a packaging expert and think I'm doing anything wrong, I'd like to read your thoughts!
UPDATE 2010-06-08: in the first version of this post, I explained that I didn't understood what the MANIFEST.in file was for. This is now clarified, include_package_data can only work if some data is included in the MANIFEST.in file.
#1 August 16, 2010 — Fabien
Thanks to you, it saved a lot of my time :)
#2 September 3, 2010 — Jens Hoffmann
Thanks a lot for this quick and good howto!
Add a comment | http://bruno.im/2010/may/05/packaging-django-reusable-app/ | CC-MAIN-2019-18 | refinedweb | 559 | 50.53 |
I am writing a program that has a membership class. Each Membership object contains details of a person's name, and the month and year in which
they joined the club. All membership details are filled out when a Membership object is created. Then there is a club class that has a field for an array list, a method that returns the current size of the collection and a join method. I am not sure how to get the join method to compile. Anew Membership object should be added to a Club's objects collection via the Club's objects join method, which has the following
signature and description:
/**
* Add a new member to the club's collection of members.
* @param member The member object to be added.
*/
public void join(Membership member)
e following description
Here is my code for the whole class:
public class Club { // Define any necessary fields here ... private ArrayList members; /** * Constructor for objects of class Club */ public Club() { // Initialise any fields here ... members = new ArrayList(); } /** * Add a new member to the club's list of members. * @param member The member object to be added. */ public void join(Membership member) { members.add(Membership.member); } /** * @return The number of members (Membership objects) in * the club. */ public int numberOfMembers() { return members.size(); } }
Code tags added -Narue | https://www.daniweb.com/programming/software-development/threads/19535/help-with-arrays-methods | CC-MAIN-2017-17 | refinedweb | 218 | 65.93 |
Most servers are accessed at well-known Internet port numbers or UNIX family names. Example 2-6 illustrates the main loop of a remote-login server.
main(argc, argv) int argc; char *argv[]; { int f; struct sockaddr_in6 from; struct sockaddr_in6 sin; struct servent *sp; sp = getservbyname("login", "tcp"); if (sp == (struct servent *) NULL) { fprintf(stderr, "rlogind: tcp/login: unknown service"); exit(1); } ... #ifndef DEBUG /* Disassociate server from controlling terminal. */ ... #endif sin.sin6_port = sp->s_port; /* Restricted port */ sin.sin6_addr.s6_addr = in6addr_any; ... f = socket(AF_INET6, SOCK_STREAM, 0); ... if (bind( f, (struct sockaddr *) &sin, sizeof sin ) == -1) { ... } ... listen(f, 5); while (TRUE) { int g, len = sizeof from; g = accept(f, (struct sockaddr *) &from, &len); if (g == -1) { if (errno != EINTR) syslog(LOG_ERR, "rlogind: accept: %m"); continue; } if (fork() == 0) { close(f); doit(g, &from); } close(g); } exit(0); }
Example 2-7 shows how the server gets its service definition.
sp = getservbyname("login", "tcp"); if (sp == (struct servent *) NULL) { fprintf(stderr, "rlogind: tcp/login: unknown service\n"); exit(1); }
The result from getservbyname(3SOCKET) is used later to define the Internet port at which the program listens for service requests. Some standard port numbers are in /usr/include/netinet/in.h.
Example 2-8 shows how the server dissociates from the controlling terminal of its invoker in the non-DEBUG mode of operation.
(void) close(0); (void) close(1); (void) close(2); (void) open("/", O_RDONLY); (void) dup2(0, 1); (void) dup2(0, 2); setsid();
This prevents the server from receiving signals from the process group of the controlling terminal. After a server has dissociated itself, it cannot send reports of errors to a terminal and must log errors with syslog(3C).
A server next creates a socket and listens for service requests. bind(3SOCKET) ensures that the server listens at the expected location. (The remote login server listens at a restricted port number, so it runs as superuser.)
Example 2-9 illustrates the main body of the loop.
while(TRUE) { int g, len = sizeof(from); if (g = accept(f, (struct sockaddr *) &from, &len) == -1) { if (errno != EINTR) syslog(LOG_ERR, "rlogind: accept: %m"); continue; } if (fork() == 0) { /* Child */ close(f); doit(g, &from); } close(g); /* Parent */ }
accept(3SOCKET) blocks messages until a client requests service. accept(3SOCKET) returns a failure indication if it is interrupted by a signal, such as SIGCHLD. The return value from accept(3SOCKET) is checked and an error is logged with syslog(3C) if an error has occurred.
The server then fork(2)s performs the actual application protocol with the client, for authenticating clients. | https://docs.oracle.com/cd/E19455-01/806-1017/sockets-27/index.html | CC-MAIN-2018-13 | refinedweb | 424 | 55.13 |
#include <QMap>
#include <QString>
#include <QVariant>
Go to the source code of this file.
Holds a set of configuration parameters for a editor widget wrapper.
It's basically a set of key => value pairs.
If you need more advanced structures than a simple key => value pair, you can use a value to hold any structure a QVariant can handle (and that's about anything you get through your compiler)
These are the user configurable options in the field properties tab of the vector layer properties. They are saved in the project file per layer and field. You get these passed, for every new widget wrapper.
Definition at line 19 of file qgseditorwidgetconfig.h. | http://www.qgis.org/api/qgseditorwidgetconfig_8h.html | CC-MAIN-2014-15 | refinedweb | 113 | 64 |
from ask 12 Ask, and it shall be given # 12.
Page contents
Projects
- Stratographic Years Slot Calculator Example, Age of Earth
- Game kingdom of strategy
- Babylonian trailing edge algorithm and reverse sequence algorithm for reciprocals, eTCL demo example calculator, numerical analysis
- [Sumerian Counting Boards, multiplication operation placement strategy, and eTCL demo example, numerical analysis ]
- [Babylonian Combined Work Norm Algorithm and eTCL Slot Calculator Demo Example, numerical analysis]
- Weighted Decision and example eTCL demo calculator, numerical analysis
- Division into Parts by Multiple Ratios and eTCL demo example calculator, numerical analysis
- Combined Availability and example eTCL demo calculator, numerical analysis
- Chinese Horse Race Problems from Suanshu, DFP, and example eTCL demo calculator, numerical analysis
- Ancient Egyptian Double False Position Algorithm, and example eTCL demo calculator, numerical analysis
- Babylonian Multiplicatiion Algorithm and example demo eTCL calculator, numerical analysis
- Babylonian Weight Riddle Problems and eTCL demo example calculator, numerical analysis
- Babylonian Babylonian Irregular Reciprocal Algorithm and eTCL demo example calculator, numerical analysis
- Babylonian Field Expansion Procedure Algorithm and example demo eTCL calculator, numerical analysis
- Babylonian Trapezoid Bisection Algorithm and eTCL demo example calculator, numerical analysis
- Babylonian False Position Algorithm and eTCL demo example calculator, numerical analysis
- Babylonian Combined Market Rates and eTCL demo example calculator, numerical analysis
- Babylonian Cubic Equation Problem and eTCL demo example calculator, numerical analysis
- Sumerian Base 60 conversion and eTCL demo example calculator, numerical analysis
- Aryabhat Sum of Squares and Cubes and eTCL demo example calculator, numerical analysis
- Sumerian Approximate Area Quadrilateral and eTCL Slot Calculator Demo Example , numerical analysis
- [Capsule Surface Area & Volume and eTCL demo example calculator ]
- [Babylonian Number Series and eTCL demo example calculator ]
- Brahmagupta Area of Cyclic Quadrilateral and eTCL demo example calculator
- Gauss Approximate Number of Primes and eTCL demo example calculator
- [Old Babylonian Interest Rates and eTCL demo example calculator ]
- Twin Lead Folded Dipole Antenna and example demo eTCL calculator
- Refrigerator_Pinyin_Poetry
- Random Poetry Chalkboard
- Oneliner's Pie in the Sky
- Mahjong_Style_Deletion
- Example Linear Interpolation Calculator
- Fuel Cost Estimate Log Slot Calculator Example
- 2010-08-17 19:03:29 Seaching for Babylonian Triplets Slot Calculator Example
- 2010-08-17 15:24:36 Biruni Estimate of Earth Diameter Slot Calculator eample
- 2010-08-16 15:20:57 Fuel Cost Estimate Log Slot Calculator Example
- 2010-08-11 23:43:40 Stratographic Years Slot Calculator Example, Age of Earth
- 2010-08-01 21:11:58 Binomial Probability Slot Calculator Example
- 2010-06-27 21:33:56 Slot_Calculator_Demo
- Chinese Fortune Casting Example Demo
- 2010-08-16 20:04:34 Chinese Sun Stick Accuracy for Console Example
- 2010-08-01 01:15:15 Chinese Iching Hexagrams on Chou Bronzes : TCL Example
- 2010-07-29 01:12:32 Chinese Iching Random Weather Predictions
- 2010-07-18 01:53:25 Chinese Xiangqi Chessboard
- Iching_Fortunes
- application_runner_&_wrapper
- Testing Normality of Pi, Console Example
- horoscope pie plotter
- [Stonehenge Circle Accuracy Slot Calculator Example]
- Call Procedure Like Fortran Example
- [Stonehenge Circle Accuracy Slot Calculator Example]
- [Drake Intelligent Life Equation Slot Calculator Example]
- [Sumerian Equivalency Values, Ratios, and the Law of Proportions with Demo Example Calculator]
- [Piece wise Profits and eTCL Slot Calculator Demo Example]
- [Drake Intelligent Life Equation Slot Calculator Example]
- [Stonehenge Circle Accuracy Slot Calculator Example]
- [Stonehenge Circle Accuracy Slot Calculator Example]
- Babylonian Sexagesimal Notation for Math on Clay Tablets in Console Example
- Call Procedure Like Fortran Example
- Canvas Object Movement Example
- [Drake Intelligent Life Equation Slot Calculator Example ]
- Ellipse Properties Slot Calculator Example
- Sea Island Height Slot Calculator Example
- Captioning Photo Image under Pixane Example
- Timing Photo Image Loading under Pixane
- Poker Probability and Calculator Demo Example
- Canvas Object Movement Example
- Generic Calculator Namespace Package Example
- Sumerian Circular Segment Coefficients and Calculator Demo Example
- Tonnage of Ancient Sumerian Ships and Slot Calculator Demo Example
- Heat Engine Combustion and Calculator Demo Example
- Rectangular Radio Antenna and etcl Slot Calculator Demo Example
- [Sumerian Equivalency Values, Ratios, and the Law of Proportions with Demo Example Calculator]
- Sumerian Construction Rates and eTCL Slot Calculator Demo Example
- [Piece wise Profits and eTCL Slot Calculator Demo Example]
- Sumerian Coefficients in the Pottery Factory and Calculator Demo Example
- [Sumerian Equivalency Values, Ratios, and the Law of Proportions with Demo Example Calculator ]
- Sumerian Coefficients at the Weavers Factory and eTCL Slot Calculator Demo Example
- Sumerian Coefficients at the Bitumen Works and eTCL Slot Calculator Demo Example edit
- [Sumerian Beveled Bowl Volume and eTCL Slot Calculator Demo Example]
- [Sumerian Population Density and eTCL Slot Calculator Demo Example]
- Babylonian Sexagesimal Notation for Math on Clay Tablets in Console Example
- [Easy Eye Calculator and eTCL Slot Calculator Demo Example, Numerical Analysis]
- [Piece wise Profits and eTCL Slot Calculator Demo Example]
- [Paper & Felt Rolls and eTCL Slot Calculator Demo Example]
- [Human Language Root Words & Lexicostatistics Calculator and eTCL Slot Calculator Demo Example, numerical analysis]
- [Sumerian Workday Time & Account Calculator and eTCL Slot Calculator Demo Example, numerical analysis]
- contribution to Counting Elements in a List
- contribution to A Program That Learns
- contribution to Simple Canvas Demo
- contribution to lremove
- added many pictures to other pages
- added pix to A fancier little calculator
- added pix to 3 Triangles
- [Sumerian Sheep and Herd Animal Calculator and eTCL Slot Calculator Demo Example, numerical analysis]
- Probability Exponential Density Calculator and eTCL Slot Calculator Demo Example, numerical analysis
- [Electronic Failure Rate FITS and eTCL Slot Calculator Demo Example]
- [Sumerian Seeding Rates and eTCL Slot Calculator Demo Example , numerical analysis]
- Sumerian Porters Pay Formula and eTCL Slot Calculator Demo Example, numerical analysis
- One Dimension Heat Flow Model and eTCL Slot Calculator Demo Example, numerical analysis
- Sumerian Surveyor Area Formula and eTCL Slot Calculator Demo Example, numerical analysis
- Babylonian Sexagesimal Notation for Math on Clay Tablets in Console Example
- Binomial Probability Slot Calculator Example
- Call Procedure Like Fortran Example
- canvas Object Movement Example
- Captioning Photo Image under Pixane Example
- Chinese Fortune Casting Example Demo
- Chinese Iching Hexagrams on Chou Bronzes : TCL Example
- Chinese Sun Stick Accuracy for Console Example
- [Command Line Calculator in Namespace Package Example]
- Crater Production Power Law Slot Calculator Example
- [Drake Intelligent Life Equation Slot Calculator Example]
- Ellipse Properties Slot Calculator Example
- Estimating Mountain Height Using Look Angles, Etcl Console Example
- [Example Linear Interpolation Calculator]
- Finding Seked Angles of Ancient Egypt, Console Example
- Fuel Cost Estimate Log Slot Calculator Example
- Generic Calculator Namespace Package Example
- Heat Engine Combustion and Calculator Demo Example
- [Piece wise Profits and eTCL Slot Calculator Demo Example]
- Population Density and eTCL Slot Calculator Demo Example]
- Testing Normality of Pi, Console Example
- Tonnage of Ancient Sumerian Ships and Slot Calculator Demo Example
- Sumerian Bronze & Alloy Calculator with demo examples eTCL numerical analysis
- [Population Density Rectangular City Calculator and eTCL Slot Calculator Demo Example]
- [Over-21 Game Shell and eTCL Slot Calculator Demo Example , numerical analysis]
- [Sales Optimal Lot Order Size and eTCL Slot Calculator Demo Example]
- [Over-21 Game Shell and eTCL Slot Calculator Demo Example , numerical analysis]
- [Sales Optimal Lot Order Size and eTCL Slot Calculator Demo Example]
- Spare Parts from Normal Distribution and eTCL Slot Calculator Demo Example , numerical analysis
- [Sumerian Beer Ingredients and eTCL Slot Calculator Demo Example , numerical analysis]
- Sumerian Coefficients at the Dog Keepers and eTCL Slot Calculator Demo Example , numerical analysis
- Timing Photo Image Loading under Pixane
- Babylonian Shadow Length & Angles and eTCL Slot Calculator Demo Example, numerical analysis
- Sumerian Surveyor Area Formula and eTCL Slot Calculator Demo Example, numerical analysis
Babylonian Sexagesimal Notation for Math on Clay Tablets in Console Example Binomial Probability Slot Calculator Example Biruni Estimate of Earth Diameter Slot Calculator eample Chinese Fortune Casting Example Demo Chinese Sun Stick Accuracy for Console Example Command Line Calculator in Namespace Package Example Crater Production Power Law Slot Calculator Example Drake Intelligent Life Equation Slot Calculator Example Easy Eye Calculator and eTCL Slot Calculator Demo Example, Numerical Analysis Ellipse Properties Slot Calculator Example Fuel Cost Estimate Log Slot Calculator Example Generic Calculator Namespace Package Example Heat Engine Combustion and Calculator Demo Example Human Language Root Words & Lexicostatistics Calculator and eTCL Slot Calculator Demo Example, numerical analysis Mahjong_Style_Deletion Oil Molecule Length Calculator and eTCL Slot Calculator Demo Example, numerical analysis Oneliner's Pie in the Sky Paper & Felt Rolls and eTCL Slot Calculator Demo Example Penny Packing Calculator and eTCL Slot Calculator Demo Example, numerical analysis Piece wise Profits and eTCL Slot Calculator Demo Example Planet Mass Calculator and eTCL Slot Calculator Demo Example, numerical analysis Slot_Calculator_Demo Paint & Bitumen Coating and eTCL Slot Calculator Demo Example Sumerian Population Density and eTCL Slot Calculator Demo Example Tonnage of Ancient Sumerian Ships and Slot Calculator Demo Example
AMG: Please note that in this Wiki, spaces at the beginning of the line interfere with proper formatting.
What is the meaning of your "#start of deck" and "#end of deck" comments in your code examples? And why do you need a plural set of the comments at the start and end?gold Like i said, I'm an old Fortran programmer. The multiple start and stop statements including subroutine stop and return statements were used in scanning big reams of Fortran code with 10E5+ lines. I can tell you that obvious stop,end,and return statements are helpful in that size of code. Come to think of it, I have seen lots of TCL wiki code with out end statements, obvious exit paths,obvious return statements, and graphic displays without exit buttons. The advantage of the wiki is that people can cut and paste what they want to use.Ah, I see. Also, consider that maintaining a consistent code indentation and spacing can help immensely when scanning a block of code, and not just at the start and end of the chunk. Compare the before and after code look here: Example Linear Interpolation Calculator
aspect The amount of code you're pumping out in here is impressive, and I find some of the topics very interesting, but I'm finding the presentation less than useful. A few pointers, which I hope should be of value to yourself as well as anyone reading (and considering contributing) to your pages:
- Try to keep the wiki content example-focussed. Having multiple versions of the same long program in one page is distracting and confusing. If you need an online repository for your code I'd suggest using
keeping the wiki page for interesting extracts others can learn from or comment on.
- Please be consistent with indentation and formatting! Your code will become much more readable to others and easier to maintain for yourself. The Tcl Style Guide (start with the linked PDF) is a good place to start.
- Wherever you can, breaking out shared functions into a separate library script that can be loaded via source (or better yet package require), will make the library more solid and the main program more readable.
- If your explanatory commentary can be moved closer in the page to the code it concerns, that will also aid readability. Use inline comments (#) or the "if 0" trick -- an example is diff in tcl
- a set of labelled boxes for user input
- a "Solve" button which calls your main logic with the inputs as arguments
- a few "Testcase" buttons to illustrate examples and test the code
- "About", "Clear" and "Exit" buttons
namespace eval linear_interp { set name "Linear Interpolation Calculator" set inputs {"First X value" x1 "First Y value" y1 "Second X value" x2 "Second y value" y2 "Solve for x" xx} set about "This is Gold's linear interpolation calculator, © 2011 .. with some more information on how it works and is used, etc" set testcases { {10. 10. 200. 50. 123.} # etc } proc calculate {x1 x2 y1 y2 xx} { return [expr {some magic here to calculate the result}] } } load generic_calculator_gui.tcl generic_calculator_gui linear_interpAs a secondary advantage, your calculator could then without modification be used in other contexts, such as a command line or web tool or automatically invoking all the test cases. Of course, the GUI can also be easily re-skinned to the user's preferences without impacting the main code.I hope you don't find the above overly critical or discouraging, that's certainly not my intention -- but I do think keeping the above points in mind will make your pages more appealing to other wiki'ers and encourage collaboration .. which is what we're all here for, after all!
gold pasted trial namespace at bottom of [Drake Intelligent Life Equation Slot Calculator Example]
gold Procedures for pretty print on tcl wiki. Found website for removing blank lines online etc(free).One can paste wiki text into the (free) rough-draft editor and get a spell-check. Ased is free editor that has pretty print for TCL scripts.
-
-
-
-
large number values from sumerian,babylon,cuneiform math from clay tablets
kilometers,degrees etc are modern equivalents12 960 000 * 10 * .309 = 40 046 400 (greek feet)(10 x 602) x 60
12960000 * 10 * .309 = 40 046 400 The ratio of a japanese rin to a ri is 10/12,960,000 The ratio of a japanese shaku to a ri is 6/12,960 The ratio of a japanese bu to a ri is 10/1,296,000 a shaku is effectively a japanese foot a ri is effectively 3.9 km 36*60=2160 30*60=1800 5/6 = 1800/2160 129600 * .309 * factor = 40046.4 sumerian foot 129600 * .333 = 129 600 * .333 = 43 156.8 12 960 000 * .333 * (11 / 12) = 3 956 040Note that as you say above that you are using Tcl 8.5.6, then Tcl 8.5 already contains a built-in "lreverse" command, so your proc "lreverse5" above could be deleted, and calls to "lreverse5" can be replaced by calls to the built-in "lreverse" command.gold Changes. Removed proc lreverse5 and using 8.5 lreverse command. Proc sexagesimalfraction not working right.The change from lreverse5 to built in lreverse does not appear to be the cause:
% set list [ list 5 3 1 8 9 4 7 ] 5 3 1 8 9 4 7 % proc lreverse5 {l} { # KPV set end [llength $l] foreach tmp $l { lset l [incr end -1] $tmp } return $l } % lreverse5 $list 7 4 9 8 1 3 5 % lreverse $list 7 4 9 8 1 3 5 %The result from lreverse5 and the built in lreverse are identical. Note that you did not change one call in sexagesimal to use the built in lreverse.The stonehenge audrey holes seem to measure 0.5 degrees. At least some stone circles have a diameter of 32 meters and appear to measure 6/360 part of the sky. Some of medicine wheels in North America have divisions of 28.
Special solar/lunar octagon tableAzimuth Plotting
First ExampleCalculator with big fonts for bad eyes and used on my computer windows desktop. Console show provides the console and a paper tape record of calculations, which can be cut and pasted to a word processor like notepad. Also, program is good example of namespace.
# autoindent from ased editor # program " 2 Line Calculator in Namespace" # written on Windows XP on eTCL # working under TCL version 8.5.6 and eTCL 1.0.1 # TCL WIKI , 25may2011 namespace path {::tcl::mathop ::tcl::mathfunc} package provide calculatorliner 1.0 namespace eval liner { proc initdisplay {} { pack [entry .e -textvar e -width 50 ] bind .e <Return> {catch {expr [string map {/ *1./} $e]} res; set e $res} ;# RS & FR }} proc linershell {} { namespace import liner::* liner::initdisplay .e configure -bg palegreen .e configure -fg black .e configure -font {helvetica 50 bold} .e configure -highlightcolor tan -relief raised -border 30 focus .e button .b -text clear -command {set e ""} button .c -text exit -command {exit} pack .b .c -side left -padx 5 . configure -bg palegreen wm title . "Suchenwirth 2 Line Calculator" } console show linershell
In some of the Sumerian literature, the constants for gold are called tube of gold or kus of gold, which possibly refer to a wire or rod. The Sumerians were experts at gold wire jewelery and used wire etc for trade in the early days. For example, the gold constant was 1:48 or decimal 108 in unspecified units. From modern estimates of density, a gold rod of 1mm diameter about a cubit (50cm) would have 0.15158 modern grams per cubit or 0.904 gin per cubit. There were about 8.3 metric grams in a Sumerian shekel or gin. Prospective formula is alpha times circumference squared equals 2 sila. sexagesimal 4:48 or decimal 288 is reciprocal 0;0,12,30 or decimal 12/3600+30/216000. alpha*circumference squared eguals 2 sila. height is circumference * sqrt(thickness/alpha) 1mm diameter silver wire of one cubit length .7854 * 10.5 gm/cc = 0.08246 gm/cm 0.08246 gm/cm, mass/length 0.08246 gm/cm (1gin/8.33grams) (49.7 cm/cubit) = 0.491 gin/cubit or sexagesimal 0;30 gin/cubit 288/6300=0.04571 gin/nindan 12*288/6300 equals 0.548 gin/cubit constant has units nindan*nandin/( volume in sar) formula is area*density=mass/lengthIn some of the Sumerian literature, the constant for gold are called tube of gold or kus of gold, which possibly refer to a wire or rod. The Sumerians were experts at gold wire jewelry and used wire for trade in the early days. Several remaining tablets give coefficients for the metals and the thickness of coefficient. For example, the gold constant was 1:48 or decimal 108 in unspecified units. From modern estimates of density, a gold rod of 1mm diameter about a cubit (50cm) would have 745.75 modern grams per cubit or 82 Sumerian grains per cubit. Normally the Sumerians measured gold, silver, and electrum in shekels or gin. There were about 8.3 metric grams in a Sumerian shekel or gin. The thickness of log coefficient is sexagesimal 4:48 or decimal 288. The reciprocal thickness of log coefficient is 0;0,12,30 or fraction 12/3600+30/216000, or decimal 0.003472 . An ancient math problem ref. Thoureau Dangin helps define the log thickness coefficient on a cylinder as the thickness of log coefficient (alpha) times circumference squared ( 0;25 or fraction 25/3600) equals the answer ( 2 sila). From inference on the math problem, the units of alpha are sila/(nindan*nindan) or volume per length*length. This method sets up a reference unit on the cylinder such that one nindan of the cylinder length approximates 2 sila. For example, the ratios for a half nindan length would be 1/2 nindan/1 nindan is as 1 sila to 2 sila. This method would have a possible use in breweries, a nindan stick and the circumference of the vat could used to find volume in a vat of beer. Continuing further with other uses for the thickness of log coefficient, the volume per length times density or mass per volume gives mass per length. Dividing the thickness constant by the gold constant gives the gin per unit length or gin per nindian. Multiplying by twelve gives the gin per cubit.Although subject to interpretation of Sanskrit text, the Sanskrit number words were used in Vedic formulas which are prologues to the atomic theory. #developed from instances of zero/error handling in the calculators on this TCLwiki.For example, an attempt to divide by zero will produce an error (1/0). In numerical analysis, erratic conditions can develop from subtracting a set of nearly equal numbers or very small numbers approaching precision limits (1.002-1.001 or 0.00002 - 0.00001 or [1.002-1.001]/[0.00002 - 0.00001] ). If divide by zero is a problem error, sets of numbers may contain zero values or numbers that approach zero value from the negative side (eg. 0.0001,0.0002,-0.0001,0.0003). Also, clipping,quantization, or reduction of real numbers may produce zero. values ( eg. 0.0001 at precision .02 clips to zero).Errors can be avoided using control ps for testing division by zero, offsetting numbers from zero, deleting zeros from sets of numbers, or atleast warning the operator that the calculations are approaching erratic conditions.For the push buttons, the recommended procedure is push testcase and fill frame, change entries , push solve, and then push report. Report allows copy and paste from console, but takes away from computer "efficiency".In planning any software, there is a need to develop testcases.
Testcase 1.
operation result 1/0 Inf, defined as error condition here 0/0 0/1 zero, defined as correct TestcaseThe Indian astronomy texts of 620 CE. used multiple Sanskrit words for zero (and numbers 1-9). The Sanskrit aternate words for zero were kha,ambara,akasa,antariksa,gagana,abhra,viyat,nabhas,sunya,bindu.The Sanskirt word sunya (void) is more common in the online wordlists. In transliterated Sanskrit , the decimal number 1000 could be expressed as left to right (0001) viyad(sky or zero)/ambar(atmosphere or zero)/akasa(space or zero)/eka(1). #Trying to find some earlier estimates of atomic theory from other cultures. The Svetasvatara Upanisad of Vedic literature indicated an atman was one ten thousanth of the diameter of a human hair, expressed as (1/00)*(1/100). or 10-4 .A human hair averages 80 microns or 8E4 nanometers.An atman would be 8E4/1E-4 or 8 nanometers. Since an insulin molecule is 5 nanometers and a hemogoblin molecule is 6 nanometers, an atman of 8 nanometers compares to human molecules within an order of magnitude.Possibly, the Sanskrit word atman (soul) was derived from atman (breath) and in some texts the root word ama (mother) seems associated or used as meaning soul. The Sumerians used oil films in bowls for divination purposes under tutoring of gods Enhil, Enki, and Ea, ref WG. Lambert (Enmeduranki,pg115). Early reference to atomic theory in English.It is as easy to count atomies as to resolve the propositions of a lover. Shakespear,As You Like It " 1590 CE. This Shakespear quote is believed to be derived from the Roman Lucretius, 20 CE.
set sanskritword "1 2 3 4 5 6 7 8 9 0 , . / +"eka dvi tri catur panca sat sapta asta nava sunya , . / + 1% dvinavaambarasatambarapancanavaastaasta 1%Oil Molecule Length, Slot Calculator Example \This page is under development. Comments are welcome, but please load any comments in the comments section at the middle of the page. Thanks,gold
20 drops = 1 milliliter 1 drop = 0.05 milliliter gold Here is an eTCL script to estimate the length of an oil molecule.
# pretty print from autoindent and ased editor # oil molecule equation # written on Windowws XP on eTCL # working under TCL version 8.5.6 and eTCL 1.0.1 # gold on TCL WIKI , 20jan2012 package require Tk frame .frame -relief flat -bg aquamarine4 pack .frame -side top -fill y -anchor center set names {{} {initial drop volume mm3} } lappend names {diameter oil slick millimeters:} lappend names {number of atoms} lappend names {answer nanometers:} lappend names {answer nanometers:} foreach i {1 2 3 4 5 } { label .frame.label$i -text [lindex $names $i] -anchor e entry .frame.entry$i -width 35 -textvariable side$i grid .frame.label$i .frame.entry$i -sticky ew -pady 2 -padx 1 } proc about {} { set msg "Calculator for Oil Molecule Dimension . from TCL WIKI, written on eTCL " tk_messageBox -title "About" -message $msg } proc pi {} {expr acos(-1)} proc calculate { } { global answer2 global side1 side2 side3 side4 side5 set term1 0 set term2 0 set term3 0 set height [ expr { (4.*$side1*1E6)/([pi]*$side2*$side2) } ] set side4 $height set side5 [ expr { ($side4/$side3)} ] return $side5 } proc fillup {aa bb cc dd ee } { .frame.entry1 insert 0 "$aa" .frame.entry2 insert 0 "$bb" .frame.entry3 insert 0 "$cc" .frame.entry4 insert 0 "$dd" .frame.entry5 insert 0 "$ee" } proc clearx {} { foreach i {1 2 3 4 5 } { .frame.entry$i delete 0 end } } proc reportx {} { global side1 side2 side3 side4 side5 console show; puts " $side1 " puts " $side2 " puts " $side3 " puts " $side4 " puts " $side5 " puts "answer $side5 " } frame .buttons -bg aquamarine4 ::ttk::button .calculator -text "Solve" -command { calculate } ::ttk::button .test2 -text "Testcase1" -command {clearx;fillup .005 60. 12. 1.76 .17 } ::ttk::button .test3 -text "Testcase2" -command {clearx;fillup .065 220. 12. 2. .17 } ::ttk::button .test4 -text "Testcase3" -command {clearx;fillup .125 280. 12. 2. .17 } : . "Oil Molecule Dimension Calculator "There are coefficients for concave square figures, which are of uncertain shape. These coefficients of transverse length 1 and area coefficient 0;26:26 or 0.43988. For a unit circle inscribed inside a unit square, the total area of the bits in the four corners is (area of square) - (area of circle), 1*1-.5*pi*.25*.25, 1-0.78539, decimal 0.2146. For a possible area formula, area is constant*transverse*transverse, 0.43988*1*1 or 0.43988 area units. Hence, eight of the corner bits or 2*0.2146 or 0.4292 would be closer to the formula result. Other coefficients and possible other geometric figures refer to short transverse of 0;33,20 (decimal 0.5555) and long transverse of 0;48 (decimal 0.8) with an area constant of 0;53,20 (decimal 0.8888) and a concave triangle of 0;15. For a possible area formula, area is constant*transverse*transverse, 0.8888*.8*.8 or 0.5688 area units. If a generic formula for is N1 * s. transverse * l. transverse equals area units, then rearranging terms gives N1 equals s. transverse * l. transverse / area units. N1 = 0.5555*0.8/0.5688 or 0.78. The simple shapes such as triangles, rhombus, and trapezoids usually have a factor of 1/2 involved. One can factor out 0.78 as 1.56 * 0.5 or even (pi/2) * 0.5 for a semicircle. What about an hourglass figure with 2 back to back concave?Some of the ship constants range from sexagesimal 0:05 to 0:12 or decimal fractions 5/60 to 13/60. The reciprocal constant for the Akkadian long ship (elippi or elonga type) was listed as 0;07:13.The ship constant times ship length cubed gives ship volume. The ship constant times ship length cubed times density gives ship mass (eg. cargo mass).In planning any software, it is advisable to gather a number of testcases to check the results of the program.
pseudocode: enter ship length,ship constant,density pseudocode: ship constants of 5/60, 6/60, & 7/60 fractions pseudocode: answers are ship volume, cargo mass pseudocode: go/ no_go condition (7/60)* [(ship length)**3] = 0.1166 cubic units (7/60) *[ 1 cubit**3] * 740 kg/[cubits**3] = 74 kilograms If cargo mass is greater than ship b., flag go/no go .
Testcase 1.The trading ship hull has a length of 18.3 meters, beam of 3.96 meters, and a body height of 1.82 meters. With normal loading, the draw is 0.914 meters and the freeboard is 0.914 meters. The displacement is 30,000 kilograms with a hull weight estimated at 10,000 kilograms. The float or potential cargo is 20,000 kilograms. The surface area of the deck was estimated to be 46.4 square meters. The perimeter of the entire deck was estimated to be 44 meters. The arclength on one side of the deckship arclength is 22.65 meters or 45.57 kus, which was used in Sumerian calculations. Allowing for crew, spare consumables, and equipment at 8,000 kilograms, this is believed to be a "20 gir ship" with a cargo of 12,000 kilograms, 6000 liters (= 20 gir units), or 12,000 rations of grain. The normal ship crew is 30 rowers, 4 steermen, and 3 officers for a forty day cruise. Two rowers each are assigned to a 20 foot oar. Also, there are 2 steering oars at the back of the ship. Under oars alone, the trading ship has a speed of 6000 meters per hour. Under sail alone and ideal conditions, the speed is 160 kilometers per day or average 6600 meters per hour. However, the trading ship is rarely under power at night.
0.5*18.3*3.98=36.4 sq meters deck area 36.4 sq. meters radius figure 12.727922 meters ship arclength is 22.65 meters or 45.57 kus. trading ship is 64 gurs by modern rating formula. liter wheat = 0.78 kg liter barley = 0.62 kg constant*sq. deck area = silas? 7/60 * 371 * 371 = 16058 silas, 16058 silas * 1 gur/300 sila = 53 gur 95 sq.cubits*95 liters sq cubit/300 liters/gur =30 gurs
Testcase 2.The grain storage ship has a length of 29.2 meters, beam of 9.7 meters, and body height of 3.96 meters. The deck area of the grain ship approximates 0.5*29.5*9.7 or 143 sq meters. The grain ship arclength is 36.14 meters or 72.7 kus. The displacement is 245,000 kilograms with a hull weight estimated at 82,000 kilograms. The float or cargo is 163,000 kilograms. The storage ship allows for crew at 6,000 kilograms, spare consumables at 73,000 kilograms, and equipment at 48,000 kilograms. The storage ship is believed to be a "60 gir ship" with a cargo of 36,000 kilograms, 18,000 liters (= 60 gir units), or 36,000 rations of grain. The normal ship crew is 50 rowers, 5 steermen, and 5 officers for a ninety day cruise. Under oars alone, the storage ship has a speed of 4000 meters per hour. Under sail alone and ideal conditions, the speed is 160 kilometers per day or average 6600 meters per hour. There are 2 steering oars at the back of the ship. The grain ship hull has a length of 25 meters, beam of 6 meters, and a body height of 4 meters. With normal loading, the draw is 2 meters and the freeboard is 2 meters.
grain ship: 58.4 cubits 935 sq cubits. 59.1 cubits long grain ship arclength is 36.14 meters or 72.7 kus. grain ship is 585 gur, by modern formula deck area of grain ship approximates 0.5*29.5*9.7 or 143 sq meters constant*sq. deck area = silas? sq cubits deck area 996.8*95literspergur*/300=315.4 gurs 88/.3 92.4690 sq. meters 25 meters =35 cubits deck area = 349 square cubits
Testcase 1., Sumerian coefficients on ships
Testcase 3., Sumerian coefficients at the basket factory
Testcase 4., Sumerian coefficients at the shipyard
Testcase 5., Sumerian coefficients at the bitumin refineryNote: esir-e-a (bitumen watery) measured by barrels ( or jars) of 60 liters eq. esir-had (bitumen dry) measured by weights of 30 kilogram (manload). Any coefficient calculation would have to account for units of wet (ŠE system Š* for wet capacity) or dry (EN system E)In crude oil, the tar fraction from 10-14 %, 5.25 kg to 7.3 kg out of 52.5 kilograms of converted barig 60 liter unit. Babylon sold 40 liters of dry pitch (esir had) for 1 silver piece and 60 liters of construction and waterproofing pitch for 1 silver piece. If price is measure of petroleum fractions for heavy (esir a) fraction and tar fraction, the tar fraction was 40 liters (for a shekel)/ 60 liters (for a shekel) , 2/3, or 40/60 of the heavy esir a fraction. The texts mention both boiling/cooking and (implied) sun dry. Suppose that the Babylon products were derived from successive boiling or sun-dry processes, then a crude production line or process could be outlined: 100 liters crude oil > 85 liter lamp oil > 60 liters construction & waterproofing pitch > 40 liters dry pitch. Starting with 100 per cent, boiling would remove spare water impurities, ( straining) plant matter,gasoline, and naphtha, leaving about 85 percent for lamp fuel (eg. kerosene) and medicine. Further boiilg would remove kerosene and some mineral oil leaving 30 percent of the original crude oil for a heavy oil/pitch fraction (esir a) for waterproofing woven products and construction of floors, walls, and waterproofing bricks. The next stage would boiling.
*mathematical coefficients of bitumen, Paul BRY (01-2002)7
Table of Sumerian Ship Coefficients etc.
the other hand quotas which must have been set in a more esoteric fashion, such as the 15 workdays expended per gur capacity in barge construction (a barge of 30 gur capacity shoud be built with 450 workdays) attested in TCL 5, 5673 (MVN 2, 3 seems to record a quota of ca. 10 days per gur capacity). 4·3) KWU p. 132 c. F. Thureau
Sumerian coefficients at the trades
(under proofreading) The research made some trial calculations with density of bronze from 1 )modern density values and 2) ancient density values from the Old Babylonian coefficients. For the modern density values in kilograms per cubic meters, the values were tin (7176),bronze (8189), and copper (8890). Using r1:1 as the copper:tin ratio, the generic formula was tin_density*(1/(r1+1))+copper_density*(r1/(r1+1)=bronze_density. Multipling (ri+1) term and substituting, 7176+r1*8890=8169+r1*8169, and combining r1*99.3=721. The alloy ratio r1 for the modern bronze density value was 721/99.3, 7.26. (under proofreading
Table 4 , Old babylonian coefficients , comparison metal density
Testcases SectionIn planning any software, it
# pretty print from autoindent and ased editor # Ship length from arclength and ship beam # written on Windows XP on eTCL # working under TCL version 8.5.6 and eTCL 1.0.1 # gold on TCL WIKI , 17jul2013 package require Tk console show proc shiplengthx { a b } { set length [expr { sqrt($a*$a - (16./3.) *$b*$b*.5*.5 )}] return $length } lappend shiplist [ shiplengthx 3 .68 ] lappend shiplist [ shiplengthx 6 1.35 ] lappend shiplist [ shiplengthx 9 2.03 ] lappend shiplist [ shiplengthx 12 2.7 ] lappend shiplist [ shiplengthx 15 3.38 ] lappend shiplist [ shiplengthx 18 4.05 ] lappend shiplist [ shiplengthx 21 4.73 ] lappend shiplist [ shiplengthx 30 6.75 ] lappend shiplist [ shiplengthx 18 5.12 ] puts " $shiplist"
# pseudocode can be developed from rules of thumb. pseudocode: enter triangle height ,triangle width , penny or coin diameter pseudocode: rules of thumb can be 3 to 15 percent off, partly since g..in g..out. pseudocode: packing pennies in equilateral triangle, pseudocode: base of triangle 10 pennies wide. pseudocode: height of triangle 20 pennies tall pseudocode: pennies will be paced in layers equal to width of diameters, non optimal spacing layers diameter of coin, initially 10 coins wide find width of every stack for each stack layers foreach layer {1 2 3 ... N coins high} {calc. coins} pack number of circles in each layer, short of sides, add circles to total, when finished print result need console show set addlayer 0 set level 0 set numberpennies 0 incr $addlayer set level [ expr { $level +$addlayer } ] set width [ expr { 2.* 10.* asin($level/$width) } ] set numberpennies [ expr { $numberpennies+ int($width) } ] pseudocode: need test cases > small,medium, giant within range of expected operation. pseudocode: are there any cases too small or large to be solved? pseudocode: Could this be problem similar to grains on chessboard?
# counting pennies in equilateral triangle # eTCL console example program # written on Windows XP on eTCL # working under TCL version 8.5.6 and eTCL 1.0.1 # gold on TCL WIKI , 18jul2013 package require Tk console show set addlayer 0 set level 1 set numberpennies 0 set width 10 # height is 20 pennies foreach layer {1 2 3 4 5 6 7 8 9 10 11 12 14 16 17 18 19 20} { incr $addlayer 1 set level [ expr { $level +$addlayer } ] set sintarget [ expr { 1.*$level/$width } ] set width [ expr { 2.* 10.* asin($sintarget) } ] set numberpennies [ expr { $numberpennies+ int ($width) } ] } puts " $numberpennies "
# pretty print from autoindent and ased editor # Ship beam from 1/4 ship length # accepts multiplication factor N*list # example as list of Sumerian ship lengths # and dividing by 4 for poss. ship beam # (max width) # written on Windows XP on eTCL # working under TCL version 8.5.6 and eTCL 1.0.1 # gold on TCL WIKI , 24jul2013 package require Tk namespace path {::tcl::mathop ::tcl::mathfunc} console show proc multiplylist { factorx args } { set factor_x " is constant or expression " set args_x " targeted list of numbers " set icount -1 foreach item $args { incr icount lappend result_list [* $factorx $item 1. ] } return $result_list } proc shiplengthx { a b } { set ship_arc_length "a" set ship_beam "b" set ship_length 1 set ship_length [sqrt([* $a $a ]-[* [/ 16. 3.][* $b $b .5 .5 ]])] return $ship_length } puts " ship arc conv to arc meters as 6*N >> [ multiplylist 6 .5 1. 1.5 2. 2.5 3 3.5 5.0 ] " puts " ship beam as .25*L meters >> [ multiplylist .25 2.89 5.79 8.68 11.58 14.48 17.38 20.27 28.96 ]" set ships {2.89 5.79 8.68 11.58 14.48 17.38 20.27 28.96} puts " test of math ops, mean >> set mean [/ [+ {*}$ships] [double [llength $ships]]]"
Schedule for the Gades brick piles, many assumptions.Locally in the Umma tablets , the local temple was called the Shara or Sara. The Sumerian word “Sar,Sa, or Sagina” is a root word meaning king, general, or royal officer in some contexts (ref. sag means head). The temple at Umma is sometimes referred as Sara on the quay, house of Sara, Sara of Umma. The temple was supported by the province of Umma and further received revenues and products locally. While not all the Umma tablets can be dated to the reign of King Amar-suen ( 2046-2038 BCE ), many tablets can be correlated by the local calendar of Umma, seals/names of the project personnel, and by personal names such as Ur-Sara (steward of Sara), Lu-Sara (man servant of Sara), Sara-mutum (woman servant of Sara. There was even a common beer called Sarazi (Sara Beer) and a common ration cereal called Sara-emmer (Sara Wheat).There was an Inanna temple near Zabalum. Men were normally forbidden in the temple (after consecration). Aside from the priestess there were about 60 women singers who served in the temple rites, but probably were housed on the dower estate of Girsana). While not all the bricks can be established as fired, the fired bricks were probably intended as foundation bricks, underground dedication shrine boxes ,and high use floors/thresholds (ref. the Nimintabba temple at Ur and the Inanna temple at Nippur. Some of the fired bricks were probably used in rebuilding the Karzida quay in front of the moon temple. By custom, the buttressed walls of the en-priestess residence were exceptionally thick and the foundation under the walls was extra strong..
Schedule for Gaes bricks, tentative and many assumptionscanal regulators and (hence substantial burnt brick yards) were located at Umma,Girsu,Lagash,Shurruppak,Larsa,and Isin.1.3E6 bricks were used at Isin. 68.6E3 bricks were estimated at Girsu 4.46E5 bricks used at Lagash Rebuild (second reg) 4.325E5 bricks used at Lagash (first reg.)Wages at the Girsu Resthouse and Prison.
In one coefficient list, there was a coefficient for wool (igigubbum-hi-a) as 48 in a base 60 fraction (48/60). The term igigubbum is Akkadian, apparently borrowed from Su, igi-gub-ba ( I see fraction (used as reciprocal)). The term hi-a appears in URIII texts associated with wool and textiles, and means processed wool or processed fleece. In the Nippur lexical lists, the term siki al-hi-a was translated as processed wool (work). A math problem using the wool coefficient is not available, but will try to convert 48 in base 60 into modern decimal units and proportions. In modern terms, the equation is coverage area * reciprocal coefficient equals material. Starting with Sumerian units, the coverage area in sar units times reciprocal coefficient (60/48) equals weight of wool in gu units, simply sar*(60/48)=gur. Rearranging terms, the proportion is gu units/ sar units = 60/48 or sar units / gu units = 48/60. The proportions hold true if 60 /48 is reduced to 5/4, meaning 5 gus of wool equals 4 sar of woven cloth. 5 gu/4 sar equals (5*60*0.4977) kg/4*32 sqm, 1.166 kg/sqm, 1166 grams/sqm for wool cloth. This figure of 1166 grams/sqm is probably one weight of wool cloth and probably the Sumerians used other cloth weights also.For the clothing, its easier to use square cubits than sars. A sar equals 144 square cubits, and a gu equals 60 manas. So the 5/4 ration above would equal (5*60)/(4*144), 0.521, rounding 0.5 manas per square cubit. Or the reciprocal was equivalent to 2 square cubits per mana. The 2 mana ration to low status men would be equivalent to 2*2 or 4 sq cubits of woolen cloth. The 3 mana ration to craftswomen would be equivalent to 3*2 or 6 sq. cubits. The 4 mana payment to project overseers would be 4*2 or 8 sq. cubits. The child's garment above would be 3*2 or 6 sq. cubits. The high status female full skirt would be 3.5*2 or 7 sq. cubits. The tug-guz-za long shirt for the high statue male weighed 5.474 manas and was 5.5*2 or 11 sq. cubits. In general, the wool ration and the weight of clothing was a measure of status.
Is Thales theorem easier to use ?Dear uniquename, I noticed you referred to Thales Theorem in one of your posts. I have a question on the use of Thales Theorem by Old Babylonian mathematicians (circa 1900 BCE). By remainders and artifacts in base 60 problems and math coefficients, there appears to be one tradition of Babylonian mathematicians that derived formulas from Thales Theorem and one tradition of Babylonian mathematicians that derived formulas using Pythagoras or sqrt(sum squares). Apparently, the Greek Thales was of the former persuasion in a different era. Especially the dual use of Thales Theorem and sqrt(sum squares) is seen in the Old Babylonian coefficient lists (base 60), ref. Eleanor Robson paper on coefficient lists at Oxford. Supposing no computers or slide rules, is the Thales Theorem easier to use than sqrt(sum squares)? Does the Thales Theorem produce numbers that are easier to factor in simple primes (2,3,5),use in pi multiplication, use in base 10, or surveying land (with equilateral triangles)? Old Babylonians are known to have avoided division and usually numbers 7 and 9 in math problems, as producing numbers difficult to factor in the base 60 system. My email service is erratic lately, please post your reply at the end of my tcl wiki homepage, thanks gold
Gaming solutions or false positions with eTCL calculatorIt is possible to find solutions to equations using the method of false positions (regula falsi) or gaming with the eTCL calculator. Suppose a coating of some known thickness is applied on known surface area and one wished to estimate the coating coefficient. From an initial solution either from the testcases already loaded into the calculator or from order of magnitude calculations, the coating coefficient can be estimated from a series of guesses or false positions in incremented steps. The accuracy of the coefficient solution depends on step size and is usually given as half step size (0.5*step). For example, the solution if found in the series <1, 1000, 2000, 3000, 4000,5000> would have a accuracy of plus or minus 1000/2 or 500. Loosely speaking, this is operating the eTCL calculator in reverse. Finding the coefficient that gives a solution of specified input (2 sides of a surface area) and specified output (volume).This is the gist of an Old Babylonian (OB.) math problem for water irrigation, converted to metric units from clay tablet YBC4186. A cubic water cistern of L/W/H 60/60/60 meters was used to irrigate a square field to a depth of 0.015 meters. The volume of the cistern would be L*W*H, 60*60*60, 2.16E5 cubic meters. What are the dimensions of the field, assuming square field? In terms of a modern algebraic equation, the answer was field area equals L*W*H/D, 60*60*60/0.015, 14.4E6 square meters, or 14.4 square km. Each side of the field would be sqrt(14.4) or 3.8 km. To initialize the eTCL calculator, press testcase 1 and push solve, returning the first test case solution. In the length and width fields, enter 3.8E3 meters and solve should return the correct surface area (14.4E6).Continuing with the OB. water irrigation problem, it is possible to game on a eTCL solution for the coefficient (not in the OB. solution or outside the OB. text). Loading and solving for possible test solutions (regula falsi) as (coefficient = 1000, 2000, 3000, 3500, 4000), the coefficient in the test problem is close to 3400.
RLE (2014-05-20):You have the following statement in almost all of your pages:
-
- Report allows copy and paste from console, but takes away from computer "efficiency".
for {set i 0} {$i < 10000} {incr i} { for {set j 0} {$j < 10000} {incr j} { lset matrix $i $j [ expr { [lindex $matrix $i $j] * [lindex $matris $j $i] } ] puts "$i $j [ lindex $matrix $i $j]" } }This (made up) numerical computation loop above iterates 100,000,000 times. The result is it runs puts 100,000,000 times, which will slow down the loop because puts has to do output (lots of other code to execute to ultimately achieve the "output" operation). Removing that puts (or commenting it out) would speed up that loop by quite a large amount.Also, note that while debugging, sometimes running a puts inside the inner loop to see intermediate results can help you locate a bug or a math error.But there is a difference when it comes to output of results to a human (or another machine). At that point, you have little choice but to perform at least one puts (assuming you appended your result together beforehand). For an "output of results" routine, the point is to perform output, so running puts is not reducing "efficiency", rather it is the whole point of the output routine.With that said, if you were doing something like outputting one character at a time of the result, with a puts per character, then yes, that would have been inefficient.So the "gripe" was most likely not a blanket "this applies to everything" statement, it was much more likely a "given what you have here, you are being inefficient" "gripe". The difference is important. It seems you may have over-generalized the "gripe". gold Text on console puts and efficiency mostly pulled from my wiki pages, 22may2014
Here is some eTCL starter code for cleaning TCL lists of empty, blank, or small length elements. This eTCL code is a compilation of various posts or ideas on this wiki. Sometimes, bare statements and variables in context were swapped and wrapped inside a consistent subroutine for testing. For testing the procs, lists of words or numbers can be generated , screen grabbed, or dumped into a TCL list with many empty, blank, or small elements. The orignal target list is multiple decks of 52 cards with added jokers as {}(empty element) { }(one blank space) {.} {} {;}.
original {2 3 4 5 6 7 8 9 ...{} { } {.} {} {;} {} { } {.}...} routine #2 {2 3 4 5 6 7 8 9 ... { } {.} {;} { } {.} ...} , empty elements removed routine #1 13.0177 microseconds per iteration routine #2 172.884 microseconds per iteration routine #3 8.0467 microseconds per iteration routine #4 6.9682 microseconds per iteration routine #5 1.8228 microseconds per iteration routine #6 4.7997 microseconds per iteration
# written on Windows XP on eTCL # working under TCL version 8.5.6 and eTCL 1.0.1 # gold on TCL WIKI, 15oct2014 # Console program for list of Unicode cards, values, multiple N1 decks # N1*decks one liner from FW # cleanup from RS, working for small lists <25 e., problem with medium >500 e. # idea using regsub statements for blanks from RL # routine5 using statement from H package require Tk namespace path {::tcl::mathop ::tcl::mathfunc} console show global counter set cards { 2 3 4 5 6 7 8 9 10 J Q K A } set list_values { 2 3 4 5 6 7 8 9 10 10 10 10 11 } set jokers { {} { } {.} {} {;} {} { } {.} {} {;} {@} { } {.} {} {;} } set list_cards [ concat $cards $cards $cards $cards $jokers] proc lrepeat_FW {count args} {string repeat "$args " $count} set list_cards [ lrepeat_FW 10 $list_cards ] proc empty_elements_in_list1 {lister} { set new_lister {} foreach item $lister { if { $item != "" } {lappend new_lister $item} } return $new_lister} proc empty_elements_in_list2 {lister} { regsub -all "{}" $lister "" new_lister5 return $new_lister5 } proc empty_elements_in_list3 {lister} { set take_out2 {} foreach item $lister { if { [string length $item ] > 0 } {lappend take_out2 $item}} return $take_out2 } proc empty_elements_in_list4 {lister} { set take_out5 {} foreach item $lister { if { {expr {[string length $item]}} > 0 } {lappend take_out5 $item}} return $take_out5 } proc empty_elements_in_list5 {lister} { set take_out6 {} while {[lsearch $lister {}] >= 0} { set lister [lreplace $lister [lsearch $lister {}] [lsearch $lister {}]] } set take_out6 $lister return $take_out6 } proc cleaner {target args} { set res $target foreach unwant [split $args ] { set res [lsearch -all -inline -not -exact $res $unwant ]} # suchenworth idea return $res } puts " original $list_cards " puts " routine #1 [ empty_elements_in_list1 $list_cards ] " puts " routine #2 [ empty_elements_in_list2 $list_cards ] " puts " routine #3 [ empty_elements_in_list3 $list_cards ] " puts " routine #4 [ empty_elements_in_list4 $list_cards ] " puts " routine #5 [ empty_elements_in_list5 $list_cards ] " puts " routine #6 [ cleaner $list_cards A ] " puts " routine #1 [ time { empty_elements_in_list1 $list_cards} 10000 ] " puts " routine #2 [ time { empty_elements_in_list2 $list_cards } 10000 ] " puts " routine #3 [ time {empty_elements_in_list3 $list_cards } 10000 ] " puts " routine #4 [ time {empty_elements_in_list4 $list_cards } 10000] " puts " routine #5 [ time {empty_elements_in_list5 $list_cards } 10000] " puts " routine #6 [ time {cleaner $list_cards A } 10000] "Bypublic nameon October 13, 2013 Format: Paperback “Life Giving Sword” or Yagyu Family Memorial by Yagyu Munenori has some interest in study of katana, kendo, taichi sword, or taji jian. Alternate English translations by William Scott Wilson in “Life Giving Sword” and Thomas Cleary in “Book of Five Rings” desparately need a glossary for common definition of terms. I have supplied a starter glossary for “Life Giving Sword”. As a common ground between east and west, a Romanji equilvalent text on shujishuriken (paraphrased) terms would be useful. I hope others can make contribution to meaning of shujishuriken and other terms in Life Giving Sword. If we can not build the whole bridge, we can add a few blocks.Glossary of Yagyu-Ryu terms and words in Yagyu Family Memorial text (c. 1632 CE) . Terms below are from 17th century text, not necessarily same as Modern Japanese usage. Takuan Soho, Yagyu Munenori, and Miyamoto Musashi used homonyms, puns, or specialized terms, which are not found in conventional Romanji dictionaries. Romanji dictionaries contain homonyms, which are words that sound the same but have different meanings. Special combined terms in martial arts (Buddhist traditions) are noted by capitals, hyphens, or quotes. bo: wooden staff bocuto: wooden sword bokken: wooden sword, usually heavy wood for exercise. chi: vital energy, or broadly energy from earth and sky. Sometimes in Japanese texts by extension, chi or ki refers to manifested chi or force. The manifested chi in Chinese texts is called jing (muscular power), jinli (martial power), or jin ( combination of emitted chi and muscular power applied to a specific target spot). daiki taiyu: divine transformation. Usually, transformation from potential or resting energy to active motion and force. human, or heaven in Chinese philosophy. chudan: sword held in middle position gedan: sword held tilted down ha: attack hachimaki: headband hakarigoto: "strategy" hakama: pleatted skirt or culottes, usually worn for exercise. hara: navel or belly heihou : “strategy”, literal “dark hidden deception” hiro: color ho:martial art hyori: deception inka: martial arts diploma isshin, “One Mind” isshin itto, “One Heart, One Sword” kan: listening with mind and contemplative insight. ken: sword or used as homonym for “plain sight or ordinary sight” as opposed to contemplative insight (kan). kendo:way of the sword kenjutsu : swordsmanship. kannen:mind should see through one’s emotions or mind should be clear of emotions. kizen:”take initiative” jo: preliminary attack jodan:sword held above forehead kage-ryu: shadow sword style, sometimes refers to following, reacting, and basing actions on opponent’s shadow. Especially, staying outside opponent’s cast shadow until closing for attack. kami: shinto diety or dieties kanshin: seeing with mind or insight. katana: long sword. katsu: refers to attainment of essential nature or “Life-Giving” katsujinken: “Life-Giving-Sword”, sometimes refers to resolution of problems without force. ken: sword kenshogodo: seeing into essential nature. ki: vital energy kiai: focused shouts, loud scream used to disturb opponent. koku: empty space kyusho: vital point kuji: 9 hand signs or mudras used in kendo training. kyu: counter strike majutsu, techniques of invisibility mondo: question and answer in Zen dialogue. mu: “Non-existence”, sometimes refers to Yin side (left) of opponent or hidden side (shadow) of object. mu-kyu: “Non-existence counterstrike”, sometimes refers to circling counterclockwise (in Yin direction) around opponent for one or more paces and attacking the “Non-Existence” (left,Yin) side of the opponent. Here, “Existence” may refer to sword held by right handed swordsman and “Non-Existence” may either refer to empty hand on left side or the palm of the right sword hand viewed from the left (by the opponent). munen muso: (literal) No-Desires, No-Thought muto: “No-Sword”, sometimes refers to resolution of problems without force. Also techniques of unarmed combat. munen: “No-Thought” or refers to actions under suspension of consciousness. mushin, "No-Mind", suspension of consciousness, usually during meditation. mushinjo: suspension of consciousness, usually during meditation. myo (na) : strange, odd, without reason naginata, long spear with heavy blade. nakazumi: “mysterious-sword” is holding sword around navel or hara. nitto ryu: “two-swords-style” ryu: sword style or school satori: "enlightenment" setsuninken: “death-dealing-sword”, sometimes refers to solving problems by force only as opposed to solving problems without force. satsuninto: “death-dealing-sword”, sometimes refers to solving problems by force only as opposed to solving problems without force. seiza:kneeling position for meditation practice sensei:teacher shin: mind shinken: “Real-Sword”. shinken sho-bu: contests with “Real-Sword”. shinku: emptiness of mind shinmyo: “Mysterious”, refers to the combination of mind (shin) and strange outside action (myo). Usually found in combination as “Mysterious-Sword” or implied sword. shinmyoken: “Mysterious-Sword” , refers to the combination of mind (shin) and strange outside action (myo) holding the sword (ken) around navel, just as hara is considered center of being/energy. suigetsu, literally moon on water, refers to keeping 3 pace distance from opponent or out of opponent’s cast shadow. Note: sun and moon both cast shadows. shuji: crosspattern sword block (literal from Sino. characters, hand ji (noun suffix)). Sometimes refers to crosscounterpoint target on body of opponent. shujishuriken: (literal from Sino. characters, hand ji (noun suffix) hand inside see) perception of abilities and intentions. By extension, see inside technique of opponent. Sometimes refers to the 9 healing sounds and ideographs (mudras) used to increase alertness, warmup shoulders, and loosen hands prior to combat. tachi: great sword tsumeru: deflection or block leading to counterstrike. not a hard block. yang: positive energy or active principle . heavenly energy. clockwise movement. yin: negative energy or inactive principle. earthly energy, counterclockwise movement. tai: substance or fundamental property of all things tao: way of philosophy wakizashi: sidearm sword or short sword yari: spear zazen: meditation practice zen: meditation practice towards Self-Realization Katsujinken, “life-giving-sword by Yagyu Munori,C. 1632 CE”.Heiho Kadensho of Yagyu Munenori.Also known as Yagyu Family Memorial text in paraphrased Romanji terms, kk kk k lll jjj kkk bbb bb nn nnn mmm mmm mmm mmm
From a clay tablet, internal and external volumes of a hollow cylinder is equated to squared ratio of radius1*radius1 over radius2*radius2. Not sure about the accurate math derivation and maybe numerical coincidence, but the tablet appears to be using inner volume equals outer volume times radius1*radius1 over radius2*radius2. Hereafter, the paragraph will use the modern decimal notation, PI (3.14...), and carry extra decimal points from the eTCL calculator, whereas the Sumerians used 3 and round numbers. radius1 would be the radius of the hollow and radius2 would be the radius of the outer cylinder. In the tablet, the circumference of the outer cylinder was 1.5 units and the ratio of the inner radius to outer radius would be 1:4. The diameter of the outer cylinder would be 1.5/PI or 0.4774, and radius2 would be 0.4774/2 or .2387. radius1 would be .2387/4 or 0.0597. The height of the cylinder would be 1 unit. Using conventional formulas the volume of the outer cylinder would be 2*PI*radius2*radius2*height, substituting 2*3.14*.2387*.2387*1, 0.3578. The conventional volume of the inner cylinder would be 2*PI*radius2*radius2*height, substituting 2*3.14*.0597*.0597*1, 0.0224. The volume of the hollow cylinder would be outer cylinder minus inner cylinder, 0.3578-0.0224, 0,3354. In squared proportions, the radius1*radius1 over radius2*radius2 would be (1*1)/(4*4),1/16,0.0625. The Sumerians found the inner cylinder vol (hollow) as (1/16) * outer cylinder vol, (1/16) * 0.3578, 0.0224 in modern notation. Not on the tablet, but it follows that the hollow or outer cylinder volume would be (1-1/16)* outer cylinder vol, (15/16)* 0.3578, 0.3354.In Sumerian base60, the factor would be 1/16 or 3/60+45/3600.
set inner_cylinder_a=b*(c*c/d*d)_ [* 0.3578 [/ [* 1. 1. ] [* 4. 4. ] ]] # 0.0223625 set hollow_cylinder_a=b*(c*c/d*d)_ [* 0.3578 [- 1. [/ [* 1. 1. ] [* 4. 4. ]] ]] # 0.3354375 set inner_cylinder_ [ eval expr 2*[pi]*.0597*.0597*1 ] # 0.0224 set outer_cylinder_ [ eval expr 2*[pi]*.2387*.2387*1 ] # 0.3578derivation setup
barge_area =2*((pi*r*r/4)-r*r/2) # 2 times segment of circle for quarter section. barge_area = coefficient * circumference * circumference coefficient = barge_area / ( circumference * circumference) coefficient = ((2*pi*r*r/4)-2*(r*r/2) ) / ( 2*pi*r * 2*pi*r ) coefficient = (1/(4*PI*PI*r*r)*(2*pi*r*r/4) - (1/(4*PI*PI*r*r)* (2*r*r/2) reduction >> coefficient = 1/(8*PI)- 1/( 4*PI*PI ) Sumerian 3 for PI, coefficient = 1/24-1/36 ??? Sumerian text = 2/9 , 2/9 = 0.2222 decimal, base60 value = 13/60+ 20/3600
# derivation or problem of concave square, r=.5, d=1.0, ref Robson and Friberg # square of unit one on side minus 4 quarter circles of radius = 0.5,sumerian pi = 3. set concave_square_coefficient_modern_notation [ eval expr 1. - 4.* (1./4.)*[pi]*.5*.5 ] # decimal answer= 0.21460183660255172 set concave_square_coefficient_babylonian_notation [ eval expr 1. - 4.*(1./4.)* 3.*.5*.5 ] # decimal answer= 0.25, conv .25*60 ??? # above was defining bound for radius=.5, but problem wants quarter circle arc equal one. # coefficient is 4 times area of inscribed quarter_circle with arc of quarter_circle set to 1. set concave_square_coeff [ eval expr (2.*[pi] )**2. - (2.*[pi] )**2. ] Sumerian text = concave_square_coefficient = base60 26_40, 4/9Uses of even powers (eg 2**2 and 3**2). Several Babylonian tables of powers have been published. Some of these tables are equivalent to even powers of prime numbers. For example the table of nines would give numbers, equivalent to even powers of 3. The table of 16 would give equivalent numbers based on even powers of 2. Conjecture is that these tables could have been used to generate Babylonian triplets. Ref, Babylonian Pythagorean Triplets by Michael Fowler.
# following statements can be pasted into eTCL console set aside [expr 3**4-2**6] # 17 set cside [expr 3**4+2**6] # 145 set bside [expr 2*(3**2)*(2**3)] # 144 # 3**4 => 9**2 # 2**6 => 16**1 set aside [expr 3**6-2**12] # 3367 set cside [expr 3**6+2**12] # 4825 set bside [expr 2*(3**3)*(2**6)] # 3456 #3**6 => 9**3 #2**12 => 4093 +> 16**3 # The next even power of 3 would be 3**8 and next step? might be 16**4? # 12709,13500,18548, 2 2 • 3 3 • 5 3 set bside [expr 2*( ** )*( ** )]# set bside [expr 2*(3**3)*(2**8)] # 13500 set cside [expr 125**2+54**2] # 18541 set aside [expr 125**2-54**2] # 12709 #12709 71 179 13500 2 2 3 3 3 5 5 5 18548 2 2 4637
Following one liners need mathops and math libraries edit
proc list_squares { aa bb} { for {set i 1} {$i<=$bb} {incr i} {lappend booboo [ * $i $i 1. ] };return $booboo} (tclprograms) 14 % list_squares 1 5 1.0 4.0 9.0 16.0 25.0 proc listnumbers { aa bb} { for {set i 1} {$i<=$bb} {incr i} {lappend booboo [ expr 1.* $i] };return $booboo} #returns list of integer numbers from aa to bb as reals with decimals,usage [listnumbers 1 5] , answer is 1.0 2.0 3.0 4.0 5.0 proc listfib { aa bb} { for {set i 1} {$i<=$bb} {incr i} {lappend booboo [ int [ binet $i] ] };return $booboo} proc binet { n} {set n [int $n ]; return [int [* [/ 1 [sqrt 5]] [- [** [/ [+ 1 [sqrt 5]] 2 ] $n ] [** [/ [- 1 [sqrt 5]] 2 ] $n ] ] ] ] } # usage, set binet1 [ binet 8],answer 21, removing int's will return real numbers # usage, set fibno [ listfib 1 8 ], answer 1 1 2 3 5 8 13 21 proc fibonacci_approx_for_large_N {n} { set phi [/ [+ 1 [sqrt 5]] 2 ] ; return [/ [** $phi $n ] [sqrt 5 ]] } % proc fibonacci_approx_for_large_N {n} { set phi [/ [+ 1 [sqrt 5]] 2 ] ; return [int [/ [** $phi $n ] [sqrt 5 ]]] } (tclprograms) 8 % [fibonacci_approx_for_large_N 1] invalid command name "0.7236067977499789" (tclprograms) 9 % [fibonacci_approx_for_large_N 2] invalid command name "1.1708203932499368" (tclprograms) 10 % [fibonacci_approx_for_large_N 3] invalid command name "1.8944271909999157" (tclprograms) 11 % [fibonacci_approx_for_large_N 4] invalid command name "3.065247584249853" proc add {args} {return [ ::tcl::mathop::+ 0. {*}$args]}; (tclprograms) 2 % add 12 11 10 9 8 7 6 5 4 3 2 1 78.0 console show package require math::numtheory #namespace path {math::numtheory} namespace path {::tcl::mathop ::tcl::mathfunc math::numtheory } set tcl_precision 17 }
gold This page is copyrighted under the TCL/TK license terms, this license
| http://wiki.tcl.tk/17977 | CC-MAIN-2017-26 | refinedweb | 10,344 | 54.32 |
When a user types the name of your object into an object box, Max looks for an external of this name in the searchpath and, upon finding it, loads the bundle or dll and calls the main() function. More...
When a user types the name of your object into an object box, Max looks for an external of this name in the searchpath and, upon finding it, loads the bundle or dll and calls the main() function.
Thus, Max classes are typically defined in the main() function of an external.
Historically, Max classes have been defined using an API that includes functions like setup() and addmess(). This interface is still supported, and the relevant documentation can be found in Old-Style Classes.
A more recent and more flexible interface for creating objects was introduced with Jitter 1.0 and later included directly in Max 4.5. This newer API includes functions such as class_new() and class_addmethod(). Supporting attributes, user interface objects, and additional new features of Max requires the use of the newer interface for definiting classes documented on this page.
You may not mix these two styles of creating classes within an object.
The namespace for all Max object classes which can be instantiated in a box, i.e.
in a patcher.
Class flags.
If not box or polyglot, class is only accessible in C via known interface
Adds an attribute to a previously defined object class.
Adds a method to a previously defined object class.
m, to respond to the message string
namein the leftmost inlet of the object.
Registers an alias for a previously defined object class.
Wraps user gettable attributes with a method that gets the values and sends out dumpout outlet.
Finds the class pointer for a class, given the class's namespace and name.
Finds the class pointer for a class, given the class's namespace and name.
Frees a previously defined object class.
This function is not typically used by external developers.
Determine if a class is a user interface object.
Initializes a class by informing Max of its name, instance creation and free functions, size and argument types.
Developers wishing to use obex class features (attributes, etc.) must use class_new() instead of the traditional setup() function.
Retrieves the byte-offset of the obex member of the class's data structure.
Registers the byte-offset of the obex member of the class's data structure with the previously defined object class.
Use of this function is required for obex-class objects. It must be called from
main().
Registers a previously defined object class.
This function is required, and should be called at the end of
main().
Define a subclass of an existing class.
First call class_new on the subclass, then pass in to class_subclass. If constructor or destructor are NULL will use the superclass constructor.
Call super class constructor.
Use this instead of object_alloc if you want to call the super class constructor, but allocating enough memory for subclass. | http://cycling74.com/sdk/MaxSDK-6.1.1/html/group__class.html | CC-MAIN-2014-42 | refinedweb | 496 | 66.23 |
CXF Fault for IOException in CamelEoin Shanaghy Nov 2, 2009 10:25 AM
Hello,
I have a camel route with from & to CXF endpoints. If the to: endpoint is inaccessible (IOException), I'd like to get a SOAP fault in the invoker which has a specific fault code. The default behaviour gives me this (CXF 2.2.2.2-fuse in Fuse ESB 4.1, camel 1.6.1.2-fuse):
I'm trying to set up a CXF interceptor which can check if the root cause was an IOException and if so, set the faultcode to something specific which can be understood as "web service down". I set up an outFaultInterceptor on the from: endpoint which gets the SoapFault instance and updates the FaultCode, but the result at the invoker is the same!
Does anyone know how this can be done with CXF interceptors or is it better to do it with a Camel processor somehow?
1. Re: CXF Fault for IOException in CamelPedro Neveu Nov 3, 2009 11:51 AM (in response to Eoin Shanaghy)
Take a look at error handling in camel:
Let me know if you have any questions.
Pedro
2. Re: CXF Fault for IOException in CamelPatrick Fox Nov 3, 2009 12:20 PM (in response to Eoin Shanaghy)
Hi shanaghe,
The following thread might be of use with regards understanding the camel outbound route -.
Perhaps one option is to try is having a processor at the end of the route that manipulates the exception content if the exchange has an exception set.
Best regards
Pat
3. Re: CXF Fault for IOException in CamelEoin Shanaghy Nov 3, 2009 1:01 PM (in response to Patrick Fox)
Thanks for the replies.
Where should the processor be added to be able to read/manipulate the error? I thought the only way to do this would be via an ErrorHandler (as alluded to in the Pedro's reply).
If I try this:
<camel:route>
<camel:from
<camel:to
<camel:process
</camel:route>
the processor is never invoked in the error scenario.
I looked at implementing an ErrorHandler but I couldn't clearly see how to implement a custom ErrorHandlerBuilder which just executes some processing - the API seemed a bit more complicated than required.
public Processor createErrorHandler(RouteContext routeContext, Processor processor)
{
// If I create a processor here and return it, do I need to do anything with the routeContext and processor in the arguments?
}
4. Re: CXF Fault for IOException in CamelPatrick Fox Nov 3, 2009 1:28 PM (in response to Eoin Shanaghy)
Hi
Scratch my last reply - it was an oversight on my part. I don?t think that will work when an exception is thrown.
Best regards
Pat
5. Re: CXF Fault for IOException in CamelPedro Neveu Nov 3, 2009 4:46 PM (in response to Eoin Shanaghy)
Let me know if you got this going. My guess is you'd have to specify handled(false) so that your app will take over from the default dead letter channel. See.
Pedro
6. Re: CXF Fault for IOException in CamelWillem Jiang Nov 3, 2009 8:48 PM (in response to Eoin Shanaghy)
I think there are lots of way to implement your requirement on CXF side.
1. You can add your customer CXF MessageSenderInterceptor to throw the exception that you want.
2. You can also add interceptor which implements the handleFault method (you can replace the fault message as you want ), and add this interceptor before the MessageSenderInterceptor. Your interceptor's handleFault() method will be called, after the MessageSenderInterceptor throw the Fault.
Since CXF client just use ClientOutFaultObserver which only call the ClientCallback method when the exception is thrown, that can explain why your outFaultInterceptor doesn't take effect.
If you want to do it on Camel side, you need go through the ErrorHandler of camel
7. Re: CXF Fault for IOException in CamelEoin Shanaghy Nov 4, 2009 5:08 AM (in response to Willem Jiang)
Thanks, these replies are giving me exactly the information I'm looking for.
I have a solution working but I don't feel it's as neat as njiang's suggestions.
I defined a CXF interceptor which implements handleMessage() and added it as an interceptor in my camel-context:
Then in handleMessasge() I do:
if (rootCause instanceof IOException) {
fault.setFaultCode(mySpecificFaultCode);
}
Njiang, what's the best way to add my interceptor before the MessageSenderInterceptor.? I tried and couldn't crack it. Can I do this in the CXF endpoint definition or does it have to be done in Java code somewhere?
8. Re: CXF Fault for IOException in CamelWillem Jiang Nov 5, 2009 3:24 AM (in response to Eoin Shanaghy)
Here is the code snippet of the interceptor
public class MySenderIntercepor extends AbstractPhaseInterceptor<Message> { public MySenderInterceptor() { super(Phase.PREPARE_SEND); addBefore(MessageSenderInterceptor.class.getName()); } public void handleMessage(Message message) { // do nothing here } public void handleFault(Message message) { // add you code here } }
You can add this interceptor through the spring configuration or java code. Please check this out for more information.
CXFEndpoint definition supports configure the interceptor like this
<cxf:cxfEndpoint <cxf:inInterceptors> <ref bean="logInbound"></ref> </cxf:inInterceptors> <cxf:outInterceptors> <ref bean="logOutbound"></ref> </cxf:outInterceptors> </cxf:cxfEndpoint> | https://developer.jboss.org/message/889374?tstart=0 | CC-MAIN-2019-43 | refinedweb | 867 | 58.92 |
Hi, everybody.
I decided to master 32-bit AT32UC3C1512C, before I worked only with 8-bit at the Assembler. I have a question whether it is possible to work with the controller using only the standard enclosed libraries (not to use an includes for debugging boards)? I wrote the program for management of a light-emitting diode of polling method, but with use of interruptions - it is be impossible. How the minimum program for work with interruption has to look? My code:
#include <avr32/io.h> #include <avr32/uc3c1512c.h> #include <avr32/eic_302.h> #include <avr32/intc_102.h> __attribute__((__interrupt__)) static void _AVR32_EIC_INT3(void) // I need to use external interruption INT3 { // code } //-------------------------------------- int main(void) { while (1) {} }
the compiler writes: warning: '_AVR32_EIC_INT3' defined but not used.
1:From the INTC table Interrupt Request Signal Map, EIC 3 is Group 15, Line 2 which is interrupt request 482 (=15*32+2 = AVR32_EIC_IRQ_3 )
uc3c1512c.h already includes the processor-specific eic_302.h and intc102.h
2:
Interrupt handling in the UC3xxx family is different to that in the AVR8 family.
You must first initialise the INTC module, and then configure the INTC to connect your ISRs to the interrupt sources.
The INTC module has a requirement that an ISR must start within 16kb from the start of the interrupt-vector table (the EVBA), and that can be a problem in the 'C' language.
There are several solutions, but the simplest is to use the INTC 'driver' from the ASF.
intc.c in the ASF has two routines INTC_init_interrupts() and INTC_register_interrupt(,,)
So you will need to do ;
AVR32_INTC_INT0 is the interrupt priority (0=lowest, 3=highest).
Top
- Log in or register to post comments
I.e. if I have correctly understood, my code has to look it seems:
But in that case the compiler gives error messages:
Error: recipe for target 'main.o' failed
Warning: data definition has no type or storage class
Warning: type defaults to 'int' in declaration of 'INTC_init_interrupts'
Error: expected ')' before numeric constant
Error: 'eic_options' undeclared (first use in this function)
Message: each undeclared identifier is reported only once for each function it appears in
Error: 'EIC_MODE_EDGE_TRIGGERED' undeclared (first use in this function)
Error: 'EIC_SYNCH_MODE' undeclared (first use in this function)
Error: 'EXT_INT_EXAMPLE_LINE1' undeclared (first use in this function)
Prompt that I do not so.
Top
- Log in or register to post comments
Those definitions are in intc.h and eic.h You will need to #include them.
If you are using Atmel Studio to develop/manage your project it will be simpler to create a new project, then use the ASF wizard to include the INTC and EIC 'drivers'.
The ASF wizard will also add the 'C' startup code.
Top
- Log in or register to post comments
Thanks for a response. The matter is that work of ASF wizard in Atmelstudio7 requires "User Board template UC3 C0/C1/C2". Further he suggests to connect various modules. Having connected a little (eic, gpio, intc), the compiler still doesn't understand that it:
He writes:
'EIC_MODE_EDGE_TRIGGERED' undeclared (first use in this function).
Whether writing of the minimum piece of the program from "blank sheet", without use of application-oriented files is possible?
Top
- Log in or register to post comments
You can create a minimum project but you will need to add the appropriate C startup-code and tell Studio where your .c and .h files are located.
This works for Studio 6.2 (I do not have a Win7 machine that has Studio 7 installed)
menu File -> New Project -> C/C++ -> GCC executable project
Device Selection -> AT32UC3C1512C -> OK
menu ASF -> ASF Wizard
press "CANCEL" to the "No defined board" dialog.
Select EIC then Add>>
Select INTC then Add>>
Select GPIO then Add>>
Apply
OK
At this point the ASF Wizard has put the .c and .h files into a .src subdirectory, however, a little bit of manual work still needs to be preformed.
menu Project -> xxxx Properties -> Toolchain -> AVR32/GNU linker -> General -> tick(select) the option "Do not use standard start files -nostartfiles"
In the file where your int main(void) is, add #include "src/asf.h"
( If you move your main file to the src subfolder then add #include "asf.h" )
Top
- Log in or register to post comments
Thanks Mikech , for the detailed comment. Following your recommendations I have received such code:
On what the compiler gives 7 error messages, in particular on registration of interruptions:
Error conflicting types for 'INTC_init_interrupts'
Error expected ')' before numeric constant
Having looked at syntax in files "eic.h" and "intc.h", I have changed lines
That there are no mistakes, but on interruption compiler gives the message:
Warning: "_AVR32_EIC_INT3" defined but not used.
I believe that registration of interruptions hasn't taken place?
Top
- Log in or register to post comments
I can't help thinking you have made too large a leap here. Going from AVR8 and assembler to 32bit, C and using a complex support library (ASF) looks like a step too far to me.
I'd take it in steps. For AVR8 first switch from Asm to C. Then, assuming there is ASF support for it start to use ASF in C projects for AVR8. When you understand all the techniques involved then consider the move to C+ASF for 32 bit
Top
- Log in or register to post comments
You want something like
Top
- Log in or register to post comments
in this case the compiler writes:
Warning implicit declaration of function 'INTC_register_interrupts'
Top
- Log in or register to post comments
1: Confirm that in the file asf.h there is a #include for the file intc.h
2: Why ? Where ? and to What ? did you change
Top
- Log in or register to post comments
This including is present. The matter has appeared in syntax:
At such writing of mistakes isn't given.
The last that remained to solve with settings eic_options.
If to write:
There will be a mistake:
Error 'eic_options' undeclared (first use in this function)
And if so:
Error expected identifier or '(' before '[' token
Top
- Log in or register to post comments
Where have you declared what the variable eic_options[] is ?.
You need a eic_options_t eic_options[2]; declaration.
I recommend that you use Atmel Studio to create an example-project, (there are several EIC examples) and examine how the ASF routines are used and what the data-structures are.
The ASF is the software layer that Atmel puts between you and the hardware, to try and insulate you from some of the details at the hardware level.,
you can also manipulate the hardware directly because all the modules, bits, fields and addresses are predefined.
for example,
AVR32_EIC.MODE.int2 = EIC_EDGE_RISING_EDGE; will set the field 'int2' in the EIC MODE register.
AVR32_EIC.mode = 0x04; will put a value into all 32 bits of the MODE register.
Top
- Log in or register to post comments
It what now is necessary for me. Because ASF connects a set of macroes and functions which still need to be found.
Whether there is literature for on this subject? I a lot of things has reconsidered, but in relation to Atmel haven't found.
Top
- Log in or register to post comments
Learning how to use the ASF is not easy, it takes time and effort to understand how it names things and how it does operations.
A technique that worked for me was to look at the description of a module in the datasheet, (for example, the EIC module),
and then look at the ASF code (of the EIC 'driver') to help explain what needs to be done.
Top
- Log in or register to post comments
I didn't think that everything is so difficult. Mikech thanks for all councils they very much were useful to me.
Top
- Log in or register to post comments
Hi All,
I hope I haven't entered this discussion too late... I'm having a semi-similar issue trying to implement external interrupts with an AT32UC3C0256C. I've found several examples of interrupt implementations using ASF, but where I'm stuck is that my project is written in C++ which seems inherently incompatible with ASF, even in Atmel Studio 7 which is the platform I'm using. As poster mikech pointed out, reading through the chip datasheet and then tracing through ASF code is a technique that I've been using to march through several pieces of my code (SPI, UART, Clock and Power Management, PWM, and so on) but the implementation of interrupts seems extraordinarily tedious... so much so that I felt compelled to register as a new user here and ask, "What the heck am I missing??"
I'm not new to microcontrollers (truth told I go all the way back to the 6502 and Z80 days! HA!) nor am I new to Atmel, 32-bit micro's and so on, but this interrupt thing really has me scratching my head. I've spent a fair amount of time searching around (i.e., I'm not a lazy programmer looking for an easy out by way of the hard work of others), but so far haven't come up with much. I'm not looking for ready-made solutions, but if anyone could point me in the right direction I would be truly grateful.
Thanks in advance for any assistance!
Top
- Log in or register to post comments
Here my code which has earned can be still to whom to be useful:
Top
- Log in or register to post comments
Interrupts at the module-level (eg the USART, PWM, EIC, etc) are roughly similar to other processors.
The problem with the UC3 interrupt-handling is in how the hardware actually processes those exceptions/module-interrupts, because it is a requirement that the start of the interrupt-handlers be within 16 kbytes from the start of the 'interrupt-vector-table'. (EVBA + 16kbytes)
There are several techniques to achieve that goal and all have their advantages and disadvantages.
To get the AVR32 GCC compiler/linker to put the interrupt-handlers 'close' together (and close enough to the address has been put into EVBA), requires fiddling with program-sections and linker scripts and might not always work in all cases.
The authors of the ASF INTC 'driver' took a much more general approach that will always work (your interrupt-handlers can be anywhere in memory), but that flexibility and simplicity comes with a run-time penalty because there is extra code and a jump-table between the hardware and your interrupt-handler(s).
The only ASF item that I routinely use is the INTC 'driver' and for my projects I can live with its' code/memory/time overheads.
Top
- Log in or register to post comments
Thank you both for your help! I tried adding INTC and EIC via the ASF Wizard previously but at compile time saw a "generous" helping of errors. I didn't pursue it very much because I read so many posters detailing the fundamental incompatibility of C++ and ASF and figured this pile of errors must have been what everyone was talking about. I tried it again now and quickly got down to the need to remove startup_uc3.s from the newly added ASF support files and now I seem able to compile. I'll start adding more interrupt related code next and see what happens and surely borrow from the code that Alex-A provided above. Hopefully it's all forward progress from here!
Losing one's hair as a natural consequence of age is one thing, losing it because you're literally tearing it out of your head is quite another... Thank you both for preventing that process from getting too far out of hand!
Top
- Log in or register to post comments | http://www.avrfreaks.net/comment/2162106 | CC-MAIN-2018-09 | refinedweb | 1,966 | 60.85 |
my main aim here, is to get the command line arguments, and ideally assign them to variables and them and do something.. trouble is, the program doesnt seem to work..and inspite of 0 errors and warnings when building, when i run the first program, it prints out the number of arguments, but then crashes.
and this is the output from the problem details,and this is the output from the problem details,Code:
#include <stdio.h>
void main (int argc, char *argv)
{
int n;
printf("\n No. of arguments:%d",argc);
printf("%s",argv[1]);
for(n=0;n<argc;n++)
{
printf("\n Argv(%d) is %s",n,argv[n]);
}
}
Problem signature:
Problem Event Name: APPCRASH
Application Name: user_interface.exe
Application Version: 0.0.0.0
Application Timestamp: 4c869bfd
Fault Module Name: msvcrt.dll
Fault Module Version: 7.0.7600.16385
Fault Module Timestamp: 4a5bda6f
Exception Code: c0000005
Exception Offset: 0000d193
OS Version: 6.1.7600.2.0.0.256.48 i forgotten anything? | http://cboard.cprogramming.com/c-programming/129818-program-crashing-despite-no-errors-warnings-printable-thread.html | CC-MAIN-2015-32 | refinedweb | 165 | 67.96 |
This chapter
Finally, we describe the library functions to add rich user interfaces to the software projects, including mouse interaction, drawing primitives, and Qt support.
Note
At the time of writing this book, a new major version of OpenCV (Version 3.0) is available, still on beta status. Throughout the book,.
OpenCV is freely available for download at. This site provides the last version for distribution (currently, 3.0 beta) and older versions.
Note:
The main repository (at), devoted to final users. It contains binary versions of the library and ready-to‑compile.
Tip‑party).
Tip
The fastest route to working with OpenCV is to use one of the precompiled versions included with the distribution. Then, a better choice is to build a fine-tuned version of the library with the best settings for the local platform used for software development. This chapter provides the information to build and install OpenCV on Windows. Further information to set the library on Linux can be found at and.
A good choice for cross‑platform‑platform Qt framework, which includes the Qt library and the Qt Creator Integrated Development Environment (IDE). The Qt framework is freely available at.
Note‑platform tool available at.‑friendly way with its Graphical User Interface (GUI) version.
The steps to configure OpenCV with CMake can be summarized as follows:
Choose the source (let's call it
OPENCV_SRC
Note.
Tip.
Note
The short instructions given to install OpenCV apply to Windows. A detailed description with the prerequisites for Linux can be read at. Although the tutorial applies to OpenCV 2.0, almost all the information is still valid for Version 3.0.
Once OpenCV is installed, the
OPENCV_BUILD\install directory will be populated with three types of files:
Header files: These are located in the
OPENCV_BUILD\install\includesubdirectory and are used to develop new projects with OpenCV.
Library binaries: These are static or dynamic libraries (depending on the option selected with CMake) with the functionality of each of the OpenCV modules. They are located in the
binsubdirectory (for example,
x64\mingw\binwhenCM.
photo: This includes Computational Photography including inpainting, denoising,
HighDynamic).
Note.
In this book,
Note
‑L<location> for GNU compilers) and the name of the library (such as
-l<module_name>).
Note
You can find a complete list of available online documentation for GNU GCC and Make at and.‑platform book,.
Note‑make.
Note
For a detailed description of the tools (including Qt Creator and qmake) developed within the Qt project, visit.
Image processing relies on getting an image (for instance, a photograph or a video fame) and "playing" with it by applying signal processing techniques on it to get the desired results. In this section, we show you how to read images from files using the functions supplied by OpenCV.
The
Mat class is the main data structure that stores and manipulates images in OpenCV. This class is defined in the
core module. OpenCV has implemented mechanisms to allocate and release memory automatically for these data structures. However, the programmer should still take special care when data structures share the same buffer memory. For instance, the assignment operator does not copy the memory content from an object (
Mat A) to another (
Mat B); it only copies the reference (the memory address of the content). Then, a change in one object (
A or
B) affects both objects. To duplicate the memory content of a
Mat object, the
Mat::clone() member function should be used.
Note
Many functions in OpenCV process dense single or multichannel arrays, usually using the
Mat class. However, in some cases, a different datatype may be convenient, such as
std::vector<>,
Matx<>,
Vec<>, or
Scalar. For this purpose, OpenCV provides the proxy classes
InputArray and
OutputArray, which allow any of the previous types to be used as parameters for functions.
The
Mat class is used for dense n-dimensional single or multichannel arrays. It can actually store real or complex-valued vectors and matrices, colored or grayscale images, histograms, point clouds, and so on.
There are many different ways to create a
Mat object, the most popular being the constructor where the size and type of the array are specified as follows:
Mat(nrows, ncols, type, fillValue)
The initial value for the array elements might be set by the
Scalar class as a typical four-element vector (for each RGB and transparency component of the image stored in the array). Next, we show you a usage example of
Mat as follows:
Mat img_A(4, 4, CV_8U, Scalar(255)); // White image: // 4 x 4 single-channel array with 8 bits of unsigned integers // (up to 255 values, valid for a grayscale image, for example, // 255=white)
The
DataType class defines the primitive datatypes for OpenCV. The primitive datatypes can be
bool,
unsigned char,
signed char,
unsigned short,
signed short,
int,
float,
double, or a tuple of values of one of these primitive types. Any primitive type can be defined by an identifier in the following form:
CV_<bit depth>{U|S|F}C(<number of channels>)
In the preceding code
U,
S, and
F stand for
unsigned,
signed, and
float, respectively. For the single channel arrays, the following enumeration is applied, describing the datatypes:
enum {CV_8U=0, CV_8S=1, CV_16U=2, CV_16S=3,CV_32S=4, CV_32F=5, CV_64F=6};
Note
Here, it should be noted that these three declarations are equivalent:
CV_8U,
CV_8UC1, and
CV_8UC(1). The single-channel declaration fits well for integer arrays devoted to grayscale images, whereas the three channel declaration of an array is more appropriate for images with three components (for example, RGB, BRG, HSV, and so on). For linear algebra operations, the arrays of type
float (F) might be used.
We can define all of the preceding datatypes for multichannel arrays (up to 512 channels). The following screenshots illustrate an image's internal representation with one single channel (
CV_8U,
grayscale) and the same image represented with three channels (
CV_8UC3,
RGB). These screenshots are taken by zooming in on an image displayed in the window of an OpenCV executable (the showImage example):
An 8-bit representation of an image in RGB color and grayscale
Note
It is important to notice that to properly save a RGB image with OpenCV functions, the image must be stored in memory with its channels ordered as BGR. In the same way, when an RGB image is read from a file, it is stored in memory with its channels in a BGR order. Moreover, it needs a supplementary fourth channel (alpha) to manipulate images with three channels, RGB, plus a transparency. For RGB images, a larger integer value means a brighter pixel or more transparency for the alpha channel.
All OpenCV classes and functions are in the
cv namespace, and consequently, we will have the following two options in our source code:
Add the
using namespace cvdeclaration after including the header files (this is the option used in all the code examples in this book).
Append the
cv::prefix to all the OpenCV classes, functions, and data structures that we use. This option is recommended if the external names provided by OpenCV conflict with the often-used standard template library (STL) or other libraries.
OpenCV supports the most common image formats. However, some of them need (freely available) third-party libraries. The main formats supported by OpenCV are:
Windows bitmaps (
*.bmp,
*dib)
Portable image formats (
*.pbm,
*.pgm,
*.ppm)
Sun rasters (
*.sr,
*.ras)
The formats that need auxiliary libraries are:
JPEG (
*.jpeg,
*.jpg,
*.jpe)
JPEG 2000 (
*.jp2)
Portable Network Graphics (
*.png)
TIFF (
*.tiff,
*.tif)
WebP (
*.webp).
In addition to the preceding listed formats, with the OpenCV 3.0 version, it includes a driver for the formats (NITF, DTED, SRTM, and others) supported by the Geographic Data Abstraction Library (GDAL) set with the CMake option,
WITH_GDAL. Notice that the GDAL support has not been extensively tested on Windows OSes yet. In Windows and OS X, codecs shipped with OpenCV are used by default (
libjpeg,
libjasper,
libpng, and
libtiff). Then, in these OSes, it is possible to read the JPEG, PNG, and TIFF formats. Linux (and other Unix-like open source OSes) looks for codecs installed in the system. The codecs can be installed before OpenCV or else the libraries can be built from the OpenCV package by setting the proper options in CMake (for example,
BUILD_JASPER,
BUILD_JPEG,
BUILD_PNG, and
BUILD_TIFF).
To illustrate how to read and write image files with OpenCV, we will now describe the showImage example. The example is executed from the command line with the corresponding output windows as follows:
<bin_dir>\showImage.exe fruits.jpg fruits_bw.jpg
The output window for the showImage example
In this example, two filenames are given as arguments. The first one is the input image file to be read. The second one is the image file to be written with a grayscale copy of the input image. Next, we show you the source code and its explanation:
#include <opencv2/opencv.hpp> #include <iostream> using namespace std; using namespace cv; int main(int, char *argv[]) { Mat in_image, out_image; // Usage: <cmd> <file_in> <file_out> // Read original image in_image = imread(argv[1], IMREAD_UNCHANGED); if (in_image.empty()) { // Check whether the image is read or not cout << "Error! Input image cannot be read...\n"; return -1; } // Creates two windows with the names of the images namedWindow(argv[1], WINDOW_AUTOSIZE); namedWindow(argv[2], WINDOW_AUTOSIZE); // Shows the image into the previously created window imshow(argv[1], in_image); cvtColor(in_image, out_image, COLOR_BGR2GRAY); imshow(argv[2], in_image); cout << "Press any key to exit...\n"; waitKey(); // Wait for key press // Writing image imwrite(argv[2], in_image); return 0; }
Here, we use the
#include directive with the
opencv.hpp header file that, in fact, includes all the OpenCV header files. By including this single file, no more files need to be included. After declaring the use of
cv namespace, all the variables and functions inside this namespace don't need the
cv:: prefix. The first thing to do in the main function is to check the number of arguments passed in the command line. Then, a help message is displayed if an error occurs.
If the number of arguments is correct, the image file is read into the
Mat in_image object with the
imread(argv[1], IMREAD_UNCHANGED) function, where the first parameter is the first argument (
argv[1]) passed in the command line and the second parameter is a flag (
IMREAD_UNCHANGED), which means that the image stored into the memory object should be unchanged. The
imread function determines the type of image (codec) from the file content rather than from the file extension.
The prototype for the
imread function is as follows:
Mat imread(const String& filename, int flags = IMREAD_COLOR )
The flag specifies the color of the image read and they are defined and explained by the following enumeration in the
imgcodecs.hpp header file:
enum { IMREAD_UNCHANGED = -1, // 8bit, color or not IMREAD_GRAYSCALE = 0, // 8bit, gray IMREAD_COLOR = 1, // unchanged depth, color IMREAD_ANYDEPTH = 2, // any depth, unchanged color IMREAD_ANYCOLOR = 4, // unchanged depth, any color IMREAD_LOAD_GDAL = 8 // Use gdal driver };
Note
As of Version 3.0 of OpenCV, the
imread function is in the
imgcodecs module and not in
highgui like in OpenCV 2.x.
Tip
As several functions and declarations are moved into OpenCV 3.0, it is possible to get some compilation errors as one or more declarations (symbols and/or functions) are not found by the linker. To figure out where (
*.hpp) a symbol is defined and which library to link, we recommend the following trick using the Qt Creator IDE:
Add the
#include <opencv2/opencv.hpp> declaration to the code. Press the F2 function key with the mouse cursor over the symbol or function; this opens the
*.hpp file where the symbol or function is declared.
After the input image file is read, check to see whether the operation succeeded. This check is achieved with the
in_image.empty()member function. If the image file is read without errors, two windows are created to display the input and output images, respectively. The creation of windows is carried out with the following function:
void namedWindow(const String& winname,int flags = WINDOW_AUTOSIZE )
OpenCV windows are identified by a univocal name in the program. The flags' definition and their explanation are given by the following enumeration in the
highgui.hpp header file:
enum { WINDOW_NORMAL = 0x00000000, // the user can resize the window (no constraint) // also use to switch a fullscreen window to a normal size WINDOW_AUTOSIZE = 0x00000001, // the user cannot resize the window, // the size is constrained by the image displayed WINDOW_OPENGL = 0x00001000, // window with opengl support WINDOW_FULLSCREEN = 1, WINDOW_FREERATIO = 0x00000100, // the image expends as much as it can (no ratio constraint) WINDOW_KEEPRATIO = 0x00000000 // the ratio of the image is respected };
The creation of a window does not show anything on screen. The function (belonging to the
highgui module) to display an image in a window is:
void imshow(const String& winname, InputArray mat)
The image (
mat) is shown with its original size if the window (
winname) was created with the
WINDOW_AUTOSIZE flag.
In the showImage example, the second window shows a grayscale copy of the input image. To convert a color image to grayscale, the
cvtColor function from the
imgproc module is used. This function can actually be used to change the image color space.
Any window created in a program can be resized and moved from its default settings. When any window is no longer required, it should be destroyed in order to release its resources. This resource liberation is done implicitly at the end of a program, like in the example.
If we do nothing more after showing an image on a window, surprisingly, the image will not be shown at all. After showing an image on a window, we should start a loop to fetch and handle events related to user interaction with the window. Such a task is carried out by the following function (from the
highgui module):
int waitKey(int delay=0)
This function waits for a key pressed during a number of milliseconds (
delay >
0) returning the code of the key or
-1 if the delay ends without a key pressed. If
delay is
0 or negative, the function waits forever until a key is pressed.
Another important function in the
imgcodecs module is:
bool imwrite(const String& filename, InputArray img, const vector<int>& params=vector<int>())
This function saves the image (
img) into a file (
filename), being the third optional argument a vector of property-value pairs specifying the parameters of the codec (leave it empty to use the default values). The codec is determined by the extension of the file.
Note
For a detailed list of codec properties, take a look at the
imgcodecs.hpp header file and the OpenCV API reference at.
Rather than still images, a video deals with moving images. The sources of video can be a dedicated camera, a webcam, a video file, or a sequence of image files. In OpenCV, the
VideoCapture and
VideoWriter classes provide an easy-to-use C++ API for the task of capturing and recording involved in video processing.
The recVideo example is a short snippet of code where you can see how to use a default camera as a capture device to grab frames, process them for edge detection, and save this new converted frame to a file. Also, two windows are created to simultaneously show you the original frame and the processed one. The example code is:
#include <opencv2/opencv.hpp> #include <iostream> using namespace std; using namespace cv; int main(int, char **) { Mat in_frame, out_frame; const char win1[]="Grabbing...", win2[]="Recording..."; double fps=30; // Frames per second char file_out[]="recorded.avi"; VideoCapture inVid(0); // Open default camera if (!inVid.isOpened()) { // Check error cout << "Error! Camera not ready...\n"; return -1; } // Gets the width and height of the input video int width = (int)inVid.get(CAP_PROP_FRAME_WIDTH); int height = (int)inVid.get(CAP_PROP_FRAME_HEIGHT); VideoWriter recVid(file_out, VideoWriter::fourcc('M','S','V','C'), fps, Size(width, height)); if (!recVid.isOpened()) { cout << "Error! Video file not opened...\n"; return -1; } // Create two windows for orig. and final video namedWindow(win1); namedWindow(win2); while (true) { // Read frame from camera (grabbing and decoding) inVid >> in_frame; // Convert the frame to grayscale cvtColor(in_frame, out_frame, COLOR_BGR2GRAY); // Write frame to video file (encoding and saving) recVid << out_frame; imshow(win1, in_frame); // Show frame in window imshow(win2, out_frame); // Show frame in window if (waitKey(1000/fps) >= 0) break; } inVid.release(); // Close camera return 0; }
In this example, the following functions deserve a quick review:
double VideoCapture::get(int propId): This returns the value of the specified property for a
VideoCaptureobject. A complete list of properties based on DC1394 (IEEE 1394 Digital Camera Specifications) is included with the
videoio.hppheader file.
static int VideoWriter::fourcc(char c1, char c2, char c3, char c4): This concatenates four characters to a fourcc code. In the example, MSVC stands for Microsoft Video (only available for Windows). The list of valid fourcc codes is published at.
bool VideoWriter::isOpened(): This returns
trueif the object for writing the video was successfully initialized. For instance, using an improper codec produces an error.
Tip
Be cautious; the valid fourcc codes in a system depend on the locally installed codecs. To know the installed fourcc codecs available in the local system, we recommend the open source tool MediaInfo, available for many platforms at.
VideoCapture& VideoCapture::operator>>(Mat& image): This grabs, decodes, and returns the next frame. This method has the equivalent
bool VideoCapture::read(OutputArray image)function. It can be used rather than using the
VideoCapture::grab()function, followed by
VideoCapture::retrieve().
VideoWriter& VideoWriter::operator<<(const Mat& image): This writes the next frame. This method has the equivalent
void VideoWriter::write(const Mat& image)function.
In this example, there is a reading/writing loop where the window events are fetched and handled as well. The
waitKey(1000/fps)function call is in charge of that; however, in this case,
1000/fpsindicates the number of milliseconds to wait before returning to the external loop. Although not exact, an approximate measure of frames per second is obtained for the recorded video.
void VideoCapture::release(): This releases the video file or capturing device. Although not explicitly necessary in this example, we include it to illustrate its use.
In the previous sections, we explained how to create (
namedWindow) a window to display (
imshow) an image and fetch/handle events (
waitKey). The examples we provide show you a very easy method for user interaction with OpenCV applications through the keyboard. The
waitKey function returns the code of a key pressed before a timeout expires.
Fortunately, OpenCV provides more flexible ways for user interaction, such as trackbars and mouse interaction, which can be combined with some drawing functions to provide a richer user experience. Moreover, if OpenCV is locally compiled with Qt support (the
WITH_QT option of CMake), a set of new functions are available to program an even better UI.
In this section, we provide a quick review of the available functionality to program user interfaces in an OpenCV project with Qt support. We illustrate this review on OpenCV UI support with the next example named showUI.
The example shows you a color image in a window, illustrating how to use some basic elements to enrich the user interaction. The following screenshot displays the UI elements created in the example:
The output window for the showUI example
The source code of the showUI example (without the callback functions) is as follows:
#include <opencv2/opencv.hpp> #include <iostream> using namespace std; using namespace cv; // Callback functions declarations void cbMouse(int event, int x, int y, int flags, void*); void tb1_Callback(int value, void *); void tb2_Callback(int value, void *); void checkboxCallBack(int state, void *); void radioboxCallBack(int state, void *id); void pushbuttonCallBack(int, void *font); // Global definitions and variables Mat orig_img, tmp_img; const char main_win[]="main_win"; char msg[50]; int main(int, char* argv[]) { const char track1[]="TrackBar 1"; const char track2[]="TrackBar 2"; const char checkbox[]="Check Box"; const char radiobox1[]="Radio Box1"; const char radiobox2[]="Radio Box2"; const char pushbutton[]="Push Button"; int tb1_value = 50; // Initial value of trackbar 1 int tb2_value = 25; // Initial value of trackbar 1 orig_img = imread(argv[1]); // Open and read the image if (orig_img.empty()) { cout << "Error!!! Image cannot be loaded..." << endl; return -1; } namedWindow(main_win); // Creates main window // Creates a font for adding text to the image QtFont font = fontQt("Arial", 20, Scalar(255,0,0,0), QT_FONT_BLACK, QT_STYLE_NORMAL); // Creation of CallBack functions setMouseCallback(main_win, cbMouse, NULL); createTrackbar(track1, main_win, &tb1_value, 100, tb1_Callback); createButton(checkbox, checkboxCallBack, 0, QT_CHECKBOX); // Passing values (font) to the CallBack createButton(pushbutton, pushbuttonCallBack, (void *)&font, QT_PUSH_BUTTON); createTrackbar(track2, NULL, &tb2_value, 50, tb2_Callback); // Passing values to the CallBack createButton(radiobox1, radioboxCallBack, (void *)radiobox1, QT_RADIOBOX); createButton(radiobox2, radioboxCallBack, (void *)radiobox2, QT_RADIOBOX); imshow(main_win, orig_img); // Shows original image cout << "Press any key to exit..." << endl; waitKey(); // Infinite loop with handle for events return 0; }
When OpenCV is built with Qt support, every created window—through the
highgui module—shows a default toolbar (see the preceding figure) with options (from left to right) for panning, zooming, saving, and opening the properties window.
Additional to the mentioned toolbar (only available with Qt), in the next subsections, we comment the different UI elements created in the example and the code to implement them.
Trackbars are created with the
createTrackbar(const String& trackbarname, const String& winname, int* value, int count, TrackbarCallback onChange=0, void* userdata=0) function in the specified window (
winname), with a linked integer value (
value), a maximum value (
count), an optional callback function (
onChange) to be called on changes of the slider, and an argument (
userdata) to the callback function. The callback function itself gets two arguments:
value (selected by the slider) and a pointer to
userdata (optional).With Qt support, if no window is specified, the trackbar is created in the properties window. In the showUI example, we create two trackbars: the first in the main window and the second one in the properties window. The code for the trackbar callbacks is:
void tb1_Callback(int value, void *) { sprintf(msg, "Trackbar 1 changed. New value=%d", value); displayOverlay(main_win, msg); return; } void tb2_Callback(int value, void *) { sprintf(msg, "Trackbar 2 changed. New value=%d", value); displayStatusBar(main_win, msg, 1000); return; }
Mouse events are always generated so that the user interacts with the mouse (moving and clicking). By setting the proper handler or callback functions, it is possible to implement actions such as select, drag and drop, and so on. The callback function (
onMouse) is enabled with the
setMouseCallback(const String& winname, MouseCallback onMouse, void* userdata=0 ) function in the specified window (
winname) and optional argument (
userdata).
The source code for the callback function that handles the mouse event is:
void cbMouse(int event, int x, int y, int flags, void*) { // Static vars hold values between calls static Point p1, p2; static bool p2set = false; // Left mouse button pressed if (event == EVENT_LBUTTONDOWN) { p1 = Point(x, y); // Set orig. point p2set = false; } else if (event == EVENT_MOUSEMOVE && flags == EVENT_FLAG_LBUTTON) { // Check moving mouse and left button down // Check out bounds if (x > orig_img.size().width) x = orig_img.size().width; else if (x < 0) x = 0; // Check out bounds if (y > orig_img.size().height) y = orig_img.size().height; else if (y < 0) y = 0; p2 = Point(x, y); // Set final point p2set = true; // Copy orig. to temp. image orig_img.copyTo(tmp_img); // Draws rectangle rectangle(tmp_img, p1, p2, Scalar(0, 0 ,255)); // Draw temporal image with rect. imshow(main_win, tmp_img); } else if (event == EVENT_LBUTTONUP && p2set) { // Check if left button is released // and selected an area // Set subarray on orig. image // with selected rectangle Mat submat = orig_img(Rect(p1, p2)); // Here some processing for the submatrix //... // Mark the boundaries of selected rectangle rectangle(orig_img, p1, p2, Scalar(0, 0, 255), 2); imshow("main_win", orig_img); } return; }
In the showUI example, the mouse events are used to control through a callback function (
cbMouse), the selection of a rectangular region by drawing a rectangle around it. In the example, this function is declared as
void cbMouse(int event, int x, int y, int flags, void*), the arguments being the position of the pointer (
x,
y) where the event occurs, the condition when the event occurs (
flags), and optionally,
userdata.
OpenCV (only with Qt support) allows you to create three types of buttons: checkbox (
QT_CHECKBOX), radiobox (
QT_RADIOBOX), and push button (
QT_PUSH_BUTTON). These types of button can be used respectively to set options, set exclusive options, and take actions on push. The three are created with the
createButton(const String& button_name, ButtonCallback on_change, void* userdata=0, int type=QT_PUSH_BUTTON, bool init_state=false ) function in the properties window arranged in a buttonbar after the last trackbar created in this window. The arguments for the button are its name (
button_name), the callback function called on the status change (
on_change), and optionally, an argument (
userdate) to the callback, the type of button (
type), and the initial state of the button (
init_state).
Next, we show you the source code for the callback functions corresponding to buttons in the example:
void checkboxCallBack(int state, void *) { sprintf(msg, "Check box changed. New state=%d", state); displayStatusBar(main_win, msg); return; } void radioboxCallBack(int state, void *id) { // Id of the radio box passed to the callBack sprintf(msg, "%s changed. New state=%d", (char *)id, state); displayStatusBar(main_win, msg); return; } void pushbuttonCallBack(int, void *font) { // Add text to the image addText(orig_img, "Push button clicked", Point(50,50), *((QtFont *)font)); imshow(main_win, orig_img); // Shows original image return; }
The callback function for a button gets two arguments: its status and, optionally, a pointer to user data. In the showUI example, we show you how to pass an integer (
radioboxCallBack(int state, void *id)) to identify the button and a more complex object (
pushbuttonCallBack(int, void *font)).
A very efficient way to communicate the results of some image processing to the user is by drawing shapes or/and displaying text over the figure being processed. Through the
imgproc module, OpenCV provides some convenient functions to achieve such tasks as putting text, drawing lines, circles, ellipses, rectangles, polygons, and so on. The showUI example illustrates how to select a rectangular region over an image and draw a rectangle to mark the selected area. The following function draws (
img) a rectangle defined by two points (
p1,
p2) over an image with the specified color and other optional parameters as thickness (negative for a fill shape) and the type of lines:
void rectangle(InputOutputArray img, Point pt1, Point pt2,const Scalar& color, int thickness=1,int lineType=LINE_8, int shift=0 )
Additional to shapes' drawing support, the
imgproc module provides a function to put text over an image with the function:
void putText(InputOutputArray img, const String& text, Point org, int fontFace, double fontScale, Scalar color, int thickness=1, int lineType=LINE_8, bool bottomLeftOrigin=false )
Qt support, in the
highgui module, adds some additional ways to show text on the main window of an OpenCV application:
Text over the image: We get this result using the
addText(const Mat& img, const String& text, Point org, const QtFont& font)function. This function allows you to select the origin point for the displayed text with a font previously created with the
fontQt(const String& nameFont, int pointSize=-1, Scalar color=Scalar::all(0), int weight=QT_FONT_NORMAL, int style=QT_STYLE_NORMAL, int spacing=0)function. In the showUI example, this function is used to put text over the image when the push button is clicked on, calling the
addTextfunction inside the callback function.
Text on the status bar: Using the
displayStatusBar(const String& winname, const String& text, int delayms=0 )function, we display text in the status bar for a number of milliseconds given by the last argument (
delayms). In the showUI example, this function is used (in the callback functions) to display an informative text when the buttons and trackbar of the properties window change their state.
Text overlaid on the image: Using the
displayOverlay(const String& winname, const String& text, int delayms=0)function, we display text overlaid on the image for a number of milliseconds given by the last argument. In the showUI example, this function is used (in the callback function) to display informative text when the main window trackbar changes its value.
In this chapter, you got a quick review of the main purpose of the OpenCV library and its modules. You learned the foundations of how to compile, install, and use the library in your local system to develop C++ OpenCV applications with Qt support. To develop your own software, we explained how to start with the free Qt Creator IDE and the GNU compiler kit.
To start with, full code examples were provided in the chapter. These examples showed you how to read and write images and video. Finally, the chapter gave you an example of displaying some easy-to-implement user interface capabilities in OpenCV programs, such as trackbars, buttons, putting text on images, drawing shapes, and so on.
The next chapter will be devoted to establishing the main image processing tools and tasks that will set the basis for the remaining chapters. | https://www.packtpub.com/product/learning-image-processing-with-opencv/9781783287659 | CC-MAIN-2021-17 | refinedweb | 4,849 | 50.46 |
I would suggest that the module be bound to Grass.Window and not
Window, as it is easier to _flatten_ the namespace
( import Grass.Window; Window = Grass.Window; del Grass )
than it is to expand it. A possible shorthand for the above
could be "import Grass.Window as Window" .
Also: import Grass.* could import all modules contained in or below
directory (one of: $PYTHONPATH)/Grass.
I'll comment in more detail after I've fully digested your proposal,
but in general, I would hope for a more minimal solution: i.e. just
enough to fix the problems with import, but not any more complicated.
( I'm not saying your proposal ISN'T the minimal solution - but it
doesn't hit me as minimal on the first read. )
- Steve Majewski (804-982-0831) <sdm7g@Virginia.EDU>
- UVA Department of Molecular Physiology and Biological Physics | http://www.python.org/search/hypermail/python-1993/0519.html | CC-MAIN-2013-48 | refinedweb | 143 | 61.53 |
IRC log of tagmem on 2005-01-24
Timestamps are in UTC.
20:02:16 [RRSAgent]
RRSAgent has joined #tagmem
20:02:16 [RRSAgent]
is logging to
20:03:13 [Stuart]
zakim, this is tag
20:03:13 [Zakim]
ok, Stuart; that matches TAG_Weekly()2:30PM
20:03:24 [Stuart]
zakim, who is here?
20:03:24 [Zakim]
On the phone I see Roy_Fielding, Stuart
20:03:25 [Zakim]
On IRC I see RRSAgent, Zakim, Stuart, Chris, DanC, Norm
20:03:44 [Zakim]
+Norm
20:05:06 [Chris]
zakim, dial chris-617
20:05:06 [Zakim]
ok, Chris; the call is being made
20:05:08 [Zakim]
+Chris
20:07:52 [Zakim]
+DanC
20:08:54 [Zakim]
+TimBL
20:09:32 [Stuart]
zakim, who is here?
20:09:32 [Zakim]
On the phone I see Roy_Fielding, Stuart, Norm, Chris, DanC, TimBL
20:09:33 [Zakim]
On IRC I see RRSAgent, Zakim, Stuart, Chris, DanC
20:10:10 [Chris]
Meeting: TAG telcon
20:10:14 [Chris]
Chair: Stuart
20:10:29 [tim-phone]
tim-phone has joined #tagmem
20:10:33 [tim-phone]
if you can from the unforgiving minute get 60 seconds worth of distance run ....
20:10:46 [Chris]
Agenda:
20:10:58 [Chris]
Scribe: Chris
20:12:08 [DanC]
q+ to request an agendum on uri scheme registry reivew, W3C/IETF telcon 27 Jan
20:12:20 [Chris]
Regrets: Paul, Ian
20:12:37 [Chris]
Topic: Agenda review
20:12:47 [DanC]
(side note on review of agenda: this agenda is not exhaustive w.r.t. action items in the group; sigh.)
20:13:00 [Chris]
DC: IETF call
20:13:22 [Chris]
Topic: Next meeting
20:13:28 [Chris]
SKW: Regrets
20:14:06 [Chris]
SKW: transition telcons before new TAG participants terms
20:14:13 [Chris]
TBL: No objection
20:14:28 [Chris]
SKW: VQ agreed to work o agenda for first f2f
20:15:02 [Chris]
NW: Volunteer to chair the telcon next week
20:15:13 [Chris]
RF: Volunteer to scribe next week
20:15:33 [Chris]
Topic: approve agenda
20:15:50 [Chris]
SKW: Did not note that we accepted minutes of previous meeting
20:15:58 [DanC]
yes they do, stuart: "Minutes of 20 Dec 2004 accepted." --
20:16:02 [Chris]
NW: No objection
20:16:07 [Chris]
TBL: Seconded
20:16:16 [Chris]
RESOLVED; accept minutes of last meeting
20:16:40 [Stuart]
20:16:44 [Chris]
Topic: public discussion of extensibility and versioning
20:16:55 [Chris]
20:17:02 [Chris]
Noahs email
20:17:40 [Chris]
SKW: which list - schema-dev, www-tag, etc
20:18:13 [Chris]
CL: Asking ppl to subscribe to www-tag gets them a high volume list; better to go on schema-dev
20:18:51 [Chris]
DC: As long as its public, fine with me. if its more general than just schema, should be on www-tag
20:19:13 [Chris]
SKW: So, schema-specific stuff on schema-dev
20:19:41 [Chris]
ACTION Stuart: respond to Noah citing xml-schema-dev as forum for schema specific versioning discussion
20:20:04 [DanC]
(if anybody is seeking a shared forum where both the schema WG and the TAG are obliged to pay attention, we don't yet have one)
20:20:31 [Chris]
SKW: Joint meeting with schema 14 Feb at regular TAG telcon slot
20:20:43 [Chris]
Topic: Tech Plenary
20:21:04 [Chris]
SKW: Net outcome: A single proposed Panel session on theme of Extensibility and Versioning. Paul Downey (BT) is owning the session for TPPC.
20:21:04 [Chris]
Anticipating participation from TAG (volunteers?)and other WG's inc. XML Schema and QA-WG.
20:21:24 [Chris]
SKW: Steve Bratt said just one session
20:22:03 [Chris]
SKW: Perhaps DO, HT, NM on panel?
20:22:22 [Stuart]
20:22:23 [Chris]
Plenary agenda:
20:22:39 [Chris]
20:23:57 [Chris]
CL: I'm interested in Cross-Specifications Test Suites
20:24:07 [Chris]
NW: Interested in XML futures
20:24:36 [DanC]
(I feel similarly to CL re test foo)
20:24:41 [Norm]
Norm has joined #tagmem
20:25:09 [Chris]
Topic: TAG f2f
20:25:20 [Chris]
SKW: VQ is assembling an agenda
20:25:37 [Chris]
... TAG liaisons tracking table started
20:27:32 [Chris]
SKW: little other interest in extensibility outside of XML and schema
20:27:48 [Chris]
20:27:59 [Chris]
DC: Is thuis up to date and maintained?
20:28:06 [Chris]
SKW: Yes, ffeel free to update
20:28:17 [Chris]
s/thuis/this
20:28:23 [Chris]
s/ffeel/feel
20:29:15 [DanC]
(actually, what I asked was: does the page currently know everything stuart knows, and he said yes.)
20:29:16 [Chris]
RF: when are we meeting:
20:29:22 [Chris]
SKW: Mon 9-12
20:29:42 [Chris]
NW: plan to be there, may be slightly delayed'
20:30:22 [Chris]
Topic: QA Review
20:30:39 [Stuart]
20:30:41 [Chris]
CL: my draft
20:31:19 [Chris]
spec is
20:32:30 [DanC]
(yes, it has a pleasant style to it. plenty of whitespace, not horribly long)
20:33:47 [DanC]
(ah... now I see why I didn't read Chris's msg; went to tag, not to www-tag; and yet it's in the technical part of our agenda. disconnect, for me.)
20:35:23 [Chris]
20:36:48 [Zakim]
+Noah_Mendelsohn
20:37:19 [Chris]
its not clear whether the review is public yet, since we have not agreed to it
20:37:33 [Stuart]
ack dan
20:37:33 [Zakim]
DanC, you wanted to request an agendum on uri scheme registry reivew, W3C/IETF telcon 27 Jan and to
20:38:04 [Chris]
DC: seems like a fine review, wish oit was sent to them directly
20:38:58 [Chris]
DC: Not read carefully. Critical to fix the optional conformance bit
20:39:38 [Chris]
(discussion - who owns and umbrella spec, what if its another WG). Cross-spec conformance
20:39:45 [DanC]
(table
)
20:40:49 [Chris]
SKW: Needs to clearly indicate which section is being discussed
20:41:07 [Chris]
SKW: Overal l positive tone not conveyed by tesxt, add a prefix on that
20:41:23 [tim-phone]
timbl notes character set problems with that table.
20:41:26 [Chris]
SKW: Discussion at TP on these comments? CL available
20:41:34 [Chris]
CL: Sure
20:42:13 [Chris]
SKW: Who owns this after Chris turns into a pumpkin?
20:43:39 [Chris]
TBL: Can an external person contribute, or is this a tunnelling out of alumni until their actions are all done or transferred
20:44:22 [Chris]
CL: Does not seem like too much work
20:44:33 [Chris]
TBL: precedent, we invited DO to do similar
20:44:40 [Chris]
CL: OK agreed
20:44:51 [Chris]
SKW: Splendid
20:45:09 [Chris]
SKW: Is this suitable to send as TAG feedback?
20:45:18 [Chris]
RF: No objection
20:45:26 [Chris]
(no objections)
20:45:47 [Chris]
TBL: Abstain, did not get chance to read the comments. Support the TAG sending it
20:46:02 [Chris]
NM: Abstain too, have not reviewed
20:46:19 [Chris]
s/abstain/concurr/g
20:46:51 [DanC]
(I think "abstain" puts a motion at risk of failing due to lack of support, while "concur" does not)
20:47:20 [Chris]
SKW: Support CL
20:47:22 [Chris]
Please send Last Call review comments on this document before that date to www-qa@w3.org, the publicly archived list
20:47:35 [DanC]
I gather we are so RESOLVED.
20:47:36 [Chris]
ACTION Chris: Clean up and submit
20:47:50 [Chris]
RESOLVED: These , cleaned up are TAG comments
20:48:03 [Chris]
Topic: IETF URI Registry
20:48:13 [DanC]
20:48:32 [DanC]
Duplication of provisional URI namespace tokens in 2717/8-bis
20:48:46 [Chris]
DC:
20:48:48 [DanC]
20:49:16 [Chris]
DC: new process drafted, a provisionl and a final registry
20:49:26 [Chris]
... good to cite WebArch
20:49:45 [Chris]
... IRI everywhere is related to this
20:49:59 [tim-phone]
q+
20:50:04 [Chris]
... if you care about this, time is running out to fix/change tings
20:50:32 [Chris]
RF: they are ready to produce another draft
20:50:44 [Chris]
RF: probably best to wait for the new draft
20:51:16 [Chris]
SKW; could have multiple provisional registrations for the same URI scheme?
20:51:24 [Chris]
DC: yes, but not the permanent one
20:52:32 [Chris]
TBL: (scribe missed)
20:52:45 [Chris]
SKW: Larry asked us to review new schemes.
20:53:03 [Chris]
DC: expert review of new schemes as they move to permanent registry
20:53:09 [Chris]
TBL: Who assigns it?
20:53:25 [Chris]
DC: IESG last call, then its allocated
20:53:38 [Stuart]
SKW: Larry asked us to review and comment on revision of the URI scheme registration process.
20:54:20 [Chris]
RF: If anyone raises a non-uniqueness then it would halt the IESG review
20:54:56 [Chris]
RF: Next draft wil make it more clear tat the permanent registry is unique. provisional registrsations that clas with permanent als not allowed
20:55:17 [Chris]
TBL: No warning on provisional clashes?
20:55:51 [Chris]
DC: Any sane (machine readable) registry can produce uniqueness
20:56:11 [Chris]
NM: Early/late registration - late can have an inadvertent clash
20:57:21 [Chris]
DC: 27 Jan IETF/W3C telcon
20:58:13 [Chris]
DC: Next IETF is when??
20:58:15 [DanC]
"6-11 Mar 2005 Minneapolis, MN?
20:58:15 [DanC]
62nd IETF"
20:58:22 [Chris]
... 6-11 March
20:58:34 [Chris]
RF: Its not a WG so no meeting then
20:59:00 [Chris]
Topic: XML Chunk Equality
20:59:25 [Chris]
SKW: Suggested posting as a note, or a finding
20:59:51 [Chris]
SKW: TBL asked for reasons for different types of equality, when to use each one
21:00:52 [DanC]
"ACTION: NDW to make editorial improvements, point to other different schemes, why use them, things to avoid in XML Chunk Equality."
21:01:03 [Chris]
NW: Took some actions to improove the doc in this way. no due date. Not completed yet
21:01:06 [DanC]
--
21:01:40 [Chris]
SKW: So, discuss more once this revision is done
21:02:10 [Chris]
NW: Due date depends on XSL/XQ specification schedule... tell you next week
21:03:43 [DanC]
"pc: good to see when F&O deep= works and when it does not"
21:03:53 [Chris]
TBL: Equality characterized by a number of parameters?
21:04:17 [Chris]
NW: Yes, deep= has options that can be set. Namespace-related options
21:04:45 [Stuart]
Use cases from the Issur raising:
21:04:49 [Stuart]
Cases I am aware of:
21:04:49 [Stuart]
- XML itself uses it for an external entity
21:04:49 [Stuart]
- XML schema has the "Deep equality" issue as to when any two chunks
21:04:49 [Stuart]
are "equal".
21:04:49 [Stuart]
- RDF has a "XML Literal" data type which it handles transparently. It
21:04:50 [Stuart]
needs a notion of when two chunks are the same.
21:04:52 [Stuart]
- XML-DSig signs, and therefore ensures the integrity of, a chunk of XML
21:04:58 [DanC]
(timbl, why are you surprised that RSS feeds don't have namespaces? consumers don't require them. people naturally do the minimum work that achieves their goal.)
21:05:03 [Chris]
TBL: Amazed at how much RSS has no namespace
21:06:00 [Chris]
NW: question is of unused but declared namespaces?
21:06:18 [Chris]
DC: case of two non namespaced docs, equal or not???
21:06:48 [Chris]
F(equal) -> Yes | No | dunno
21:06:55 [DanC]
i.e. did <p> in doc1 mean what <p> in doc2 meant?
21:08:08 [Chris]
NM: (starts to say something interesting, but phone fades)
21:08:47 [Chris]
Topic: Mark Baker issue on WS-Addressing
21:09:06 [DanC]
(the best way to provoke a response is to threaten harm, somehow; i.e. start talking about the next topic, threatining somebody's ability to comment on the previous topic)
21:09:21 [Chris]
21:09:36 [Chris]
DC; Read hoim to say he was happy
21:09:45 [Chris]
s/hoim/him
21:10:25 [Chris]
WS-Addressing SOAP binding & app protocols
21:11:00 [Chris]
DC: (reads from email)
21:12:24 [timbl]
wsa:to
21:12:53 [timbl]
q+
21:13:00 [Chris]
DC: its not a new issue
21:13:29 [Chris]
NM: SOAP will wind up putting the URI where HTTP wants it, but will also be in the SAP header too
21:13:44 [Chris]
... is it a flaw to carry the info in an additional place?
21:14:08 [DanC]
(doesn't seem like a new issue, to me; seems like issue
)
21:14:25 [Stuart]
ack tim
21:14:33 [Stuart]
ack tim
21:14:42 [Chris]
TBL: Arch of the WS-* specs is not yet written.
21:15:13 [Chris]
... identify an endpoint in ws, but actually send it to a different URI of the service, which has some connection, but the sever has a URI
21:15:43 [Chris]
... so its a service end point, and the service can talk about multiple objects
21:15:50 [Chris]
objects and services are distinct
21:16:56 [Chris]
... another achitecture, get on the URI of a book, but behind the scenes its broken down into multiple services, checking financials and stock etc so it looks atomic but i ssplit up behind the scenes
21:17:40 [Chris]
... not clear wheter to support marks issue because its not clear what architecture it is fitting into
21:18:04 [Chris]
... good to involve DO here, finsd how WS folks tend to do this
21:18:18 [Chris]
... may be some defacto or emergent architecture
21:18:45 [Chris]
.... can't say its broken unless we can point to the part that breaks
21:19:08 [Chris]
DC: Prefer to discuss whether to add this as an issue, not the summary of the eventual finding
21:19:15 [Chris]
TBL: Happy to add it to the list
21:19:28 [Chris]
NM: or work it outafter some fact finding first
21:19:37 [Chris]
:)
21:20:28 [Chris]
RF: seems the direction of all ws specs is to be binding neutral, but no statement that a given binding is required
21:20:39 [DanC]
endPointRefs-NN?
21:20:49 [Chris]
... so entirely separate architectures all described as web services
21:20:56 [Chris]
... support adding it as an issue
21:21:22 [Chris]
SKW: TP liaison with WS Addressing
21:22:04 [DanC]
ACTION DanC: edit
to reflect avaialability and interest
21:22:11 [Chris]
NM: Suggest asking Mark Nottingham
21:22:44 [Chris]
SKW: Calls question to add as an issue
21:22:54 [Chris]
DC: endpointRefs-NN
21:23:01 [Chris]
DC: Aye
21:23:12 [Chris]
CL: Concurr
21:23:15 [Chris]
RF: Yes
21:23:21 [timbl]
Aye
21:23:21 [Chris]
NW: Yes
21:23:28 [Chris]
SKW Concurr
21:23:29 [Stuart]
concur
21:23:41 [Chris]
NM: Yes
21:24:01 [Chris]
RESOLVED: New issue endpointRefs-NN
21:24:07 [Chris]
salt NN to taste
21:24:18 [DanC]
(tradition is to announce new issues. I'm not in a position do that)
21:24:23 [DanC]
(easily)
21:24:28 [Chris]
ACTION Stuart: Tell mark Nottingham we added the isse and would like to discuss it
21:24:46 [Chris]
s/mark/Mark
21:24:59 [Chris]
tag-announce and www-tag?
21:25:06 [Chris]
SKW: End of agenda
21:25:11 [Chris]
DC: Seconded :)
21:25:22 [Zakim]
-Roy_Fielding
21:25:29 [Chris]
Adjourned
21:25:44 [Zakim]
-Norm
21:26:19 [Zakim]
-TimBL
21:26:33 [Chris]
rrsagent, bye
21:26:33 [RRSAgent]
I see 4 open action items:
21:26:33 [RRSAgent]
ACTION: Stuart to respond to Noah citing xml-schema-dev as forum for schema specific versioning discussion [1]
21:26:33 [RRSAgent]
recorded in
21:26:33 [RRSAgent]
ACTION: Chris to Clean up and submit [2]
21:26:33 [RRSAgent]
recorded in
21:26:33 [RRSAgent]
ACTION: DanC to edit
to reflect avaialability and interest [3]
21:26:33 [RRSAgent]
recorded in
21:26:33 [RRSAgent]
ACTION: Stuart to Tell mark Nottingham we added the isse and would like to discuss it [4]
21:26:33 [RRSAgent]
recorded in | http://www.w3.org/2005/01/24-tagmem-irc | CC-MAIN-2016-50 | refinedweb | 2,807 | 65.59 |
Okay, so i got my Dashboard working but i need to import a custom font (All of us know the cool font on digital alarm clocks, watches and/or old displays).
How to import this font and actually print updating text?
You know, a Tachometer like km/h or mph
you should use better tags
Answer by Cyb3rManiak
·
Apr 21, 2011 at 12:58 PM
RTFM dude... (Check this out, and see if it helps you)
To add a font to your project you need
to place the font file in your Assets
folder. Unity will then automatically
import it
To add a font to your project you need
to place the font file in your Assets
folder. Unity will then automatically
import it
RTFM? no need to be rude man. you don't like my question? why are you answering then...
He's answering because he's kind. Bio, please try to first figure things out yourself and only then ask questions here..
just fyi - this page is the first result when i search google for "custom font unity"
I wish I could downvote this but I need 100 rep. This answer is just stupid, delete it or edit it please. Now this is one of the highest ranked pages when Googling font + unity. Not sure how this Q&A site works but on the SE network this should get downvoted to oblivion so it would not get ranked on Google.
font + unity
Im just a random passerby from google and i am going to thank the OP for posting this question even if they didn't search the Forums or the Unity Manual first. why? Because if it weren't for him this post wouldn't exist for people like me who are using search to solve their questions...
I got an idea programming community, howabout you think ahead on how answering a "stupid" question can help others in the future who are simply searching for answers. Rather than being assholes to each other for asking "stupid" questions. don't like it? Then don't answer; nobody needs your petty spite and belittlement. Just because you answer doesn't make you a kind person. it's all about how you answer.
Sometimes I wish I could punch the entire programming community for childish attitudes. Many times you guys treat each other like trash; it's discussing and needs to stop.
Answer by GfK
·
Jun 06, 2015 at 05:30 AM
This happens ALL the time on here and paints Unity3D in a very dim light. Genuine question asked, and the first reply is always some sarcastic know-all who doesn't appreciate that neither he nor anybody else knows everything. Even had one recently with a LMGTFY link.
Well I DID Google it, and I ended up here.
Know-alls should bear this in mind in future - preferably before posting.
I shouldn't be upvoting this because it's not an actual answer to the question. But yeah, turns out we're not robots and this isn't anal-yst SO.
Answer by AlphaRed_Studios
·
Sep 28, 2015 at 06:38 AM
@Superrodan lol! I so agree, unity community is the only real downside to the engine workflow.
However, as a bit of a side note, I typed add font to unity out of curiousity and it literally came straight to this forum post. Not everyone wants to read through an entire manual just to find the answer to a simple question like adding a font to unity text. Most of the explanations in the manual are written so robotically it is hard to even get an answer out of it. Most post questions on the forum for that reason not expecting a jerk response like "look it up or read the manual moron!!!" etc. If it bothers you that bad why even waste time and effort replying?? It is only going to drive new potential customers for unity away and give the helpful and polite users (they do exist in this community I promise!) a really bad reputation. Think about how your answers are going to impact the community as a whole guys, and don't just jump into "I know everything there is to know but I refuse to answer because you should have to earn my knowledge because I am holier then thou!" replies.
On the upside to this, and for future unity beginners who are just looking for the simple solution to this question;
To create a custom font select ‘Create->custom font’ from the project window. This will add a custom font asset to your project library.
To add a pre-existing font (created in photoshop or downloaded) simply import it as an asset in the project asset library and it will be ready to select in the font selection source.
For any additional font questions you can go here for more detailed (and robotic) info:
Answer by gumbotron
·
Nov 10, 2015 at 12:59 AM
People that can't be bothered to read the instructions: How do you not understand that when coming to the forum and being upset that someone points you to the manual/wiki/documentation YOU ARE LITERALLY ASKING PEOPLE TO READ THE INSTRUCTIONS TO YOU? Let that sink in. One more time: YOU ARE ASKING STRANGERS TO READ THE INSTRUCTIONS TO YOU BECAUSE YOU ARE TOO LAZY OR STUPID TO MAKE SENSE OF IT YOURSELF ( @AlphaRed_Studios, I'm looking at you and your "reading is hard" attitude).
This is why knowledgeable people can be abrupt on the forums; they've done the work to understand. They've read the manual. They've figured it out, at least somewhat. When you get mad that they point you to the manual, you are saying that you don't care about other people's time and effort that it takes to find and post the info you think you are owed, that your time and problems are more important, and that you can't be bothered to read and figure it out yourself.
Now, I understand that it's slightly embarrassing to have the whole world shown that you can't figure things out on your own and must have information spoon-fed to you (and NO - POSTING TO A FORUM TO ASK A QUESTION THAT IS CLEARLY SPELLED OUT IN EASY TO USE DOCUMENTATION IS NOT FIGURING OUT THINGS ON YOUR OWN), but to follow it up with crying about how hurt your feelings are that someone told you to READ A PARAGRAPH IN THE MANUAL instead of going through the effort, AGAIN, of posting info THAT IS ALREADY POSTED is laughable.
@AlphaRed_Studios Are you for real? No one needs to read an "entire manual" to figure out how to import a font unless they are a moron that can't use search. Robotic? How could "To add a font to your project you need to place the font file in your Assets folder. Unity will then automatically import it." to make it any less robotic or simple? Maybe a :-) or a lolcat pic? Seriously, whining that reading is hard and that someone didn't kiss your hand while reading to you the instructions THAT SHOULD BE THE FIRST PLACE YOU LOOK is beyond pathetic. You really need to grow up and figure out that NO ONE OWES YOU ANYTHING.
@antk @MadMenyo @knuckles209cp @Galactic_Muffin So, yes - this page is found through a Google search. And it tells you, right away, not only that your answer is in the manual but WHERE IN THAT MANUAL TO LOOK. So what's the problem again? When @Galactic_Muffin says "this post wouldn't exist for people like me who are using search to solve their questions." you are stunningly oblivious to the fact that IT IS ALREADY POSTED IN THE MANUAL THAT IS EASILY SEARCHABLE AND WELL DESIGNED. Let me repeat that: IT'S ALREADY POSTED AND HAS BEEN AVAILABLE TO YOU FROM THE MOMENT YOU STARTED USING UNITY. You just have to do what you've already done, but IN A PLACE SPECIFICALLY DESIGNED FOR IT and then not complain about it.
You need a Xanax.
Yes he does...
Shouting at people is rather.
Outline Text From TTF.
2
Answers
Problem with displaying a value digitally
1
Answer
A node in a childnode?
1
Answer
Is there a way to get the size of text?
1
Answer
Changing the font of a tk2dTextMesh in C#
0
Answers | https://answers.unity.com/questions/59753/importing-custom-fonts-into-unity.html | CC-MAIN-2018-43 | refinedweb | 1,413 | 69.92 |
24 March 2013 20:05 [Source: ICIS news]
SAN ANTONIO, Texas (ICIS)--The US has a five-to-10-year window to leverage its shale gas and oil advantage into a manufacturing renaissance, the president of the American Fuel & Petrochemical Manufacturers (AFPM) said on Sunday.
“Other nations are not going to wait for us to get our act together,” said Charles Drevna, who made his comments on the sidelines of the AFPM’s annual International Petrochemical Conference (IPC).
The shale boom has brought about many calls for a revival of US manufacturing after many plants and jobs were moved overseas toward the end of the 20th century due to lower labour and infrastructure costs.
But shale gas and oil can level the playing field in terms of energy and feedstocks in the ?xml:namespace>
The industry is “more than ready” to exploit the advantages brought about from the shale boom, Drevna said.
“We’ve been waiting for this moment for decades,” he said.
Bringing about a manufacturing renaissance powered by shale oil and gas will take time and much investment in infrastructure, from wellheads to crackers to distribution facilities, said James Cooper, AFPM vice president of petrochemicals.
But it can not be done haphazardly, he said – it must be done “slowly and deliberately”.
Conversations must be had with all parties at the table discussing how to use shale gas and oil effectively to bring about that manufacturing renaissance, as well as ensure that they are used in an environmentally conscious manner, Cooper said.
Those conversations need to happen soon, as well as action taken on the
“We don’t have a choice if we are going to take advantage,” he said. “If we fail to do this or let it go fallow, it will hurt | http://www.icis.com/Articles/2013/03/24/9652795/afpm-13-us-has-5-to-10-years-to-create-manufacturing-revival.html | CC-MAIN-2014-23 | refinedweb | 295 | 53.55 |
Join devRant
Pipeless API
From the creators of devRant, Pipeless lets you power real-time personalized recommendations and activity feeds using a simple APILearn More
Search - "halp"
- So today I got fired.
Why?
The Ceo forgot they asked me to take care of some business while he was gone. They went on a trip to get thier butt inflated (quite litterally kim kardashian status) for two months.
Me, A general employee, not a captain, or a division manager.
Turns out I ran the company a lot more efficiently than they did, reducing our man power from 5 staff per task down to one per task.
Not only that increased client retention 78℅
Was let go for overstepping my company roles.
I think they we're just a bit jealous, or ego was too large.
Luckily, one of the division managers took me under one of their teams and is secretly keeping me on until I bust out of this joint.12
-
-
- "Trinidad And Tobago" changed their country name to "Trinidad & Tobago", and the .Net framework reflected that change.
So that's why this unit test is failing.
I GUESS BULLSHIT IS NOW INTERNATIONAL.2
- HUGE FUCKING DILEMMA FOR ME.
I will probably get the chance of choosing a company phone soon (as in, next few days).
Option 1: Android - not allowed to root or anything crazy so I'll have a partly open system but with google tracking fully enabled at all times (most probably).
Option 2: iOS - Also not allowed to jailbreak or anything 'weird' but it's entirely closed source. Although no Google tracking shit.
I honestly have no clue what to choose.
Halp.108
-
-.2
- Dear diary:
It is 2020 and I still don't know shit about docker :3
I don't know how bad this is.....17
- *12
- I'm the worst with color combinations and I want to enable dark mode on the privacy/security blog!
What color combinations (if you have hex codes or something, please share!) would you think would suit the blog?
Halp :P36
-
- I hate asking questions but I need to right now.
Any suggestions for an easy installable email server like mailinabox? Multiple domains is a requirement :)46
- Game dev update: I'm procrastinating the project cuz I need help with stupid background and it doesnt mix well so Im just contemplating existence
But I did this for fun8
- I am in programmer hell today.
Oh great programmers of the universe, lend me your strength so that I do not leave work a shattered soul on this day!5
- > likes linux
> maybe not even install windows on shiny new laptop?
> debian-live.iso
> y u no wifi?
> google: lol apt-get
> but i has no internets...
Why only with Debian and not with literally any other flavor of Linux I've tried, which are all Debian variants?
Halp?19
-
- I am a terrible designer. When i say terrible i mean my designs for website somehow always manage to look like an ugly piece of shit.
Halp?20
- Anybody else has the "gif glitch" issue
#region debug_details
Android 7.1.2 beta
Nexus 5X
#endregion debug_details11
- *squirming in bed*
If it ain't broke don't fix it.. If it ain't broke don-WHAT THE FUCK IS "payment.needed2"??
Calm down, it's just some bad code but it works, you didn't write it, it's not your probl-WHY THE FUCK DO THE IF STATEMENTS HAVE SO MANY DUPLICATED LINES??
Sleep. Just sleep.
- Legit got excited because today...on friday...one of our servers went down.
Why excited?
Tell me, do you know how fun it is to call your admin if he was "able to get it up" just for him to reply that he is having some "performance issues"?
Lmao it's fucking hilarious.
On another nothe, plz halp5
-
- Add a course that teaches people how to write and formulate questions.
So that people don't write questions like, "my code no work, plz halp"
-
-5
- !HALP!
I was just messing around to find a good username. Finally I got frustrated and tried “gary”. Now I am stuck with it for at least next 6 months! HALP @dfox12
- I can't stop the urge to buy Udemy courses. Help!! If I don't stop, I go broke cause I'm still aiming for a job.10
-
- Anyone who has experienced this without falling into desperation deserves a beer, I know I need one.
Colleague: Python says I have an indentation error, halp. *Sends screenshot*
You: okay, it says it's located on file.py line 39. Can I see the related line?
Colleague: *sends a screenshot of the whole file, without numbered lines*
You: ummm, could you send me the related lines tho?😐
Colleague: 😒 yeah. *Resends a screenshot of the error and the whole file...again*
You: I REALLY need you to send me only the function scope to help you cuz I can't visually debug the whole file on a picture.
Colleague: *sends a panned phone picture with an arrow to the function (half of it)*
Plot twist: she's your girlfriend.
[EDIT]
GF: I can't see it, I'll go have a snack.1
- ❤️debugging
to give you an idea of how much work this really was, the problem was that the request wasn't sending in the frontend6
- Things that piss me off in github issue comments:
- "halp it doesn't work!!!" no description, no steps to reproduce, nothing.
- people writing comments in a random ass language that nobody understands so everybody except them has to go through the effort of translating it.
- issue comments that escalate into a meme fest because the issue was linked on reddit or some other platform
-
- My Windows installation has been achting up the last few days. Something with permissions is messing with my gpu it seems.
Of all the errors I have gotten, this must have been the best one5
- Amazon's AWS support sent me an email about a request to support that I sent to billing, saying they sent it to billing. They then said they couldn't help.
I just need them to stop billing me for things I no longer use!3
-
- Interviewed a guy yesterday and asked what languages he knows...
"I'm really, really good at HTML"8
-
- Using torrent for the FIRST TIME IN MY LIFE.
Wish me luck.
Download: less than 1MiB/sec
Upload: 50KiB/sec7
- I'm hoarding free courses on Udemy which I probably won't even watch. I even enrolled in stuff not related to dev, things like meditation and etc...1
- I actually experienced this today:
"Halp, why is this not working?"
$array = [];
$array->key = 'value';
...a few more...1
-
- When you're working on something all day and then the senior dev swoops in and answers your question in 5 minutes.
-
-
-
- Can't sleep because never ending brainstorming/problem solving, any of you guys who suffer the same sleeples nights, and do you have any tips? Was thinking of trying out prescription medications.5
-
- someone please motivate me to code again.
since one month i did nothing but digital art/youtube watching/maths/chatting but i did not code.
the main reason being that all my projects have bugs currently :
- How do you transition from a Windows to Apple keyboard layout? OS X is good and everything but I can't get accustomed to this weird layout. All my shortcuts that I spent years memorizing doesn't work. Halp!9
-
- R.I.P John Doe
He made the mistake of writing unmaintainable code then leaving the team.
I just heard of his passing.
He was brutally beaten to death by the new maintainer. Now the maintainer is behind bars.
And now I've been asked to maintain it.
R.I.P Me as well I guess
Joke of course.7
-
-
- Maybe you guys could help me...
My father just sent me a .xlsxm file (excel + macro file), it's all about horse races and stuff a 60+ years old dude would do :D
The file is pretty neat, but some minors changes needs to be done, but I have no clue where the code is. I found the "macro" part but it's empty, and I'm not surprised since the file itself seems to be generated from C# (Maybe not, I'm not the expert)
Sooo... Can anyone tell me how do I get to this code
-
- Started work at a call center recently, really hoping things work out since it's full time, if you're inclined I'd appreciate some help toward that stress ball, I think I may need it soon...4
-
-
-
- Developing nanosatellite to be launched with foreign company.
The ICD (Interface Control Document, basically guidelines regarding design) is clear, but there are some key points we needed to ask with the launcher.
I've sent email to ask them regarding those questions
Then got a reply saying that it'll be forwarded to the engineering team.
That's it. 2 weeks in, no reply. Tried emailing them again to nudge them, no reply, resent the email the following week.
Still waiting till today.
Please reply me 😂😂😂1
- Company notebook be like:
"1 program still needs to close"
"-----------------------------------"
"(waiting for) Microsoft Outlook"
"Outlook is shutting down"
"-----------------------------------"
"To close the program that is preventing Windows from shutting down, click Cancel, and then close the program"
"-----------------------------------"
FORCE SHUTDOWN, YOU MOFO!3
-?10
- Counted it out... 100k LoC frontend & backend... Not a single automated test. No unit testing, no integration testing, nothing. I've been asked to implement a CI server.
Halp5
-
- .12
-
- Trying to add tiny, 1dp dividers to my Android navigation drawer...this shouldn't be taking me 45 minutes XD2
- That moment you realise you fucked up, but don't have the courage to tell your team leader that you fucked up.. Especially when you're atleast 30% into a project.1
- What do you think of online tests (for hiring) with a mandatory webcam?
The webcam part is making me anxious enough to back out.24
-
- Does anyone know where you could buy O'Reilly books cheaper than they are on their site and possibly cheaper than Amazon? 50$ per book for a student it quite a lot.11
-
-
- Having a hard time finding work. Jack of all trades, master of none. Went to college for a while, but never finished a degree. Mostly self taught and can easily learn on the fly.
Can program, 3d design and model, ins and outs of unreal engine 4, web stuff, can do IT work, knows VR standards and tricks, powerful desktop and powerful laptop, plenty of uhd cameras, knows Android and ios, etc.
Where do I look? What can I apply for? Can I make money on my own? Can I provide a service? How do I sell that?
HALP 😫8
- So, I have my first ever on site interview on Monday for a Mobile Software Developer position.
I’m super excited but also super nervous.
You guys have any tips for not Richard Hendrix-ing the on site interview? 😂
- Trying to make android system rw on a rooted phone, but it doesn't allow me. FML. Hiw am I going to block YouTube ads now?8
-
-
- Have you ever smelled fuckery, like the "the potential answers to my questions are 4 years old and unanswered" kind of fuckery?
Fuck my life.
- awake for 48 hrs already still needs to code because i'm a slave and i won't be able to sleep if i stop but brain is now lagging halp!2
- Hi people.
Did I go south, or I really am stuck in ES6 while ES9 is the thing going on or what is goinnnnnnnng on pls.
halp mah3
-
- I've got an interview invitation as a freelancer. I've never freelanced before. What if I don't know how to do what the client wants me to do? Halp
- Pretty sure I've decided to dedicate to shipping electron applications. The problem is I've only lightly dealt with node, recommendations on where to start
- I'm trying to install ubuntu server 19.04 on a machine that also has windows 10 on it. The SSD is already split into two parts, one is an ntfs partition for windows and the other is free space with no partition. That's where I want to install the server, but the installer doesn't seem to be aware of the windows partition and I don't want to accidentally format it, overwrite it or make it unbootable.
Is it even possible to dual boot with ubntu server 19.04?10
-3
-
- I'm dying when I see a span of code out there in the wild, mixed with everything else. `Can we have some backtick love?`
This is a site for developers. Halp!
- One year anniversary at my company and I find I personally have 4 separate exchange accounts to varying levels of synchronization. Perforce, email, lync Skype and a few others have varying spellings of "Welcome1" as the password.
Every password expiration and reset gradually adds to the slow motion landslide.
IT can't figure out how my accounts are even working in the first place and wont touch it.
Halp.1
- And here i am. Waiting outside, with a literally freezing weather, for my boss, to work on a freaking saturday. Halp!6
- Just got my first internship, unfortunately there were no C++ or Java positions available.
Here I find myself on a front end job using Angular 5 and typescript with practically no experience with web development.
HALP!!!!
Any tips to making this learning process easier?4
-
- To link a html to CSS is it
<link href = "style.css" type="style/css" rel="style sheet"> I need halp11
- Can you guys give me ideas of a side project to do? I finished my last one and I'm feeling hopeless to use the shit ton of things I learned I learn through my jobs5
-
- Hi, I'm currently taking a software dev course and curious to try using linux for software development. There's tons of linux distros and my question is, what's the best or ideal linux distro for it?8
-
- Guys I need your help.
Im a guy used to java development, so used to nice assisting IDEs.
Turns out my boss has a very complex and not very organized server written in Dlang which im supposed to add a semi-complex functionality in.
So far I have a Linux-Mint VM running a docker container able to build the system. Now I'm really not used to editing code without an IDE and all IDEs I tried on windows or Linux dont seem to work (maybe due to minimal knowledge in Linux and D).
Furthest I got was to get Visual Studio set up with Visual D, but it wasnt able to import the dub
project giving weird unsearchable errors.
Is there anyone out there able to get me started with an IDE? The server is on a github-repository, is a dub project and has a few dependencies.
I'm just totally lost.5
-!3
-'ve started having nightmares about someone hijacking my computer and trying to freak me out. Halp.1
-
-?10
-
- Halp meh, plz... I have run across a problem and I have absolutely no idea how to go about solving it...
So basically I need to decrypt a TDES encrypted Azure service bus message. Can be done in a straightforward manner in .NET Framework solution with just your regular old System.Security.Cryptography namespace methods. As per MSDN docs you'd expect it to work in a .NET Core solution as well... No, no it doesn't. Getting an exception "Padding is invalid and cannot be removed". Narrowed the cause down to just something weird and undocumented happening due to Framework <> Core....
And before someone says 'just use .NET Framework then', let me clarify that it's not a possibility. While in production it could be viable, I'm not developing on a Windows machine...
How do I go about solving this issue? Any tips and pointers?12
- I start my new internship in a week. Its Java (springboot), angular, and the most popular testing tech for each. I know some of each, and no testing. PLS HALP. Want to impress.11
- Hey guys, need a lil help from any front-ender, I need to create a chart that allows me to show tooltips for specific timeframes and let's me click them and go to the specific url for that timeframe, I know this isn't SO but from my past experiences, I would rather ask here. I have looked at chart.js and other libraries but I'm not sure If chart.js has those capabilities.
I would like to achieve something like this :...
Any tips are welcomed :)9
- I used vim, Sublime and most recently vscode. I had Webstorm lying around for about 2 months now, and I'd really like to switch to it but it's constant fight, and not one im winning either... :(
Any advice from fellow "devs who rant" ?6
- !Rant
I need your help gaiz. I need an idea for a project. I have to be tech specific here as I am currently learning ASP.Net from my college curriculum. Pls halp.
A coder in need is a coder indeed.12
-
-
-
- We're slowly migrating to VSTS (sigh) from Mantis and SVN for tasks management and code repo.
It's been 4 months now and we still have to move the code from SVN to GIT, asked management when they plan to do that and they still give no ETA, and when asked to make sure our commits stays intact after the transfer I got told "no need for that we're just gonna copypaste the last version of the source code". And most likely the local SVN server we're using is gonna be dismissed.
On top of that, by the way they want to use it, VSTS is being terrible for tracking stuff. I'm so used with other tools at home for some side projects and even though I expressed my concern about VSTS I got ignored over and over...
Bonus (not so) fun fact: branches are something mythic here so everyone else commits straight to master and it's a pain in the ass everytime, because people happen to break things most of the time.
And no, unfortunately this is not a small company.
Send halp please .
- I'm currently trying to design my portfolio site, anyone recommends and tool for this?
I currently use gravit () it's nice but there are some bugs and the tools its offers is quite limited..4
-
-
Top Tags | https://devrant.com/search?term=halp | CC-MAIN-2021-31 | refinedweb | 3,154 | 73.98 |
Firebase Auth provides the ability to use service workers to detect and pass Firebase ID tokens for session management. This provides the following benefits:
- Ability to pass an ID token on every HTTP request from the server without any additional work.
- Ability to refresh the ID token without any additional round trip or latencies.
- Backend and frontend synchronized sessions. Applications that need to access Firebase services such as Realtime Database, Firestore, etc and some external server side resource (SQL database, etc) can use this solution. In addition, the same session can also be accessed from the service worker, web worker or shared worker.
- Eliminates the need to include Firebase Auth source code on each page (reduces latency). The service worker, loaded and initialized once, would handle session management for all clients in the background.
Overview
Firebase Auth is optimized to run on the client side. Tokens are saved in web storage. This makes it easy to also integrate with other Firebase services such as Realtime Database, Cloud Firestore, Cloud Storage, etc. To manage sessions from a server side perspective, ID tokens have to be retrieved and passed to the server.
firebase.auth().currentUser.getIdToken() .then((idToken) => { // idToken can be passed back to server. }) .catch((error) => { // Error occurred. });
However, this means that some script has to run from the client to get the latest ID token and then pass it to the server via the request header, POST body, etc.
This may not scale and instead server side session cookies may be needed. ID tokens can be set as session cookies but these are short lived and will need to be refreshed from the client and then set as new cookies on expiration which may require an additional round trip if the user had not visited the site in a while.
While Firebase Auth provides a more traditional
cookie based session management solution,
this solution works best for server side
httpOnly cookie based applications
and is harder to manage as the client tokens and server side tokens could get
out of sync, especially if you also need to use other client based Firebase
services.
Instead, service workers can be used to manage user sessions for server side consumption. This works because of the following:
- Service workers have access to the current Firebase Auth state. The current user ID token can be retrieved from the service worker. If the token is expired, the client SDK will refresh it and return a new one.
- Service workers can intercept fetch requests and modify them.
Service worker changes
The service worker will need to include the Auth library and the ability to get the current ID token if a user is signed in.
// Initialize the Firebase app in the service worker script. firebase.initializeApp(config); /** * Returns a promise that resolves with an ID token if available. * @return {!Promise<?string>} The promise that resolves with an ID token if * available. Otherwise, the promise resolves with null. */ const getIdToken = () => { return new Promise((resolve, reject) => { const unsubscribe = firebase.auth().onAuthStateChanged((user) => { unsubscribe(); if (user) { user.getIdToken().then((idToken) => { resolve(idToken); }, (error) => { resolve(null); }); } else { resolve(null); } }); }); };
All fetch requests to the app's origin will be intercepted and if an ID token is available, appended to the request via the header. Server side, request headers will be checked for the ID token, verified and processed. In the service worker script, the fetch request would be intercepted and modified.
const getOriginFromUrl = (url) => { // const pathArray = url.split('/'); const protocol = pathArray[0]; const host = pathArray[2]; return protocol + '//' + host; }; self.addEventListener('fetch', (event) => { const requestProcessor = (idToken) => { let req = event.request; // For same origin https requests, append idToken to header. if (self.location.origin == getOriginFromUrl(event.request.url) && (self.location.protocol == 'https:' || self.location.hostname == 'localhost') && idToken) { // Clone headers as request headers are immutable. const headers = new Headers(); for (let entry of req.headers.entries()) { headers.append(entry[0], entry[1]); } // Add ID token to header. headers.append('Authorization', 'Bearer ' + idToken); try { req = new Request(req.url, { method: req.method, headers: headers, mode: 'same-origin', credentials: req.credentials, cache: req.cache, redirect: req.redirect, referrer: req.referrer, body: req.body, bodyUsed: req.bodyUsed, context: req.context }); } catch (e) { // This will fail for CORS requests. We just continue with the // fetch caching logic below and do not pass the ID token. } } return fetch(req); }; // Fetch the resource after checking for the ID token. // This can also be integrated with existing logic to serve cached files // in offline mode. event.respondWith(getIdToken().then(requestProcessor, requestProcessor)); });
As a result, all authenticated requests will always have an ID token passed in the header without additional processing.
In order for the service worker to detect Auth state changes, it has to be
installed typically on the sign-in/sign-up page. After installation, the service
worker has to call
clients.claim() on activation so it can be setup as
controller for the current page.
// In service worker script. self.addEventListener('activate', event => { event.waitUntil(clients.claim()); });
Client side changes
The service worker, if supported, needs to be installed on the client side sign-in/sign-up page.
// Install servicerWorker if supported on sign-in/sign-up page. if ('serviceWorker' in navigator) { navigator.serviceWorker.register('/service-worker.js', {scope: '/'}); }
When the user is signed in and redirected to another page, the service worker will be able to inject the ID token in the header before the redirect completes.
// Sign in screen. firebase.auth().signInWithEmailAndPassword(email, password) .then((result) => { // Redirect to profile page after sign-in. The service worker will detect // this and append the ID token to the header. window.location.assign('/profile'); }) .catch((error) => { // Error occurred. });
Server side changes
The server side code will be able to detect the ID token on every request. This is illustrated in the following Node.js Express sample code.
// Server side code. const admin = require('firebase-admin'); const serviceAccount = require('path/to/serviceAccountKey.json'); // The Firebase Admin SDK is used here to verify the ID token. admin.initializeApp({ credential: admin.credential.cert(serviceAccount) }); function getIdToken(req) { // Parse the injected ID token from the request header. const authorizationHeader = req.headers.authorization || ''; const components = authorizationHeader.split(' '); return components.length > 1 ? components[1] : ''; } function checkIfSignedIn(url) { return (req, res, next) => { if (req.url == url) { const idToken = getIdToken(req); // Verify the ID token using the Firebase Admin SDK. // User already logged in. Redirect to profile page. admin.auth().verifyIdToken(idToken).then((decodedClaims) => { // User is authenticated, user claims can be retrieved from // decodedClaims. // In this sample code, authenticated users are always redirected to // the profile page. res.redirect('/profile'); }).catch((error) => { next(); }); } else { next(); } }; } // If a user is signed in, redirect to profile page. app.use(checkIfSignedIn('/'));
Conclusion
In addition, since ID tokens will be set via the service workers, and service workers are restricted to run from the same origin, there is no risk of CSRF since a website of different origin attempting to call your endpoints will fail to invoke the service worker, causing the request to appear unauthenticated from the server's perspective.
While service workers are now supported in all modern major browsers, some older browsers do not support them. As a result, some fallback may be needed to pass the ID token to your server when service workers are not available or an app can be restricted to only run on browsers that support service workers.
Note that services workers are single origin only and will only be installed on websites served via https connection or localhost.
Learn more about about browser support for service worker at caniuse.com.
Useful links
- For more information on using service workers for session management, check out the sample app source code on GitHub.
- A deployed sample app of the above is available at | https://firebase.google.com/docs/auth/web/service-worker-sessions?hl=vi | CC-MAIN-2019-26 | refinedweb | 1,289 | 50.23 |
Constructor: Constructor is the default method for a class that is created when a class is installed and ensures the proper execution of the roles in the class and its subsections. Angular are preferably the Dependency Injector (DI), analyzes the builder’s components and when creating a new feature by calling the new MyClass() tries to find suppliers that match the builder’s parameter types, resolve them and pass them to similar components.
new MyClass(someArg);
Example:
Output:
6
ngOnInit: OnInit is a life cycle widget called Angular to show that Angular is made to create a component. We have to import OnInit like this to use it (actually using OnInit is not mandatory but it is considered good).
Syntax:
import {Component, OnInit} from '@ angular / core';
and to use it to execute the OnInit method, we should use a section like this:
Example:
Output:
Called Constructor Called ngOnitit method
Note: Class app sales
constructor () { // First called before ngOnInit () }
Oninit () { // Named after the constructor and named after NgOnChanges() }
Use this interaction to apply custom startup thinking after the launch of the admin property. NGOnInit is named after the indexing of the target sites for the first time, and before any of its children are tested. Only once a guide is included.
Difference between ngOnInit and Constructor:
- We mostly use ngOnInit in every startup/announcement and avoid things to work in builders. The constructor should only be used to start class members but should not do the actual “work”.
- So you should use the constructor() to set Dependency Injection and not much. ngOnInit() is a better “starting point” – this is where / when component combinations are solved.
- We use constructor() for all the initialization/declaration.
- It’s better to avoid writing actual work in the constructor.
- The constructor() should only be used to initialize class members but shouldn’t do actual “work”.
- So we should use constructor() to set up Dependency Injection, Initialization of class fields, etc.
- ngOnInit() is a better place to write “actual work code” that we need to execute as soon as the class is instantiated.
- Like loading data from Database — to show the user in your HTML template view. Such code should be written in ngOnInit().
Conclusion:
- Constructor initializes class members.
- ngOnInit() is a place to put the code that we need to execute at very first as soon as the class is instantiated.
Attention reader! Don’t stop learning now. Get hold of all the important DSA concepts with the DSA Self Paced Course at a student-friendly price and become industry ready. | https://www.geeksforgeeks.org/what-is-the-difference-between-constructor-and-ngoninit-in-angularjs/?ref=lbp | CC-MAIN-2021-17 | refinedweb | 425 | 60.75 |
So you want to use jQuery validation, huh?
What is it? Something that was added to the holy jquery site and is an easy way to validate input from users. Now this should in no way take over for server side validation, but it helps to at least catch a few things without having to send anything to the server. So how do ya do it?
Well to start, you need some files:
jquery-1.3.2.js and jquery.validate.js.
Now oddly enough the validation file isn’t hosted on the holy jquery site but how to use it is.
Ok now you have the files, what’s next? Well you need form, and I can do that for you.
So basically it’s a simple form with one input that is required.
jQuery(document).ready ( function() { jQuery("#PrimaryForm").validate ( { errorLabelContainer: "#ErrorDiv", wrapper: "div", rules: { FirstName : { required : true } }, messages: { FirstName: { required : 'First Name is required.' } }, onfocusout : false, onkeyup: false, submitHandler: function(label) { postSubmit(); } } ); }
jQuery("#PrimaryForm").validate
Real simple, just setting the validator to the primary form on the page.
errorLabelContainer: "#ErrorDiv",
This sets the errors to show up in the ErrorDiv. Now this is optional, as you can have it show the errors next to the FirstName text box but personally I think that looks horrible. Setting up the ErrorDiv puts all the errors in one central location and allows for styling the actual div.
rules: { FirstName : { required : true } },
This matches an element with the id of FirstName to the required rule, meaning that FirstName is required. Rocket science.
messages: { FirstName: { required : 'First Name is required.' } },
If you can’t figure this out, I hear circus is hiring for the “World’s Dumbest Person”. You’ll fit in with Jub Jub the Dog Boy.
onfocusout : false, onkeyup: false,
Basically this prevents the validation when leaving the textbox or on every key press. This is just another preference.
submitHandler: function(label) { postSubmit(); }
If the submit is successful, call this method.
But… BUT WHAT IF IT’S AN EMAIL?!??! WHAT WILL I DO???!?!?
Well for one, stop being such a child. And two, look here.
Some what different, as you can see it’s now email and there is one extra requirement in the rules:
rules: { EmailAddress : { email : true, required : true } }, messages: { EmailAddress: { required : 'Yo, email fool.', email : 'So not an email address.' }, },
See? It has nice built in rule for email. Simple.
BUT WHAT IF I NEED A REGULAR EXPRESSION?!??! WHAT WILL I DO???!?!?
I swear if you don’t stop that, I’m turning this post around and going home.
jQuery.validator.addMethod ( "isZipCode", function(value, element) { return value.match(new RegExp(/(^\d{5}$)|(^\d{5}-\d{4}$)/)); } );
Just have to create a method and “add it” to the validator itself. And then there’s the use:
rules: { ZipCode : { required : true, isZipCode : true } }, messages: { ZipCode: { required : 'For the love of me, enter a zip code!.', isZipCode : 'Serioulsy? Do you know what a zip code is?' }, },
Woo hoo right?
Don’t do it… Don’t you yell.
But what if one input depends on another?
Much better. Well that’s not as hard as it may seem and here’s the example.
rules: { InputB : { required : { depends : function(element) { return jQuery('#InputA').val() != "" } } } },
As you can see, you can change how the required method works by adding in a depends handler. Works out pretty well.
Yes I will show you how to make sure two inputs match. I swear you ask for a lot.
rules: { Password : { equalTo : "#ConfirmPassword" }, },
Couldn’t be easier unless I wrote it out for you. Wait, I did.
So here you’ve either learned a bit about jQuery validation or have just spent the last few minutes drooling uncontrollably. Either way, I’m done with this post and you’re left to do whatever it is you do, you sick f—.
Side note: I haven’t actually been to Htmlgoodies since eh college? but wow did that place sell out. How fitting that an introduction to html page now looks like it was designed by someone just starting out… in the 90s.
7 thoughts on “jQuery Validation – How to Use to Get Rid Of Even The Toughest Stains”
Worth a read for the sarcasm alone 🙂
Quiet, you might give the impression this is worth reading at all.
After making a second false input for email, the error message is gone
Can’t recreate that. Details man, details!
Hello,
This article is very informative.
I am a jquery newbie and still learning the ropes. I am using the Jquery validate.js plugin to validate a form and running into issues.
I would greatly appreciate your help/input. Can you please help me resolve the issue.
I have a form where elements can be added/removed dynamically and am having trouble validating the elements that are arrays.
The validation works but the error message is being displayed for the “order_date” elements in all rows, even if they are valid.
The form has two groups of radio buttons with names ( g1 and g2 ). It also has other elements like textboxes and select boxes.
Buttons are defined as :
Yes
No
Yes
No
div2 has elements like:
div4 elements like:
// returns true if the string is a valid date formatted as…
// mm dd yyyy, mm/dd/yyyy, mm.dd.yyyy, mm-dd-yyyy
function isDate(str){
var re = /^(d{1,2})[s./-](d{1,2})[s./-](d{4})$/
if (!re.test(str)) return false;
var result = str.match(re);
var m = parseInt(result[1]);
var d = parseInt(result[2]);
var y = parseInt(result[3]);
var dateString = new Date(str);
var today = new Date();
if(dateString > today) return false;
if(m 12 || y 2100) return false;
if(m == 2){
var days = ((y % 4) == 0) ? 29 : 28;
}else if(m == 4 || m == 6 || m == 9 || m == 11){
var days = 30;
}else{
var days = 31;
}
return (d >= 1 && d 0;
},
‘Format: MM/DD/YYYY. No Future Dates’
);
$(document).ready(function() {
$(“#myForm”).validate({
rules: {
“prodid[]”: {
required: function(){
return $(“input[name=g1][value=yes]:checked”).length > 0;
}
},
“order_date[]”: {
checkdate: true
}
}
});
});
How do I make sure that the error message is displayed only for the invalid element?
Also, Is there a method called “depends” in the validate plugin? If yes, can you please show me an example? I haven’t been able to find anything on the net.
Appreciate your help and thanks in advance.
Cheers,
Mike.
Please ignore my question about “depends”. I was too quick to post that. This article itself has an example using “depends”.
Mike. | http://byatool.com/lessons/jquery-validation-how-to-use-to-get-rid-of-even-the-toughest-stains/ | CC-MAIN-2017-22 | refinedweb | 1,092 | 76.22 |
MathLink.
to have the value
. Unix.
option to configure the class path when Java first launches is not very important. One setting for the" MathLink example program. In Java, it might look like this:
With the default StaticsVisible->False, you would have to call addtwo as
..
Mathematica".:
is equivalent to the more commonly used
. Mathematica".
Method calls can be chained in the Wolfram Language just like in Java. For example, if meth1 returns a Java object, you could write in Java obj.meth1().meth2(). In the Wolfram Language, this becomes
. Note that there is an apparent problem here: the Wolfram Language's
operator groups to the right, whereas Java's dot groups to the left. In other words, obj.meth1().meth2() in Java is really (obj.meth1()).meth2() whereas
in the Wolfram Language would normally be
. (
in this example). These contexts are usually not on $ContextPath, so you do not have to worry that there is a symbol of the same name in the
class that returns the current
object (you cannot create a
object with JavaNew, as
instead of
.
option to ReinstallJava. The value that you supply for the
option is a string that names the desired directories and zip or jar files. This string is platform-dependent; the paths are specified in the native style for your platform, and the separator character is a colon on Unix and a semicolon on Windows. Here are typical specifications.
The default setting for
is Automatic, which means to use the value of the CLASSPATH environment variable. If you set
to something else, then J/Link will ignore the CLASSPATH environment variable—it will not be able to find those classes. In other words, if you use a
specification, you lose the CLASSPATH environment variable. This is similar to the behavior of the
command-line option to the Java runtime and compiler, if you are familiar with those tools.
It is recommended that users avoid the
option. If you need the dynamic control that the
option provides, you should use the more powerful and convenient AddToClassPath mechanism, described in the next section. The most common reason for using the,
/AddOns/Applications, and
that specified a list of extra locations. You could add to this list.
was deprecated in J/Link 2.0, but it still works. One advantage of
over using AddToClassPath is that changes made to
persist across a restart of the Java runtime.
Examining the Class Path
The JavaClassPath function returns the set of directories and jar files in which J/Link will search for classes. This includes all locations added with AddToClassPath or
,_1<<
method of
.
You should think of this
method as being the replacement for Class.forName(). When you find yourself wanting to obtain a
object from a class name given as a string, remember to use
. MathLink MathLink MathLink C API has single functions to put arrays. The Java types long (these are 64 bits), boolean, and String do not have fast MathLink MathLink MathLink programs on most systems). Clearly this is not acceptable. As a first step, you try using
because you are calling the same method many times.
Note that you use fmt as the first argument to
. The first argument merely specifies the class; as with virtually all functions in J/Link that take a class specification, you can use an object of the class if you desire. The
and the JavaObject expression created when you ask for the first element of the
refers to the same object as
.
If you call ReleaseJavaObject[b1], it is not the Wolfram Language symbol
that is affected, but the Java object that
refers to. Therefore, using
is also an error (or any other way to refer to this same
to indicate an error and halt the parsing. You want to implement these handler methods in Wolfram Language code, and thus you want a way to throw a),
and
(discussed later), and the "wrapper" classes (e.g., java.lang.Integer). You could say that these exceptional cases are returned "by value". The table in "Conversion of Types between Java and Mathematica" MathLink.
MathLink to Java and a Java array is created with these values. That array is passed as an argument to arrayAbs(), which itself creates and returns another array. This array is then sent back to the Wolfram Language via MathLink
. You have a Wolfram Language variable named MathLink traffic.
This example is somewhat contrived, since repeatedly appending to a growing string is not a very efficient style of programming, but it illustrates the issues.
When the Do loop is executed,
gets assigned values that are not Wolfram Language strings, but JavaObject expressions that refer to strings residing in Java. That means that MathLink. The result of this was that the
. You cannot call such a method from the Wolfram Language with a string as the argument because although J/Link recognizes that a Wolfram Language string corresponds to a Java string, it does not recognize that a Wolfram Language string corresponds to a Java
._3<<
It fails for the reason given above. To call a Java method that is typed to take an (
for a two-dimensional array of ints,
class. This class is a subclass of
, so it has a method with the following signature.
You should always specify keys and values for
class, which is discussed in detail in "Motivation for the Expr Class". Basically, an
is a Java object that can represent an arbitrary Wolfram Language expression. Its main use is as a convenience for Java programmers who want to examine and operate on Wolfram Language expressions in Java. Sometimes it is useful to have a way of creating
function. The Expr methods demonstrated here are typically called from Java, not the Wolfram Language.
Note that Expr objects, like Wolfram Language expressions, are immutable. The above call to insert() did not modify
; MathLink that it uses to communicate with the front end. This MathLink MathLink
Wolfram Language function, the kernel is kept in a state where it is receptive to evaluation requests arriving from either the notebook front end or Java, evenly sharing its attention between these two programs. Lastly, there is a manual mode, characterized by the use of the. "
" or "
"), and the second of which is the Wolfram Language function that should be called in response. The Wolfram Language function can be a name, as in "
"],
, or
.
windows, and also the Swing classes used for top-level windows (
,
, and
).
method.
You must not call
if you are not using DoModal. This is because after. If you call DoModal and realize that for some reason you cannot end it from Java, you can abort it from the front end by selecting Evaluation ▶ Interrupt Evaluation in the menu, and then in the resulting dialog, clicking the button labeled Abort.
There is one subtlety you might notice in the code for
that is not directly related to J/Link. In the line that calls buttonListener@setHandler, you pass the name of the button function not as the literal string "buttonFunc", but as
.
.
Important Note: In Mathematica 5.1 and later, the kernel is always shared with Java. This means that the functions
and
are not necessary and, in fact, do nothing at all. If you are writing a program that only needs to run in Mathematica 5.1 and later, you never need to call
or
(
and
are still useful, however). If your programs need to work on all versions of the Wolfram Language, then you will need to use
and
as described next.
takes a LinkObject as an argument and initiates sharing of the kernel between that link and the current $ParentLink (typically, the notebook front end). If you call
with no arguments, it assumes you mean the link to Java. Most users will call it with no arguments.
Note that while the kernel is being shared, the input prompt has "(sharing)" prepended to it. The string that is prepended is specified by the
option to
.,
takes care of shuffling the $ParentLink value back and forth between links as input arrives on each.
It is safe to call
if the kernel is already being shared. This means that programs you write can call it without your having to worry that a user might already have initiated sharing. When you are finished with the need to share the kernel with Java, you can call
. This restores the kernel to its normal mode of operation, paying attention only to the front end.
When called with no arguments,,
returns a token (it is just an integer, but you should not be concerned with its representation) that reflects a request for sharing functionality. In other words, calling
registers a request for sharing, turns it on if it is not on already, and returns a token that represents that particular request. When you call
, you pass it the token to "unregister" that particular request for sharing. Only if there are no other outstanding requests will sharing actually be turned off.
A quirk of
is that you cannot call
and
function as the solution to this problem..
currently does not work with a remote kernel; the same machine must be running the kernel and the front end..
is a way to restore a feature that was lost when you gained the ability to create modeless interfaces via
. That is how to think of
—as a step beyond
that allows side-effect output generated by computations triggered in Java to appear in the notebook front end.
is particularly useful when developing code that needs to use
, even if the code does not need the extra functionality of
. This is because Wolfram System error messages generated by computations triggered by Java events get lost with
. The messages will show up in the front end if front end sharing is turned on.
When you are done with the need to share the front end, call
. Like the
/
pair of functions,
returns a token that you should pass to
to unregister the request for front end sharing. Only when all calls to
have been unregistered by calls to
will front end sharing be turned off. You can force front end sharing to be shut down immediately by calling
and pass it to
.
requires that the kernel be shared, so it calls
internally. Calling
with no arguments forces kernel sharing to stop immediately, and this turns off front end sharing as well. Thus, you can use
as a quick shortcut to immediately shut down all sharing.
An example of some simple palette-type buttons that use
is presented in "Sharing the Front End: Palette-Type Buttons".
An important use (
handles this automatically for you). This works fine, but you lose the ability to get pictures of typeset expressions in your Java interface.
.
Summary of Modal and Modeless Operation
The previous discussion of modal and modeless operation,
, and
may have seemed complex. In fact, the principles and uses of these techniques are simple. This will become clear upon seeing some more examples. Many of the example programs in "Example Programs" use
, which you can think of as an extension to
.
A very common mistake is to create a Java window, wire up a MathListener class that calls back to the Wolfram Language on some event, and then trigger the event before you have called DoModal or
.
. Calling
in a program will cause the kernel to accept one request for a computation from the Java side. It performs the computation and then returns control to your program. If there is no request waiting,
returns immediately.
Here is some pseudocode showing the structure of a program that displays a progress bar with an Abort button and periodically calls
is closely related to DoModal, and although this is not the actual implementation, you can think of DoModal as being written in terms of
.
(* Not the actual implementation of DoModal, but the principle is correct. *)
DoModal[] :=
While[!endModal,
ServiceJava[]
]
Seen in this way, DoModal is a special case of the use of
, where the Wolfram Language is doing nothing but servicing requests from Java. Sometimes you need something else to be going on in the Wolfram Language, but still need to be able to handle requests arriving from Java. That is when you call
yourself. Like DoModal, there is no shifting of $ParentLink when
is called. Thus, side-effect output like graphics, messages, and Print output triggered by Java computations appear in the notebook, just as if they were hard-coded into the Wolfram Language program that called
.
The BouncingBalls example program presented in "BouncingBalls: Drawing in a Window" uses
.
has been called. Looking at it from the other direction, the only time it will not work is if
is in use, but
;
is not enough. Modal and modeless interfaces
values. This is demonstrated in the Scribble.nb example notebook.
There is one new MathCanvas method demonstrated in this program, repaintNow(). In a computation-intensive program like this, where events are being fired on the user interface thread very quickly, and the handlers for these events take a nontrivial amount of time to execute, Java will sometimes delay repainting the window. The drawing becomes very chunky, with no visual effect for a while and then suddenly all the lines drawn in the last few seconds will appear. Even calling the standard repaint() method after every new point will not ensure that the window is updated in a timely manner. To solve this problem, the repaintNow() method is provided, which forces an immediate redraw of the canvas. If your program relies on smooth visual feedback from user events that fire rapidly, you should call repaintNow() also, even if it does not seem necessary on your system. There can be very significant differences between different platforms and different Java runtimes on the responsiveness of the screen updating mechanism.
The ability to draw in response to events in a MathCanvas or MathGraphicsJPanel opens up the possibility for some impressive interactive demonstrations, tutorials, and so on. Two of the larger example programs provided draw into a MathCanvas from the Wolfram Language: BouncingBalls (in the section "BouncingBalls: Drawing in a Window") and Spirograph (in the section "Spirograph").
Bitmaps
You have seen how to draw into a MathCanvas or MathGraphicsJPanel by using an offscreen image. Another type of image that you can create with Wolfram Language code and display using setImage() is a bitmap. In this example you will create an indexed-color bitmap out of Wolfram Language data and display it. You will use an 8-bit color table, meaning that every data point in the image will be treated as an index into a 256-element list of colors. You could use a larger color table if desired.
You closed the frame window in the Scribble example, so you must first create a new frame and canvas for the bitmap.
Here is the color table. It is an array of {r,g,b} triplets, with each color component being in the range 0…255. In this example, colors with low indices are mostly blue, and ones with high indices are mostly red.
The data is a 400×400 matrix of integers in the range 0…255 (because they are indices into the 256-element color table). In a real application, this data might be read from a file or computed in some more sophisticated way. If the range of numbers in the data did not span 0…255, you would have to scale it into that range, or a larger range if you wanted to use a deeper color table.
Here you create the Java objects that represent the color model and bitmap. You can read the standard Java documentation on these classes for more information.
Now create an Image out of the bitmap and display it.
The Java Console Window
J/Link provides a convenient means to display the Java "console" window. Any output written to the standard System.out and System.err streams will be directed to this window. If you are calling Java code that writes diagnostic information to System.out or System.err, then you can see this output while your program runs. Like most J/Link features, the console window can be used easily from either the Wolfram Language or Java programs (its use from Java code is described in "Writing Java Programs That Use Mathematica"). To use it from the Wolfram Language, call the ShowJavaConsole function.
Showing the console window.
Capturing of output only begins when you call ShowJavaConsole, so when the window first appears it will not have any content that might have been previously written to System.out or System.err. You will also note that the J/Link console window displays version information about the J/Link Java component and the Java runtime itself. Calling ShowJavaConsole when the window is already open will cause it to come to the foreground.
To demonstrate, you can write some output from the Wolfram Language. If you executed the ShowJavaConsole[] given earlier, then you will see "Hello from Java" printed in the window.
Although it is convenient to demonstrate writing to the window using Wolfram Language code like this, this is typically done from Java code instead. Actually, there is one common circumstance where it is quite useful to use the Java console window for diagnostic output written from Wolfram Language code. This is the case where you have a "modeless" Java user interface (as described in the section "Creating Windows and Other User Interface Elements") and you have not used the ShareFrontEnd function. Recall that in this circumstance, output from calls to Print in the Wolfram Language will not appear in the notebook front end. If you write to System.out instead, as in the example, then you will always be able to see the output. You might want to do this in other circumstances just to avoid cluttering up your notebook with debugging output..
JavaBeans has not been mentioned up to this point because there really is not anything special to be said. Beans are just Java classes, and they can be used and called like any other classes. It is probably the case that many Java classes you use from the Wolfram Language Wolfram Language code, you call them by name in the usual way, without any consideration of the "Beanness" of the class.
Note that it would be quite possible to add Wolfram Language functions to J/Link that would provide explicit support for Bean properties. For example, a function BeanSetProperty could be written that would take a Bean object, a property name as a string, and the value to set the property to. The following is currently required.
Instead, you could write the following..
To make use of events that a JavaBean fires, you can use one of the standard MathListener classes, as described in the section "Creating Windows and Other User Interface Elements". JavaBeans often fire PropertyChangeEvents, and you can arrange for Wolfram Language code to be executed in response to these events by using a MathPropertyChangeListener or a MathVetoableChangeListener.
Hosting Applets
J/Link gives you the ability to run most applets in their own window directly from the Wolfram Language. Although this may seem immensely useful, given the vast number of applets that have been created, most applets do not export any useful public methods. They are generally standalone pieces of functionality, and thus they benefit little from the scriptability that J/Link provides. Still, there are many applets that may be useful to launch from a Wolfram Language program.
Note that this section is not about writing applets that use the Wolfram Language kernel. That topic is covered in "Writing Applets".
J/Link includes an AppletViewer function for running applets. This function takes care of all the steps of creating the applet instance, providing a frame window to hold it, and starting it running. The first argument to AppletViewer is the fully qualified name of the applet class. The second argument is an optional list of parameters in "name=value" format, corresponding to the parameters supplied to an applet in an HTML page that hosts it. For example, here is the <applet> tag in a web page that hosts an applet.
You would call AppletViewer as follows.
You will typically supply at least "WIDTH=" and "HEIGHT=" specifications to control the width and height of the applet window. If you do not specify these parameters, the default width and height are 300 pixels.
An excellent example of an applet that is useful to Wolfram Language users is LiveGraphics3D, written by Martin Kraus. LiveGraphics3D is an interactive viewer for Wolfram Language 3D graphics. It gives you the ability to rotate and zoom images, view them in stereo, and more. If you want to try the following example, you will need to get the LiveGraphics3D materials, available from. Make sure you put live.jar onto your CLASSPATH before trying that example, or use the AddToClassPath feature of J/Link to make it available.
First, load the PolyhedronOperations` Package and create the graphic to display. The LiveGraphics3D documentation gives a more general-purpose function for turning a Wolfram Language graphics expression into appropriate input for the LiveGraphics3D applet but, for many examples, using ToString, InputForm, and N is sufficient.
You specify the image to be displayed via the INPUT parameter, which takes a string giving the InputForm representation of the graphic.
The Live applet has a number of keyboard and mouse controls for manipulating the image. You can read about them in the LiveGraphics3D documentation. Try Alt+S to switch into a stereo view.
When you are done with an applet, just click the window's close box.
If the applet needs to refer to other files, you should be aware that AppletViewer sets the document base to be the directory specified by the "user.dir" Java system property. This will normally be the Wolfram Language's current directory (given by Directory[]) at the time that InstallJava was called.
Most applets expose no public methods useful for controlling from the Wolfram Language, so there is nothing to do but start them up with AppletViewer and then let the user close the window when they are finished. The Live applet is an exception—it provides a full set of methods to allow the view point, spin, and so on to be modified by Wolfram Language code. These methods are in the Live class, so to call them you need an instance of the Live class. The way you used AppletViewer earlier does not give us any instance of the applet class. The construction and destruction of the applet instance was hidden within the internals of AppletViewer. You can also call AppletViewer with an instance of an applet class instead of just the class name. This lets you manage the lifetime of the applet instance.
Now you can call methods on the applet instance. See the LiveGraphics3D documentation for the full set of methods. This scriptability opens up lots of possibilities, such as programming "flyby" views of objects, or creating buttons that jump the image into certain orientations or spins.
When you are done, you call ReleaseJavaObject to release the applet instance. This can be done before or after the applet window is closed.
Periodical Tasks
The section "Creating Windows and Other User Interface Elements" described the
function and how it allows Java and the notebook front end to share the kernel's attention. A side benefit of this functionality is that it becomes easy to provide a means whereby users can schedule arbitrary Wolfram Language programs to run at periodical intervals during a session. Say you have a source that provides continuously updated financial data and you want to have some variables in the Wolfram Language
if it has not already been called. There is no limit on the number of periodicals that can be established.
After scheduling thatapse, earlier.
Sometimes you might want to change the interval for a periodical task or remove it entirely from within the code of the task itself.
is a variable that holds the ID of the currently executing periodical task. It will only have a value during the execution of a periodical task. You use
to yield the CPU, if Java is not running then setting a periodical task will cause the kernel to keep the CPU continuously busy. Periodical task functionality is included in J/Link because it is a simple extension to
,, clean up the periodical tasks you created.
Some Special Number Classes
Preamble
There is a set of special number-related classes in Java that J/Link maps to their Wolfram Language numeric representation. Like strings and arrays, objects of these number classes have an important property: although they are objects in Java, they have a meaningful "by value" representation in the Wolfram Language, so it is convenient for J/Link to automatically convert them to numbers as they are returned from Java to the Wolfram Language, and back to objects as they are sent from the Wolfram Language to Java.
These classes are the so-called "wrapper" classes that represent primitive types (Byte, Integer, Long, Double, and so on), BigDecimal and BigInteger, and any class used to represent complex numbers. The treatment of these classes is described in this section.
The "Wrapper" Classes: Integer, Float, Boolean, and Others
Java has a set of so-called "wrapper" classes that represent primitive types. These classes are Byte, Character, Short, Integer, Long, Float, Double, and Boolean. The wrapper classes hold single values of their respective primitive types, and are necessary to allow everything in Java to be represented as a subclass of Object. This lets various utility methods and data structures that deal with objects handle primitive types in a straightforward way. It is also necessary for Java's reflection capabilities.
If you have a Java method that returns one of these objects, it will arrive in the Wolfram Language as an integer (for Byte, Character, Short, Integer, and Long), real number (for Float and Double), or the symbols True or False (for Boolean). Likewise, a Java method that takes one of these objects as an argument can be called from the Wolfram Language with the appropriate raw Wolfram Language value. The same rules hold true for arrays of these objects, which are mapped to lists of values.
In the unlikely event that you want to defeat these automatic "pass by value" semantics, you can use the ReturnAsJavaObject and JavaObjectToExpression functions, discussed in "References and Values".
Complex Numbers
You have seen that Java number types (e.g. byte, int, double) are returned to the Wolfram Language as integers and reals, and integers and reals are converted to the appropriate types when sent as arguments to Java. What about complex numbers? It would be nice to have a Java class representing complex numbers that mapped directly to the Wolfram Language's Complex type, so that automatic conversions would occur as they were passed back and forth between the Wolfram Language and Java. Java does not have a standard class for complex numbers, so J/Link lets you name the class that you want to participate in this mapping.
Setting the class for complex numbers.
You can use any class you like as long as it has the following properties:
1. A public constructor that takes two doubles (the real and imaginary parts, in that order)
2. Methods that return the real and imaginary parts, having the following signatures
Say that you are doing some computations with complex numbers in Java, and you want to interact with these methods from the Wolfram Language. You like to use the complex number class available from netlib. This class is named ORG.netlib.math.complex.Complex and is available at. You use the SetComplexClass function to specify the name of the class.
Now any method or field that takes an argument of type ORG.netlib.math.complex.Complex will accept a Wolfram Language complex number, and any object of class ORG.netlib.math.complex.Complex returned from a method or field will automatically be converted into a complex number in the Wolfram Language. The same holds true for arrays of complex numbers.
Note that you must call SetComplexClass before you load any classes that use complex numbers, not merely before you call any methods of the class.
BigInteger and BigDecimal
Java has standard classes for arbitrary-precision floating-point numbers and arbitrary-precision integers. These classes are java.math.BigDecimal and java.math.BigInteger, respectively. Because the Wolfram Language effortlessly handles such "bignums", J/Link maps BigInteger to Wolfram Language integers and BigDecimal to Wolfram Language reals. What this means is that any Java method or field that takes, say, a BigInteger can be called from the Wolfram Language by passing an integer. Likewise, any method or field that returns a BigDecimal will have the value returned to the Wolfram Language as a real number..
Look what happens if you call it with a ragged array.
An error occurs because the Wolfram Language definition for the Testing`intArrayIdentity() function requires that its argument be a two-dimensional rectangular array of integers. The call never even gets out of the Wolfram Language.
Here you turn on support for ragged arrays, and the call works. This requires modifications in both the Wolfram Language-side type checking on method arguments and the Java-side array-reading routines.
It is a good idea to turn off support for ragged arrays as soon as you no longer need it, since it slows arrays down so much.
Implementing a Java Interface with Wolfram Language Code
You have seen how J/Link lets you write programs that use existing Java classes. You have also seen how you can wire up the behavior of a Java user interface via callbacks to the Wolfram Language via the MathListener classes. You can think of any of these MathListener classes, such as MathActionListener, as a class that "proxies" its behavior to arbitrary user-defined Wolfram Language code. It is as if you have a Java class that has its implementation written in the Wolfram Language. This functionality is extremely useful because it greatly extends the set of programs you can write purely in the Wolfram Language, without resorting to writing your own Java classes.
Implementing a Java interface entirely in the Wolfram Language.
It would be nice to be able to take this behavior and generalize it, so that you could take any Java interface and implement its methods via callbacks to Wolfram Language functions, and do it all without having to write any Java code. The ImplementJavaInterface function, new in J/Link 2.0, lets you do precisely that. This function is easier to understand with a concrete example. Say you are writing a Wolfram Language program that uses J/Link to display a Java window with a Swing menu, and you want to script the behavior of the menu in the Wolfram Language. The Swing JMenu class fires events to registered MenuListeners, so what you need is a class that implements MenuListener by calling into the Wolfram Language. A quick glance at the section on MathListeners reveals that J/Link does not provide a
class for you. You could choose to write your own implementation of such a class, and in fact this would be very easy, even trivial, since you would make it a subclass of MathListener and inherit virtually all the functionality you would need. For the sake of this discussion, assume that you choose not to do that, perhaps because you do not know Java or you do not want to deal with all the extra steps required for that solution. Instead, you can use ImplementJavaInterface to create such a Java class with a single line of Wolfram Language code.
The first argument to ImplementJavaInterface is the Java interface or list of interfaces you want to implement. The second argument is a list of rules that associate the name of a Java method from one of the interfaces with the name of a Wolfram Language function to call to implement that method. The Wolfram Language function will be called with the same arguments that the Java method takes. What ImplementJavaInterface returns is a Java object of a newly created class that implements the named interface(s). You use it just like any JavaObject obtained by calling JavaNew or through any other means. It is just as if you had written your own Java class that implemented the named interface by calling the associated Wolfram Language functions, and then called JavaNew to create an instance of that class.
It is not necessary to associate every method in the interface with a Wolfram Language function. Any Java methods you leave out of your list of mappings will be given a default Java implementation that returns null. If this is not an appropriate return value for the method (e.g. if the method returns an int) and the method gets called at some point an exception will be thrown. Generally, this exception will propagate to the top of the Java call stack and be ignored, but it is recommended that you implement all the methods in the Java interface.
The ImplementJavaInterface function makes use of the "dynamic proxy" capability introduced in Java 1.3. It will not work in Java versions earlier than 1.3. All Java runtimes bundled with Mathematica 4.2 and later are at Version 1.3 or later. If you have Mathematica 4.0 or 4.1, the ImplementJavaInterface function is another reason to make sure you have an up-to-date Java runtime for your system.
At first glance, the ImplementJavaInterface function might seem to give us the capability to write arbitrary Java classes in the Wolfram Language, and to some extent that is true. One important thing you cannot do is extend, or subclass, an existing Java class. You also cannot add methods that do not exist in the interface you are implementing. Event-handler classes are a good example of the type of classes for which this facility is useful. You might think that the MathListener classes are rendered obsolete by ImplementJavaInterface, and it is true that their functionality can be duplicated with it. The MathListener classes are still useful for Java versions earlier than 1.3, but most importantly, they are useful for writing pure Java programs that call the Wolfram Language. Using a class implemented in the Wolfram Language via ImplementJavaInterface in a Java program that calls the Wolfram Language would be possible, but quite cumbersome. If you want a dual-purpose class that is as easy to use from the Wolfram Language as from Java, you should write your own subclass of MathListener. One poor reason for choosing to use ImplementJavaInterface instead of writing a custom Java class is that you are worried about complicating your application by requiring it to include its own Java classes in addition to Wolfram Language code. As explained in "Deploying Applications That Use J/Link", it is extremely easy to include supporting Java classes in your application. Your users will not require any extra installation steps nor will they need to modify the Java class path.
Writing Your Own Installable Java Classes
Preamble
The previous sections have shown how to load and use existing Java classes. This gives Wolfram Language programmers immediate access to the entire universe of Java classes. Sometimes, though, existing Java classes are not enough, and you need to write your own.
J/Link essentially obliterates the boundary between Java and the Wolfram Language, letting you pass expressions of any type back and forth and use Java objects in the Wolfram Language in a meaningful way. This means that when writing your own Java classes to call from the Wolfram Language, you usually do not need to do anything special. You write the code in exactly the same way as you would if you wanted to use the class only from Java. (One important exception to this rule is that because it is comparatively slow to call into Java from the Wolfram Language, you might need to design your classes in a way that will not require an excessive number of method calls from the Wolfram Language to get the job done. This issue is discussed in detail in "Overhead of Calls to Java".)
In some cases, you might want to exert more direct control over the interaction with the Wolfram Language. For example, you might want a method to return something different to the Wolfram Language than what the method itself returns. Or you might want the method not to a class of the MathListener type that calls into the Wolfram Language as the result of some event triggered in Java.
If you do not want to do any of these things, then you can happily ignore this section. The whole point of J/Link is to make unnecessary the need to be concerned about the interaction with the Wolfram Language through MathLink. Most programmers who want to write Java classes to be used from the Wolfram Language will just write Java classes, period, without thinking about the Wolfram Language or J/Link. Those programmers who want more control, or want to know more about the possibilities available with J/Link, read on.
The issues discussed in this section require some knowledge of MathLink programming (or, more precisely, J/Link programming using the Java methods that use MathLink), which is discussed in detail in "Writing Java Programs That Use the Wolfram Language". The fact that you meet some of these methods and issues here is a consequence of the false but useful dichotomy, noted in the "Introduction", between using MathLink to write "installable" functions to be called from the Wolfram Language and using MathLink to write front ends for the Wolfram Language. MathLink is always used in the same way, it is just that virtually all of it is handled for you in the installable case. This section is about how to go beyond this default behavior, so you will be making direct J/Link calls to read and write to the link. Thus you will encounter concepts, classes, and methods in this section that are not explained until "Writing Java Programs That Use the Wolfram Language".
Some of the discussion in this section will compare and contrast the process of writing an installable program in C. This is designed to help experienced MathLink programmers understand how J/Link works, and also to convince you that J/Link is a superior solution to using C, C++, or FORTRAN.
Installable Functions—The Old Way
Writing a so-called "installable" or "template" program in C requires a number of steps. If you have a file foo.c that contains a function foo, to call it from the Wolfram Language you must first write a template (.tm) file that contains a template entry describing how you want foo to be called from the Wolfram Language, what types of arguments it takes, and what it returns. You then pass this .tm file through a tool called mprep, which writes a file of C code that manages some, possibly all, of the MathLink-related aspects of the program. You also need to write a simple main routine, which is always the same. You then compile all of these files, resulting in an executable for just one platform.
Two big drawbacks of this method are that you need to write a template entry for every single function you want to call (imagine doing that for a whole function library), and the compiled program is not portable to other platforms. The biggest drawback, however, is that there is no automatic support for anything but the simplest types. If you want to do something as basic as returning a list of integers, you need to write the MathLink calls to do that yourself. And forget about object-oriented programming, as there is no way to pass "objects" to the Wolfram Language.
Installable Functions in Java
J/Link makes all those steps go away. As you have seen all throughout this tutorial, you can literally call any method in any class, without any preparation.
It is only in cases where the default behavior of calling a method and receiving its result is not enough that you need to write specialty Java code. The rest of this section will examine some of the special techniques that can be used.
Setting Up Definitions in the Wolfram Language When Your Class Is Loaded
Template entries in .tm files required by installable MathLink programs written in C have two features that might appear to be lost in J/Link. The first feature is the ability to specify arbitrary Wolfram Language code to be evaluated when the program is first "installed". This is done by using the :Evaluate: line in a template entry. The second feature is the ability to specify the way in which the function is to be called from the Wolfram Language, including the name of the Wolfram Language function that maps to the C function, its argument sequence, how those arguments are mapped to the ones provided to the C function, and possibly some processing to be done on them before they are sent. This information is specified in the :Pattern: and :Arguments: lines of a template entry.
These two features are related to each other, because they both rely on the ability to specify Wolfram Language code that is loaded when an external program is installed. J/Link gives you this ability and more, through two special methods called onLoadClass() and onUnloadClass(). When a class is loaded into the Wolfram Language, either directly through LoadJavaClass or indirectly by calling JavaNew, it is examined to see if it has a method with the following signature.
If such a method is present, it will be called after all the method and field definitions for the class are set up in the Wolfram Language. Because a class can only be loaded once in a Java session, this method will only be called once in the lifetime of a single Java runtime, although it may be called more than once in the lifetime of a single Wolfram Language kernel (because the user can repeatedly launch and quit the Java runtime). The KernelLink that is provided as an argument to this method is of course the link back to the Wolfram Language.
A typical use for this feature would be to define the text for an error message issued by one of the methods in the class. Here is an example.
public static void onLoadClass(KernelLink ml) throwsMathLinkException {
ml.evaluate("MyClass::sun = \"The foo() method can only be called on Sunday.\"");
ml.discardAnswer();
}
Note that this method throws MathLinkException. Your onLoadClass() method can throw any exceptions you like (a MathLinkException would be typical). This will not interfere with the matching of the expected signature for onLoadClass(). If an exception is thrown during onLoadClass, it will be handled gracefully, meaning that the normal operation of LoadJavaClass will not be affected. The only exception to this rule is if your code throws an exception while it is interacting with the link to the kernel, and more specifically, in the period between the time that it sends a computation to the kernel and the time that it begins to read the result. In other words, exceptions you throw will not break the LoadJavaClass mechanism, but it is up to you to make sure that you do not damage the link's state by starting something you do not finish.
Another reason to use onLoadClass() would be if you wanted to create a Wolfram Language function for users to call that "wrapped" a static method call, providing it with a preferred name or argument sequence. If you have a class named MyClass with the method public static void myMethod(double[a]), the definition that will be automatically created for it in the Wolfram Language will require that its argument be a list of real numbers or integers. Say you want to add a definition named MyMethod, having the traditional Wolfram Language capitalization, and you also want this function automatically to use N on its argument so that it will work for anything that will evaluate to a list of numbers, such as {Pi, 2Pi, 3Pi}. Here is how you would set up such an additional definition.
public static void onLoadClass(KernelLink ml) throwsMathLinkException {
ml.evaluate("MyMethod[x_] := myMethod[N[x]]");
ml.discardAnswer();
}
In other words, if you are not happy with the interface to the class that will automatically be created in the Wolfram Language, you can use onLoadClass() to set up the desired definitions without changing the Java interface.
The Wolfram Language context that will be current when onLoadClass() is called is the context in which all the class's static methods and fields are defined. That is why in the preceding example the definition was made for MyMethod and not MyClass`MyMethod. This is important since you cannot know the correct context in your Java code because it is determined by the user via the AllowShortContext option to LoadJavaClass.
It is generally not a good idea to use onLoadClass() to send a lot of code to the Wolfram Language. This will make the behavior of your class hard for people to understand because the Wolfram Language code is hidden, and also inflexible since you would have to recompile it to make changes to the embedded Wolfram Language code. If you have a lot of code that needs to accompany a Java class, it is better to put that code into a Wolfram Language package file that you or your users load. That is, rather than having users load a class that dumps a lot of code into the Wolfram Language, you should have your users load a Wolfram Language package that loads your class. This will provide the greatest flexibility for future changes and maintenance.
Finally, there is no reason why your onLoadClass() method needs to restrict itself to making J/Link calls. You could perform operations specific to the Java side, for example, writing some debugging information to the Java console window, opening a file for writing, or whatever else you desire.
Similar to the handling of the onLoadClass() method, the onUnloadClass() method is called when a class is unloaded. Every loaded class is unloaded automatically by UninstallJava right before it quits the Java runtime. You can use onUnloadClass() to remove definitions created by onLoadClass(), or perform any other clean-up you would like. The signature of onUnloadClass() must be the following, although it can throw any exceptions.
Note that the meaning of loading and unloading classes here refers to being loaded by the Wolfram Language with LoadJavaClass either directly or indirectly. It does not refer to the loading and unloading of classes internally by the Java runtime. Class loading by the Java runtime occurs when the class is first used, which may have occurred long before LoadJavaClass was called from the Wolfram Language.
Manually Returning a Result to the Wolfram Language
The default behavior of a Java method called from the Wolfram Language is to return to the Wolfram Language exactly what the method itself returns. There are times, however, when you want to return something else. For example, you might want to return an integer in some circumstances, and a symbol in others. Or you might want a method to return one thing when it is being called from Java, and return something different to the Wolfram Language. In these cases, you will need to manually send a result to the Wolfram Language before the method returns.
Say you are writing a file-reading class that you want to call from the Wolfram Language. Because you want almost the identical behavior to the standard class java.io.FileInputStream, your class will be a subclass of it. The only changes you want to make are to provide some more Wolfram Language-like behavior. One example is that you want the read method to return not -1 when it reaches the end of the file, but rather the symbol EndOfFile, which is what the Wolfram Language's built-in file-reading functions return.
import java.io.*;
import com.wolfram.jlink.*;
public class MyFileReader extends FileInputStream {
<<constructors, other methods deleted>>
public int read() {
int i = super.read();
if (i == -1) {
KernelLink link = StdLink.getLink();
if (link != null) {
link.beginManual();
try {
link.putSymbol("EndOfFile");
} catch (MathLinkException e) {}
}
}
return i;
}
}
If the file has reached the end, i will be -1, and you want to manually return something to the Wolfram Language. The first thing you need to do is get a KernelLink object that can be used to communicate with the Wolfram Language. This is obtained by calling the static method StdLink.getLink(). If you have written installable MathLink programs in C, you will recognize the choice of names here. A C program has a global variable named stdlink that holds the link back to the Wolfram Language. J/Link has a StdLink class that has a few methods related to this link object.
The first thing you do is check whether getLink() returns null. It will never be null if the method is being called from the Wolfram Language, so you can use this test to determine whether the method is being called from the Wolfram Language or as part of a normal Java program. In this way, you can have a method that can be used from Java in the usual way when a Wolfram Language kernel is nowhere in sight. The getLink() call works no matter if the method is called directly from the Wolfram Language, or indirectly as part of a chain of methods triggered by a call from the Wolfram Language.
Once you have verified that a link back to the kernel exists, the first thing to do is inform J/Link that you will be sending the result back to the Wolfram Language yourself, so it should not try automatically to send the method's return value. This is accomplished by calling the beginManual() method on the KernelLink object.
You must call beginManual() before you send any part of a result back to the Wolfram Language. If you fail to do this, the link will get out of sync and the next J()). As always, these calls can throw a MathLinkException, so you need to wrap them in a try/catch block. The catch handler is empty, since there really is not anything to do in the unlikely event of a MathLink error. The internal J/Link code that wraps all method calls will handle the cleanup and recovery from any MathLink error that might have occurred calling putSymbol(). You do not need to do anything for MathLinkExceptions that occur while you are putting a result manually. The method call will return $Failed to the Wolfram Language automatically.
Installable programs written in C can also manually send results back. This is indicated by using the Manual keyword in the function's template entry. Thus for C programs the manual/automatic decision must be made at compile time, whereas with J/Link it is a runtime switch. You can have it both ways with J/Link—a normal automatic return in some circumstances and a manual return in others, as the preceding example demonstrates.
Requesting Evaluations by the Wolfram Language
So far, you have seen only cases where a Java System, or some Print output, or you might want to have the Wolfram Language evaluate something and return the answer to you. This is a completely separate issue from what you want to return to the Wolfram Language at the end of your method—you can request evaluations from Java, the Java code is acting as the slave, performing a computation and returning control to the Wolfram Language. In the middle of a Java method, however, you can call back into the Wolfram Language, temporarily turning it into a computational server for the Java side. Thus you would expect to encounter essentially all the same issues that are discussed in "Writing Java Programs That Use the Wolfram Language", and you would need to understand the full J/Link Java-side API.
The full treatment of the MathLink and
interfaces is presented in "Writing Java Programs That Use the Wolfram Language". This section discusses a few special methods in
that are specifically for use by "installed" methods. You have already seen one, the beginManual() method. Now you will treat the message(), print(), and evaluate() methods.
The tasks of issuing a Wolfram System message from a Java method and triggering some Print output are so commonly done that the
interface has special methods for these operations. The method message() performs all the steps of issuing a Wolfram System message. It comes in two signatures.
The first form is for when you just have a single string argument to be slotted into the message text, and the second form is for if the message text needs two or more arguments. You can pass null as the second argument if the message text needs no arguments.
The print() method performs all the steps necessary to invoke the Wolfram Language's Print function.
Here is an example method that uses both. Assume that the following messages are defined in the Wolfram System (this could be from loading a package or during this class's onLoadClass() method).
public static double foo(double x, double y) {
KernelLink link = StdLink.getLink();). They do not throw MathLinkException so you do not have to wrap them in try/catch blocks.
Here is what happens when you call foo().
Note that you automatically get Indeterminate returned to the Wolfram Language when a floating-point result from Java Java. You can explicitly send the EvaluatePacket head, or you can use one of the methods in
that use EvaluatePacket for you. These methods are as follows.
These methods are discussed in "Writing Java Programs That Use Mathematica" (actually, they also come in several more flavors with other argument sequences). Here is a simple example.
public static double foo(double x, double y) {
KernelLink link = StdLink.getLink();
if (link != null) {
try {");
int sum2 = Integer.parseInt(s);
// If you want, put the whole evaluation piece by piece,
// including the EvaluatePacket head.
link.putFunction("EvaluatePacket");
link.putFunction("Plus", 2);
link.put(4);
link.put(4);
link.waitForAnswer();
int sum3 = link.getInteger();
} catch (MathLinkException e) {
// The only type of mathlink error we are likely to get
// is from a "get" function when what we are trying to
// get is not the type of expression that is waiting. We
// just clear the error state, throw away the packet we
// are reading, and let the method finish normally.
link.clearError();
link.newPacket();
}
}
return Math.sqrt(x) * Math.sqrt(y);
}
Throwing Exceptions
Any exceptions that your method throws will be handled gracefully by J/Link, resulting in the printing of a message in the Wolfram System describing the exception. This was discussed in "How Exceptions Are Handled". Java will probably hang.
Making a Method Interruptible
If you are writing a method that may take a while to complete, you should consider making it interruptible from the Wolfram Language. In C MathLink programs, a global variable named WLAbort is provided for this purpose. In J/Link programs, you call the wasInterrupted() method in the KernelLink interface.
Here is an example method that performs a long computation, checking every 100 iterations whether the user tried to abort it (using the Interrupt Evaluation or Abort Evaluation commands in the Evaluation menu).
public int foo() {
KernelLink link = StdLink.getLink(); J/Link causes a method or constructor call that is aborted to return Abort[], whether or not you detect the abort in your code. Therefore, if you detect an abort and want to honor the user's request, just return some value right away. When J/Link returns Abort[], the user's entire computation is aborted, just as if the Abort[] were.
J/Link makes no distinction between an interrupt request and an abort request; they Java method is executing, it has a different set of buttons than when normal Wolfram Language code is executing. One of the options is Send Abort to Linked Program and another is Send Interrupt to Linked Program. Both of these choices have the same effect for Java methods, which is to cause wasInterrupted() to return true and the call to return Abort[] when it completes. The third button is Kill Linked Program, which will cause the Java runtime to quit. If you call a Java method that is not interruptible, killing the Java runtime in this way is the only way to make the method call terminate (you can also kill the Java runtime using process control features of your operating system).
Sometimes you might want a Java Java Java code so that J/Link does not return Abort[], simply call the clearInterrupt() method.
public int foo() {
KernelLink link = StdLink.getLink();
for (int i = 0; i < 10000, i++) {
... perform one step ...
if (i % 100 == 0 && link.wasInterrupted()) {
link.clearInterrupt();
return resultSoFar; // This is the value that will be returned to Mathematica
}
}
...
return 42;
}
Writing Your Own Event Handler Code
"Handling Events with Mathematica Code: The "MathListener" Classes" introduced the topic of triggering calls into the Wolfram Language as a response to events fired in Java, such as clicking a button. A set of classes derived from MathListener is provided by J/Link for this purpose. You are not required to use the provided MathListener classes, of course. You can write your own classes to handle events and put calls into the Wolfram Language directly into their code. All the event handler classes in J/Link are derived from the abstract base class MathListener, which takes care of all the details of interacting with the Wolfram Language, and also provides the setHandler() methods that you use to associate events with Wolfram Language code. Users who want to write their own MathListener-style classes (for example, for one of the Swing-specific event listener interfaces, which J/Link does not provide) are strongly encouraged to make their classes subclasses of MathListener to inherit all this functionality. You should examine the source code for MathListener, and also one of the concrete classes derived from it (MathActionListener is probably the simplest one) to see how it is written. You can use this as a starting point for your own implementation.
There is a new feature of J/Link 2.0 that should be pointed out in this context. This is the ImplementJavaInterface Wolfram Language function, which lets you implement any Java interface entirely in Wolfram Language code. ImplementJavaInterface is described in more detail in "Implementing a Java Interface with Mathematica Code", but a common use for it would be to create event-handler classes that implement a "Listener"-type interface for which J/Link does not have a built-in MathListener. This is discussed in more detail in "Implementing a Java Interface with Mathematica Code", and if you choose this technique, then you do not have to worry about any of the issues in this section because they are handled for you.
If you are going to write a Java class, and you choose not to derive your class from MathListener, there are two very important rules that must be adhered to when writing event-handler code that calls into the Wolfram Language. To be more precise, these rules apply whenever you are writing code that needs to call into the Wolfram Language at a point when the Wolfram Language is not currently calling into Java. That may sound confusing, but it is really very simple. "Requesting Evaluations by Mathematica" showed how to request evaluations by the Wolfram Language from within a Java method. In this case, the Wolfram Language has called your Java method, and while the Wolfram Language is waiting for the result, your code calls back to perform some computation. This works fine as described in that earlier section, because at the point the code calls back into the Wolfram Language, the Wolfram Language is in the middle of a call to Java. This is a true "callback"—the Wolfram Language has called Java, and during the handling of this call, Java calls back to the Wolfram Language. In contrast, consider the case where some Java code executes in response to a button click. When the button click event fires, the Wolfram Language is probably not in the middle of a call to Java.
Special considerations are necessary in the latter case because there are two threads in the Java runtime that are using MathLink. The first one is created and used by the internals of J/Link to handle standard calls into Java originating in the Wolfram Language, as described throughout this tutorial. The second one is the Java user interface thread (sometimes called the AWT thread), which is the one on which your event handler code will be called. You need to make sure that your use of the link back to the kernel on the user interface thread does not interfere with J/Link's internal thread.
The following code shows an idealized version of the actionPerformed() method in the MathActionListener class. The actual code in MathActionListener is different, because this work is farmed out to the parent class, MathListener, but this example shows the correct flow of operations. This is the code that is executed when the associated object's action occurs (like a button click).
public void actionPerformed(ActionEvent e) {
KernelLink ml = StdLink.getLink();
StdLink.requestTransaction();
synchronized (ml) {
try {
// Send the code to perform the user's requested operation.
ml.putFunction("EvaluatePacket", 1);
... code to put rest of expression to evaluate goes here ...
ml.endPacket();
ml.discardAnswer();
} catch (MathLinkException exc) {
...
}
}
}
The first rule to note in this code is that the complete transaction with the Wolfram Language, which includes sending the code to evaluate and completely reading the result, is wrapped in a synchronized(ml) block. This is how you ensure that the user interface thread has exclusive access to the link for the entire transaction. The second rule is that the synchronized(ml) statement must be preceded by a call to StdLink.requestTransaction(). This call will block until the kernel is at a point where it is ready to accommodate evaluations originating in Java. The call must occur before the synchronized(ml) block begins, and once you call it you must make sure that you send something to the Wolfram Language. In other words, when requestTransaction() returns, the kernel will be blocking in an attempt to read from the Java link. The kernel will be stuck in this state until you send it something, so you must protect against a Java exception being thrown after you call requestTransaction() but before you send anything. Typically you will do this simply by calling requestTransaction() immediately before the synchronized(ml) block begins and you start sending something.
It was just said that StdLink.requestTransaction() will block until the kernel is ready to accept evaluations originating in Java. To be specific, it will block until one of the following conditions occurs:
- Java is not being used from the Wolfram Language (InstallJava has not been called)
These conditions should make sense given the discussion about creating user interface elements in the section "Creating Windows and Other User Interface Elements". DoModal,
, and
are the three ways in which you direct the kernel's attention to the Java link so that it can detect incoming request for computations.
If you make the common mistake of inadvertently triggering a call to the Wolfram Language from Java before you have called DoModal or
, the Java user interface thread will hang. This can be easily remedied by calling DoModal,
, or
afterward (
may need to be called more than once, if more than one event callback is queued up).
If the rule about when it is necessary to use StdLink.requestTransaction() and synchronized(ml) is confusing, you will be happy to learn that it is fine to use these constructs in any code that calls the Wolfram Language. In code that does not need them, they are pointless, but harmless, and will not cause the calling thread to block. If you are writing a Java method that needs to call the Wolfram Language and there is any chance that it might be called from the user interface thread, add the StdLink.requestTransaction() and synchronized(ml).
Debugging Your Java Classes
You can use your favorite debugger to debug Java code that is called from the Wolfram Language. The only issue is that you typically have to launch a Java program inside the debugger to do this. The Java program that you need to launch is the one that is normally launched for you when you call InstallJava. The class that contains J/Link's main() method is com.wolfram.jlink.Install. Thus, the command line to start J/Link that is executed internally by InstallJava is typically as follows.
There may be additions or modifications to this depending on the options to InstallJava, and also some extra MathLink-specific arguments are tacked on at the end. To use a debugger, you just have to launch Java with the appropriate command-line arguments that allow you to establish the link to the Wolfram Language manually.
If you use a development environment that has an integrated debugger, then the debugger probably has a setting for the main class to use (the class whose main() method will be invoked) and a setting for command-line arguments. For example, in WebGain Visual Café, you can set these values in the Project panel of the Project/Options dialog. Set the main class to be com.wolfram.jlink.Install, and the arguments to be something like the following.
(On Windows:)
-linkmode listen -linkname foo
(On Unix/Linux:)
-linkmode listen -linkprotocol tcp -linkname 1234
Then start the debugging session. You should see the J/Link copyright notice printed and then Java will wait for the Wolfram Language to connect. To do this, go to your Wolfram System session, make sure the JLink.m package has been read in, and execute the command.
This works because ReinstallJava can take a LinkObject as its argument, in which case it will not try to launch Java itself. This allows you to manually establish the MathLink connection between Java and the Wolfram Language, then feed that link to ReinstallJava and let it do the rest of the work of preparing the Wolfram Language and Java sides for interacting with each other.
If you like to use a command-line debugger like jdb, you can do the following.
C:\>jdb
Initializing jdb...
> run com.wolfram.jlink.Install -linkmode listen -linkname foo
running ...
main[1] J/Link (tm)
Version 1.1
Current thread "main" died. Execution continuing...
>
The message about the main thread dying is normal. Now jdb is ready for commands. First, though, you have to execute in your Wolfram System session the LinkConnect and ReinstallJava lines shown earlier. This example was for Windows, so Unix users will have to adjust the run line to reflect the proper arguments.
Deploying Applications That Use J/Link
This section discusses some issues relevant to developers who are creating add-ons for the Wolfram Language that use J/Link.
J/Link uses its own custom class loader that allows it to find classes in a set of locations beyond the startup class path. As described in "Dynamically Modifying the Class Path", users can grow this set of extra locations to search for classes by calling the AddToClassPath function. One of the motivations for having a custom class loader was to make it easy for application developers to distribute applications that have parts of their implementation in Java. If you structure your application directory properly, your users will be able to install it simply by copying it into any standard location for Wolfram Language applications. J/Link will be able to find your Java classes immediately, without users having to perform any classpath-related operations or even restart Java.
If your Wolfram Language application uses J/Link and includes its own Java components, you should create a Java subdirectory in your application directory. You can place any jar files that your application needs into this Java subdirectory. If you have loose class files (not bundled into a jar file), they should go into an appropriately nested subdirectory of the Java directory. "Appropriately nested" means that if your class is in the Java package com.somecompany.math, then its class file goes into the com/somecompany/math subdirectory of the Java directory. If the class is not in any package, it can go directly into the Java directory. J/Link can also find native libraries and resources your application needs. Native libraries must be in a subdirectory of your Java/Libraries directory that is named after the $SystemID of the platform on which it is installed. Here is an example directory structure for an application that uses J/Link.
MyApp/
... other files and directories used by the application ...
Java/
MyAppClasses.jar
MyImage.gif
Libraries/
Windows/
MyNativeLibrary.dll
PowerMac/
MyNativeLibrary
Darwin/
libMyNativeLibrary.jnilib
Linux/
libMyNativeLibrary.so
... and so on for other Unix platforms
Your application directory must be placed into one of the standard locations for Wolfram Language applications. These locations are listed as follows. In this notation, $InstallationDirectory/AddOns/Applications means "The AddOns/Applications subdirectory of the directory whose value is given by the Wolfram Language variable $InstallationDirectory".
/Applications (Mathematica 4.2 and later only)
/Applications (Mathematica 4.2 and later only)
$InstallationDirectory/AddOns/Applications
$InstallationDirectory/AddOns/ExtraPackages
Coding Tips
Here are a few tips on producing high-quality applications. These suggestions are guided by mistakes that developers frequently make.
Call InstallJava in the body of a function or functions, not when your package is read in. It is best to avoid side effects during the reading of a package. Users expect reading in a package to be fast and to do nothing but load definitions. If you launch Java at this time, and it fails, it could cause a mysterious hang in the loading process. It is better to call InstallJava in the code of one or more of your functions. You probably do not need to call InstallJava in every single function that uses Java. Most applications have a few "major" functions that users are likely to use almost exclusively, or at least at the start of their session. If your application does not have this property, then provide an initialization function that your users must call first, and call InstallJava inside it.
Call InstallJava with no arguments. You cannot know what options your users need for Java on their systems, so do not override what they may have set up. It is the user's responsibility to make sure that they call SetOptions to customize the options for InstallJava as necessary. Typically this would be done in their init.m file.
Make sure you use JavaBlock and/or ReleaseJavaObject to avoid leaking object references. You cannot know how others will use your code, so you need to be careful to avoid cluttering up their sessions with a potentially large number of useless objects. Sometimes you need to create an object that persists beyond the lifetime of a single Wolfram Language function, like a viewer window. In such cases, use a MathFrame or MathJFrame as your top-level window and use its onClose() method to specify Wolfram Language code that releases all outstanding objects and unregisters kernel or front end sharing you may have used. If this is not possible, provide a cleanup function that users can call manually. Use LoadedJavaObjects to look at the list of objects referenced in the Wolfram Language before and after your functions run; it should not grow in length.
If you use ShareKernel or ShareFrontEnd, make sure you save the return values from these functions and pass them as arguments to UnshareKernel and UnshareFrontEnd. Do not call UnshareFrontEnd or UnshareKernel with no arguments, as this will shut down sharing even if other applications are using it.
Do not assume that the Java runtime will not be restarted during the lifetime of your application. Although users are strongly discouraged from calling UninstallJava or ReinstallJava, it happens. It is unavoidable that some applications will fail if the Java runtime is shut down at an inopportune time (e.g. when they have a Java window displayed), but there are steps you can take to increase the robustness of your application in the face of Java shutdowns and restarts. One step was already given as the first tip listed—call InstallJava at the start of your "major" functions. Another step is to avoid caching JavaClass or JavaObject expressions unnecessarily, as these will become invalid if Java restarts. An example of this is calling InstallJava and then LoadJavaClass and JavaNew several times when your package file is read in, and storing the results in private variables for the lifetime of your package. This is problematic if Java is restarted. Never store JavaClass expressions—call LoadJavaClass whenever there is any doubt about whether a class has been loaded into the current Java runtime. Calling LoadJavaClass is very inexpensive if the class has already been loaded. If you have a JavaObject that is very expensive to create and therefore you feel it necessary to cache it over a long period of time in a user's session, consider using the following idiom to test whether it is still valid whenever it is used. The JavaObjectQ test will fail if Java has been shut down or restarted since the object was last created, so you can then restart Java and create and store a new instance of the object.
Do not call UninstallJava or ReinstallJava in your application. You need to coexist politely with other applications that may be using Java. Do not assume that when your package is done with Java, the user is done with it as well. Only users should ever call UninstallJava, and they should probably never call it either. There is no cost to leaving Java running. Likewise, users will rarely call ReinstallJava unless they are doing active Java development and need to reload modified versions of their classes.
Example Programs
Introduction
This section will work through some example programs. These examples are intended to demonstrate a wide variety of techniques and subtleties. Discussions include some nuances in the implementations and touch on most of the major issues in J/Link programming.
This will take a relatively rigorous approach, and in particular it will be careful to avoid leaking references. As discussed in the section "JavaBlock", JavaBlock and ReleaseJavaObject are the tools in this fight, but if you find yourself becoming the least bit confused about the subject, just ignore it completely. For many casual, personal uses of J/Link, you can forget about memory management issues, and just let Java objects pile up.
J/Link includes a number of notebooks with sample programs, including most of the programs developed in this section. These notebooks can be found in the <Mathematica dir>/SystemFiles/Links/JLink/Examples/Part1 directory.
A Beep Function
Here is a very simple example that generates a system alert just like the Wolfram Language Beep function.
You may notice a short delay the first time JavaBeep[] is executed. This is due to the LoadJavaClass call, which only takes measurable time the first time it is called for any given class.
This is a perfectly good beep function, and many users will not need to go beyond this. If you are writing code for others to use, however, you will probably want to embellish this code a little bit. Here is a more professional version of the same function.
Note that the first thing you do is call InstallJava. It is a good habit to call InstallJava in functions that use J/Link, at least if you are writing code for others to use. If InstallJava has already been called, subsequent calls will do nothing and return very quickly. The whole program is wrapped in JavaBlock. As discussed in the section "JavaBlock", JavaBlock automates the process of releasing references to objects returned to the Wolfram Language. The getDefaultToolkit() method returns a Toolkit object, so you want to release the JavaObject that gets created in the Wolfram Language. The getDefaultToolkit() method returns a reference to the same Toolkit object every time it is called, so even if you do not call JavaBlock, you will only "leak" one object in an entire session. You could also write Beep using an explicit call to ReleaseJavaObject.
The advantage to using JavaBlock is that you do not have to think about what, if any, methods might return objects, and you do not have to assign them to variables.
Formatting Dates
Here is an example of a computation performed in Java. Java provides a number of powerful date- and calendar-oriented classes. Say you want to create a nicely formatted string showing the time and date. In this first step you create a new Java Date object representing the current date and time.
Next you load the DateFormat class and create a formatter capable of formatting dates.
Now you call the format() method, passing the Date object as its argument.
There are many different ways in which dates and times can be formatted, including respecting a user's locale. Java also has a useful number-formatting class, an example of which was given in "An Optimization Example".
A Progress Bar
A simple example of a popup user interface for a Wolfram Language program is a progress bar. This is an example of a "non-interactive" user interface, as defined in "Interactive and Non-Interactive Interfaces", because it does not need to call back to the Wolfram Language or return a result to the Wolfram Language. the Wolfram Language Wolfram Language programs are typically written, and J/Link lets you do the same with Java objects and methods.
You the section "JavaBlock".
You also need a function to close the progress dialog and clean up after it. Only two things need to be done. First, the dispose() method must be called on the top-level frame window that contains the bar. Second, if you want to avoid leaking object references, you need to call ReleaseJavaObject on the bar object because it is the only object reference that escaped the JavaBlock in ShowProgressBar. You need to call dispose() on the JFrame object you created in ShowProgressBar, but you would still need to release the bar object. DestroyProgressBar (and the bar's setValue() method) is safe to call whether or not the user closed the dialog.
Here is how you would use the progress bar in a computation. The call to ShowProgressBar displays the bar dialog and returns a reference to the bar object. Then, while the computation is running, you periodically call the setValue() method to update the bar's appearance. When the computation is done, you call DestroyProgressBar.
An easy way to test whether your code leaks object references is to call LoadedJava, you can play with the look-and-feel options that Swing provides. Specifically, you need to call static methods.
The default look and feel is the "metal" theme. You can change it to the native style look for your platform as follows (it helps to be able to see the window when doing this).
A Simple Modal Input Dialog
You saw one example of a simple modal dialog in "Modal Windows". Presented here is another one—a basic dialog that prompts the user to enter an angle, with a choice of whether it is being specified in degrees or radians. This will demonstrate a dialog that returns a value to a running Wolfram Language program when it is dismissed, much like the Wolfram Language's built-in Input function, which requests a string from the user before returning. Dialogs like this one are not "modal" in the traditional sense that they must be closed before other Java windows can be used, but rather they are modal with respect to the kernel, which is kept busy until they are dismissed (that is, until DoModal[] returns). The section "Creating Windows and Other User Interface Elements" discusses modal and modeless Java windows in detail.
The code is rather straightforward and warrants little in the way of commentary. In creating the window and the controls within it, it exactly mirrors the Java code you would use if you were writing the program in Java. One technique it demonstrates is determining whether the OK or Cancel button was clicked to dismiss the dialog. This is done by having the MathActionListener objects assigned to the two buttons return different things in addition to calling EndModal[]. Recall that DoModal[] returns whatever the code that calls EndModal[] returns, so here you have the OK button execute (EndModal[]; True)&, a pure function that ignores its arguments, calls EndModal[], and returns True, whereas the Cancel button executes (EndModal[]; False)&. Thus, DoModal[] returns True if the OK button was clicked, or False if the Cancel button was clicked. It will return Null if the window's close box was clicked (this behavior comes from the MathFrame itself).
It may take several seconds to display the dialog the first time GetAngle[] is called. This is due to the one-time cost of loading the several large AWT classes required. Subsequent invocations of GetAngle[] will be much quicker.
The complete code for this example is also provided in the file ModalInputDialog.nb in the JLink/Examples/Part1 directory.
A File Chooser Dialog Box
A useful feature for Wolfram Language programs is to be able to produce a file chooser dialog, such as the typical Open or Save dialog boxes. You could use such a dialog box to prompt a user for an input file or a file into which to write data. This is easily accomplished in a cross-platform way with Java, specifically with the JFileChooser class in the standard Swing library. The code for such a dialog box is provided in the file FileChooserDialog.nb in the JLink/Examples/Part1 directory.
Mathematica 4.0 introduced a new "experimental" function called FileBrowse[] that displays a file browser in the front end. Although this function is usable, it has several shortcomings compared to the Java technique presented next. One of the limitations is that it requires that the front end be in use. Another is that it is not customizable, so you always get a Save file as: dialog box and.
Although this example is a short program, the code has some unfortunate complexity (meaning "ugliness") in it related to making this special type of dialog window come to the foreground on all platforms. For this reason, the code is not presented here. Instead, some topics in the program code will be mentioned; you can read the full code and its associated comments in the example file if you are interested in the implementation details. directory in which to start. You can also supply no arguments and get a default Open dialog box that starts in the kernel's current directory.
Although this is a "modal" dialog box, there is no need to use DoModal, because the showDialog() method will not return until the user dismisses the dialog box. Recall that DoModal is a way to force the Wolfram Language to stall until the dialog box or other window is dismissed. Here, you get this behavior for free from showDialog(). The other thing that DoModal does is put the kernel into a loop where it is ready to receive input from Java, so you can script some of the functionality of the dialog via callbacks to the Wolfram Language. The file chooser dialog box does not need to use the Wolfram Language in any way until it returns the selected file, so you have no need for this other aspect that DoModal provides.
A second point of interest is in the name of the constant that showDialog() returns to indicate that the user clicked the Save or Open button instead of the Cancel button. The name of this constant in Java is JFileChooser.APPROVE_OPTION. Java names map to Wolfram Language symbols, so they must be translated if they contain characters that are not legal in the Wolfram Language symbols, such as the underscore. Underscores are converted to a "U" when they appear in symbols, so the Wolfram Language name of this constant is JFileChooser`APPROVEUOPTION. See "Underscores in Java Names" for more information.
Sharing the Front End: Palette-Type Buttons
As discussed in the section "Creating Windows and Other User Interface Elements", one of the goals of J/Link is to allow Java user interface elements to be as close as possible to first-class members of the notebook front end environment in the way notebook and palette windows are. One of the ways this is accomplished is with the
function, which allows Java windows to share the kernel's attention with notebook windows. Such Java windows are referred to as "modeless", not in the traditional sense of allowing other Java windows to remain active, but modeless with respect to the kernel, meaning that the kernel is not kept busy while they are open. cause the current selection to be replaced by something else and the resulting expression to be evaluated in place.
The
function lets actions in Java modeless windows trigger events in a notebook window just like can be done from palette buttons or Wolfram Language code you evaluate manually in a notebook. Remember that you automatically get the ability to interact with the front end when you use a modal dialog (i.e. when DoModal is running). When Java is being run in a modal way, the kernel's $ParentLink always points at the front end, so all side effect outputs get sent to the front end automatically. A modal window would not be acceptable for the palette example here because the palette needs to be an unobtrusive enhancement to the Wolfram Language environment—it cannot lock up the kernel while it is alive.
allows Java windows to call the Wolfram Language without tying up the kernel, and
is an extension to
(it calls
internally) that allows such "modeless" Java windows to interact with the front end.
is discussed in more detail in "Sharing the Front End".
In the
example that follows, a simple palette-type button is developed in Java that prints its label at the current cursor position in the active notebook. Because of current limitations with
, this example will not work with a remote kernel; the same machine must be running the kernel and the front end.
Now invoke the
function to create and display the palette. Click the button to see the button's label (foo in this example) inserted at the current cursor location. When you are done, click the window's close box.
The code is mostly straightforward. As usual, you use the MathFrame class for the frame window because it closes and disposes of itself when its close box is clicked. You create a MathActionListener that calls buttonFunc and you assign it to the button. From the table in the section "Handling Events with Mathematica Code: The "MathListener" Classes", you know that buttonFunc will be called with two arguments, the first of which is the ActionEvent object. From this object you can obtain the button that was clicked and then its label, which you insert at the current cursor location using the standard NotebookApply function. One subtlety is that you need to specify SelectedNotebook[] as the target for notebook operations like NotebookApply, NotebookWrite, NotebookPrint, and so on, which take a notebook as an argument. Because of implementation details of
, the notebook given by EvaluationNotebook[] is not the correct target (after all, there is no evaluation currently in progress in the front end when the button is clicked).
The important thing to note in
is the use of
and
. As discussed earlier,
puts Java into a state where it forwards everything other than the result of a computation to the front end, and puts the front end into a state where it is able to receive it. This is why the Print output triggered by clicking the Java button, which would normally be sent to Java (and just discarded there), appears in the front end. Front end sharing (and also kernel sharing) should be turned off when they are no longer needed, but if you are writing code for others to use you cannot just blindly shut sharing down—the user could have other Java windows open that need sharing. To handle this issue,
(and
) works on a register/unregister principle. Every time you call
, it returns a token that represents a request for front end sharing. If front end sharing is not on, it will be turned on. When a program no longer needs front end sharing, it should call
, passing the token from
as the argument. Only when all requests for sharing have been unregistered in this way will sharing actually be turned off.
The onClose() method of the MathFrame class lets you specify Wolfram Language code to be executed when the frame is closed. This code is executed after all event listeners have been notified, so it is a safe place to turn off sharing. In the onClose() code, you call
with the token returned by
. Using the onClose() method in this way allows us to avoid writing a cleanup function that users would have to call manually after they were finished with the palette. It is not a problem to leave front end sharing turned on, but it is desirable to have your program alter the user's session as little as possible.
Now expand this example to include more buttons that perform different operations. The complete code for this example is provided in the file Palette.nb in the JLink/Examples/Part1 directory.
The first thing you do is separate the code that manages the frame containing the buttons from the code that produces a button. In this way you will have a reusable palette frame that can hold any number of different buttons. The
function here takes a list of buttons, arranges them vertically in a frame window, calls
, and displays the frame in front of the user's notebook window.
Note that you do not return anything from the
function—specifically, you do not return the frame object itself. This is because you do not need to refer to the frame ever again. It is destroyed automatically when its close box is clicked (remember, this is a feature of the MathFrame class). Because you do not need to keep references to any of the Java objects you create, the entire body of
can be wrapped in JavaBlock.
Now create a reusable will use the
function to create four buttons. The first is just the print button just defined, the behavior of which is specified by printButtonFunc.
The second as a result of having front end sharing turned on via
. if you put the insertion point into a StandardForm cell and try them.
Now you are finally ready to create the palette and show it.
In closing, it must be noted that although this example has demonstrated some useful techniques, it is not a particularly valuable way to use
. In creating a simple palette of buttons, you have done nothing that the front end cannot do all by itself. The real uses you will find for
will presumably involve aspects that cannot be duplicated within the front end, such as more sophisticated dialog boxes or other user interface elements.
Real-Time Algebra: A Mini-Application
This example will put together everything you have learned about modal and modeless Java user interfaces. You will implement the same "mini-application" (essentially just a dialog box) in both modal and modeless flavors. The application is inspired by the classic MathLink example program RealTimeAlgebra, originally written for the NeXT computer by Theodore Gray and then done in HyperCard by Doug Stein and John Bonadies. The original RealTimeAlgebra provides an input window into which the user types an expression that depends on certain parameters, an output window that displays the result of the computation, and some sliders that are used to vary the values of the parameters. The output window updates as the sliders are moved, hence the name RealTimeAlgebra. Our implementation of RealTimeAlgebra will be very simplistic, with only a single slider to modify the value of one parameter.
The complete code for this example is provided in the file RealTimeAlgebra.nb in the JLink/Examples/Part1 directory.
Here is the function that creates and displays the window.
The sliderFunc function is called by the MathAdjustmentListener whenever the slider's position changes. It gets the text in the inputText box, evaluates it in an environment where a has the value of the slider position (the range for this is 0…20, as established in the JavaNew call that creates the slider), and puts the resulting string into the outText box. It then calls ReleaseJavaObject to release the first argument, which is the AdjustmentEvent object itself. This is the only object passed in as an argument (the other two arguments are integers). If you are wondering how you determine the argument sequence for sliderFunc, you get it from the MathListener table in the section "Handling Events with Mathematica Code: The "MathListener" Classes". Note that you need to refer by name to the input and output text boxes in sliderFunc, so you cannot make their names local variables in the Module of CreateWindow, and of course they cannot be created inside that function's JavaBlock.
There is one interesting thing in the code that deserves a remark. Look at the lines where you add the three components to the frame. What is the ReturnAsJavaObject doing there? The method being called here is in the Frame class, and has the following signature.
The second argument, constraints, is typed only as Object. The value you pass in depends on the layout manager in use, but typically it is a string, as is the case here (BorderLayout`NORTH, for example, is just the string "NORTH"). The problem is that J/Link creates a definition for this signature of add that expects a JavaObject for the second argument, and Wolfram Language strings do not satisfy JavaObjectQ, although they are converted to Java string objects when sent. This means that you can only pass strings to methods that expect an argument of type String. In the rare cases where a Java method is typed to take an Object and you want to pass a string from the Wolfram Language, you must first create a Java String object with the value you want, and pass that object instead of the raw Wolfram Language string. You have encountered this issue several times before, and you have used MakeJavaObject as the trick to get the raw string turned into a reference to a Java String object. MakeJavaObject[BorderLayout`NORTH] would work fine here, but it is instructive to use a different technique (it also saves a call into Java). BorderLayout`NORTH calls into Java to get the value of the BorderLayout.NORTH static field, but in the process of returning this string object to the Wolfram Language, it gets converted to a raw Wolfram Language string. You need the object reference, not the raw string, so you wrap the access in ReturnAsJavaObject, which causes the string, which is normally returned by value, to be returned in the form of a reference.
Getting back to the RealTimeAlgebra dialog box, you are now ready to run it as a modal window. You write a special modal version that uses CreateWindow internally.
Note that the whole function is wrapped in JavaBlock. This is an easy way to make sure that all object references created in the Wolfram Language while the dialog is running are treated as temporary and released when DoModal finishes. This saves you having to properly use JavaBlock and ReleaseJavaObject in all the handler functions used for your MathListener objects (you will notice that these calls are absent from the sliderFunc function).
Now run the dialog. The RealTimeAlgebraModal function will not return until you close the RealTimeAlgebra window, which is what you mean when you call this a "modal" interface.
It may take several seconds before the window appears the first time. As always, this is the one-time cost of loading all the necessary classes. Play around by dragging the slider, and try changing the text in the input box, for example, to N[Pi,2a].
Recall that while the Wolfram Language is evaluating DoModal[], any Print output, messages, graphics, or any other output or commands other than the result of computations triggered from Java will be sent to the front end. To see this in action, try putting Print[a] in the input text box (you will want to arrange windows on your screen so that you can see the notebook window while you are dragging the slider). Next, try Plot[Sin[a x],{x,0,4 Pi}].
Quit RealTimeAlgebra by clicking the window's close box. In addition to closing and disposing of the window, this causes EndModal[] to be executed in the Wolfram Language, which then causes DoModal to return. The disposing of the window is due to using the MathFrame class for the window, and executing EndModal[] is the result of calling the setModal() method of MathFrame, as discussed in "Modal Windows".
Now implement RealTimeAlgebra as a modeless window. The CreateWindow function can be used unmodified. The only difference is how you make the Wolfram Language able to service the computations triggered by dragging the slider. For a modal window, you use DoModal to force the Wolfram Language to pay attention exclusively to the Java link. The drawback to this is that you cannot use the kernel from the notebook front end until DoModal ends. To allow the notebook front end and Java to share the kernel's attention, you use
.
returns immediately after the window is displayed, leaving the front end and the RealTimeAlgebra window able to use the kernel for computations.
You still need a little bit of polish on the modeless version, however. First, to avoid leaking object references, you must change sliderFunc. With the modal version, you did not bother to use JavaBlock or ReleaseJavaObject in sliderFunc because you had DoModal wrapped in JavaBlock. Every call to sliderFunc, or any other MathListener handler function, occurs entirely within the scope of DoModal, so you can handle all object releasing at this level. With a modeless interface, you no longer have a single function call that spans the lifetime of the window. Thus, you put memory-management functions in our handler functions. Here is the new sliderFunc.
The JavaBlock here is unnecessary because the code it wraps creates no new object references. Out of habit, though, you wrap these handlers in JavaBlock. You need to explicitly call ReleaseJavaObject on evt, which is the AdjustmentEvent object, because its reference is created in the Wolfram Language before sliderFunc is entered, so it will not be released by the JavaBlock. The type and scrollPos arguments are integers, not objects.
Try setting the input text to Print[a]. Notice that nothing appears in the front end when you move the slider, in contrast to the modal case. With a modeless interface, the Java link is the kernel's $ParentLink during the times when the kernel is servicing a request initiated from the Java side. Thus, the output from Print and graphics goes to Java, not the notebook front end. (The Java side ignores this output, in case you are wondering.) To get this output sent to the front end instead, use
.
Now if you set the input text to, say, Print[a] or Plot[a x,{x,0,a}], you will see the text and graphics appearing in the front end.
When you are finished, quit RealTimeAlgebra by clicking its close box. Then turn off front end sharing that was turned on in the previous input.
A modal interface is simpler than a modeless one in terms of how it uses the Wolfram Language, and is therefore the preferred method unless you specifically need the modeless attribute.
and
are complex functions that put the kernel into an unusual state. They work fine, but do not use them unnecessarily.
GraphicsDlg: Graphics and Typeset Output in a Window
It is useful to be able to display Wolfram Language graphics and typeset expressions in your Java user interface, and this is easy to do using J/Link's MathCanvas class. This example demonstrates a simple dialog box that allows the user to type in a Wolfram Language expression and see the output in the form of a picture. If the expression is a plotting or other graphics function, the resulting image is displayed. If the expression is not a graphic, then it is typeset in TraditionalForm and displayed as a picture. The example is first presented in modal form and then in modeless form using
and
.
This example also demonstrates a trivial example of using Java code that was created by a drag-and-drop GUI builder of the type present in most Java development environments. For layout of simple windows, it is easy enough to do everything from the Wolfram Language. This method was chosen for all the examples in this tutorial, writing no Java code and instead scripting the creation and layout of controls in windows with Wolfram Language calls into Java. This has the advantage of not requiring any Java classes to be written and compiled. For more complex windows, however, you will probably find it much easier to create the controls, arrange them in position, set their properties in a GUI builder, and let it generate Java code for you. You might also want to write some additional Java code by hand.
If you choose this route, the question becomes, "How do I connect the Java code thus generated with the Wolfram Language?" Any public fields or methods can be called directly from the Wolfram Language, but your GUI builder may not have made public all the ones you need to use. You could make these fields and methods public or add some new public methods that expose them. The latter approach is probably preferable since it does not involve modifying the code that the GUI builder wrote, which could confuse the builder or cause it to overwrite your changes in future modifications.
The complete code for this example is provided in the JLink/Examples/Part1/GraphicsDlg directory. Some of the code is in Java.
This example uses the GUI builder in the WebGain Visual Café Java development environment. The builder was used to create a frame window with three controls. The frame window was made to be a subclass of MathFrame because you want to inherit the setModal() method. In the top left is an AWT TextArea that serves as the input box for the expression. To its right is an Evaluate button. Occupying the rest of the window is a MathCanvas.
Up to this point, no code has been written by hand at all—everything has been done automatically as components were dropped into the frame and their properties set. All that is left to do is to wire up the button so that when it is clicked the input text is taken and supplied as to the MathCanvas via its setMathCommand() method. You could write that code in Java, using Visual Café's Interaction Wizard to wire up this event (similar facilities exist in other Java GUI builders). You would have to write some Java code by hand, as the code's logic is more complex than can be handled by graphical tools for creating event handlers.
Rather than doing that, move to the Wolfram Language to script the rest of the behavior because it is easier and more flexible. You will need to access the TextArea, Button, and MathCanvas objects from the Wolfram Language, but the GUI builder made these nonpublic fields of the frame class. Thus, you need to add three public methods that return these objects to the frame class.
public Button getEvalButton() {return evalButton;}
public TextArea getInputTextArea() {return inputTextArea;}
public MathCanvas getMathCanvas() {return mathCanvas;}
That is all you need to do to the Java code created by the GUI builder.
The GUI builder created a subclass of MathFrame that is named GraphicsDlg. It also gave it a main() method that does nothing but create an instance of the frame and make it visible. You will not bother with the main() method, choosing instead to do those two steps manually, since you need a reference to the frame.
Needed before the code is run is a demonstration of one more feature of J/Link—the ability to add directories to the class search path dynamically. You need to load the Java classes for this example, but they are not on the Java class path. With J/Link, you can add the directory in which the classes reside to the search path by calling AddToClassPath. This will work exactly as written in Mathematica 4.2 and later. You will need to modify the path if you have an earlier version of the Wolfram Language.
Here is the first implementation of the Wolfram Language code to create and run the graphics dialog. This runs the dialog in a modal loop.
As mentioned in the section "Creating Windows and Other User Interface Elements" only the notebook front end can perform the feat of taking a typeset (i.e. "box") expression and creating a graphical representation of it. Thus, the MathCanvas can render typeset expressions provided that it has a front end available to farm out the chore of creating the appropriate representation. The front end is used to run this example, but it is really because you are running the Java dialog "modally" that everything works the way it does. All the while the dialog is up, the front end is waiting for a result from a computation (DoModal[]), and therefore it is receptive to requests from the kernel for various services. As far as the front end is concerned, the code for DoModal invoked the request for typesetting, even though it was actually triggered by clicking a Java button.
What if you are not happy with the restriction of running the dialog modally? Now you want to have the dialog remain open and active while not interfering with normal use of the kernel from the front end. As discussed in "Modal Windows" and "Real-Time Algebra: A Mini-Application", you get a lot of useful behavior regarding the front end for free when you run your Java user interface modally. One of these features is that the front end is kept receptive to the various sorts of requests the kernel can send to it (such as for typesetting services). You know you can run a Java user interface in a "modeless" way by using
, but then you give up the ability to have the kernel use the front end during computations initiated by actions in Java. Luckily, the
function exists to restore these features for modeless windows.
Re-implement the graphics dialog in modeless form.
The code shown is exactly the same as DoGraphicsDialogModal except for the last few lines. You call
here instead of setModal and DoModal. That is the only difference—the rest of the code (including btnFunc) is exactly the same. Notice also that you use the onClose() method of MathCanvas to execute code that unregisters the request for front end sharing when the window is closed.
Run the modeless version. Note how you can continue to perform computations in the front end while the window is active.
This new version functions exactly like the modeless version except that it does not leave the front end hanging in the middle of a computation. It is interesting to contrast what happens if you turn off front end sharing (but you need to leave kernel sharing on or the Java dialog will break completely). You can do this by replacing
and
in DoGraphicsDialogModeless with
and
. Now if you use the dialog you will find that it fails to render typeset expressions, producing just a blank window, but it still renders graphics normally (unless they have some typeset elements in them, such as a plot label). All the functionality is kept intact except for the ability of the kernel to make use of the front end for typesetting services.
BouncingBalls: Drawing in a Window
This example demonstrates drawing in Java windows using the Java graphics API directly from the Wolfram Language. It also demonstrates the use of the
function to periodically allow event handler callbacks into the Wolfram Language from Java. The issues surrounding
and how it compares to DoModal and
are discussed in greater detail in "'Manual' Interfaces: The ServiceJava Function".
The full code is a little too long to include here in its entirety, but it is available in the sample file BouncingBalls.nb in the JLink/Examples/Part1 directory. Here is an excerpt that demonstrates the use of
.
...
mwl = JavaNew["com.wolfram.jlink.MathWindowListener"];
mwl@setHandler["windowClosing", "(keepOn = False)&"];
mathCanvas@addWindowListener[mwl];
keepOn = True;
While[keepOn,
g@setColor[bkgndColor];
g@fillRect[0, 0, 300, 300];
drawBall[g, #]& /@ balls;
mathCanvas@setImage[offscreen];
balls = recomputePosition /@ balls;
ServiceJava[]
];
...
A MathWindowListener is used to set keepOn=False when the window is closed, which will cause the loop to terminate. While the window is up, mouse clicks will cause new balls to be created, appended to the balls list, and set in motion. This is done with a MathMouseListener (not shown in the code). Thus, the Wolfram Language needs to be able to handle calls originating from user actions in Java. As discussed in the section "Creating Windows and Other User Interface Elements", there are three ways to enable the Wolfram Language to do this: DoModal (modal interfaces),
or
(modeless interfaces), and
(manual interfaces). A modal loop via DoModal would not be appropriate here because the kernel needs to be computing something at the same time it is servicing calls from Java (it is computing the new positions of the balls and drawing them).
would not help because that is a way to give Java access to the kernel between computations triggered from the front end, not during such computations.
You need to periodically point the kernel's attention at Java to service requests if any are pending, then let the kernel get back to its other work. The function that does this is
, and the code above is typical in that it has a loop that calls
every time through. The calls from Java that
will handle are the ones from mouse clicks to create new balls and when the window is closed.
Spirograph
This example is just a little fun to create an interesting, nontrivial application—an implementation of a simple Spirograph-type drawing program. It is run as a modal window, and it demonstrates drawing into a Java window from the Wolfram Language, along with a number of MathListener objects for various event callbacks. It uses the Java Graphics2D API, so it will not run on systems that have only a Java 1.1.x runtime.
The code for this example can be found in the file Spirograph.nb in the JLink/Examples/Part1 directory.
One of the things you will notice is that on a reasonably fast machine, the speed is perfectly acceptable. There is nothing to suggest that the entire functionality of the application is scripted from the Wolfram Language. It is very responsive despite the fact that a large number of callbacks to the Wolfram Language are triggered. For example, the cursor is changed as you float the mouse over various regions of the window (it changes to a resize cursor in some places), so there is a constant flow of callbacks to the Wolfram Language as you move the mouse. This example demonstrates the feasibility of writing a sophisticated application entirely in the Wolfram Language.
This application was written in the Wolfram Language, but it could also have been written entirely in Java, or a combination of Java and the Wolfram Language. An advantage of doing it in the Wolfram Language is that you generally can be much more productive. Spirograph would have taken at least twice as long to write in Java. It is invaluable to be able to write and test the program a line at a time, and to debug and modify it while it is running. Even if you intend to eventually port the code to pure Java, it can be very useful to begin writing it in the Wolfram Language, just to take advantage of the scripting mode of development.
Modal programs like this are best developed using
, then made modal only when they are complete. Making it modeless while it is being developed is necessary to be able to build and debug it interactively, because while it is running you can continue to use the front end to modify the code, make new definitions, add debugging statements, and so on. Using
instead of
for modeless operation lets Wolfram System error and warning messages generated by event callbacks, and Print statement inserted for debugging, show up in the notebook window. Only when everything is working as desired do you add the DoModal[] call to turn it into a modal window.
A Piano Keyboard
With the inclusion of the Java Sound API in Java 1.3 and later, it becomes possible to write Java programs that do sophisticated things with sound, such as playing MIDI instruments. The Piano.nb example in the JLink/Examples/Part1 directory displays a keyboard and lets you play it by clicking the mouse. A popup menu at the top lists the available MIDI instruments. This example was created precisely because it is so far outside the limitations of traditional Wolfram Language programming. Using J/Link, you can actually write a short and completely portable program, entirely in the Wolfram Language, that displays a MIDI keyboard and lets you play it! With just a little more work, the code could be modified to record a sequence played and then return it to the Wolfram Language, where you could manipulate it by transposing, altering the tempo, and so on. | http://reference.wolfram.com/language/JLink/tutorial/CallingJavaFromMathematica.html | CC-MAIN-2017-13 | refinedweb | 19,114 | 60.55 |
*
A
practical applications of threads/synchronization
Tom Griffith
Ranch Hand
Joined: Aug 06, 2004
Posts: 272
posted
Aug 31, 2007 08:37:00
0
Hello. If anybody has a minute, I've been revisiting threads and synchronization. I've always been able to mess around and see how this stuff works in stand-alone programs, but i'm trying to place these concepts into the context of a true many clients-one server relationship. The one pratical example that seems to show the application of this is the single bank account and multiple concurrent atm transactions. I'm trying to get over defaulting to an entity bean and applying only synchronization and threads. From what I can gather, the following would have to be the case to synchronize the "shared oject", ie the bank account...
1. The methods in the shared object class that manipulates the pesistent account data has to be ~both~ synchronized and static...so all new object references will access the same method(s) that access the persistent data. is that right?
2. I am failing to see how threads work into concurrent multiple client-single server transactions...it's almost as if threads would be applied to a single instance (client) to keep multiple processes (as opposed to multiple clients as with synchronization) in order. Is that right? (i know it's not but i don't see how multiple client threads can all be scoped into a single shared class).
I would appreciate any input or whatever. I posted it here as opposed to the threads section because this might be more "text-book" than actual code/application. Thank you for reading this.
[ August 31, 2007: Message edited by: Tom Griffith ]
Jaime M. Tovar
Ranch Hand
Joined: Mar 28, 2005
Posts: 133
posted
Aug 31, 2007 14:33:00
0
�The methods in the shared object class that manipulates the pesistent account data has to be ~both~ synchronized and static...so all new object references will access the same method(s) that access the persistent data. is that right?�
If you make the method static in a separate object it will be a bottle-neck. Here is other option so you can work directly with the account object.
import java.math.BigDecimal; public class Account { private BigDecimal balance; public void addMoney(BigDecimal money) { synchronized (balance) { balance = balance.add(money); } } public void redrawMoney(BigDecimal money) { synchronized (balance) { balance = balance.subtract(money); } } }
She will remember your heart when men are fairy tales in books written by rabbits.<br /> As long as there is duct tape... there is also hope.
Nicholas Jordan
Ranch Hand
Joined: Sep 17, 2006
Posts: 1282
posted
Sep 01, 2007 20:50:00
0
[Tom Griffith:]not but i don't see how multiple client threads can all be scoped into a single shared class
This is the code in my main() thread, which ..... uh, now I get your question.
while((filesOpen * 3) < MAX_OPEN_FILE_HANDLES.intValue() && (loopIndex > 0)) { try { // threadJuggler is a hashtable to hold references to executing threads. Belvedere.WordCount dictionaryBuilder = new Belvedere.WordCount();// dictionaryBuilder.setDaemon(true); dictionaryBuilder.start();//randInt is Cay S. Horstmann's random int generator threadJuggler.put(new Integer(randInt.draw()),(Object) dictionaryBuilder ); } /** * The catches here should be moved to an outer scope, */ catch(java.lang.Exception e) { sessionLog.println("Exception: " + e.getMessage()); sessionLog.close(); java.lang.System.exit((int)-1); } }
A Thread of execution is not a class, duh. You could declare an instance of a class and then pass a copy of the reference to each Thread of execution. It is so difficult, until you get the subtlety of a machine sequence of instructions making it's way through your code. I believe the simple version of your answer is that each client instance/request/object gets run, but it only runs once. You do not, ahem, re-run, a class.
Once the data is recovered from the instance or the instance has done it's work, you can let the var go out of scope or set it to null if you need to dispose of the resources to which that reference is a handle - but I would be sure to think about what needs to be done and how to persist the data.
It is correct that both methods have to be synchronized, but static is not the answer to the design question you are posing. Dov Bulka has an extremely good book on what you are asking. Though it is written in c++, not
Java
, the issues and concepts translate with almost no pain or wonderment.
Controller class ---------------------------------------------- | | | | | | T1 T2 T3 T3 T5 T6 T1.start T2.startT3.startT3.startT5.start()T6.start()
You have to have a way to
test
for completion. Join() and wait() are not solutions to monitoring a thread pool where any one may complete at any time. CyclicBarrier is not a solution to this design paradim, though you will hear it mentioned. On a true multi-processor machine, this is a proven design and will collaspe run-times to O log(t-total)
Henry's book is especially good for the question you are stepping into.
[
it can get really nasty
]
[ September 01, 2007: Message edited by: Nicholas Jordan ]
"The differential equations that describe dynamic interactions of power generators are similar to that of the gravitational interplay among celestial bodies, which is chaotic in nature."
Stan James
(instanceof Sidekick)
Ranch Hand
Joined: Jan 29, 2003
Posts: 8791
posted
Sep 02, 2007 08:42:00
0
Jaime Tovar's example correctly synchronizes two threads that are trying to access one account, at least on one method at a time. You could keep multiple instances of the Account object in a map keyed by account number. Account has no static bits, but the map will probably be a static "global" variable. It might need further synchronization when you add and remove accounts.
If you need to do more complex operations that involve several method calls, say transfer funds from one account to another, things get trickier. For example, another thread could take all the money out of acct1 between the balance call and the debit call:
if ( acct1.balance() > transferAmt ) { acct1.debit( transferAmt ) acct2.credit( transferAmt ) }
Perhaps the best solution would be to put this code in Account:
acct1.transfer( transferAmt, acct2 )
If you have to compose several methods balance, credit & debig, you might want to synchronize on acct1 and acct2 for the duration which risks deadlocks.
Tom, does this seem like the right conversation so far? If so, we can tackle that deadlock bit next.
A good question is never answered. It is not a bolt to be tightened into place but a seed to be planted and to bear more seed toward the hope of greening the landscape of the idea. John Ciardi
Nicholas Jordan
Ranch Hand
Joined: Sep 17, 2006
Posts: 1282
posted
Sep 04, 2007 19:02:00
0
Well if we make the Map static and key on the account number, we could then sync() on the account number, each entry in the map being an instance, Thus any number of threads could be working on the Map, but each account would be ( nomenclature for intact / integeral in oo ), and thus would scale well except for add/remove being disallowed under concurrency. That seems to be forefront to resoving deadlock detection and prevention ~ except for smaller textbook samples. Collection Iterators throw on concurrent modification == add/remove.
If the number of accounts needed is well known at program invocation, we could construct the map with a reasonable number of unused entires, but before taking this design approach we will need more design info from OP.
Tom Griffith
Ranch Hand
Joined: Aug 06, 2004
Posts: 272
posted
Sep 05, 2007 13:39:00
0
Hello. Thank you for the info everybody. I still kinda don't get why the synchronized methods aren't static, since they control persistent data and must belong to the class as opposed to instances. Wait a minute, the multiple customers don't create instances, do they?...they create a thread, ie call start(), on the ~imported~ class object, not an instance...right?
If that is the case, I can see why they don't have to be static...
Jim Yingst
Wanderer
Sheriff
Joined: Jan 30, 2000
Posts: 18671
posted
Sep 05, 2007 15:04:00
0
[Tom]: I still kinda don't get why the synchronized methods aren't static, since they control persistent data and must belong to the class as opposed to instances.
Why must they belong to the class? I don't see that. I can write instance methods that submit SQL queries or updates to a database, for example. Databases are normally designed to allow multiple users at once, and there's no reason why two different users can't update different records in teh same table at the same time, for example. If you can do it with two users, why not two threads, each reading/writing data for a different instance of the Account class (using Jaime's example)?
Note - there is a potential problem with Jaime's code. Synchronizing on balance is unreliable, as that's a mutable reference to immutable class instances. I don't think there's actually a problem with the two methods that are there so far, but for more complex methods there could be trouble, especially if additional mutable fields are added. I think it's more reliable to synchronize on this, the current instance of the Account class. Or on some private final lock object, e.g.
private BigDecimal balance; private final Object lock = new Object(); public void addMoney(BigDecimal money) { synchronized (lock) { balance = balance.add(money); } } public void redrawMoney(BigDecimal money) { synchronized (lock) { balance = balance.subtract(money); } } }
Or instead of using an Object for the lock, you can use a Lock from JDK 5 or later. That's a longer topic though. There can be some advantages to syncing only on a privately-held instance, to prevent other threads from creating deadlock by syncing on something they shouldn't (namely, the same instance your thread is syncing on). That doesn't really come up very often, but when it does, it can be a bear.
[Tom]: Wait a minute, the multiple customers don't create instances, do they?...
They could, sure. Instances of what, I'm not certain at the moment, but whatever you mean, the answer is probably yes. If we're talking Accounts, some customers would be creating new Accounts, while others are loading or modifying existing Accounts.
[Tom]: they create a thread, ie call start(),
In many cases they might be using a thread pool, in which case an already existing thread will come to help them. But for simplicity we can imagine each customer starts a new thread, OK.
[Tom]: on the ~imported~ class object, not an instance...right?
I don't know what this means. What does imported mean here? Not the Java keyword import, right? What is a class object, if not an instance? Unless you mean an instance of the class Class, but I don't think so (and you should probably just ignore this sentence if it didn't immediately make sense, as it's probably not relevant).
If you're calling start() as in t.start(), then most likely the thing that you're calling it
on
(the thing referenced by the variable t) is an instance of the class Thread. You can call that an instance of class Thread, or an object of class Thread, either way. I tend to use 'instance' whenever I can, because 'object' seems to have additional meanings to different people that cause confusion. As is the case now, it seems.
"I'm not back." - Bill Harding,
Twister
Tom Griffith
Ranch Hand
Joined: Aug 06, 2004
Posts: 272
posted
Sep 06, 2007 07:39:00
0
It kinda goes back to my original dilemma...if each customer (ATM) is calling a new instance of, say, the "transaction" class (it does
jdbc
or whatever with the database), where would the threads actually "meet" in order to implement sychronization? It seems to me the that individual threads would just live in each individual instance of "transaction"...
[ September 06, 2007: Message edited by: Tom Griffith ]
Stan James
(instanceof Sidekick)
Ranch Hand
Joined: Jan 29, 2003
Posts: 8791
posted
Sep 06, 2007 08:44:00
0
Let's say user 1 comes in on thread 1, creates a new DoSomeWork object. User 2 comes in on thread 2 and creates another DoSomeWork object. Everything is ok so far because they might work on different accounts. But we get to a point where we discover they are both working on the same account. That's where we have to synchronize.
It would seem obvious to synchronize on the Account object but there are some good reasons to prefer locking on something more private under the control of Account, like the "private final Object lock" as Jim just showed. That will force the two users through synchronized methods in single file.
Now in real life, I would be surprised to find a system that uses an Account object this way. Systems I've worked on would create an Account object for each user. Since they retrieve data form the database, they might have identical values. We'd use optimistic or pessimistic database locking to prevent overlapping updates.
On the other hand, I think the Forte 4GL language encouraged just what you suggest, managing concurrency in memory and writing to the database as an optional side effect. They even had rollback on in-memory object state if a transaction failed.
We still have this account transfer problem:
if account 1 has enough money debit account 1 credit account 2
We really want to lock that whole sequence on both account1 and account2 so nobody can change either account in between our lines of code. We might be tempted to do this:
transfer( from, to, amount ) { synchronized(from){ synchronized(to){ from.debit amount to.credit amount } } }
but that risks deadlock with somebody else trying to transfer from Account2 to Account1. Do you see why?
Is that answering the right question?
Tom Griffith
Ranch Hand
Joined: Aug 06, 2004
Posts: 272
posted
Sep 06, 2007 09:11:00
0
ok...thank you everybody...i think i'm confused about something pretty basic...ignoring real world for a minute and stepping back...
user1 creates thread1 in instance1 of DoSomeWork
user2 creates thread2 in instance2 of DoSomeWork
how does thread1 and thread2 meet in Account without DoSomeWork1 and DoSomeWork2 each creating their own instances of Account?
A standalone executable simply creates two threads (thread1 and thread2), a single instance of DoSomeWork and a single instance of Account. It's the creating or accessing of the "common object", Account, by instance1 and instance2 of DoSomeWork which is getting me confused at the moment.
For instance, for
servlets
, i think i would probably set a context attribute to the Account object (the database connection and JDBC stuff)...meaning all DoSomeWork references that spawn threads could access the same "shared object" via getAttribute...and the threads are all synchronized, locked, "sleeped", notified, etc in Account...however, in terms of executables and packages, i don't see how to access/create a single Account object for say, two seperate executables each creating DoSomeWork instances...
thank you again...
[ September 06, 2007: Message edited by: Tom Griffith ]
Bob Ruth
Ranch Hand
Joined: Jun 04, 2007
Posts: 320
posted
Sep 06, 2007 13:06:00
0
just a thought, but it depends on what you are synchronizing.... what you are "guarding"...
You have a bank, a bank manages accounts. You can deposit to an account, you can debit from an account, someone you gave a check to can draw on the account. The bank might give you a checkbook to write checks on the account. Both you and your spouse might need to write checks.
At the checkbook level you might want to imagine "synchronizing" access to the checkbook so that integrity of the checkbook balance is maintained.
At the bank level, they might want to "synchronize" access to the account balance to preserve integrity at that level.
So you might have a thread for each spouse, and a checkbook object.
You might have one or several threads for various transaction types or sources at the bank and an account object.
I'm not sure if all of this yammering in anyway addresses what you were looking for ......... it is a pretty high level look at the issue....
------------------------
Bob
SCJP - 86% - June 11, 2009
Stan James
(instanceof Sidekick)
Ranch Hand
Joined: Jan 29, 2003
Posts: 8791
posted
Sep 07, 2007 11:03:00
0
How does thread1 and thread2 meet in Account without DoSomeWork1 and DoSomeWork2 each creating their own instances of Account?
They'd have to both get references to the same Account object instance. You could manage that with a static map of accounts, keyed by account number. That's the part I've never seen anybody do. (Maybe
EJB
Entity Beans do that? Took a class but never used them.)
More often I've seen concurrency issues - who updates when and in what sequence - handled in the database. That supports a cluster of several app servers connected to one database better then in-memory synchronization could.
Tom Griffith
Ranch Hand
Joined: Aug 06, 2004
Posts: 272
posted
Sep 07, 2007 11:20:00
0
Yeah, I've always used contaner managed ejb's for this kinda stuff but i wanted to check into thread/synchronization alternatives. I wanted to step back briefly to make sure I wasn't missing anyting on the object sharing part, becasue i could never really grasp what or how an infinite number of concurrent threads could share an object sans creating new references or as you said, copying the references. Like i said, i can see it in a
j2ee
model because you could create the shared object on startup and set it as a context attribute. I just wanted to make sure I wasn't missing anything...I'll mess around with the static map and also look further into the deadlock issue. Thank you everybody for the valuable information...it is very helpful.
[ September 07, 2007: Message edited by: Tom Griffith ]
[ September 07, 2007: Message edited by: Tom Griffith ]
Stan James
(instanceof Sidekick)
Ranch Hand
Joined: Jan 29, 2003
Posts: 8791
posted
Sep 10, 2007 12:11:00
0
It would definitely be entertaining (in a very geeky way) to build live in-memory objects that are shared among users and threads and such. I was really intrigued by Forte's notion of rolling back object changes on transaction boundaries. I came across fairly negative about it because no matter how much geek fun it is, it might not be the best bang for my employer's buck.
See if
Naked Objects
addreses this kind of thing. They're surely having some geek fun.
Nicholas Jordan
Ranch Hand
Joined: Sep 17, 2006
Posts: 1282
posted
Sep 15, 2007 23:28:00
0
[Tom Griffith:] ... still kinda don't get why the synchronized methods aren't static,...
This may be addressed several times in the discussion, but I really went through the grinder on this in a post I made awhile back and now it is extremely easy for me.
You have code, static or not.
You have a machine.
The machine has a processor, which even in the virtual machine will have some way of locating the next instruction. No matter how many instructions you have sitting there, waiting, nothing happens until some form of an instruction pointer brings that
pattern
onto a processor somewhere. There are two cases:
Single Processor: Some sort of scheduler decides what happens next.
Multiple Processor: Some sort of scheduler decides what happens next.
Patterns
in RAM - brought into the processor:
00110110 1101110 0001101 0101110 001101110
00110110 1101110 0001101 0101110 001101110 <-- instruction pointer 1
00110110 1101110 0001101 0101110 001101110 <-- instruction pointer 2
00110110 1101110 0001101 0101110 001101110
00110110 1101110 0001101 0101110 001101110
What happens when instruction pointer one overruns instruction pointer two ?
Threads are the machine, it's self, in operation. Code is a pattern you write that the machine tries to follow.
See:
Failure mode of two reference calls to one thread
in which I ultimately went back and admitted I had the question wrong.
See also:
A question from Java Concurrency in Practice
[HW: Can you put that two cooks in the kitchen thing somewhere so that it can be found easily for posts such as this ? Threading is difficult to visualize as the machine in operation, not the code on the page.]
[Fixed link - Dave]
[ September 16, 2007: Message edited by: David O'Meara ]
I agree. Here's the link:
subject: practical applications of threads/synchronization
Similar Threads
My SCEA Part 1Study Notes
long post IBM.158
concurrent access to database row
NX:Client crashed cause deadlock in LockManager
does find(String[] criterion) need to be synchronized ?
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/233938/threads/java/practical-applications-threads-synchronization | CC-MAIN-2014-49 | refinedweb | 3,527 | 62.58 |
Core Project
CoreJava Project Hi Sir,
I need a simple project(using core Java, Swings, JDBC) on core Java... If you have please send to my account
Corejava Interview,Corejava questions,Corejava Interview Questions,Corejava
;
Core java Interview Question
page1
An immutable... in the constructor.
Core
java Interview Question Page2
A Java... of an interface.
Core
Java Interview Question Page3
Generally Java Interview Questions
};
System.out.println("Values Before the sort:\n");
for(i = 0; i < array.length; i...();
mergeSort_srt(array,0, array.length-1);
System.out.print("Values after the sort:
Sort
with this
A program is required to ask users to rate the Java programming language... Scanner(System.in);
System.out.print("Rate Java(0-10): ");
int rate...");
}
} }
}
System.out.print("Invalid! Rate Java within the range(0-10): ");
rate=input.nextInt
core java - Java Beginners
Core Java interview Help Core Java interview questions with answers Hi friend,Read for more information.
CORE JAVA
CORE JAVA Q)How to sort a Hash Table, if you pass an Employee Class Object in that First Name, Last Name, Middle Name based on last name how you sorting the Employee Class?
Q)How to display the list variables in reverse
core java - Java Beginners
core java how to create a login page using only corejava(not servlets,jsp,hibernate,springs,structs)and that created loginpage contains database(ms-access) the database contains data what ever u r enter and automatically date
insertion sort
insertion sort write a program in java using insertion sort
bubble sort
bubble sort write a program in java using bubble sort
CoreJava
corejava
core java - Java Beginners
core java Hi Guys,
what is the difference between comparable...?????????????
plzzzzzzzzzzz help me its very urgent in advance thanks If you want to sort...
// Signature of compareTo method
To sort a collection using a Comparator class
core
core where an multythread using
Please go through the following link:
Java Multithreading
core java
core java how to display characters stored in array in core java
bubble sort - Java Beginners
bubble sort how to write program
The bubble-sort algorithm in double... Hi friend,
Bubble Sort program :
public class...[] = {10,5,3,89,110,120,1,8,2,12};
System.out.println("Values Before the sort:\n
Core Java Doubts - Java Beginners
Core Java Doubts 1)How to swap two numbers suppose a=5 b=10; without using third variable?
2)How to sort two strings? By using collections?
3)What...://
sort function - JSP-Servlet
sort function How to sort a string variable in java
Hi friend,
Please give in details and full source code to solve the problem.
For information on java visit to :
Core JAva
Core JAva how to swap 2 variables without temp in java
core java
core java how can we justify java technology is robust
Insertion Sort Timer
Insertion Sort Timer Welcome all
I wanna program in java find the timer of insertion sort and find time complexity for random value
thanks
core java
core java please give me following output
Core java
Core java difference between the string buffer and string builder
core java
core java how to compare every character in one string with every character in other string
Core Java
Core Java What is the significance of static synchronized method?
Why do we have the method declared as static synchronized How to load class dynamically in java ?
To load class dynamically you can use Class class method
Class.forName("abc.xyz.MyClass");
This method load the given class at run time
Core Java
Core Java Hi,
Can any one please share a code to print the below:
1
23
456
78910
thanks a lot in advance
core java
core java public class Check {
public static void main(String[] args) {
System.out.println(11^2);
}
}
how it is work????
plzz explain
core java
core java what is difference between specifier and modifier?
what is difference between code and data?
what is difference between instance and object
core java
core java readLine() function is realated to which class?
readLine() is a method of java.io.Console class it is also available in java.io.BufferedReader.
Selection Sort In Java
Selection Sort In Java
... are going to sort the values of an array using
selection sort.In selection sorting.... Sort the
remaining values by using same steps. Selection sort
Core java
Core java How to use hyperlink that is href tag in core java without swing, frames etc.
My code is
StringBuffer oBodyStringBuffer = new StringBuffer("Message Classification: Restricted.\n\n
Merge Sort In Java
Merge Sort in Java
... to sort integer values of an array using merge
sort.
In merge sorting.... Then merge both parts and sort it. Then again merge the next
part and sort it. Do
Core Java
Core Java Hi,
Can any one please tell me is if possible to access the private member from out side of the class or How to make possible for a base class to access the private member of it's parent class | http://www.roseindia.net/tutorialhelp/comment/41943 | CC-MAIN-2015-06 | refinedweb | 835 | 53.21 |
Hello, all. The following is a problem I'm currently working on. I can't get it to run like the problem asks. Any help would be greatly appreciated.
Hello, all. The following is a problem I'm currently working on. I can't get it to run like the problem asks. Any help would be greatly appreciated.
Here is the code I have so far:
package letterclass; import java.util.Scanner; public class LetterClass { /** * @param args the command line arguments */ public static void main(String[] args) { String a; Scanner input = new Scanner (System.in); System.out.print("Please enter a string: "); a = input.nextLine(); for (char i = 0; i < a.length(); i++) { if (a.equalsIgnoreCase(a)) { System.out.print("Vowel"); switch(i) { case 'a': case 'e': case 'i': case 'o': case 'u': } } else if (a.equalsIgnoreCase(a)) { System.out.print("Semi-Vowel"); switch(i) { case 'y': case 'w': } } else { System.out.print("Consonant"); } } }
Hello there, I can see that you don't quite get how arrays and traversing them work. Not to worry !!!
Imagine a restaurant menu where the dishes are numbered. For example :
1. Chicken
2. Beef
3. Fish
...
Then when you want to order you can just say "Can I have number 2 pls" and you will get a beef dish. In the world of Java you request will look like :
Array menu;
menu[1];
(note that we start counting from 0 and not from 1);
Now if you want to order every item in the menu (maybe you are very hungry)
you would say: "Can I have number 1, number 2 and number 3". In Java world this will be :
menu[0];menu[1];menu[2]; Yeah but I don't want to write this line so many times and moreover i dont know how long the menu is (menu.lenght). So lets loop over the items.
for(int i=0; i<menu.lenght;i++){ order.menu[i]; }
done deal !!
That is partly how the arrays work.
And here is your working program:
import java.util.Scanner; public class LetterClass { /** * @param args the command line arguments */ public static void main(String[] args) { char[] a; Scanner input = new Scanner (System.in); System.out.print("Please enter a string: "); a = input.nextLine().toCharArray(); for (char i = 0; i < a.length; i++) { if(a[i]=='a' || a[i]=='e' || a[i]=='i' || a[i]=='o' || a[i]=='u') { System.out.println("Vowel : " + a[i]); } else if (a[i]=='w' || a[i]=='y') { System.out.println("Semi-Vowel: " + a[i]); } else { System.out.println("Consonant: " + a[i]); } } } }
I see exactly what you're doing here, but the main purpose of this program was to utilize a switch statement which is why I had it set up the way I did. I see that you got rid of the switch statement completely. Is there a way to integrate it in?
This uses the switch you want and will only allow letters(will not accept numbers or a string longer than one, but could not figure out how to eliminate symbols):
package letterclass; import java.util.Scanner; public class LetterClass { public static void main( String[] args ) { Scanner input = new Scanner (System.in); System.out.print("Please enter a letter: "); String inputStr = input.next(); char letter = inputStr.charAt(0); if (inputStr.length() > 1 || Character.isDigit(inputStr.charAt(0))) { System.out.println("Enter a letter!"); }else{ switch(letter) { case 'a': System.out.print("Vowel"); break; case 'e': System.out.print("Vowel"); break; case 'i': System.out.print("Vowel"); break; case 'o': System.out.print("Vowel"); break; case 'u': System.out.print("Vowel"); break; case 'y': System.out.print("Semi-Vowel"); break; case 'w': System.out.print("Semi-Vowel"); break; default: System.out.print("Consonant"); break; } } input.close(); } }
Java 7 allows switches using a String. If you're not on Java 7 then you should ... | https://www.daniweb.com/programming/software-development/threads/466170/beginner-java-programming-help | CC-MAIN-2017-09 | refinedweb | 640 | 69.99 |
Red Hat Bugzilla – Bug 132850
add nscd support for initgroups()
Last modified: 2007-11-30 17:07:04 EST
Description of problem:
When you set "group: files ldap" in "/etc/nsswitch.conf" and you have
a statically build application, a call to "initgroups()" call cause a
segmentation fault.
Version-Release number of selected component (if applicable):
glibc-2.3.2-95.20
How reproducible:
Steps to Reproduce:
1. Set "group: files ldap" in "/etc/nsswitch.conf"
2. Use the following reproducer program. The user is "mysql", but you
can choose another.
#include <stdio.h>
#include <grp.h>
#include <pwd.h>
#include <errno.h>
main()
{
struct passwd *pw_ptr;
char *user = "mysql";
pw_ptr = getpwnam(user);
printf("pw_ptr->pw_gid = %d\n", pw_ptr->pw_gid);
initgroups((char*) user, pw_ptr->pw_gid);
}
3. Compile with "cc filename.c -static"
4. Run "a.out".
Actual results:
# ./a.out
pw_ptr->pw_gid = 101
Segmentation fault
Expected results:
# ./a.out
pw_ptr->pw_gid = 101
Additional info:
This only happens when compiling with "-static".
*** Bug 133116 has been marked as a duplicate of this bug. ***
I have further analyzed the problem and have determined the exact
cause of the problem. I am hoping the RedHat could provide a fix for
this problem now that the cause of the problem is understood. The
details are below.
Problem: Nested "dlopen()" calls from a statically built application
will cause a segmentation fault.
Example: A statically built application a.out does a dlopen() of
libfoo1.so. In turn, libfoo1.so does a dlopen() of libfoo2.so. The
second dlopen(), which is libfoo2.so, will cause a segmentation fault.
Cause: The segmentation fault occurs in the dynamic loader ld.so in
the function _dl_catch_error() [elf/dl-error.c] due to an
uninitialized function pointer GL(dl_error_catch_tsd) which, after
macro expansion, is really _rltd_local._dl_error_catch_tsd
[sysdeps/generic/ldsodefs.h]. Thus, the question becomes, why isn't
GL(dl_error_catch_tsd) being initialized during the second dlopen()?
Keep in mind that I'm picking on GL(dl_error_catch_tsd) because that
is where the segmentation fault occured. There are likely other
variables in the _rtld_local structure may be uninitialized as well.
An explanation follows for both the statically built case, which
crashes, and the dynamically built case, which works.
Application Built Statically (segmentation fault)
-------------------------------------------------
For libc.a, the GL(dl_error_catch_tsd) macro expands to the variable
shown below [elf/dl-tsd.c]
# ifndef SHARED
...
void **(*_dl_error_catch_tsd) (void) __attribute__ ((const)) =
&_dl_initial_error_catch_tsd;
...
#endif
Thus, libc.a has an initialized copy of _dl_error_catch_tsd which
points to the _dl_initial_error_catch_tsd routine.
# nm -A /usr/lib64/libc.a | grep error_catch_tsd
/usr/lib64/libc.a:dl-error.o: U _dl_error_catch_tsd
/usr/lib64/libc.a:dl-tsd.o:0000000000000000 D _dl_error_catch_tsd
/usr/lib64/libc.a:dl-tsd.o:0000000000000000 T
_dl_initial_error_catch_tsd
Also in libc.a, the _dl_catch_error function is defined, which is the
routine in which the segmentation fault occurs.
# nm -A /usr/lib64/libc.a | grep dl_catch_error
/usr/lib64/libc.a:dl-deps.o: U _dl_catch_error
/usr/lib64/libc.a:dl-error.o:0000000000000000 T _dl_catch_error
/usr/lib64/libc.a:dl-open.o: U _dl_catch_error
/usr/lib64/libc.a:dl-libc.o: U _dl_catch_error
For libc.so, none of the symbols mentioned above are defined.
The a.out has the symbols because it was compiled with libc.a.
Thus, the first call to dlopen( libfoo1.so ) resolves its symbols
from the a.out address space. That is, it calls the _dl_catch_error
routine in the a.out address space which, in turn, accesses the
_dl_error_catch_tsd function pointer in the a.out address space which
was initialized with the address of the _dl_initial_error_catch_tsd
routine, which also exists in the a.out address space.
By the way, the reason I know what address space things are coming
from is because I put "_dl_printf" statements in the "glibc" sources
and compared the addresses that were printed at runtime with the
addresses shown in "/proc/<pid>/maps".
The second call to dlopen( libfoo2.so ) tries to resolve its symbols
from the ld.so (loader) address space.
Before I continue, let me say a few words about ld.so. During the
compilation of the loader, the GL(dl_error_catch_tsd) macro expands
to _rtld_local._dl_error_catch_tsd [sysdeps/generic/ldsodefs.h], a
totally different variable that the one in libc.a. That is, GL
(dl_error_catch_tsd) expands to a different variable in libc.a than
ld.so as can be seen by the code snippet shown below
from "sysdeps/generic/ldsodefs.h"
#ifndef SHARED
# define EXTERN extern
# define GL(name) _##name
#else
# define EXTERN
# ifdef IS_IN_rtld
# define GL(name) _rtld_local._##name
# else
# define GL(name) _rtld_global._##name
# endif
As you can see, during the compilation of libc.a, which is NOT
SHARED, GL(dl_error_catch_tsd) becomes _dl_error_catch_tsd. In the
compilation of ld.so, GL(dl_error_catch_tsd) expands to
_rtld_local._dl_error_catch_tsd. The reason I mention this is
because we can't even think about using libc.a's object because they
are completely different.
Anyway, back to the second call to dlopen( libfoo2.so ). This is
going to call the _dl_error_catch routine in the ld.so's address
space. The problem is that, for the loader, GL(dl_error_catch_tsd)
gets initialized in dl_main [elf/rtld.c], but dl_main only gets
called for shared applications, not during a dlopen. Therefore, GL
(dl_error_catch_tsd) never gets initialized and, when it is
referenced in _dl_catch_error [elf/dl-error.c], it contains a value
a "0" (NULL pointer) which causes a segmentation fault.
So, why does the first dlopen( libfoo1.so ) execute routines in the
a.out, while the second dlopen( libfoo2.so ) execute routines in
ld.so?
The reason is that when the a.out calls dlopen() it uses the dlopen
statically linked in from libdl.a . When the first library calls
dlopen() it get resolved to the one in the pulled-in libdl.so.
That's because the a.out does NOT have a ** dynamic symbol table **
(separate from externals and debug symbols) so the first library
can't hook back to the dlopen() in the a.out. Thus it must use the
one pulled in from libdl.so.
Application Built With Shared Libraries (works)
-----------------------------------------------
In the case where the a.out is built with shared libraries, the
ld.so's (loader) dl_main [elf/rtld.c] routine is called which will
initialize GL(dl_error_catch_tsd), so we don't get a segmentation
fault since the variable is properly initialized.
Conclusion
----------
One possible fix would be to put a check in either _dl_catch_error
[elf/dl-error.c] or dlerror_run [elf/dl-libc.c] to see if we are in
the loader code and if dl_main has NOT been called. If we are in the
loader code and dl_main has not been called, then we need to
initialize GL(dl_error_catch_tsd) and other needed variables so that
we don't get a segmentation fault due to uninitialized variables.
I will be adding a small reproducer for this problem shortly.
Rigoberto Corujo
Created attachment 104377 [details]
Reproducer for the problem where nested dlopen()'s cause segmentation fault
Untar this file and compile with the "compile.sh" script.
Set LD_LIBRARY_PATH to your working directory.
Run the "a.out"
dlopen support in statically linked apps is very limited, not meant
to be general purpose library loader for any kind of libraries.
Its role is just to support NSS modules (built against the same
libc as later run on).
dlopen from within the dlopened libraries is definitely not supported.
If libnss_ldap.so.* calls dlopen, then the bug is in that library.
For NSS purposes there is _dl_open_hook through which libraries
that call __libc_dlopen/__libc_dlsym/__libc_dlclose can use the
loader in the statically linked binary.
Using any NSS functionality in statically linked applications is only
supportable if nscd is used. Without nscd you are on your own. We
will not and *can not* handle anything else.
I don't think it makes any sense to keep this bug open. It is an
installation problem if nscd is not running.
Ulrich,
Are you saying that "service nscd start" would prevent the
segmentation fault from occuring? I just tried that with the initial
reproducer that I provided (the one that calls initgroups()) and I
get the same results (segmentation fault). Have you guys been
successful in running my reproducer with nscd?
As a follow-up to Jakub's comment, I just want to add that it is
actually "libsasl.a" that is doing the dlopen().
The "libnss_ldap.so" library links against "libldap.a".
The "libldap.a" links against "libsasl.a".
If the solution to this problem is to run nscd, then so be it. But,
there must be more to it than that because, like I said before, I
don't see a difference. I need some clarification, because I
understood Jakub to mean that what was going on was illegal but
Ulrich seems to suggest that this should work as long as nscd is
running.
Also, if dlopen'ing a shared library from a dlopen'ed library is not
allowed, then it would be beneficial to put a check in "glibc" so
that an error is returned to the calling dlopen() rather than letting
a segmentation fault occur.
Rigoberto
> I just tried that with the initial
> reproducer that I provided (the one that calls initgroups()) and I
> get the same results (segmentation fault). Have you guys been
> successful in running my reproducer with nscd?
That is impossible unless the program cannot communicate with the nscd
and falls back on using NSS itself or you hit a different problem.
There has been at one point a change in the protocol but I don't think
there are any such binaries out there.
Run the program using strace and eventually start nscd by hand and add
-d -d -d (three -d) to the command line. It won't fork then and spit
out lots of information.
Ulrich,
I followed your instructions. Every time I run my "a.out" there is
output from "nscd", so there is communication going on. The
segmentation fault is still occuring.
Can you confirm that you have indeed run my reproducer that calls
initgroups() and have not had a segmentation fault?
The man page for "nscd" states that it is used to cache data. I'm
not sure why running this daemon would solve my problem?
Rigoberto
> Can you confirm that you have indeed run my reproducer that calls
> initgroups() and have not had a segmentation fault?
Which producer which calls initgroups? There is only one attachment
and this is code which uses dlopen() for other purposes than NSS.
This is not supported. If it breaks, you keep the pieces.
Run your applications which uses NSS and make sure there are no other
dlopen calls in the statically linked code. Use strace to see what is
going on.
> The man page for "nscd" states that it is used to cache data. I'm
> not sure why running this daemon would solve my problem?
It's not the caching part which is interesting here, it's the "nscd
takes care of using the LDAP NSS module" part. All the statically
linked application has to do is to communicate the request via a
socket to nscd and receive the result. No NSS modules involved on the
client side. Which is why I say that if you still see NSS modules
used, something is wrong.
One possibility is that you use services other than passwd, group, or
hosts. Is this the case? These services are currently not supported
in nscd. There is usually no need for this since plain files are
enough (/etc/services etc don't change).
So, please make sure your code does not use dlopen() for anything but
NSS and that after starting nscd either it is used or only
libnss_files is used.
Ulrich,
Either I'm misunderstanding you, you're misunderstanding me, or we're
both misunderstanding each other. Please take a look at the very
first entry I made to this bugzilla. Would you please compile and
run the code as I described and then tell me whether you see the same
problem I'm seeing? This problem has nothing to do with any
application that I'm writing. The second reproducer, which I had
attached, was merely to show what is happening under the covers in an
easy to understand way. The first reproducer, which I embedded
directly into the text I entered, is at the heart of the problem.
Please take a look at that and then we can continue our discussion.
Rigoberto
Why don't you just attach the data I'm looking for? Yes, your code
uses initgroups and this cannot fail if nscd is used. Which is why I
ask for the strace output related to the initgroups call and the
actual crash.
Since I do not believe that you can continue to see the same crash
with and without nscd (unless there is something broken in nscd) I
also asked for other places you might use dlopen (explicitly or
implicitly).
So, run strace.
FWIW, with a FC3t2 system I have no problem using the LDAP NSS module
from the statically linked executable but this pure luck. Important
is that once nscd runs no NSS module is used.
Created attachment 104426 [details]
output of the strace with the statically built a.out
The LDAP database contains only one user "johndoe" as well as the group
"johndoe". Running the "id johndoe" command verifies that communications with
the slapd server is good. The "nscd -d -d -d" is also running. Communication
with it also appears to be good. I will attach the output of "ncsd -d -d -d"
shortly.
Created attachment 104427 [details]
output of the "nscd -d -d -d"
Comment on attachment 104427 [details]
output of the "nscd -d -d -d"
The "nscd -d -d -d" is started freshly. The "strace a.out" is immediately run.
The output of "nscd" is shown. The "a.out" is still getting a segmentation
fault.
I see what is going on. The initgroup calls do not try to use nscd at
all but instead use the NSS modules directly. This is fatal in this
situation.
We might be able to get some code changes into one of the next RHEL3
updates but there is not much we can do right now. Except questioning
why you have to link statically. This is nothing but disadvantages.
Ulrich,
I, like you, work for support. You work for RedHat support and I
work for HP support. Our XC (Extreme Clusters) product is based on
RedHat Linux. One of our customers had asked us to document how to
configure LDAP. While configuring LDAP, I found that "mysqld" did
not start when LDAP was configured. After further analysis, I found
that mysqld was linked statically and called initgroups(). To work
around the mysqld problem we simply used a non-static version of
mysqld. However, this was a concern to me because there may be other
packages, or customer written applications, which could potentially
run into this problem. So, I had to get to the bottom of the
situation and find out why statically built applications which called
initgroups() would seg fault. This has led to this conversation that
you and I have been having. As you can see, it is not I who is
developing statically linked applications, but I am concerned that
customers who do develop statically linked applications and turn on
LDAP may run into this problem.
At the very least, for the short term, that second dlopen() should
return an error and not seg fault. Maybe errno could be set to EPERM
(operation not permitted) or something along those lines.
So, we are leaving this as a "to be fixed in a future release",
correct?
Rigoberto
I'm reassigning this bug to glibc and marked it as an enhancement.
This is what it is, NSS simply isn't supported in statically linked
applications. The summary has been changed to reflect the status.
If you are entitled to support for these kind of issues you should
bring this issue up with your Red Hat representative so that it can be
added to IssueTracker. If you don't know what this is then you are
likely not entitled and you might want to consider getting appropriate
service agreements.
> At the very least, for the short term, that second dlopen() should
> return an error and not seg fault.
No, since there are situations when it works. NSS in statically
linked code is simply an "if it breaks you keep the pieces" thing, if
it works you can be very happy, if not, you'll have the find another
way. I cannot prevent people from having at least the opportunity to
get it to work.
> So, we are leaving this as a "to be fixed in a future release",
> correct?
Yes. I'll keep this bug open so that once we have code for this, I
can announce it. Whether we can use this in code in future RHEL3
updates is another issue.
I added support for caching initgroups data in the current upstream
glibc. Backporting the changes to RHEL3 is likely not going to happen
since the whole program changed dramatically since the fork of the
sources for RHEL3. If it is essential, contact your representative
for support from Red Hat. I close this bug since the improvement has
been implemented. | https://bugzilla.redhat.com/show_bug.cgi?id=132850 | CC-MAIN-2017-04 | refinedweb | 2,882 | 67.65 |
Ask most server-side Java technology developers why they haven't become certified yet for the Java platform, and they'll probably tell you that it's because they haven't gotten around to learning AWT (Abstract Window Toolkit) and Swing yet. These developers may be tempted to think that AWT is of little use to them outside of the occasional applet or thick client, yet hidden within this largely ignored toolkit is a wealth of functionality that the server-side programmer can use. In this article, I will show you two examples of useful functionality that can be implemented using AWT in less than an hour.
I know about your hesitance to use AWT. I, too, was one of those developers. For years I had spent my time learning the ins and outs of the javax.servlet branch of the Java tree. I played around with a dozen different frameworks and HTML template schemes, spent sleepless nights worrying about the object-relational impedance mismatch, carefully constructed my three-layer architectures, and delighted in adding color to my UML diagrams. (Okay, maybe I don't get out as much as I should.) In all that that time, I had never even so much as instantiated an AWT class.
DoMouseover()
javax.servlet
Interestingly enough, my first foray into server-side AWT had nothing to do with graphics. I had been toying with the idea of building a generic event-handling system for one of my projects so that I could asynchronously handle mundane things such as writing to a log file, or sending off a notification email -- tasks that complete independently, allowing the main execution thread to complete without waiting. In my research, I kept running across discussions of the AWT delegation-event model. Being a dedicated object-oriented programmer, I was too lazy to code this myself. Instead I decided to co-opt AWT in my servlet.
First I created an Event Manager class that extends java.awt.Component. The Event Manager is responsible for registering listeners, posting events to the System Event Queue, and passing events to the proper listeners. When an event is thrown using the postEvent(AbstractWebEvent) method, the EventManager puts the event in the system event queue. The system event queue, running as a separate thread, calls back to the processEvent(AWTEvent) method, which passes the event to each listener registered with the addListener(ListernerInterface) method. Each listener is responsible for either handling or ignoring the event. The events themselves ultimately derive from java.awt.event.AWTEvent class. Each event has properties that the appropriate handler can call to perform the task. For example, the WebEmailEvent has from, to, subject, and message body attributes that the WebEmailListener relies on to handle the event.
java.awt.Component
postEvent(AbstractWebEvent)
processEvent(AWTEvent)
addListener(ListernerInterface)
java.awt.event.AWTEvent
WebEmailEvent
WebEmailListener
Figure 1: Event Manager Class Diagram
By utilizing the existing event management framework in AWT, I was able to create a robust event management system for my server-side application without having to write potentially buggy multithreaded code.
Best of all, by having a true event management system, I was able to decouple the code that handles an event from the code that generates the event. Now, depending on my needs, I can, for example, plug in a LogListener that writes to a local rolling log file or a network syslog server, without having to change any code in the application itself.
Of course, the obvious use of the AWT libraries is for manipulating graphics. This is where many server-side programmers start to get nervous. But before you run out of the room screaming, let me assure you that there are several useful things that can be done with AWT without having to worry about obscure graphic file formats, or individual pixel manipulation. (The Java platform's robust graphics library will handle these details.) In this example, I will illustrate how you can write a program that scales web images with just three lines of code!
The need for preview or thumbnail versions of an image on a web site can be a thorn in the side for content managers. One low-tech solution tothis problem is to have the user upload multiple versions of every picture. This solution has the advantage of being simple, but has the drawback of being time intensive since an artist has to create two versions of every image.
Another low-tech solution is to set the width and height parameters on the image tag in the HTML. This has the desired effect of creating a scaled-down image, but has several drawbacks. One drawback is that the entire full-size image is sent with each request, instead of a smaller thumbnail image. Another is that the width and height parameters must be set for each individual image to make the image scale proportionately.
For those of you who have felt this pain, AWT has a quick solution (implemented in this example as a servlet) that will dynamically resize a GIF, JPEG, or a PNG file for you.
public class ResizeImageServlet extends HttpServlet
{
private String imageDir = "";
public final void init( ServletConfig config ) throws ServletException
{
// No initialization necessary
}
public final void doGet( HttpServletRequest req, HttpServletResponse res )
throws ServletException, IOException
{
// No difference to us if it's a get or a post.
this.doPost(req,res);
}
public final void doPost( HttpServletRequest req, HttpServletResponse res )
throws ServletException, IOException
{
try
{
int targetWidth=0;
int targetHeight=0;
// Get a path to the image to resize.
// ImageIcon is a kluge to make sure the image is fully
// loaded before we proceed.
Image sourceImage = new ImageIcon(Toolkit.getDefaultToolkit().
getImage(req.getPathTranslated())).getImage();
// Calculate the target width and height
float scale = Float.parseFloat(req.getParameter("scale"))/100;
targetWidth = (int)(sourceImage.getWidth(null)*scale);
targetHeight = (int)(sourceImage.getHeight(null)*scale);
BufferedImage resizedImage = this.scaleImage
(sourceImage,targetWidth,targetHeight);
// Output the finished image straight to the response as a JPEG!
res.setContentType("image/jpeg");
JPEGImageEncoder encoder = JPEGCodec.createJPEGEncoder
(res.getOutputStream());
encoder.encode(resizedImage);
}
catch(Exception e)
{
res.sendError(HttpServletResponse.SC_BAD_REQUEST);
}
}
private BufferedImage scaleImage(Image sourceImage, int width, int height)
{
ImageFilter filter = new ReplicateScaleFilter(width,height);
ImageProducer producer = new FilteredImageSource
(sourceImage.getSource(),filter);
Image resizedImage = Toolkit.getDefaultToolkit().createImage(producer);
return this.toBufferedImage(resizedImage);
}
private BufferedImage toBufferedImage(Image image)
{
image = new ImageIcon(image).getImage();
BufferedImage bufferedImage = new BufferedImage(image.getWidth(null)
,image.getHeight(null),BufferedImage.TYPE_INT_RGB);
Graphics g = bufferedImage.createGraphics();
g.setColor(Color.white);
g.fillRect(0,0,image.getWidth(null),image.getHeight(null));
g.drawImage(image,0,0,null);
g.dispose();
return bufferedImage;
}
}
The doPost() command is primarily concerned with calculating the desired width and height (based on the scale parameter) and figuring out where the source image file is. I chose to use the request variable's getPathTranslated() method, which retrieves everything from the servlet path to the query string and translates it into an actual system path.
doPost()
getPathTranslated()
The interesting part of the code is located in the method scaleImage(). The actual image resizing is done with three lines of code. Once we have converted the GIF, JPEG, or PNG file into an AWT Image object, we can apply the ReplicateScaleFilter to resize the image to the desired proportions. The toBufferedImage() method is a bit code that converts our regular Image object into a BufferedImage object that com.sun.image.codec.jpeg.JPEGImageEncoder requires to create a JPEG.
scaleImage()
ReplicateScaleFilter
toBufferedImage()
BufferedImage
com.sun.image.codec.jpeg.JPEGImageEncoder
I installed the example servlet on Apache Tomcat 3.2.3 under a context called /graphics and set it to be triggered by the key /resize/* in the web.xml file. I placed my source image (photomain1.jpg) in the /graphics folder under /WEB-INF. The idea was to be able to view the original image by using the context without the servlet trigger. As an example URL, returns my source image untouched by the servlet.
/graphics
/resize/*
/WEB-INF
Figure 2: photomain (original image)
By changing the sample URL to, I trigger the servlet and request an image that is half the size of the original.
Figure 3: photomain50 (image scaled down 50%)
This servlet can be deployed along with your application and used to generically scale any images on your site. Just place the resize URL in an IMG tag.
All of this, of course, comes with a slight hit to processor utilization. For high-traffic sites, you might want to consider running this code when a content manager uploads an image and writing the generated JPEG to disk so that it can be served statically.
Before you run off and try this on your Solaris system, you should know that any calls to the AWT toolkit in versions of Java 2 SDK before 1.4 require that the machine running the code have a valid display context, namely X Windows. The moment you try to instantiate an AWT class, the getGraphics() method will look for this context. Unfortunately, if your sys admin is anything like mine, X is long gone from your production servers.
If you do not have X on your production machine, I recommend that you install Xvfb on your servers. Xvfb stands for X Virtual Frame Buffer, and it was first included in the X11R6 sources (available from). In a nutshell, Xvfb pretends to be a full-fledged X server and satisfies AWT's need for one.
Once you have Xvfb set up on your server, you can point your AWT to it by setting the DISPLAY environment variable in your app server's startup script. If possible, I recommend that you install the binary (reference below) instead of trying to compile X from sources. Compiling X is not for the faint of heart.
In the Java 1.4 Platform, Sun made our lives much easier and addressed the lack of headless AWT support. Instead of setting up a dummy X server like Xvfb, just put the parameter "-Djava.awt.headless=true" in your Java invocation line, and that's it! Life is good.
-Djava.awt.headless=true
I think one of the truly great things about the Java programming language is the richness and depth of its standard libraries. Server-side programmers can find a lot of useful functionality in the AWT libraries. From ready-made event models to dynamic graphic rendering, the applications for server-side programmers are almost limitless.
Your Java software developer certification might be just around the corner!
Joe Bella has spent the last 10 years as a developer and software architect of several highly visible Internet sites. When he is not losing sleep over object-relational impedance, he is president of Quimbik, Inc., a San Francisco-based web development and hosting company.
Back to Top | http://developers.sun.com/solaris/tech_topics/java/articles/awt.html | crawl-002 | refinedweb | 1,784 | 54.73 |
NAME
setsid − creates a session and sets the process group ID
SYNOPSIS
#include <unistd.h>
pid_t setsid(void);
DESCRIPTION
On success, the (new) session ID of the calling process is returned. On error, (pid_t) −1 is returned, and errno is set to indicate the error.
ERRORS
CONFORMING TO
POSIX.1-2001, POSIX.1-2008, SVr4.
NOTES.
SEE ALSO
setsid(1), getsid(2), setpgid(2), setpgrp(2), tcgetsid(3), credentials(7), sched(7)
COLOPHON
This page is part of release 4.13 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at−pages/. | https://man.cx/setsid(2) | CC-MAIN-2017-47 | refinedweb | 108 | 66.64 |
Is there a way to remove (or set to zero) specific synapses? Given an ensemble A that outputs to ensemble B, I’d like to lesion the network by choosing some subset of neurons in B and removing all incoming connections to those neurons coming from A, while still having other connections to those neurons from other ensembles or nodes. I know how to zero a neuron’s activations from my previous question, but I would now like to target specific synapses. If anybody knows how to do this or can point me to somewhere in the Nengo codebase where synapses are exposed that would be great!
You can “lesion” specific synapses in a Nengo network using a similar method to the neuron lesion code you linked in your post. First, it is important to note that in Nengo, a synapse is a filter applied to the connection between two ensembles. The synaptic filter is applied on the collective input to a neuron on the
post population. This differs slightly from the biological definition of a synapse, which is between individual neurons. Thus, in the method I will outline below, to achieve the “lesioning” you desire, we modify the weights of the connection between the two ensembles, rather than modifying the synapse itself.
Similar to the “ablate neurons” function, a “lesion connection” function can be defined as follows:
def lesion_connection(sim, conn, lesion_idx): connweights_sig = sim.signals[sim.model.sig[conn]["weights"]] connweights_sig.setflags(write=True) connweights_sig[lesion_idx, :] = 0 connweights_sig.setflags(write=False)
Note that the lesioning index code is dependent on how the
nengo.Connection is created. If a connection between two ensembles is created like so (the default way):
conn = nengo.Connection(ensA, ensB)
The weights signal (i.e.,
sim.signals[sim.model.sig[conn]["weights"]]) has a shape that is
(1, ensA.n_neurons). This means that it only contains the decoders of ensA. If you want to lesion the output of a neuron in ensA to every neuron in ensB it is connected to, no additional modification to the
nengo.Connection is needed, and the lesion function can be used as is.
However, since you want to achieve the reverse (lesion all inputs to a specific neuron in ensB), we’ll need to change the code slightly. Namely, when we create the
nengo.Connection, we specify the solvers with the
weights=True flag to force the connection to be created with the full weight matrix. This weight matrix combines the decoders of ensA with the encoders of ensB. The full code for this is as such:
conn = nengo.Connection(ens1, ens2, solver=nengo.solvers.LstsqL2(weights=True))
With this change to the code, you can lesion the connection similar to how it was done with the neuron ablation code:
with nengo.Network() as model: ... # define your model with nengo.Simulator(model) as sim: lesion_connection(sim, conn, <lesion_index>) sim.run(<runtime>)
I’ve attached an example script (test_lesion_conn.py (1.5 KB)) that demonstrates this code. In the script, two ensembles are constructed, and here is the output plot of the script showing on non-lesioned run, and one lesioned run (the input connections to the 1st, 3rd, 5th, and 7th neurons in
ens2 have been lesioned)
Some additional notes:
- The network is created with a seed so that multiple runs should be identical, which is why you see identical spike patterns for the non-lesioned connections.
- The lesioning is applied on a per-connection basis, so, you should be able to achieve the desired functionality of leaving other connections to “ens B” intact. You can test this out in the example code by adding an additional ensemble and connection to
ens2.
Is this possible with NengoDL? I’m getting
AttributeError: 'Simulator' object has no attribute 'signals'.
It may be possible to do the synapse removal in NengoDL, but it depends on the specifics of your model. Can you provide some example code so that I can investigate how it would work with your code?
I’m using this code, specifically trying to run
a03_GC_PMC_line.py. In framework.py I replaced:
conn = nengo.Connection(net.error[:dim], net.M1.input[dim:])
with
net.pmc_m1_conn = nengo.Connection(net.error[:dim], net.M1.M1[dim:], solver=nengo.solvers.LstsqL2(weights=True))
and in
a03_GC_PMC_line.py I added:
def on_start(sim): ablate_synapses(sim, net.pmc_m1_conn, range(9000))
Hmmmm. The original code I posted is written specifically for the core Nengo backend, and it achieves the synapse ablation through “hacky” means (because we are accessing very low level information within the
nengo.Simulator object). So… if you want to use the NengoDL simulator to run the model, it’ll have to be changed to support that backend. I’m not 100% sure if such a functionality is even supported with NengoDL, although from my initial investigation of the code, I don’t think it’s possible. But, I’ll keep you posted!
In the mean time, if you do want to use NengoDL to train your model, you can try training your network in NengoDL (without the ablation code), then using the
nengo_dl.Simulator.freeze_params() functionality to convert the trained NengoDL model back into a Nengo model. Once you have the standard Nengo model, you can then run your Nengo simulation with
nengo.sim and the ablation code.
I wanted to add another tensorflow net that would stimulate M1, learning an optimal stimulus based on resulting arm movement. This could learn to compensate for the ablation, the idea is that of the neural coprocessor. I don’t think I could accomplish this by freezing the params since the training of the tensorflow net depends on the simulation of the rest of the (ablated) model. I’ll look more into the source code, thanks again for your help!
Just to get a better idea of your workflow, so that I may suggest other potential approaches, am I correct to understand that you are attempting to do this:
- Create a model in Nengo
- Apply the ablation to the model
- Add a tensorflow (or nengo-dl) network on top of the model
- Train in NengoDL (or TF)
Is there an additional simulation step between 1 and 2?
Are there multiple back and forth simulations between regular Nengo and NengoDL?
Essentially what I’m trying to do is:
- Create a model in Nengo
- Record neural activity in M1 and arm position during an arm reaching task
- Train a tensorflow network called EN using that recorded data (outside of Nengo)
- Integrate EN to the Nengo model, it now predicts arm movement based on M1 activity during the reaching task (in NengoDL)
- Ablate some PMC -> M1 synapses (apparently not possible in NengoDL)
I’m here
- Add a tensorflow network called CPN which will predict M1 activity based on PMC. CPN outputs to EN and is trained through backpropped error from EN network to learn the optimal M1 activity (stimulus) to drive the arm towards the target. This requires NengoDL because I want to train CPN and have it stimulate M1 in realtime.
It may also be possible for me to achieve step 6 by first recording PMC and M1 activity for the Nengo model with ablated synapses, training CPN outside of the simulation, then returning to the simulation with both networks trained and using the strategy of freezing the parameters to use the CPN to stimulate M1 using Nengo.
I messed around with Nengo and NengoDL a bit and I believe I have found an approach that will work with your workflow. In essence, this approach is to utilize Nengo’s ability to specify the full connection weight matrix for a connection; i.e., like so:
conn = nengo.Connection(ens1.neurons, ens2.neurons, transform=weights)
and use this to perform the connection ablation. The idea is as follows:
- We use the Nengo (or NengoDL) simulator to solve for the “optimal” connection weights for us, as per usual.
- Extract the solved connection weights from the Nengo simulator object.
- Perform the appropriate ablation on the solved connection weights.
- Recreate the model, but use the neuron-to-neuron connection to create a connection with the ablated weights.
- Since the recreated model is a standard Nengo model, and the ablated weights are defined in the model, rather than having to mess with the simulator signals, we should be able to simulate the model with NengoDL with no problems.
Now, if your model is particularly big, it might take a while to build the whole model, and this would be inefficient if your goal is to build the model just to extract the initial (optimal) connection weights. To get around this, we can define a function that creates a subnetwork with just the components involved in the ablated connection. Then we can create a Nengo simulator object for this subnetwork only, reducing the overall build time.
I’ve implemented this approach in this example code (which builds off the previous example code): test_lesion_conn2.py (5.1 KB)
Note that this neuron-to-neuron connection approach only works with ablating connections. A similar, but more involved approach is needed if you want to lesion specific neurons as well.
Awesome, thanks! I’m going to try to adapt this to my application and I’ll let you know if I have any questions. | https://forum.nengo.ai/t/targeted-synapse-removal/1492 | CC-MAIN-2021-04 | refinedweb | 1,547 | 52.8 |
My R Style Guide
Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.
This is my take on an R style guide. As such, this is going to be a very long post (long enough to warrant a table of content). Still, I hope it – or at least some parts of it – are useful to some people out there. Another guide worth mentioning is the Tidyverse Style Guide by Hadley Wickham.
So without further ado, let’s dive into the guide.
Table of Content
Below a rough overview of the content. The sections can be read in any order so feel free to jump to any topic you like.
Introduction and purpose
There are universal views about readability due to the way how humans process information or text. For example, consider the following number written in two ways:
823969346 823 969 346
Certainly the second version, which splits the sequence into groups of numbers, is easier to process by humans, implying that spacing is important especially if abstract information is presented.
The style guide at hand provides a set of rules designed to achieve readable and maintainable R code. Still, of course, it represents a subjective view (of the author) on how to achieve these goals and does not raise any claims of being complete. Thus, if there are viable alternatives to the presented rules or if they are against the intuition of the user, possibly even resulting in hard-to-read code, it is better to deviate from the rules rather than blindly following them.
Coding style
Notation and naming
File names
File names end in .R and are meaningful about their content:
Good:
- string-algorithms.R
- utility-functions.R
Bad:
- foo.R
- foo.Rcode
- stuff.R
Function names
Preferrably function names consist of lowercase words separated by an underscore. Using dot (.) separator is avoided as this confuses with the use of generic (S3) functions. It also prevents name clashes with existing functions from the standard R packages. Camel-case style is also suitable especially for predicate functions returning a boolean value. Function names ideally start with verbs and describe what the function does.
# GOOD create_summary() calculate_avg_clicks() find_string() isOdd() # BAD crt_smmry() find.string() foo()
Variable names
Variable names consist of lowercase words separated by an underscore or dot. Camel-case style is also suitable especially for variables representing boolean values. Variable names ideally are attributed nouns and describe what (state) they store.
Good:
summary_tab selected_timeframe out.table hasConverged
Bad:
smrytab selTF outtab hascnvrgd
Name clashes with existing R base functions are avoided:
# Very bad: T <- FALSE c <- 10 mean <- function(a, b) (a + b) / 2 file.path <- "~/Downloads" # clashes with base::file.path function
Loop variables or function arguments can be just single letters if
- the naming follows standard conventions
- their meaning is clear
- understanding is preserved
otherwise use longer variable names.
# GOOD for (i in 1:10) print(i) add <- function(a, b) a + b rnorm <- function(n, mean = 0, sd = 1) # BAD for (unnecessary_long_variable_name in 1:10) print(unnecessary_long_variable_name) add <- function(a1, x7) a1 + x7 rnorm <- function(m, n = 0, o = 1)
Function definitions
Function definitions first list arguments without default values, followed by those with default values. In both function definitions and function calls, multiple arguments per line are allowed; line breaks are only allowed between assignments.
# GOOD rnorm <- function(n, mean=0, sd=1) pnorm <- function(q, mean=0, sd=1, lower.tail=TRUE, log.p=FALSE) # BAD mean <- function(mean=0, sd=1, n) # n should be listed first pnorm <- function(q, mean=0, sd=1, lower.tail= TRUE, log.p=FALSE)
Function calls
When calling a function, the meaning of the function call and arguments should be clear from the call, that is, usually function arguments beyond the first are explicitly named or at least invoked with a meaningful variable name, for example, identical to the name of the function argument:
# GOOD rnorm(100, mean=1, sd=2) identical(1, 1.0) # no need for explicit naming as meaning of call is clear mean <- 1 sd <- 2 std.dev <- sd rnorm(100, mean, sd) rnorm(100, mean, std.dev) # BAD rnorm(100, 1, 2)
Syntax
Assignment
For any assignment, the arrow
<- is preferable over the equal sign
=.
x <- 5 # GOOD x = 5 # OK
Semicolons are never used.
# BAD x <- 5; y <- 10; z <- 3 # break into three lines instead
Spacing around …
… commas
Place a space after a comma but never before (as in regular English)
# GOOD v <- c(1, 2, 3) m[1, 2] # BAD v <- c(1,2,3) m[1 ,2]
… operators
Spaces around infix operators (
=,
+,
-,
<-, etc.) should be done in a way that supports readability, for example, by placing spaces between semantically connected groups. If in doubt, rather use more spaces, except with colons
:, which usually should not be surrounded by spaces.
# GOOD # Spacing according to semantically connected groups x <- 1:10 base::get average <- mean(feet/12 + inches, na.rm=TRUE) # Using more spaces - also ok average <- mean(feet / 12 + inches, na.rm = TRUE) # BAD x <- 1 : 10 base :: get average<-mean(feet/12+inches,na.rm=TRUE)
… parentheses
A space is placed before left parentheses, except in a function call, and after right parentheses. Arithmetic expressions form a special case, in which spaces can be omitted.
# GOOD if (debug) print(x) plot(x, y) # Special case arithmetic expression: 2 + (a+b)/(c+d) + z/(1+a) # BAD if(debug)print (x) plot (x, y)
No spaces are placed around code in parentheses or square brackets, unless there is a comma:
# GOOD if (debug) print(x) diamonds[3, ] diamonds[, 4] # BAD if ( debug ) print( x ) diamonds[ ,4]
… curly braces
An opening curly brace is followed by a new line. A closing curly brace goes on its own line.
# GOOD for (x in letters[1:10]) { print(x) } add <- function(x, y) { x + y } add <- function(x, y) { x + y } # BAD add <- function(x, y) {x + y}
Indentation
Code is indented with ideally four, but at least two spaces. Usually using four spaces provides better readability than two spaces especially the longer the indented code-block gets.
# Four-space indent: for (i in seq_len(10)) { if (i %% 2 == 0) { print("even") } else { print("odd") } } # The same code-block using two-space indent: for (i in seq_len(10)) { if (i %% 2 == 0) { print("even") } else { print("odd") } }
Extended indendation: when a line break occurs inside parentheses, align the wrapped line with the first character inside the parenthesis:
fibonacci <- c(1, 1, 2, 3, 5, 8, 13, 21, 34)
Code organization
As with a good syntax style, the main goal of good code organization is to provide good readability and understanding of the code, especially for external readers/reviewers. While the following guidelines generally have proven to be effective for this purpose, they harm things if applied the wrong way or in isolation. For example, if the user wants to restricts himself to 50 lines of code for each block (see below), but, instead of proper code-reorganization, achieves this by just deleting all comments in the code block, things probably have gotten worse. Thus, any (re-)organization of code first and foremost must serve the improvement of the readability and understanding of the code, ideally implemented by the guidelines given in this section.
Line length
Ideally, the code does not exceed 80 characters per line. This fits comfortably on a printed page with a reasonably sized font and therefore can be easily processed by a human, which tend to read line by line. Longer comments are simply broken into several lines:
# Here is an example of a longer comment, which is just broken into two lines # in order to serve the 80 character rule.
Long variable names can cause problems regarding the 80 characters limit. In such cases, one simple yet effective solution is to use interim results, which are saved in a new meaningful variable name. This at the same time often improves the readability of the code. For example:
# Longer statement total.cost <- hotel.cost + cost.taxi + cost.lunch + cost.airplane + cost.breakfast + cost.dinner + cost.car_rental # Solution with interim result travel.cost <- cost.taxi + cost.airplane + cost.car_rental food.cost <- cost.breakfast + cost.lunch + cost.dinner total.cost <- travel.cost + food.cost + hotel.cost
Similarly, four-space indenting in combination with multiple nested code-blocks can cause problems to maintain the 80 character limit and may require to relax this rule in such cases. At the same time, however, multiple nested code-blocks should be avoided in the first place, because with more nesting the code usually gets harder to understand.
Block length
Each functionally connected block of code (usually a function block) should not exceed a single screen (about 50 lines of code). This allows the code to be read and understood without having to line-scroll. Exceeding this limit usually is a good indication that some of the code should be encapsulated (refactorized) into a separate unit or function. Doing so, not only improves the readability of the code but also flexibilizes (and thereby simplifies) further code development. In particular, single blocks that are separated by comments, often can be refactorized into functions, named similar to the comment, for example:
Long single-block version:
# Sub-block 1: simulate data for some model x <- 1:100 y <- rnorm(length(x)) . . . longer code block generating some data . . . data <- ... # Sub-block 2: plot the resulting data points ylims <- c(0, 30) p <- ggplot(data) + . . . longer code block defining plot object . . . # Sub-block 3: format results and export to Excel file outFile <- "output.xlsx" . . export to Excel file . .
The singe-block version may exceed a single page and requires a lot of comments just to separate each step visually, but even with this visual separation, it will be unnecessary difficult for a second person to understand the code, because allthough the code might be entirely sequential, he possibly will end up jumping back and forth within the block to get an understanding of it. In addition, if parts of the block are changed at a later time point, the code can easily get out of sync with the comments.
Refactorized version:
# Simulate data, plot it and export it to Excel file data.sim <- simulate_data(x = 1:100, y = rnorm(length(x)), ...) plot_simulated_data(data.sim, ylims = c(0, 30), ...) write_results_into_table(data.sim, outFile="output.xlsx")
In the refactorized version each sub-block was put into a separate function (not shown), which is now called in place. In contrast to the single-block version, each of these functions can be re-used, tested and have their own documentation. Since each of such functions encapsulate their own environment, the second (refactorized) design is also less vulnerable to side-effects between blocks. A second person can now read and understand function by function without having to worry about the rest of the block.
Last but not least, the block comments in the single-block versions could be transformed into function names so that the documentation is now part of the code and as such no longer can get out of sync with it.
Packages and namespaces
Whenever the
:: operator is used, the namespace of the corresponding package is loaded but not attached to the search path.
tools::file_ext("test.txt") # loads the namespace of the 'tools' package, ## [1] "txt" search() # but does not attach it to the search path ## [1] ".GlobalEnv" "package:stats" "package:graphics" ## [4] "package:grDevices" "package:utils" "package:datasets" ## [7] "package:methods" "Autoloads" "package:base" file_ext("test.txt") # and thus produces an error if called without namespace prefix ## Error in file_ext("test.txt"): could not find function "file_ext" # base::mean and stats::rnorm work, because base and stats namespaces are # loaded and attached by default: mean(rnorm(10)) ## [1] -0.04888008
In contrast, the
library and
require commands both load the package’s namespace but also attach its namespace to the search path, which allows to refer to functions of the package without using the
:: operator.
library(tools) # loads namespace and attaches it to search path search() ## [1] ".GlobalEnv" "package:tools" "package:stats" ## [4] "package:graphics" "package:grDevices" "package:utils" ## [7] "package:datasets" "package:methods" "Autoloads" ## [10] "package:base" file_ext("test.txt") # now works ## [1] "txt"
Since a call to a function shall not alter the search path,
library or
require statements are not allowed in functions used in R packages. In contrast,
library statements are suitable for local (data analysis) R scripts especially if a specific function is used frequently. An alternative is to locally re-map the frequently used function:
file_ext <- tools::file_ext file_ext("test.txt") ## [1] "txt" file_ext("test.docx") ## [1] "docx" file_ext("test.xlsx") ## [1] "xlsx"
Code documentation
Function headers
A function header is placed above any function, unless it is defined inside another function.
It is recommended to use the roxygen format, because it
- promotes a standardized documentation
- allows for automatic creation of a user-documentation from the header
- allows for automatic creation of all namespace definitions of an R-package
A function header at least contains the following elements (the corresponding roxygen keyword is listed at the start):
- @title: short sentence of what the function does
- @description: extended description of the function (optionally the @details keyword can be used to describe further details)
- @param (or @field with RefClasses): For each input parameter, a summary of the type of the parameter (e.g., string, numeric vector) and, if not obvious from the name, what the parameter does.
- @return: describes the output from the function, if it returns something.
- @examples: if applicable, examples of function calls are provided. Providing executable R code, which shows how to use the function in practice, is a very important part of the documentation, because people usually look at the examples first. While generally example code should work without errors, for the purpose of illustration, it is often useful to also include code that causes an error. If done, the corresponding place in the code should be marked accordingly (use with roxygen).
Example of a roxygen-header:
#' @title String suffix matching #' #' @description #' Determines whether \code{end} is a suffix of string \code{s} (borrowed from #' Python, where it would read \code{s.endswith(end)}) #' #' @param s (character) the input character string #' @param end (character) string to be checked whether it is a suffix of the #' input string \code{s}. #' @return \code{TRUE} if \code{end} is a suffix of \code{s} else \code{FALSE} #' #' @examples #' string_ends_with("Hello World!", "World!") # TRUE #' string_ends_with(" Hello World!", "world!") # FALSE (case sensitive) string_ends_with <- function(s, end) { # Implementation ... }
Inline code comments
Inline comments should explain the programmer’s intent at a higher level of abstraction than the code, that is, they should provide additional information, which are not obvious from reading the code alone. As such, good comments don’t repeat, summarize or explain the code, unless the code is so complicated that it warrants an explanation, in which case, however, it is often worth to revise the code to make it more readable instead.
Examples of suitable, informative comments:
# Compare strings pairwise and determine first position of differing characters splitted_s <- strsplit(s, split = "")[[1]] splitted_url <- strsplit(url, split = "")[[1]][1:nchar(s)] different <- splitted_s != splitted_url first_different_position <- which(different)[1] # Provide index via names as we need them later names(v) <- seq_along(v)
Bad redundant comments:
v <- 1:10 # initialize vector # Loop through all numbers in the vector and increment by one for (i in 1:length(v)) { v[i] <- v[i] + 1 # increment number }
That’s it already!. | https://www.r-bloggers.com/2019/11/my-r-style-guide/ | CC-MAIN-2021-43 | refinedweb | 2,593 | 50.87 |
HTML Imports polyfill
HTML Imports are a way to include and reuse HTML documents in other HTML
documents. As
<script> tags let authors include external Javascript in their
pages, imports let authors load full HTML resources. In particular, imports let
authors include Custom Element
definitions from external URLs.
Getting Started
Include the
html_import.debug.js or
html_import.min.js (minified) file in
your project.
<script src="packages/html_import/html_import.debug.js"></script>
html_import.debug.js is the debug loader and uses
document.write to load
additional modules.
Use the minified version (
html_import.min.js) if you need to load the file
dynamically.
Basic usage
For HTML imports use the
import relation on a standard
<link> tag, for
example:
<link rel="import" href="import-file.html">
Polyfill details
You can read more about how the polyfill is implemented in JavaScript here:
Getting the source code
This package is built from:
You'll need node.js to rebuild the JS file. Use
npm install to get dependencies and
grunt to build. | https://www.dartdocs.org/documentation/html_import/0.7.0/index.html | CC-MAIN-2017-09 | refinedweb | 169 | 52.26 |
You can use a large variety of spatial data to query Earth OnDemand. This page shows you how to use the crowdsourced OpenStreetMap (OSM) to query Earth OnDemand for related imagery. To access OSM data, we will use the osmnx library. There are many ways to access OSM data, and this is useful for datasets like streets, buildings and points of interest.
Import Libraries
import osmnx from earthai.init import *
Retrieve OpenStreetMap Data
We will extract all the libraries in the state of Vermont, USA.
lib_gdf = osmnx.geometries_from_place("Vermont, United States", tags={"amenity": ["library"]}) print("There are "+str(len(lib_gdf))+" libraries in Vermont, USA.")
The data are returned as a GeoPandas GeoDataFrame. This enables easy plotting and reasoning over the spatial data. You can use the
head method to view the top few rows, and the
plot method to plot the geometries.
lib_gdf.head(5)
5 rows × 59 columns
lib_gdf.plot()
Query Earth OnDemand for Imagery
We pass the GeoDataFrame into our
earth_ondemand.read_catalog function. Because some geometries are points and others are polygons, we create a small buffer to force them all to be polygons, since the EarthAI Catalog API requires all input geometries to be the same type.
lib_gdf.geom_type.value_counts()
lib_cat = earth_ondemand.read_catalog(lib_gdf.buffer(0.0001), '2019-06-01', '2019-06-11', collections='landsat8_l1tp' )
You can download a companion notebook that runs through these steps from the attachment below.
Please sign in to leave a comment. | https://docs.astraea.earth/hc/en-us/articles/360051410052-How-to-Query-the-EarthAI-Catalog-with-OpenStreetMap-Data | CC-MAIN-2021-31 | refinedweb | 240 | 57.67 |
7 Essential Node.js Interview Questions*
Consider the following JavaScript code:
console.log("first"); setTimeout(function() { console.log("second"); }, 0); console.log("third");
The output will be:
first third second
Assuming that this is the desired behavior, and that we are using Node.js version 0.10 or higher, how else might we write this code?
Node.js version 0.10 introduced
setImmediate, which is equivalent to
setTimeout(fn, 0), but with some slight advantages.
setTimeout(fn, delay) calls the given callback
fn after the given
delay has ellapsed (in milliseconds). However, the callback is not executed immediately at this time, but added to the function queue so that it is executed as soon as possible, after all the currently executing and currently queued event handlers have completed. Setting the delay to 0 adds the callback to the queue immediately so that it is executed as soon as all currently-queued functions are finished.
setImmediate(fn) achieves the same effect, except that it doesn’t use the queue of functions. Instead, it checks the queue of I/O event handlers. If all I/O events in the current snapshot are processed, it executes the callback. It queues them immediately after the last I/O handler somewhat like
process.nextTick. This is faster than
setTimeout(fn, 0).
So, the above code can be written in Node as:
console.log("first"); setImmediate(function(){ console.log("second"); }); console.log("third");
What is “callback hell” and how can it be avoided?
.
Find top Node.js talent today. Toptal can match you with the best developers to finish your project.Hire Toptal’s Node.js Developers.:
function callback(err, results) { // usually we'll check for the error before handling results if(err) { // handle error somehow and return } // no error, perform standard callback handling }
Consider following code snippet:
{ console.time("loop"); for (var i = 0; i < 1000000; i += 1){ // Do nothing } console.timeEnd("loop"); }
The time required to run this code in Google Chrome is considerably more than the time required to run it in Node.js. Explain why this is so, even though both use the v8 JavaScript Engine.
Within a web browser such as Chrome, declaring the variable
i outside of any function’s scope makes it global and therefore binds it as a property of the
window object. As a result, running this code in a web browser requires repeatedly resolving the property
i within the heavily populated
window namespace in each iteration of the
for loop.
In Node.js, however, declaring any variable outside of any function’s scope binds it only to the module’s own scope (not the
window object) which therefore makes it much easier and faster to resolve. | https://www.toptal.com/nodejs/interview-questions | CC-MAIN-2017-26 | refinedweb | 450 | 58.18 |
AWS Compute Blog distributed microservices using thousands of containers.
Getting started with ECS isn’t too difficult. To fully understand how it works and how you can use it, it helps to understand the basic building blocks of ECS and how they fit together!
Amazon EC2 building blocks
We currently provide two ways to run containers: EC2 and Fargate. With Fargate, the Amazon EC2 instances are abstracted away and managed for you. Instead of worrying about ECS container instances, you can just worry about tasks. In this post, the infrastructure components used by ECS that are handled by Fargate are marked with a *.
Instance*
EC2 instances are good ol’ virtual machines (VMs). And yes, don’t worry, you can connect to them (via SSH). Because customers have varying needs in memory, storage, and computing power, many different instance types are offered. Just want to run a small application or try a free trial? Try t2.micro. Want to run memory-optimized workloads? R3 and X1 instances are a couple options. There are many more instance types as well, which cater to various use cases.
AMI*
Sorry if you wanted to immediately march forward, but before you create your instance, you need to choose an AMI. An AMI stands for Amazon Machine Image. What does that mean? Basically, an AMI provides the information required to launch an instance: root volume, launch permissions, and volume-attachment specifications. You can find and choose a Linux or Windows AMI provided by AWS, the user community, the AWS Marketplace (for example, the Amazon ECS-Optimized AMI), or you can create your own.
Region
AWS is divided into regions that are geographic areas around the world (for now it’s just Earth, but maybe someday…). These regions have semi-evocative names such as us-east-1 (N. Virginia), us-west-2 (Oregon), eu-central-1 (Frankfurt), ap-northeast-1 (Tokyo), etc.
Each region is designed to be completely isolated from the others, and consists of multiple, distinct data centers. This creates a “blast radius” for failure so that even if an entire region goes down, the others aren’t affected. Like many AWS services, to start using ECS, you first need to decide the region in which to operate. Typically, this is the region nearest to you or your users.
Availability Zone
AWS regions are subdivided into Availability Zones. A region has at minimum two zones, and up to a handful. Zones are physically isolated from each other, spanning one or more different data centers, but are connected through low-latency, fiber-optic networking, and share some common facilities. EC2 is designed so that the most common failures only affect a single zone to prevent region-wide outages. This means you can achieve high availability in a region by spanning your services across multiple zones and distributing across hosts.
Amazon ECS building blocks
Container
Well, without containers, ECS wouldn’t exist!
Are containers virtual machines?
Nope! Virtual machines virtualize the hardware (benefits), while containers virtualize the operating system (even more benefits!). If you look inside a container, you would see that it is made by processes running on the host, and tied together by kernel constructs like namespaces, cgroups, etc. But you don’t need to bother about that level of detail, at least not in this post!
Why containers?
Containers give you the ability to build, ship, and run your code anywhere!
Before the cloud, you needed to self-host and therefore had to buy machines in addition to setting up and configuring the operating system (OS), and running your code. In the cloud, with virtualization, you can just skip to setting up the OS and running your code. Containers make the process even easier—you can just run your code.
Additionally, all of the dependencies travel in a package with the code, which is called an image. This allows containers to be deployed on any host machine. From the outside, it looks like a host is just holding a bunch of containers. They all look the same, in the sense that they are generic enough to be deployed on any host.
With ECS, you can easily run your containerized code and applications across a managed cluster of EC2 instances.
Are containers a fairly new technology?
The concept of containerization is not new. Its origins date back to 1979 with the creation of chroot. However, it wasn’t until the early 2000s that containers became a major technology. The most significant milestone to date was the release of Docker in 2013, which led to the popularization and widespread adoption of containers.
What does ECS use?
While other container technologies exist (LXC, rkt, etc.), because of its massive adoption and use by our customers, ECS was designed first to work natively with Docker containers.
Container instance*
Yep, you are back to instances. An instance is just slightly more complex in the ECS realm though. Here, it is an ECS container instance that is an EC2 instance running the agent, has a specifically defined IAM policy and role, and has been registered into your cluster.
And as you probably guessed, in these instances, you are running containers.
AMI*
These container instances can use any AMI as long as it has the following specifications: a modern Linux distribution with the agent and the Docker Daemon with any Docker runtime dependencies running on it.
Want it more simplified? Well, AWS created the Amazon ECS-Optimized AMI for just that. Not only does that AMI come preconfigured with all of the previously mentioned specifications, it’s tested and includes the recommended ecs-init upstart process to run and monitor the agent.
Cluster
An ECS cluster is a grouping of (container) instances* (or tasks in Fargate) that lie within a single region, but can span multiple Availability Zones – it’s even a good idea for redundancy. When launching an instance (or tasks in Fargate), unless specified, it registers with the cluster named “default”. If “default” doesn’t exist, it is created. You can also scale and delete your clusters.
Agent*
The Amazon ECS container agent is a Go program that runs in its own container within each EC2 instance that you use with ECS. (It’s also available open source on GitHub!) The agent is the intermediary component that takes care of the communication between the scheduler and your instances. Want to register your instance into a cluster? (Why wouldn’t you? A cluster is both a logical boundary and provider of pool of resources!) Then you need to run the agent on it.
Task
When you want to start a container, it has to be part of a task. Therefore, you have to create a task first. Succinctly, tasks are a logical grouping of 1 to N containers that run together on the same instance, with N defined by you, up to 10. Let’s say you want to run a custom blog engine. You could put together a web server, an application server, and an in-memory cache, each in their own container. Together, they form a basic frontend unit.
Task definition
Ah, but you cannot create a task directly. You have to create a task definition that tells ECS that “task definition X is composed of this container (and maybe that other container and that other container too!).” It’s kind of like an architectural plan for a city. Some other details it can include are how the containers interact, container CPU and memory constraints, and task permissions using IAM roles.
Then you can tell ECS, “start one task using task definition X.” It might sound like unnecessary planning at first. As soon as you start to deal with multiple tasks, scaling, upgrades, and other “real life” scenarios, you’ll be glad that you have task definitions to keep track of things!
Scheduler*
So, the scheduler schedules… sorry, this should be more helpful, huh? The scheduler is part of the “hosted orchestration layer” provided by ECS. Wait a minute, what do I mean by “hosted orchestration”? Simply put, hosted means that it’s operated by ECS on your behalf, without you having to care about it. Your applications are deployed in containers running on your instances, but the managing of tasks is taken care of by ECS. One less thing to worry about!
Also, the scheduler is the component that decides what (which containers) gets to run where (on which instances), according to a number of constraints. Say that you have a custom blog engine to scale for high availability. You could create a service, which by default, spreads tasks across all zones in the chosen region. And if you want each task to be on a different instance, you can use the distinctInstance task placement constraint. ECS makes sure that not only this happens, but if a task fails, it starts again.
Service
To ensure that you always have your task running without managing it yourself, you can create a service based on the task that you defined and ECS ensures that it stays running. A service is a special construct that says, “at any given time, I want to make sure that N tasks using task definition X1 are running.” If N=1, it just means “make sure that this task is running, and restart it if needed!” And with N>1, you’re basically scaling your application until you hit N, while also ensuring each task is running.
So, what now?
Hopefully you, at the very least, learned a tiny something. All comments are very welcome!
Want to discuss ECS with others? Join the amazon-ecs slack group, which members of the community created and manage.
Also, if you’re interested in learning more about the core concepts of ECS and its relation to EC2, here are some resources:
Pages
Amazon ECS landing page
AWS Fargate landing page
Amazon ECS Getting Started
Nathan Peck’s AWSome ECS
Docs
Amazon EC2
Amazon ECS
Blogs
AWS Compute Blog
AWS Blog
GitHub code
Amazon ECS container agent
Amazon ECS CLI
AWS videos
Learn Amazon ECS
AWS videos
AWS webinars
— tiffany
| https://aws.amazon.com/jp/blogs/compute/building-blocks-of-amazon-ecs/ | CC-MAIN-2020-45 | refinedweb | 1,682 | 64.2 |
Using Element Trees to Parse XBEL Files
July 15, 2002 | Fredrik Lundh
The XML Bookmark Exchange Language (XBEL) is a simple XML format that can be used to store “bookmark collections” as used by Internet browsers.
Parsing XBEL Files
XBEL files are ordinary XML files. Just parse them, and you’re done:
import elementtree.ElementTree as ET bookmarks = ET.parse("bm1.xbel") for bookmark in bookmarks.getiterator("bookmark"): print bookmark.get("href"), bookmark.findtext("title")
Merging XBEL Files
Over at the ASPN Cookbook, Uche Ogbuji has posted a 90 line script which merges two XBEL bookmark files.
My ElementTree version isn’t quite ready for public consumption, but I can assure you that it’s a little bit shorter… ;-)
To be continued… | http://www.effbot.org/zone/element-xbel.htm | CC-MAIN-2018-17 | refinedweb | 122 | 57.67 |
The Firebase Realtime Database stores and synchronizes data using a NoSQL cloud database. Data is synchronized across all clients in realtime, and remains available when your app goes offline.
Before you begin
Before you can use Firebase Realtime Database,).
Setting up public access.
Create and initialize firebase::App
Before you can access the Realtime Database, you'll need to create and initialize the
firebase::App.
Include the header file for
firebase::App:
#include "firebase/app.h""));
Access the firebase::database::Database class
The
firebase::database::Database
is the entry point for the Firebase Realtime Database C++ SDK.
::firebase::database::Database *database = ::firebase::database::Database::GetInstance(app);
If you have chosen to use public access for your rules, you can proceed to the sections on saving and retrieving data.
Setting up restricted access.
If you do not want to use public access you can add Firebase Authentication to your app to control access to the database.
Next Steps
- Learn how to structure data for Realtime Database.
- Scale your data across multiple database instances.
- Save data.
- Retrieve data.
- View your database in the Firebase console.
Known Issues
- On desktop platforms (Windows, Mac, Linux), the Firebase C++ SDK uses REST to access your database. Because of this, you must declare the indexes you use with Query::OrderByChild() on desktop or your listeners will fail.
- The desktop workflow version of Realtime Database does not support offline or persistence. | https://firebase.google.com/docs/database/cpp/start?hl=nl | CC-MAIN-2019-51 | refinedweb | 234 | 57.67 |
import "net/http/httptrace"
Package httptrace provides mechanisms to trace the events within HTTP client requests.
Code:) } func(err error) // GotFirstResponseByte is called when the first byte of the response // headers is available. GotFirstResponseByte func() // Got100Continue is called if the server replies with a "100 // Continue" response. Got100Continue func() //ly successfully. // If net.Dialer.DualStack ("Happy Eyeballs") support is // enabled, this may be called multiple times. ConnectDone func(network, addr string, err error) // TLSHandshakeStart is called when the TLS handshake is started. When // connecting to a HTTPS site via) }.
func ContextClientTrace(ctx context.Context) *ClientTrace
ContextClientTrace returns the ClientTrace associated with the provided context. If none, it returns nil. struct { Host string }
DNSStartInfo contains information about a DNS request.. | https://static-hotlinks.digitalstatic.net/net/http/httptrace/ | CC-MAIN-2018-51 | refinedweb | 119 | 52.76 |
The Gaia Beta was released today by a friend of mine, Steven Sacks. Gaia is a Flash framework created for Flash Designers & Developers who create Flash sites. The reason this is an important is that it now supports ActionScript 3 and Flash CS3.
Discussions
I must of racked up at least $500 in cell phone / mobile bills talking to Steven about Gaia over the phone over the past few months (he’s in Los Angeles, I’m in Atlanta). This doesn’t include numerous emails and IM’s. We’ll argue about implementation details, coding styles, and design pattern implementations. Sometimes their just discussions about details because we agree and are on the same page. The arguments about terminology and Flash authoring techniques are usually one sided; Steven stands his ground, has chosen pretty appropriate industry lingo, and knows his audience way better than I do.
My job isn’t to congratulate him on the immense amount of work he’s done on the AS3 version, on porting the new ideas gained in AS3 development BACK into AS2, or for just the good execution of his passion. My job is to be his friend. That means to question everything to ensure he’s thought thoroughly about something, to be devils advocate, and generally be a dick to see if he cracks. If he does crack, it’s a weakness exposed, and we then have to discuss about who’s opinion on fixing it is better.
This doesn’t happen with everything, only small parts of Gaia that he asks for feedback on. The rest I have confidence he already got right… although, I did manage to write 24 TODO’s/FIXME’s for 3 classes he wanted my feedback on. F$@#ker only agreed with like 2… or at least, he only openly admitted 2 were decent. I’m sure if I did the whole framework, I’d have more, although, I might have less once I then understand most of the design decisions :: shrugs ::. Doesn’t mean Steven would agree; it’s his framework and he’s a good Flash Developer. With his understanding of other Flash Designers & Dev’s and how they work, he ultimately knows best.
Solving the “no more _global” Problem
One part Steven DID let me actually help a lot on was the global Gaia API. In Flash Player 8 and below, this Singleton existed on a namespace called “_global”. This was a dynamic object you could put anything you wanted on and all code everywhere, including dynamically loaded SWF’s, could access. Aka, the perfect place for the Gaia API Singleton. Naturally, we both were like… crap, what the heck do we do since there is no _global in AS3. Damn Java developers can do DIAF. Someone get that Python creator guy’s number and tell him that Macromedia would like to re-consider their offer back in Flash 5 instead of going with ECMA… oh wait… Macromedia is no more… dammit!
It just so happens, Steven remembered reading my blog entry with the proposed solution for Flash CS3 not having an exclude.xml option. The server architect and long time Java dev at my work, John Howard, suggested the Bridge pattern idea initially, explaining that interfaces are smaller than actual class implementations in file size. Steven and I discussed the Bridge pattern way I suggested, using internal classes in an SWC sneakily injected into people’s Libraries, and another solution proposed by one of my readers, Sanders, in the comments. The Bridge pattern seemed best, but we were concerned about file size because it was an un-tested theory. As you can see, this turned out to be a good theory; 1.3k == f’ing dope!
When I went back and re-read my blog post I realized I didn’t really explain how the Bridge pattern works in Flash Developer lingo. As my blog reader audience has accumulated Java & C++ devs just getting into Flex, I’ve tried to use lingo they’d jive with. So, let me re-hash what the Bridge pattern attempts to solve in
1 2 3 4 sentences 1 paragraph.
You cannot exclude classes in Flash CS3 using exclude.xml like you could in Flash MX 2004 using AS2. Therefore, if you re-use classes, say “Gaia.api.goto” in other FLA’s that will later be loaded in, you’re duplicating classes in many SWF’s, greatly increasing the file size of your entire site. Instead, we just created Gaia to be a shell that makes calls an object “we’ll set in the parent SWF”. This Gaia shell class compiles to 1.3k vs. the 6 to 12k the implementation would of normally taken. That’s 10k (probably more) savings per SWF.
These savings make HUGE differences on enterprise size Flash sites like Ford Vehicles and Disney; basically any huge Flash portal that gets one million+ visitors a day. Akamai or other CDN’s aren’t exactly cheap. The 10k you save per SWF could be $10,000 in bandwidth costs per month. But screw the bandwidth costs, it’s all about the user experience, baby! Fast for the win.
The gaia_internal namespace
The down side was I KNEW we’d have to expose at least 1 public variable on the Gaia Singleton. We don’t want people setting things on the Gaia api class they aren’t supposed to; whether on purpose or by accident (accidental h@xn04?). So, I copied what the Flex SDK does. They use this thing called “mx_internal”. It’s a namespace the Flex team created for the same situation: You want to expose a public property, but you don’t want other people messing with it.
You can’t use private because it’s not accessible by other classes. You can’t use protected because you have to extend the class. You can’t use public because that implies its ok to touch… like certain outfits certain genders wear… and in the same vein, that doesn’t really mean you CAN touch! In that scenario, it’s a wedding band. In the ActionScript scenario, it’s using a specifically named namespace you create your self. I suggested gaia_internal. That way, only Steven can use that namespace and thus set those properties. If other people do it, they’re either really smart, or crackheads. For the latter, it makes it easier to call someone out on doing something un-supported if they are actively using the gaia_internal namespace in their code.
It ALSO makes it easier to change implementation details in the future if Steven so chooses. Like all things in Flash, even AS3, things will be custom created for certain projects. This could include changes or extensions to the Gaia framework itself. You should encourage this instead of be against it. Therefore, keeping weird internal things in a specific namespace helps, at least a little, ensure existing projects won’t have to worry too much about changes & improvements in future versions of Gaia.
Future Solution: Using Flex’ compc
Yes, Sanders, your solution is technically superior. As others have told you, however, it is too complicated. Flash Developers thrive on getting cool stuff done quickly. While I’m sure some Linux afficiando, command line pro, Emacs weilding zealot will argue that he can run your solution faster than I can hit Control + Enter, most Flash Devs don’t care.
We all agree the Gaia api should be 1 class; not 3. The whole point of the Bridge pattern is to support new implementations. I highly doubt Steven will ever create a new implementation of Gaia; we just followed the pattern to save filesize.
Therefore, what you need to do to both win the hearts of millions of Flash designer & developers everywhere as well as fix a flaw in Flash CS3 is to write a JSFL script that does your solution; and then have a way to map your JSFL script as a keyboard shortcut (this part is easy; its built into Flash). The golden rule in Flash is you should be able to “Test Movie” and see it work. It’s the same thing as checking in code that compiles into a Subversion repository. If you nail that, you’re golden and the Bridge pattern way will then become a nightmare of the past we can all forget.
If you need help writing JSFL, let me know; my skills are rusty but I can re-learn pretty quick. The goals are:
1. Get a JSFL script to compile as normal so you can see a movie work via Control + Enter “Test Movie”
2. Get a JSFL script to run your magic so the classes you don’t want compiled in (aka Gaia, PageAsset, etc.); you can then map this to like Control + Alt + Enter (or whatever)
Conclusions
If you’re a Flash Developer who builds Flash sites, go check out Gaia. If you’re using a ton of loaded SWF’s in your site, go check out my original entry as I now have proof the theory works. If you’re Sanders, GET TO WORK! AS3 Flash Site is about to die… needs Sanders’ bandwidth reduction, badly!!!
24 TODOs? Ha! I think there were maybe 10 and most of them were unnecessary null checks. I know, I know. You think every argument passed in a function is out to get you! Might want to take some chlorpromazine for that.
Go ahead and look at the whole thing and slather my code with your TODOs. FlashDevelop has a nifty little “Tasks” panel that shows me your paranoid scribblings in an itemized list. Heeeeeeeeeeeeeeeeeeeeere’s Jesse!
Steven Sacks
January 23rd, 2008
That being said, thanks for taking a look at all, hehe. IOU one beer.
Steven Sacks
January 23rd, 2008
You’re right! I do need to get to work
And now I know for sure my explanation sucks! But I still will not be writing any JSFL though, you can actually see you’re movie working using the magic ctrl + enter combination, just download my rpc examples and see for yourself…
I believe that if you know what intrinsic is and you’re into ‘Bridge’ patterns and the sorts then you’re already a bit of a Linux afficiando, command line pro, Emacs weilding zealot anyway. So is the manual compilation of a separate ‘library’ really such a big deal?
Now I’ll stop evangelizing my solution, and so some real work!
Sanders
January 23rd, 2008
Hah, no way dude, only play games hosted on Linux servers, I don’t actually use Linux. No, your explanation doesn’t suck. There are just certain audiences that totally agree with what your doing, and others who are like, “That’s too much trouble”. We need to satiate those peeps. Cool, I’ll go download your RPC and take a closer look. Yes, compiling a separate library is hard; it’s an extra step. If you make this shiz easy for Flash peeps, it rocks. Stay tuned…
JesterXL
January 23rd, 2008
Ok, you finally got me tuned in! What are your thoughts (implementation specific) on integrating this library stuff into Flash?
Sanders
January 23rd, 2008
I’m a G at JSFL. That’s G for Gangsta AND Guru. Explain what needs to be done, and I’ll make it happen.
Steven Sacks
January 23rd, 2008
bbcode bracket tags FTL
Steven Sacks
January 23rd, 2008
HTML fix FTW
JesterXL
January 23rd, 2008
Hey, instead of duplicating all of the GaiaImp methods in the Gaia class, why don’t you just use Gaia as an holder for the GaiaImp instance ?
so in your swfs, instead of calling:
Gaia.api.goto(”…”);
you could call :
Gaia.api.impl.goto(”…”);
Gaia.api.impl would be a public var typed as IGaia, so you get strict typing and you don’t have to maintain 3 classes with the same methods…
I’d prefer to use Gaia.instance instead of Gaia.api though, it’s more standard singleton wording imo.
Or better yet, no need for a singleton, just make a public static var impl:IGaia in Gaia so you can call it like this :
Gaia.impl.goto(”…”);
Patrick Matte
January 23rd, 2008
The second you do an import AND use GaiaImpl, a child SWF will then compile that in. We don’t want that. That same 6k/10k (whatever GaiaImpl.as compiles to) is now duplicated unnecessarily in each child SWF. The only classes we are duplicating in child.swf’s on purpose are Gaia and IGaia; those together compile to 1.3k.
That way, the main.fla, the dude who runs GaiaMain, can then set this instance variable AND be the ONLY SWF who actually compiles GaiaImpl.as. Make sense?
Maintaining 2 classes isn’t hard when they both extend the same interface; if we mess up, Flash yells at us and refuses to compile.
…however, you do have a point about the instance naming convention. I believe Steven chose Gaia.api because it’s easier to type than Gaia.instance. Flash Developer pragmatism over Programmer Purism Methodology. :: shrugs ::
JesterXL
January 23rd, 2008
The reason I chose api over instance is because the actual implementation of it as a singleton is irrelevant to the developer who is accessing the API. The developer doesn’t know it’s a singleton (it used to be a static class, anyway), shouldn’t know it’s a singleton and doesn’t need to treat it like a singleton.
So now it comes down to a naming convention and I think api is very specific and successfully conveys what it is you’re accessing, while instance is too general and exposes the implementation when that exposure is, IMO, counter-productive.
Gaia.api means you’re accessing my framework’s api. It would be great if I could just say Gaia.whatever() like it was before when it was a static class, but unfortunately, you cannot put static methods in an interface.
Steven Sacks
January 23rd, 2008
No that’s what i’m saying, you don’t reference GaiaImpl explicitely because it is only typed as IGaia, not as GaiaImpl..
Patrick Matte
January 23rd, 2008
Patrick Matte
January 23rd, 2008
Great idea, Patrick!
JesterXL
January 23rd, 2008
It’s ends up being 872 bytes less in the child swf, which is about half the size it was before (1.63k). This makes logical sense in that I was duplicating every method. Awesome optimization, Patrick. Thanks!
Steven Sacks
January 23rd, 2008
Not to mention it’s very easy to maintain now.
Steven Sacks
January 23rd, 2008
[…] on Jesse’s blog, Patrick Matte pointed out that the duplication of code in the bridge class (Gaia) was entirely […]
Update: Gaia bridge pattern API | flash developer | steven sacks
January 23rd, 2008
I’m having trouble adding a trackback…
Jesse, I’ve made some JSFL in combination with a small GUI. Please check it out, and tell me what you think:
Sanders
January 29th, 2008 | http://jessewarden.com/2008/01/gaia-arguments-real-world-bridge-pattern-and-gaia_internal.html | crawl-001 | refinedweb | 2,505 | 71.65 |
Steve Langasek <vorlon@debian.org> writes: > As Ian has described it, yes: lsb-release is not "installed" until > after the python-support trigger is run, so dpkg will run that trigger > before trying to move up the stack and configure dkms. And since dkms > is not yet configured, nvidia-kernel-dkms won't be configured either. > The only exceptions would be a bug in dpkg trigger support, or a bug in > a higher level package manager passing --force-depends to dpkg. This implies to me that the following information in the python-support documentation is partially incorrect: Namespace packages are empty __init__.py files that are necessary for other .py files to be considered as Python modules by the interpreter. To avoid this being a problem, python-support will add them automatically as needed. However, this will be done later than the update-python-modules call when dpkg installs the package, because this is, like byte-compilation, a time-consuming operation. What this means is, if you need a namespace package or depend on a package that needs it, *and* that you need to use it during the postinst phase (e.g. for a daemon), you will have to add the following command in the postinst before starting your daemon: update-python-modules -p If you depend on another package that contains a namespace package, the trigger support plus the dependency should ensure that the other package is correctly configured before your postinst runs. I believe the only case where you would need to explicitly run update-python-modules -p in your postinst is if the postinst's package itself installs a Python namespace package and needs that namespace package to be configured before running that action in the postinst. In other words, the daemon package itself, if it also contains the namespace module, may need to do this. But if the namespace module is in a dependency, this should never be needed. Is that correct? -- Russ Allbery (rra@debian.org) <> | https://lists.debian.org/debian-devel/2010/07/msg00381.html | CC-MAIN-2017-39 | refinedweb | 331 | 51.89 |
reference error - cannot find tcmtextbirnerseff Jun 6, 2011 9:14 AM
I get this kind of error after upgrading flash from cs5 to cs5.5
My library paths seem to be correct
1. Re: reference error - cannot find tcmtextJin-Huang
Jun 6, 2011 6:23 PM (in response to birnerseff)
Pls give me the complete error message. For example, tcmtext in which class cannot be found?
2. Re: reference error - cannot find tcmtextbirnerseff Jun 6, 2011 8:42 PM (in response to Jin-Huang)
Hello, it is no more than
Reference Error: Error #1065 Variable TCMText is undefined
The message appears at the end of "test movie", and the movie does not seem to include any code.
I have tried two different configurations (one with authortime import of a bunch of symbols from a separate fla file, and the other one using Flex "Embed" mechanism to reference them, but the end result is the same.
3. Re: reference error - cannot find tcmtextJin-Huang
Jun 6, 2011 9:42 PM (in response to birnerseff)
There must be xxx.TCMText. Pls tell me the caller's class name. Is caller an instance of fl.text.TLFTextField or a TLF object?
4. Re: reference error - cannot find tcmtextbirnerseff Jun 7, 2011 12:32 PM (in response to Jin-Huang)
that is all of the error message...
I just saved as xfl and searched for contained text: there is no mention of TCMText anywhere, there is, however, both DOMText and DOMTLFText
5. Re: reference error - cannot find tcmtextJin-Huang
Jun 8, 2011 1:53 AM (in response to birnerseff)
I did not find tcmtext even in TLF 1.0. So pls port the question to.
6. Re: reference error - cannot find tcmtextgrandcedric Jun 10, 2011 7:26 PM (in response to Jin-Huang)1 person found this helpful
I was running Adobe Flash Professional CS5 CIAB through my recently upgraded Flash CS5.5. In Chapter 7, Using Text, I got these errors:
ReferenceError: Error #1065: Variable TLFTextField is not defined.
ReferenceError: Error #1065: Variable TCMText is not defined.
I also though it was a Text Layout Format issue. That's how I landed here. I was about to give after reading this and similar threads when I re-read my Actions panel and discovered I had misspelled 'addEventListener' (see below).
calculateBtn.addEventListtener(MouseEvent.CLICK, calculateMonthlyPayment);
After correcting the typo, the errors went away. Thought this might be helpful.
Now, I'm wondering why my typo doesn't get flagged through Check Syntax. But, that's for another time.
7. Re: reference error - cannot find tcmtextbirnerseff Jun 11, 2011 12:10 AM (in response to grandcedric)
Hi, it is possible that a typo can go undetected .... if it refers to a dynamic class; it should throw a runtime error instead.
This behavior looks like a definite bug - misreporting errors.
So this would leave me with the Herculean task of extracting all names from the project and finding the one that is misspelled....
I dont know whether something similar exists for flash, but other environments would allow to inspect the generated code (and its import and export names) right before linking all the project files together
8. Re: reference error - cannot find tcmtextJin-Huang
Jun 12, 2011 6:43 PM (in response to grandcedric)
See it from #2
What's more, TLFTextField is not a class in TLF but a class given by Flash pro to hold TLF text.
9. Re: reference error - cannot find tcmtextbirnerseff Jun 13, 2011 2:08 PM (in response to Jin-Huang)
Hi, I finally realized that I just missed the error messages
With CS5 (as with every flash version before), the movie starts playing even if compilation fails, and flash shows compiler messages.
With CS5.5, although publish settings are to merge code, the movie starts playing and reports the tcmtext error, thereby switching from compiler messages to runtime output
10. Re: reference error - cannot find tcmtextTim*Lewis Mar 18, 2012 4:21 PM (in response to birnerseff)
This wasn't answered. Just some people giving up because the Adobe user forums suck so bad. This is a bug. Like a lot of Adobe's glitches there doesn't seem to be any support from the corporation just a bunch of us poor fools searching for another work around.
I get the same error code after importing some text as symbols in to Flash from Illustrator.
Searched for the last hour and Adobe has no info out there that leads to a sollution. Another dead end and time to start from scratch. Thanks Adobe, your monopoly stands only because nothing better has come along yet. As soon as it does you can kiss my license fees good by.
11. Re: reference error - cannot find tcmtextkeldonrush May 28, 2012 7:56 PM (in response to birnerseff)
I have Flash Pro CS 5.5.
I made a singleton to manage the contextMenu for the document class of a couple of different SWFs.
I immediately experienced the dreaded : ReferenceError: Error #1065: Variable TCMText is not defined.
It wasn't until I was importing these two classes that I had this problem (with this project) :
import flash.ui.ContextMenu;
import flash.ui.ContextMenuItem;
My class that is importing these two classes would instantiate and then I immediately get the runtime error (1065).
THE FIX :
I added this line and it compiled and worked for me :
public var TCMText : *;
I think the bug is that some Adobe code is expecting a dynamic class that it can stick the nefarious 'TCMText' property on and when it can't it freaks out. Likely if my class was declared as dynamic (like the MovieClip class is) this wouldn't be a problem.
I have generally stayed completely away from TLF TextFields because I have perceived them to be problematic. I have gotten this error before and converting a TextField to a 'classic' TextField in the IDE properties panel made it go away.
I hope this helps. It *would* be cool if someone from Adobe could shed some light on this. | https://forums.adobe.com/message/4274716?tstart=0 | CC-MAIN-2018-26 | refinedweb | 1,012 | 61.46 |
I was recently asked if there was a way to set the Azure IoT Hub connection string for an MXChip board in code. Normally you'd push this to the EEPROM using the tooling in VS code, or from a terminal using SSH as described here. In this situation, this was for students and was needed for two reasons:
NOTE - this is potentially a very bad thing as you can end up essentially putting secrets in code. DO NOT do this for public code, code then ends up on GitHub or anything like this, this only makes sense for private code submitted internally for something like a students assessment using a hub on a free tier so cannot cause any cost if it gets flooded.
Out of the box there are no APIs available to do this. However, there is a way!
When connecting to Azure IoT Hub over MQTT, you call DevKitMQTTClient_Init and this loads the connection string from EEPROM and uses it for the connection. As it turn out, as well as being able to read from EEPROM in code, you can also write to the EEPROM, meaning you can set the value before it is read.
Using this, it wasn't too hard to write the code to set this value:
#include "EEPROMInterface.h" #include "SerialLog.h" ... void setup() { ... if (WiFi.begin() == WL_CONNECTED) { // Write the connection string to EEPROM as an array of uint8_t EEPROMInterface eeprom; char connString[] = "<my connection string>"; int ret = eeprom.write((uint8_t*)connString, strlen(connString), AZ_IOT_HUB_ZONE_IDX); // Check the write worked - 0 means it was written // Less than 0 is an error if (ret < 0) { LogError("Unable to set the connection string in the EEPROM."); return; } // Connect as normal, this will read the new value // for the connection string DevKitMQTTClient_Init(); ... }
Replace <my connection string> in the above code with your connection string. It will then be written to the EEPROM before the call to DevKitMQTTClient_Init.
If you read the EEPROM write documentation, you will see zones listed. These are defined areas in the EEPROM and you can use these to write the WiFi SSID and Password as well as the connection string. This is useful if you want to build a solution that downloads new WiFi details.
Want more IoT Content? Check out the IoT Show on Channel 9, or work through one of our hands-on self guided IoT learning paths at Microsoft Learn.
Thanks for Sharing with the Community | https://techcommunity.microsoft.com/t5/educator-developer-blog/setting-an-azure-iot-hub-connection-string-in-code-on-an-mxchip/ba-p/1183506 | CC-MAIN-2020-50 | refinedweb | 407 | 69.21 |
Type: Posts; User: ]jk[; Keyword(s):
Did you register first?
My problem is that I cant calculate the original unsigned int of these 4 bytes... I have searched google for an hour but that didnt get me further....
Can anyone please explain me the way to make...
When I read in from port 5842 I get 16 bytes. There are supposed to be 4 unsigned ints, so every unsigned int have 4 bytes.
But every unsigned int normally have only 2 bytes, or?
Here are 4...
Ok... thanks. ;)
But its possible to take this "challenge" in python, or? is unfortunately down...
Hi,
I am currently trying to get past vortex level 00.. but I have a problem. :o
Here is my code (in python):
import socket
HOST...
Hi DakX,
you have to code yourself a programm in C or whatever language you prefer. And this program have to ignore every second byte when it reads from port 24000.
The guys from...
Thanks for the links guys. :)
Hi,
maybe this article helps:
Havent looked at it but maybe this works for your purpose. | http://www.antionline.com/search.php?s=0c927829b2fdfcb8ed4e5c2761a11b12&searchid=2245949 | CC-MAIN-2015-14 | refinedweb | 182 | 86.6 |
In Scala, def defined a function, But i don't understand the below code.
Ex.
def v = 10
it's a function that always returns 10. in Java, the equivalent would be
public int v() { return 10; }
this might seem pointless, but the difference is real, and sometimes importantly useful. for example, suppose i define a trait like this:
trait Wrench { val size = 14 //millimeters, the default, most common size }
if i need different size wrench, i can refine the trait
val bigWrench = new Wrench { override val size = 21 }
but what if I want an adjustable wrench?
// mutable! not thread safe! class AdjustableWrench extends Wrench { var adjustment = 0 override val size = 14 + (3 * adjustment) // oops! def adjust( turns : Int ) : Unit = { adjustment += turns } }
this won't work!
size will always be 14!
if I had defined my trait originally as
trait Wrench { def size = 14 //millimeters, the default, most common size }
i'd be able to define
bigWrench exactly as I did above, because a
val can override a
def. but now i can write a functional adjustable wrench too:
// mutable! not thread safe! class AdjustableWrench extends Wrench { var adjustment = 0 override def size = 14 + (3 * adjustment) // this works def adjust( turns : Int ) : Unit = { adjustment += turns } }
by originally defining
size as a
def, rather than a
val in the base trait, even though it looked dumb, I preserved the flexibility to override with
def or
val. it's quite common to define a base trait with very simple defaults, but where implementations might want to do something more complicated. so statements like
def v = 10
are not at all rare.
to get your head around the difference a bit more, compare these two:
def vDef = { println("vDef") 10 }
and
val vVal = { println("vVal") 10 }
both
vDef and
vVal will evaluate to 10 whenever you access them. but each time you access vDef, you will see the side effect, a print out of
vDef. no matter how any times you access
vVal, you will see
vVal printed out just once. | https://codedump.io/share/EJpwA9DIwOGR/1/scala-what39s-def-actually-do | CC-MAIN-2016-50 | refinedweb | 338 | 68.6 |
About this series
Part 1, this article, kicks off the series by showing you how to install everything necessary to get started writing Python scripts on your Android device. Part 2 will present useful scripting examples to get real work done. It will also explore some of the available Android API calls, including the various windows. Finally, the series will explore how to build a complete user interface just like you would in the Java language.
A common misconception about developing for the Google Android platform is that you have to write your code in the Java™ language. The truth is, you actually have a multitude of options through the Scripting Layer for Android (SL4A) project. SL4A started out as a 20% project by Google employee Damon Kohler. That was almost two years and four major versions ago.
SL4A provides a platform for several scripting languages, including Beanshell, Lua, Perl, Python, and Rhino. There's also support for basic shell scripting. Today, the Python portion of the SL4A project has developed into a project of its own, due in part to the popularity of Python and the desire to decouple the releases of new Python functionality from the main SL4A release cycle.
This article focuses on using Python to write applications for the Android platform. Python is a great tool for writing both simple scripts and complex, multi-threaded applications. The great thing about having Python on Android is the opportunity to use the untold thousands of lines of code already written and freely available. Python is an easy language to learn if you've never used it before, and you will find many resources available on the Internet to help get you up to speed.
Installation and setup
You must download and install several prerequisites before you can start developing with SL4A. The first is a full Java Development Kit (JDK). You can find the latest version on the Oracle Developer site.
Next you need the Android software development kit (SDK). Two download choices are available on the main Android developer site: a .zip file and an .exe file. When you download and run the .exe file, you'll be presented with a screen where you must choose which versions of the SDK and support files you want to install (see Figure 1).
Figure 1. Choose which Android SDK options you want to download
For this article, I installed and tested everything on a Windows® 7 64-bit machine.
Because this article is about developing applications for the Android platform using Python, you obviously need to install Python on your development machine. Windows does not come with Python installed. As of this writing, the SL4A Python version is 2.6.2. Download either the 32- or 64-bit version of Python 2.6 to stay compatible.
It's a good idea to add a few links to the Android SDK in your PATH statement to make it easier to launch the SDK Manager and other tools. To do this in Windows 7, perform these steps:
- Press the Windows key, and then click Search.
- In the text box, enter Environment.
- Click Edit the system environment variables.
- In the window that opens, click Environment Variables, then select the PATH variable in the User variables list.
- Click Edit, and then add the path to your Android SDK tools directory.
The string you need to add looks like this:
;C:\Users\paul\Downloads\android-sdk-windows\platform-tools
You must add the semicolon (
;) before the new path to
append a new directory. Once that's entered, click OK three times.
Installing SL4A on an Android device is similar to the process for any other Android application. You can scan the QR code on the main SL4A project site with your device to download the SL4A installation file. It should automatically launch when the download is finished. At this point, you should see an installation screen like the one in Figure 2.
Figure 2. SL4A installation screen
Clicking Install starts the process.
The final step is to install the Python interpreter on your device. You can do so using
any of several methods. From the emulator, you can enter
sl4a download in the browser's search box
(Figure 3).
Figure 3. The SL4A download screen
Clicking the PythonForAndroid_r4.apk link starts the download. To actually launch the installer, view the notification screen by clicking and dragging from the top of the emulator screen toward the bottom of the screen. Clicking the Download complete message launches the Python for Android installer (Figure 4).
Figure 4. Python for Android initial installation screen
Clicking Install launches a process that downloads and unpacks several .zip files. For the purposes of this article, simply click Install on the primary installation screen (Figure 5).
Figure 5. Python for Android primary installation screen
You should see three separate progress windows. The first shows the download, and then the extraction of the files onto the SD card. If everything works, an "Installation Successful" message appears.
Android SDK basics
There are two basic methods for testing your Python code using SL4A: using an emulator or using an actual physical device. The Android SDK provides basic emulator capability and the tools to create an emulated device with the same characteristics as a physical device. In some cases, as with the Samsung tablet add-on, you have a preconfigured emulator available for your use.
The SDK Manager functions as both an update manager and a virtual device creation tool. Each time you launch SDK Manager, it connects to the Android developer site to check for new releases. (You can bypass this process by clicking Cancel.) At this point, you should see the Android SDK and AVD Manager window, shown in Figure 6.
Figure 6. Android SDK and AVD Manager
Selecting Virtual devices in the directory tree displays all previously defined Android virtual devices (AVDs) in the details pane. To create a new emulator device, click New. In the Create New Android Virtual Device (AVD) window, provide the required information in the Name, Target, and SD Card Size fields. Figure 7 shows the entries for my test device. The name must not contain spaces, and you should allow at least 100MB for storage. Choose the appropriate Android version number for your target device. This drop-down list displays only the available options previously downloaded.
Figure 7. The Create New AVD Wizard
Next, click Create AVD. A pop-up window provides the details of your new device. To launch any of the available emulator images, select the desired target, and then click Start. In the Launch Options window, you can proceed with defaults for screen size, or you can select the Scale display to real size check box and choose something larger. A value of 7 seems to work well (see Figure 8). To launch the emulator with a clean slate, select the Wipe user data check box.
Figure 8. AVD launch options
Another indispensable tool provided with the Android SDK is the Android Device Bridge (ADB). This tool provides such functions as installing applications (.apk files), copying files to or from a device, and issuing shell commands on the device. You also use ADB to actually launch SL4A on a device so that you can execute programs from your workstation. To establish communication between your host workstation and a device, you must use ADB to forward TCP/IP traffic from port 9999 to a specific port on the device. Open a Command window, and enter the following command line:
$ adb forward tcp:9999 tcp:42306
The second port number comes from the device. With the latest version of the SL4A, you can set this number in the preferences. For the standard release, you have to use the number SL4A gives you.
Now, launch SL4A, and then click Menu. From the six options at the bottom of the window, click View, then click Interpreters (Figure 9).
Figure 9. Launch a remote server from the SL4A Interpreters menu
Click Menu once more, then click Private to launch a private server.
For a real device, the difference is that Private starts the server using the USB port, and Public uses Wi-Fi. If you view the notifications page again, you'll see that the SL4A service is running (Figure 10).
Figure 10. Android notification screen
Click the message to see the actual port number assigned. In this case, you use port
number 42306 for the second TCP value in the
adb forward
command. Now, you're ready to actually write some Python code and test it on the
device.
Hello World in Python for Android
It's almost obligatory in any introductory programming article to write a "hello world" program. I do that here to demonstrate the number of ways you can write and test your Python code using SL4A. Here's what the code looks like:
import android droid = android.Android() result = droid.makeToast('Hello, world!')
Every SL4A scripting language uses an import file—android.py for Python, in this case—to set up the interface between the script and the underlying Android application programming interfaces (APIs). You can enter this code directly on your device either in the interpreter (refer back to Figure 9) or by using the editor. To use the interpreter, from the Interpreters screen, launch the Python interpreter by selecting Python 2.6.2. On the resulting screen, you can enter the code above; after the last line, you should see a pop-up window with the words "Hello, world!"
Typing on either an emulated or real device can be tedious. Python's IDLE console and
editor prove indispensable when combined with the ADB tool to write code on a PC
and test it on an Android device. The only thing you'll need is a local copy of the
android.py file. You can either extract it from the python_extras_r14.zip file
available on the SL4A downloads page or transfer it from the device using the
adb pull command. It's also handy to have a directory
named SDCARD at the root of your primary system drive to mirror what's
on your emulated device. This makes things easier from a file path perspective
whenever you run a script on the local machine that needs to access the file system.
Figure 11 shows the Hello World script in the IDLE console.
Figure 11. Hello World in the Python IDLE console
If you launched the server and executed the
adb forward
command, you should see no error and the "Toast" message shown in
Figure 12.
Figure 12. Hello World pop-up message
In Windows, you can launch an edit window in IDLE by clicking File > New Window. This window gives you a complete edit and test capability from your development machine to either an emulated or real Android device.
Resources
Learn
- Visit the SL4A Google Project site.
- Learn more at the Python for Android Project page.
- Find the Python resources you need at Python.org.
- Read more developerWorks articles by Paul Ferrill.
- In the developerWorks Open source zone, find hundreds of how-to articles and tutorials, as well as downloads, discussion forums, and a wealth of other resources for developers.
- Check for mobile updates on the developerWorks Mobile development blog.
- You'll find more Android SL4A, or visit the unofficial release site.
- Download Python for Android.
- Download the Android SDK.
- Download the latest JDK.
-. | http://www.ibm.com/developerworks/library/mo-python-sl4a-1/ | CC-MAIN-2014-52 | refinedweb | 1,900 | 64.3 |
July 20, 2016 byMichael
This is a technical post, probably only of interest to PyQt developers.
When developing a PyQt app, you will need to codesign it in order to avoid the following warning on your users' machines:
What's more, you often want your app to be able to automatically update itself. This post shows how you can implement both auto-updating and codesigning in PyQt-based apps on OS X.
Esky is not the answer
Esky is an open-source auto-update framework for Python apps. It has a nice API and makes it seemingly easy to have your app update itself automatically. The problem is, it does not really work with codesigning because it does not conform to OS X's required bundle structure . What's more, development of Esky seems to be borderline inactive.
Use PyInstaller
There are a couple of tools for turning Python code into deployable applications, a process called "freezing". I looked at the following options:
- bbfreeze does not support Python 3 and is unmaintained .
- py2app is "not moving forward" because the author lacks the time.
- cx_Freeze was last updated 18 months ago.
- pyqtdeploy involves a Qt-based build process. Its GUI helper crashed when I tried to set up the (comprehensive) configuration.
- PyInstaller seems to be the most actively developed, with the last release from 2 months ago.
I have used cx_Freeze, py2app and PyInstaller extensively in the past two weeks. Esky (which I originally wanted to use) only supports cx_Freeze and py2app. But I've had immense trouble with the two, probably because they don't support the latest versions of PyQt. I gave up on py2app after not being able to find out why it made my app crash with message
Abort trap: 6 . If you found this page on Google searching for this error, I recommend you use PyInstaller. Despite being cross-platform, it can output OS X .app bundles with the required directory structure and supports PyQt5 out of the box.
Sparkle for PyQt apps
Sparkle is an auto-update framework for OS X applications. You normally configure it using Xcode. But as it turns out, it's also possible to use it with PyQt applications. You need the pip dependency
pyobjc-core (the whole
pyobjc is not required) and the following code:
# Your Qt QApplication instance QT_APP = ... # URL to Appcast.xml, eg. APPCAST_URL = '...' # Path to Sparkle's "Sparkle.framework" inside your app bundle SPARKLE_PATH = '/path/to/Sparkle.framework' from objc import pathForFramework, loadBundle sparkle_path = pathForFramework(SPARKLE_PATH) objc_namespace = dict() loadBundle('Sparkle', objc_namespace, bundle_path=sparkle_path) def about_to_quit(): # See objc_namespace['NSApplication'].sharedApplication().terminate_(None) QT_APP.aboutToQuit.connect(about_to_quit) sparkle = objc_namespace['SUUpdater'].sharedUpdater() sparkle.setAutomaticallyChecksForUpdates_(True) sparkle.setAutomaticallyDownloadsUpdates_(True) NSURL = objc_namespace['NSURL'] sparkle.setFeedURL_(NSURL.URLWithString_(APPCAST_URL)) sparkle.checkForUpdatesInBackground()
This is the absolute core of the Python part of the solution. For more information, please consult the Sparkle Documentation .
If you want to use Sparkle's Delta Update mechanism, you also need to move the file
your.app/Contents/MacOS/base_library.zip which is created by PyInstaller to
your.app/Contents/Resources/base_library.zip . Then create a symlink to
../Resources/base_library.zip at
your.app/Contents/MacOS/base_library.zip so PyInstaller can still find the file.
If you are a programmer, you may be interested infman. It's a modern file manager that can save you a lot of time in your daily work. | http://126kr.com/article/5cfe4z2l2lp | CC-MAIN-2016-50 | refinedweb | 563 | 50.23 |
(18)
Mahesh Chand(10)
Jean Paul(2)
Swatismita Biswal(1)
Saravanan Ponnusamy(1)
Manpreet Singh(1)
Prasanna Murali(1)
Sibeesh Venu(1)
Kuppurasu Nagaraj(1)
Nakkeeran Natarajan(1)
Venkatesan Jayakantham(1)
Gagan Sharma(1)
Ranjan Dailata(1)
Chandradev (1)
Vinod Kumar(1)
Destin joy(1)
Prabhakar Maurya(1)
Shubham Srivastava(1)
Muralidharan Deenathayalan(1)
Amit Choudhary(1)
Rahul Saxena(1)
Praveen Alwar(1)
Munir Shaikh(1)
Moses Soliman(1)
K S Ganesh(1)
Amit Anajwala(1)
Resources
No resource found.
Configure Cross Firewall Access Zone In SharePoint 2013 Central Administration
Feb 15, 2017.
In this article, we will see how to configure cross Firewall access zone in SharePoint 2013 Central Admin.
Working On DNS Zone Using Azure
Sep 18, 2016.
In this article, you will learn how to create DNS Zone using Azure.
Creating Time Zone Calculator Using jQuery & Google API
Jul 04, 2016.
In this article, you will learn about Time Zone Calculator.
How To Create A DNS Zone In Azure
Apr 10, 2016.
In this article you will create a DNS zone using the Azure portal..
Add Clock To The Existing Clock In Windows 10
Feb 16, 2016.
In this article you will learn how to add clock to the existing clock in Windows 10.
Convert Local Time From Other Time Zones In SQL Server
Nov 23, 2015.
This article consists of examples to resolve issues related to find out time as per different time zones and basic time related functions.
Android Wear Watch Face Design And Development
Oct 17, 2015.
In this article, I will be walking you through android wear watch face design and development.
How to Add URL to Trusted Zone in IE
Nov 20, 2014.
In this article you will learn how to add an URL to a Trusted Zone in IE.
Configure App Catalog in Sharepoint 2013
Aug 06, 2014.
An App Catalog is the page from where users can choose new apps; there are paid and free apps available. But, the App Catalog does not come pre-configured. We need to do 10-30 minutes of activities to configure it, depending on the farm complexity.
How to View When a Zone Can Start Scavenging of Stale Resource Records
Jun 13, 2013.
In this article you will learn how to view when a Zone can start Scavenging of Stale Resource Records.
How to Enable Automatic Scavenging of Stale Records
Jun 12, 2013.
In this article you will learn how to enable Automatic Scavenging of Stale Records.
How to Apply Only Secure Dynamic Updates to the Forward Lookup Zone
Jun 11, 2013.
In this Article you will learn about How to Apply Only Secure Dynamic Updates to the Forward Lookup Zone.
How to Modify Security For the Active Directory-Integrated Zone
Jun 11, 2013.
In this article you will learn how to modify security for the Active Directory-Integrated Zone. Create and Manage Notify List For a Zone
Jun 07, 2013.
In this article you will learn how to create and manage a Notify List for a Zone.
How to Create a Zone Delegation in Forward Lookup Zone
Jun 07, 2013.
In this Article you will learn about How to Create a Zone Delegation in Forward Lookup Zone.
How to Specify Other DNS Server as Authoritative For a Zone
Jun 05, 2013.
In this Article you will learn about How to Specify other DNS Server as Authoritative for a Zone.
How to Modify Zone Transfer Settings Using Windows Interface
Jun 05, 2013.
In this article you will learn how to modify the Zone Transfer Settings using the Windows interface.
Providing Refresh, Retry and Expire Interval For a Zone
Jun 05, 2013.
In this Article you will learn about How to Provide Refresh, Retry and Expire Interval for a Zone.
How to Add a Resource Record to a Zone Using Windows Interface
Jun 03, 2013.
In this article you will learn how to add a Resource Record to a Zone using the Windows interface in Windows Server 2012.
How to Change the Zone Type Using Windows Interface in Windows Server 2012
Jun 03, 2013.
In this article you will learn how to change the Zone Type using the Windows interface in Windows Server 2012.
How to Create a Stub Zone in Windows Server 2012
Jun 01, 2013.
In this article you will learn how to create a Stub Zone in Windows Server 2012.
How to Create Secondary Zone in Windows Server 2012
Jun 01, 2013.
In this article you will learn how to create a Secondary Zone in Windows Server 2012.
Time Zone Function in PHP
May 27, 2013.
In this article I will explain two important time zone function date default timezone get and date default timezone set in PHP.
How to Configure the DNS Reverse Lookup Zone
May 13, 2013.
In this Article you will learn about How to Configure the DNS Reverse Lookup Zone.
Using DNS Server to Add New Host and New Zones in Windows Server 2012.
Feb 26, 2013.
In today's article you will learn how to use DNS Server to add New Zones and New Host in Windows Server 2012.
Using DNS Server to Add New Zone and New Host in Windows Server 2012:Part 2
Feb 26, 2013.
In this article you will learn what changes are required for your browser so that it can support the New Zones and New Host made in Forward and Backward Lookup Zones.
Configure Office Web Apps For SharePoint 2013: Part II
Feb 14, 2013.
This article is a continuation of the deployment procedures that are described in my previous article Configure Office Web apps for SharePoint 2013 Part I.
How to Change Time Zone in Windows 8
Dec 27, 2012.
In this article we are explaining how to change date and time zone in Windows 8.
No connection could be made because the target machine actively refused it
Apr 26, 2012.
Fix for error no connection could be made because the target machine actively refused it.
Multi-Time Zone Clock in Windows 8
Apr 23, 2012.
The Windows 8 operating system has the ability to show multiple clocks with two different time zones.
How To Do with SharePoint Web Part
Jun 01, 2011.
A SharePoint Web Part is a server side control that can be added into Webpart zones in Webpart pages in a SharePoint environment...
System.Windows.Xps.Packaging Reference Missing
Mar 01, 2010.
If you need to use XpsDocument class in your WPF project, you must add reference to System.Windows.Xps.Packaging namespace.
Geographical Information by IP Address in ASP.NET and C#
Jun 02, 2009.
This article and attached code demonstrates how to get a website visitor's geographical information such as country, region, city, latitude, longitude, zip code, time zone by using his or her IP address..
Invalid FORMATETC structure Error
Sep 12, 2008.
This tip shows how to fix Invalid FORMATETC structure error when you drag and drop a control from Toolbox to a XAML file.
One or more rows contain values violating non-null, unique, or foreign-key constraints
Mar 14, 2008.
You may get this error when using a typed DataSet. This tip shows how to fix it.
Understanding WEBPARTS in ASP.NET 2.0: Part I
Sep 24, 2007.
In this article I am going to discuss about WEB PARTS in ASP.NET 2.0, and the most exciting feature of ASP.NET 2.0.
ASP.Net User Control as Web Parts
Jul 09, 2007.
In this article I would like to give answers to some of the terms and also given some steps to deploy the web part.
Web Access Failed Error
Mar 30, 2007.
Web Access Failed Error Message When You Use Visual Studio .NET with IIS 6.0 to Create an ASP.NET Web Application.
Aspnet_wp.exe was recycled error
Mar 27, 2007.
Occasionally, slow writes to a client cause Aspnet_wp to recycle on false deadlocks, which generates this error in event log..
HTTP:/1.1 500 Internal Server Error
Feb 09, 2007.
If you are running two versions of ASP.NET, you may get HTTP:/1.1 500 Internal Server Error when creating a new Web project or opening an existing Web project in Visual Studio..
Get Current Time Zone in C#
Aug 02, 2005.
This how do I explains how to use the TimeZone class and its members to get information about current time zone using C#.
About WOPI-zones
NA
File APIs for .NET
Aspose are the market leader of .NET APIs for file business formats – natively work with DOCX, XLSX, PPT, PDF, MSG, MPP, images formats and many more! | http://www.c-sharpcorner.com/tags/WOPI-zones | CC-MAIN-2017-43 | refinedweb | 1,435 | 74.08 |
Pop Searches: photoshop office 2007 PC Security
You are here: Brothersoft.com > Windows > Social & Communication > SMS Tools > | n70 nokia pc | wav to text | gtunes for pc | mobile phone pc | google goggles for | bulk sms 1.0 | bulk sms software | mobile phone locator | youtube to phone | pc sms bulk | free bulk sms | brother software
Please be aware that Brothersoft do not supply any crack, patches, serial numbers or keygen for Pocket PC Bulk SMS,and please consult directly with program authors for any problem with Pocket PC Bulk SMS.
Advertisement
import sms pocket pc | wassup software | cell phone video player | samsung pc suite | pc to mobile bulk sms sender | 160by2 sms software | mp3 cutter software | antivirus jar software mobile | send bulk sms | n70 nokia pc suite | bulk sms software | bulk SMS | bulk sms pc to mobile | pc bulk sms | online pc bulk sms | youtube software | bluetooth software for pc | pc to mobile bulk sms sender | youtube mobile software | jar software | bulk sms sender 1.0 | mobile phone pc suite | phone redtube 3gp | http://www.brothersoft.com/pocket-pc-bulk-sms-download-210597.html | CC-MAIN-2018-30 | refinedweb | 171 | 54.49 |
On Wed, Aug 11, 2004 at 05:16:35AM -0500, James Ketrenos wrote:
> We're currently working to clean up ipw2100 and ieee80211 code for submission
> to
> netdev for discussion and hopefully inclusion in the future. The ieee80211
> code
> is still being heavily developed, but its usable. If anyone wants to help
> out,
> or if folks feel its ready as-is to get pulled into wireless-2.6, let me know.
Maybe we should switch to your ieee802.11 for a generic wireless stack then
instead of the original hostap code. At least it seems more actively
maintained right now and supports two drivers already.
Btw, I've looked at the ipw2100 and have to concerns regarding the firmware,
a) yo'ure not using the proper firmware loader but some horrible
handcrafted code using sys_open/sys_read & co that's not namespace
safe at all
b) the firmware has an extremly complicated and hard to comply with license,
I'm not sure we want a driver that can't work without a so strangely
licensed blob in the kernel. Can you talk to intel lawyers and put it on
simple redristribution and binary modification for allowed for all purposes
license please?
> Thanks,
> James
>
>
>
---end quoted text--- | http://oss.sgi.com/archives/netdev/2004-08/msg00243.html | CC-MAIN-2014-15 | refinedweb | 205 | 68.91 |
hello.
i was wondering if same one can help me with 2 thinks and if you se samthing wrong white the code you can say that 2 so i can fix it.
this is my code.
using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace Uppgift_1 { class Product { private string Produuct; private double Price; private bool Food; private int count; private const double foodVATRate = 0.12, otherVATRate = 0.25; private decimal finalprice; public void Readinput() { Console.Write("What is the product you want: "); Produuct = Console.ReadLine(); Console.Write("Unit price: "); Price = Console.ReadLine(); Console.Write("Food item y/n: "); Food = Console.ReadLine(); Console.Write("Count: "); count = Console.ReadLine(); private void calculateValues() { finalprice = Price * count; } } }
and i'm wondering what do i need to do so i can but this in a printresults, dont need the code for it i only want to know where i can put it. i mean in a private void or in a public and where do i write it
and where i'm i going to put the if and else code, dont need that code for it
Sorry for the bad english | https://www.daniweb.com/programming/software-development/threads/406940/if-and-printresults | CC-MAIN-2018-30 | refinedweb | 192 | 67.55 |
When it comes to web development, Sinatra is amazingly flexible. Unlike Rails, it isn’t opinionated in the slightest and basically lets you make all the design decisions. It does have some conventions, such as automatically looking for view templates in the ‘views’ folder, but virtually all of these default settings can easily be changed. Sinatra doesn’t make any decisions for you – you literally start with a blank slate. Konstantin Haase, maintainer of Sinatra, refers to this as Sinatra’s biggest strength but also its biggest weakness, since Sinatra isn’t going to stop you from writing bad code.
Given that there are so many choices that you can make when creating an application in Sinatra, I decided to ask around and find out how people roll when they use it. I asked the following questions:
- Do you have a set folder structure or coding patterns?
- Do you tend to use classic or modular style?
- Do you use any bootstrap code?
- Do you ever use inline-views?
- Anything Else?
I got some interesting responses that I thought I’d share on here. You can also view the whole thread here:
Do you have a set folder structure or coding patterns?
Sinatra has no directory structure to speak of – you don’t even get an application folder, unless you create it yourself. As mentioned earlier, it has some nice defaults like automatically keeping view templates in the ‘views’ directory and public assets in the ‘public’ directory and using a file called layout.erb as the default layout. All of these can be easily changed using the set method, like so:
set :public_folder, 'assets' set :views, 'templates' set :erb, :layout => :base
A lot of the people I asked tended to use a Rails-like structure of ‘Models, Views and Controllers’ folders. They also tended to use a similar structure to that used by RubyGems with folders such as ‘lib,test/spec/, public’.
Another popular technique was to use a file called ‘init.rb’ that requires all the other relevant files. This makes it useful for running your app from the console or during tests.
Blake Mizerany, the creator of Sinatra, said that he preferred to use a single directory where all of his modules and views were kept in one place.
I like to keep my folder strucutre very simple, usually with a file called main.rb that contians most of the application code. I will then usually use a public and views folder and then leave it at that. Any extra files will usually go in the root directory.
Do you tend to use classic or modular style?
Sinatra has two distinct styles of coding – classic and modular. Most examples that you find on the web are classic applications, here is another example:
require 'sinatra' get '/hello' do "Hello World!" end
The same app done modular style would look like this:
require 'sinatra/base' class Hello < Sinatra::Base get '/hello' do "Hello World!" end end
As you can see, the main difference in a modular-style application is that all of the code is wrapped in a class that is subclassed from Sinatra::Base. Whereas, in a classic application you just require ‘sinatra’ and get on with it – this tends to be the style used in most onine tutorials.
Most people who responded to my questions preferred to use the modular style. Josh Cheek mentioned that classic style is useful for demonstrating techniques (hence the reason why it’s probably used for most examples on the web).
John Nunemaker (GitHub) said:
I would never use classic anymore. Too pollutive.
This refers to the fact that the global namespace can become polluted with methods of the same name. This is not usually a problem when writing small applications, but can become more of an issue if you are writing a large modular application (particularly if different people are working on different modules).
Jason Rogers also pointed out a useful technique that helps map your urls to a specific class:
If I’m going to have resources split out under separate paths (eg. “/admin”, “/api”, etc.) I will use a modular approach and map the individual modules in Rack under their path name.
This can be done in the config.ru file using Rack’s
map method, like so:
require 'sinatra/base' require './main' map('/admin') { run adminController } map('/api') { run apiController }
Personally, I love the to use classic-style applications and think they are very direct, allowing you to get started writing code quickly. The downsides are few and it is very easy to move over to a modular-style application if it grows bigger. If you want to package up your application as a gem or extension, however, then you really do need to go for the modular-style.
I should point out that the developers of Sinatra remain committed to keeping the two different styles of application.
Do you use any bootstrap code?
Rails has a lot of code generators that will quickly get you up and running with various bootstrap code. I wondered if people had used anything similar to get their projects off the ground in Sinatra.
Geoffrey Grosenbach used a little bit of bootstrap code to save setting up the same things over and over:
Sometimes I start from an existing simple Git repo, especially if I’m going to be using Backbone or other frameworks that need some setup.
Others liked to include things like Knockout.js and Twitter Bootstrap initially (presumablly to make the front end development easier).
User diminish mentioned a code generation tool on Sinatra’s Google Groups page that was in development and sounded interesting. It would be good to see if any progress had been made with this.
In my own Sinatra projects, I don’t tend to use any bootstap or generator code, as it’s so easy to get started with a project. Although, I have considered putting together a minimalistic file structure that includes some basic CSS and layouts that I usually use.
There are a number of similar projects available such as Sinatra-Bootstrap-Starter, although these start to take some of the design decisions away from you – always better to develop your own that works for you!
Do you ever use inline-views?
Inline views let you keep all your view code in the same file as your app. Here’s a quick example:
require 'sinatra' get '/hello/:name' do @name = params[:name] erb :hello end __END__ @@hello <h1>Hello!</h1>
In this example, the view called ‘hello’ is placed after the
__END__ declaration. All views are marked by starting with ‘@@’ followed by the name of the template.
Most people didn’t use these, although one notable exception was Blake Mizerany, who liked to use them for small applications:
I’ll use inline templates when there are only a few and they are small
Personally, I often like to use inline views and I think they are one of Sinatra’s coolest features. When I’m playing around with some code or starting a project off, I really like the fact that I can create something all from within one file. In fact, Avdi Grimm managed to create a Sinatra application that had everything in the same file, including tests! ()
Anything Else?
A lot of people use Sinatra differently and the overriding opinion was that they wanted to choose their own way of doing things.
Geoffrey Grosenbach likes how Sinatra exposes how things work more and therefore helps you to learn those skills to a greater degree:
Learning and using Sinatra helped me master other tasks better (like setting up tests).
He also thought that the extra effort paid off with faster development time:
Even though there’s a bit of work, I love the speed of working with Sinatra.
He also went on to say:
I rarely use Rails generators, and often use a NoSQL database, so Sinatra is perfect for most of the apps I want to write.
Rick Olson (GitHub), liked to use
Lots of Mustache and Rack-Test
This is different to what other people use, but perfectly easy to do with Sinatra’s flexibility.
As for the point made by some people that Sinatra lacks functionality, Jason Rogers counters with:
I’d say that’s what gems are for. One of the great benefits of Sinatra is its lack of opinions.
He also pointed out that even in this small sample, people choose to use Sinatra differently, meaning that everybody could not possibly be satisfied with a one-size fits all approach. This gets to the heart of what Sinatra is all about – letting you choose your own way of doing things, perfect for control freaks who like things done a particular way!
soldier.coder used a great metaphor for explaining why you might want to use the fine-grained solution of choosing your own gems offered by Sinatra:
Why not just use rails? Rails is like bringing a destroyer to a pirate/hostage situation when you really need a Special Forces or Seal team. Large applications? Large apps is kind of misleading as it really depends on the break down of how it is organized. Large and monolithic is very different than large and modular. Writing software as a service encourages separation of concerns. Sinatra seems ideal for such separation.
This is a good point. Sinatra’s modular style, actively encourages you to write modular code that can pieced together in large applications. There are a number of advantages to this – you can reuse modules in other applications, you can remove a module if it isn’t required any longer, different teams can work independently on separte modules.
Blake Mizerany also pointed out the benefit of a bit of advance planning:
In general, I like to put a good amount of forethought into what I’m doing; This allows me to keep things simple.
With some planning in advance, you can set up your project in such a way that its design remains simple and Sinatra will let you do this.
I think all of the feedback I received helps to highlight just why Sinatra is so flexible: If you want to get something up and running quickly then you can fire up a classic application with inline views in no time. If you want to build a big application then you can go modular with your own bespoke file structure and separate all of your concerns.
Most of these topics are covered in greater depth in my new book, Jump Start Sinatra.
I hope this article has given a taste of the many different ways there are to roll with Sinatra. What about you? If you’ve used Sinatra, how do you roll? If you haven’t tried Sinatra yet, then has this post helped to show what Sinatra is capable of?
Don’t forget, Jump Start Sinatra will be out very soon. Go sign up to be notified when it’s ready!
- Gene Smith
- ZPH
- Jurgen Herrmann | http://www.sitepoint.com/rolling-with-sinatra/ | CC-MAIN-2016-18 | refinedweb | 1,840 | 69.21 |
I created a sample wcf REST service in .NET 3.5 vs.net 2008 Defined interface cdservice
[ServiceContract] public interface CdService { [OperationContract] [WebGet(UriTemplate = "service/*", ResponseFormat = WebMessageFormat.Xml)] string[] GetConfig(); }
Implemented GetConfig in Service1.cs
namespace CdService { public class Service1 : CdService { public string[] GetConfig() { string[] services = new string[] { "Sales", "Marketing", "Finance" }; return services; } } }
Set virtual directory and pointed it to my project
I go to my virtual directory and try to browse service1.svc file, I get error message: You are not authorized to view this page Can someone please tell me what is wrong?
add comment
When I run the program it opens up browser and says:
You have created a service. To test this service, you will need to create a client and use it to call the service. You can do this using the svcutil.exe tool from the command line with the following syntax: svcutil.exe
When I change the URL to Gives following error: Server Error in '/' Application. '/Service1.svc/service/' is not a valid virtual path.
I have two questions: 1) Please can someone tell why I am getting virtual path error? 2) Also How can I direct the service to a different port of my choice localhost:8080, so I can use access url localhost:8080/service/* to get the config results? | https://www.daniweb.com/programming/software-development/threads/450347/wcf-net-3-5-gives-virtual-path-errors | CC-MAIN-2019-04 | refinedweb | 219 | 55.44 |
#include <iostream> #include <string> int main(void) //tells a pirate story { using std::cout; using std::cin; using std::string; int buddies; int afterBattle; string exit; cout<< "you are a pirate and are walking" << "along in the crime filled \n" << "city of Havana (in 1789). " << "How many of your pirate buddies \n" << "do you bring along? (Any numbers between 11 and 115)\n" //records the amount of friends you bring along cin >> buddies; //calculates the amount of pirates left after the battle. after the battle = 1 + buddies - 10; cout << "suddenly 10 musketeers jump out " << "from the local tavern and \n" << "draw their swords. " << "10 musketeers and 10 pirates die in the \n" << "battle. There are only " << (buddies + 1 - 10) << "pirates left. including you. \n\n"; cout << "the fallen drop a total of 107 gold coins.\n" << "the bounty is split evenly. which works out to " << (107 / after battle) << "gold coins \n" << "for each survivor.\n"; cout << "the last " << (107 % afterbattle) << "are fought over " << "in a big drunken brawl.\n "; cout << "These last few coins are spent on more booze during the\n" << "course of the brawl. Eventually everyone retires\n" << "peacefully on the bar room floor.\n" << "Another successful day as a pirate!\n" return 0; } << (buddies + 1 - 10) << "pirates left, including you.\n\n"; cout << "the fallen drop a total of 107 fold coins.\n" << "the loot is split evenly, which works out to " << (107 / afterBattle) << "gold coins \n" << "for each survivor. leaving "; cout << (107 % afterBattle) << " unclaimed coins.\n"; << "how many of your pirate buddies \n" << "did you bring along? (Any number between 11 and 115)\n"; cout << "you and the others argue over who should get the extra \n" << "coins, and soon a big drunken brawl breaks out!\n\n"; cout << "In the end, you are triumphant and " << (107 / afterBattle) + (107 % afterBattle) << " coins richer!\n\n"; return 0; }
This post has been edited by Salem_c: 06 October 2011 - 10:44 PM
Reason for edit:: Fixed the [code][/code] tags | http://www.dreamincode.net/forums/topic/250231-trying-to-build-a-simple-game-with-c-for-beginners-by-mark-lee/page__p__1454364 | CC-MAIN-2013-20 | refinedweb | 331 | 73.58 |
Re: Wince4.2 core rotation
- From: "Dean Ramsier" <ramsiernospam@xxxxxxxxxx>
- Date: Tue, 17 Jul 2007 09:01:18 -0400
(1) Won't work unless the driver already supports rotation. Apparently
yours doesn't.
(2) Looks like the driver doesn't support rotation. There's a little more
to getting it to work than just calling a couple APIs. It's actually not
that difficult, but you or whoever is writing the driver will have to do the
work to implement it.
--
Dean Ramsier - eMVP
BSQUARE Corporation
"eeh" <terrylaiiloveu@xxxxxxxxx> wrote in message
news:1184637570.827273.178070@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Hi,
I am using WinCE4.2 core on an embedded arm board which is bought
online. I have tried 2 methods to rotate the screen:
1. Try writing eVC program with the code here:
DEVMODE DeviceMode;
memset(&DeviceMode, NULL, sizeof(DeviceMode));
DeviceMode.dmSize=sizeof(DeviceMode);
DeviceMode.dmFields = DM_DISPLAYORIENTATION;
DeviceMode.dmDisplayOrientation = DMDO_90;
However, the WinCE screen has not rotated.
2. Try change code in the BSP file
$(_WINCEROOT)\PLATFORM\smdk2440\drivers\display\S3C2440lcd
\s3c2440disp.cpp
Original code is
#ifdef ROTATE
m_iRotate = 0;
SetRotateParms();
#endif //ROTATE
changing to
//#define ROTATE
#ifdef ROTATE
m_iRotate = DMDO_90;
SetRotateParms();
#endif //ROTATE
But the compilation generates many errors.
Could anyone help me to explain why this happens or how can I do the
rotation?
Thanks!
.
- References:
- Wince4.2 core rotation
- From: eeh
- Prev by Date: Re: PCMCIA driver sometimes not loading on CE 5.0
- Next by Date: Re: ActiveSync and CE 5.0 Platform Builder Monthly Update (April 2007)
- Previous by thread: Re: Wince4.2 core rotation
- Next by thread: I want to develop web service in windowsce device using vs.net2003
- Index(es): | http://www.tech-archive.net/Archive/WindowsCE/microsoft.public.windowsce.embedded/2007-07/msg00138.html | crawl-002 | refinedweb | 275 | 50.12 |
Want - A generalisation of
wantarray
use Want; sub foo :lvalue { if (want(qw'LVALUE ASSIGN')) { print "We have been assigned ", want('ASSIGN'); lnoreturn; } elsif (want('LIST')) { rreturn (1, 2, 3); } elsif (want('BOOL')) { rreturn 0; } elsif (want(qw'SCALAR !REF')) { rreturn 23; } elsif (want('HASH')) { rreturn { foo => 17, bar => 23 }; } return; # You have to put this at the end to keep the compiler happy } of lvalue subroutines in Perl 5.6 has created a new type of contextual information, which is independent of those listed above. When an lvalue subroutine is called, it can either be called in the ordinary way (so that its result is treated as an ordinary value, an.
Either the caller is directly assigning to the result of the sub call:
foo() = $x; foo() = (1, 1, 2, 3, 5, 8);
or the caller is making a reference to the result, which might be assigned to later:
my $ref = \(foo()); # Could now have: $$ref = 99; # Note that this example imposes LIST context on the sub call. # So we're taking a reference to the first element to be # returned _in list context_. # If we want to call the function in scalar context, we can # do it like this: my $ref = \(scalar foo());
or else the result of the function call is being used as part of the argument list for another function call:
bar(foo()); # Will *always* call foo in lvalue context, # (provided that foo is an C<:lvalue> sub) # regardless of what bar actually does.
The reason for this last case is that bar might be a sub which modifies its arguments. They're rare in contemporary Perl code, but perfectly possible:
sub bar { $_[0] = 23; }
(This is really a throwback to Perl 4, which didn't support explicit references.).
This makes it very easy to write lvalue subroutines which do clever things:
use Want; use strict; sub backstr :lvalue { if (want(qw'LVALUE ASSIGN')) { my ($a) = want('ASSIGN'); $_[0] = reverse $a; lnoreturn; } elsif (want('RVALUE')) { rreturn scalar reverse $_[0]; } else { carp("Not in ASSIGN context"); } return } print "foo -> ", backstr("foo"), "\n"; # foo -> oof backstr(my $robin) = "nibor"; print "\$robin is now $robin\n"; # $robin is now robin
Notice that you need to put a (meaningless) return statement at the end of the function, otherwise you will get the error Can't modify non-lvalue subroutine call in lvalue subroutine return.
The only way to write that
backstr function without using Want is to return a tied variable which is tied to a custom class.
Sometimes in scalar context the caller is expecting a reference of some sort to be returned:
print foo()->(); # CODE reference expected print foo()->{bar}; # HASH reference expected print foo()->[23]; # ARRAY reference expected print ${foo()}; # SCALAR reference expected print foo()->bar(); # OBJECT reference expected my $format = *{foo()}{FORMAT} # GLOB reference expected
You can check this using conditionals like function, then it will return true or false according to whether at least that many items are wanted. So if we are in the definition of a sub which is being called as above, then:
want(1) returns true want(2) returns true want(3) returns false
Sometimes there is no limit to the number of items that might be used:
my @x = foo(); do_something_with( foo() );
In this case,)) { ... }.
Sometimes the caller is only interested in the truth or falsity of a function's return value:
if (everything_is_okay()) { # Carry on } print (foo() ? "ok\n" : "not ok\n");
In the following example, all subroutine calls are in BOOL context:
my $x = ( (foo() && !bar()) xor (baz() || quux()) );
Boolean context, like the reference contexts above, is considered to be a subcontext of SCALAR.
This is the primary interface to this module, and should suffice for most purposes. You pass it a list of context specifiers, and the return value is true whenever all of the specifiers hold.
want('LVALUE', 'SCALAR'); # Are we in scalar lvalue context? want('RVALUE', 3); # Are at least three rvalues wanted? want('ARRAY'); # Is the return value used as an array ref?
You can also prefix a specifier with an exclamation mark to indicate that expectation count, i.e. the number of items expected. If the expectation count is undefined, that indicates that an unlimited number of items might be used (e.g. the return value is being assigned to an array). In void context the expectation count is zero, and in scalar context it is one.
The same as
want('COUNT').
Returns the type of reference which the caller is expecting, or the empty string if the caller isn't expecting a reference immediately.
The same as
want('REF').
use Carp 'croak'; use Want 'howmany'; sub numbers { my $count = howmany(); croak("Can't make an infinite list") if !defined($count); return (1..$count); } my ($one, $two, $three) = numbers(); use Want 'want'; sub pi () { if (want('ARRAY')) { return [3, 1, 4, 1, 5, 9]; } elsif (want('LIST')) { return (3, 1, 4, 1, 5, 9); } else { return 3; } } print pi->[2]; # prints 4 print ((pi)[3]); # prints 1. | http://search.cpan.org/~robin/Want-0.21/Want.pm | CC-MAIN-2017-47 | refinedweb | 838 | 55.07 |
This kernel will overclock your device to 1.15GHz. The overclocking will provide a better performance and a smoother user experience.
I'm using a kexec multiboot kernel on my device, so I didn't test the kernel packed as elf format. It should be good because I have done flashing the self-built kernels several times.
The overclock source doesn't seems to be the latest but everything works~ Maybe that's because the modules in initramfs? (I did nothing to initramfs)
It's nearly impossible for me to upload the kernel on something like devhost/mediafire, so sorry for that. Could someone upload it on these sites please?
V2 Changelog:
1) Use GCC 4.9 Linaro to compile.
2) Change the SLQB allocator to SLUB. I don't really think SLQB is stable.
3) use 2G/2G user/kernel split. It seems to have a better performance.
4) PID namespace
5) NOTE:NO recovery in V2! Instead of recovery, the kexec multiboot menu by @percy_g2 is added. Big thanks to percy_g2. For how to use it, see Link:
* To enter boot menu you should press the power key when vibrate
V2 Link:
V1(Stable) Link:
Finally, sorry for my poor English
Hope you like the kernel~ | http://forum.xda-developers.com/xperia-u/p-development/kernel-overclock-kernel-4-4-kitkat-t2857827/post55036157 | CC-MAIN-2015-22 | refinedweb | 208 | 76.82 |
In part one of this packing series I have shown, how you can automate your build process, test and optionally mock by using the popular .net deployment tool NAnt. If you don't know what I am talking about and just landed here by help of search engine. Let me put the link below again.
In this post, I will do a replay to mostly what I have talked in my last post but with MsBuild. Let's start with building the code. If you have a chance to see the above link you must have seen that I have to do strict maps of source files under the csc task as well as the references to make things work. Now, searching the NAnt project I have found that it has a cool solution task, which even let you define your own output path for libraries but unfortunately it does not work with .net 3.5 and Visual studio 2008.
Typically , the solution task is defined under NAnt.VSNetTasks.dll also depends on the following assemblies
To use the task inside build script you need to do something like
<loadtasks assembly="ThirdParty\Nant\NAnt.VSNetTasks.dll" />
<solution solutionfile="LinqToFlickr.sln" configuration="release" outputdir="Bin" />
But , it ends up with "Solution format not supported" exception for VS 2008 solution. We can obviously use the csc task instead and someday it will support the solution task but you can do things with MsBuild more easily. Let's see the snippet below
<Project xmlns="">
<Target Name="Build">
<MSBuild Projects="LinqToFlickr.sln"
Properties="Configuration=Release" />
</Target>
</Project>
I have put the snippet under Linq.flickr.Targets. Now, if your solution has multiple projects and you want the output to go to a specific folder you can easily do that using the project's "property page". The advantage here is that, if a developer adds a new file, the build guy don't need to worry about syncing them up and thus if you have a nice set of rules in your team, it can be far more time saving than strict mapping.
The next step is to test and deploy the library. I have defined another .target file for holding the named variables , which I will be using later. I named it include.Targets.
For , those who are curious, declaring property in MsBuild looks like
<Project xmlns="">
<PropertyGroup>
<TaskDirectory>Tasks</TaskDirectory>
<NUnit>ThirdParty\NUnit\nunit-console.exe</NUnit>
</PropertyGroup>
<UsingTask AssemblyFile="$(TaskDirectory)\MSBuild.Community.Tasks.dll"
TaskName="MSBuild.Community.Tasks.Zip"
/>
</Project>
UsingTask is the MsBuild way of loadtasks in NAnt which also mimics with the namespace directives in C#.
To wrap up everything, I have created a master.proj file which is the entry point. it includes the targets and does the test and additionally zip things up. You can open up the master.proj in VS 2008. This will give you the advantage of intellisense, but here to mention that the intellisense wont work nicely for custom made tasks. Inside the project node of master.proj you can specify default target like NAnt and can make other targets depend on it. So, even if the target is in one of files included, it will still fire up everything else before playing the final target.
<Project DefaultTargets="Deploy" xmlns="">
....
</Project>
We have built the project, its time to copy them in a folder to do something on them before doing the final packing. I have created a tiny Task called BatchCopy (I really missed one in the community tasks). The usage is pretty simple and it works pretty nice in my project scope
<BatchCopy source="$(SourceDir)\Bin\Release" DestinationFolder="$(BinDir)" ExtensionToExclude=".pdb;" />
<BatchCopy source="$(TestDir)\Bin\Release" DestinationFolder="$(BinDir)" ExtensionToExclude=".pdb;" />
As you can see, it requires a source and destination path and there is another property where I can specify the extensions which I want to skip separated by commas.
Till this line, I have built the project , copied assemblies to a folder, next cool thing is to test them out. Testing definitely to be done by NUnit. In Athena(Flickr API), I have used Typemock to fake request, responses, authentication, etc out so that I can test the whole API while I am on a plane to PDC 08 :-).
In the previous post, I have mentioned Typemock 5 made auto deploy possible for open source projects. Let's get that going.
Step one is to include three tasks
<UsingTask TaskName="TypeMock.MSBuild.TypeMockRegister" AssemblyFile="$(TypeMockLocation)\TypeMock.MSBuild.dll" />
<UsingTask TaskName="TypeMock.MSBuild.TypeMockStart" AssemblyFile="$(TypeMockLocation)\TypeMock.MSBuild.dll" />
<UsingTask TaskName="TypeMock.MSBuild.TypeMockStop" AssemblyFile="$(TypeMockLocation)\TypeMock.MSBuild.dll" />
TypemockLocation is not the application folder rather where I have copied necessary dlls to make the auto deploy possible. Basically to run it as a standalone your deployment folder should have the following dlls in place copied from the Typemock installation folder.
Once everything is in place, the steps are the same that I have mentioned for NAnt. Only, few syntax that you need to watch out with wide eyes.
<TypeMockRegister Company ="Open Source" License="Get one for you" AutoDeploy="True"/>
<TypeMockStart/>
<Exec ContinueOnError="false" Command="$(NUnit) $(BinDir)\Linq.Flickr.Test.dll"/>
<TypeMockStop/>
Don't forget to add the ContinueOnError="false" that halts the script as soon as there is an error. Once, things are right running the build script will register auto deploy and run over your test project giving you the results.
Lastly, moving to master.proj, I just wanted to zip everything up with the proper content.
<CreateItem Include="$(BinDir)\**\Linq.Flickr.dll;$(BinDir)\**\LinqExtender.dll;readme.txt;" >
<Output ItemName="FilesToZip" TaskParameter="Include"/>
</CreateItem>
<Zip Files="@(FilesToZip)" Flatten="true"
ZipFileName="$(BinDir)\Linq.Flickr.Lib.zip" />
The CreateItem task lets you prepare an item collection based on the parameters you passed in the Include/Exclude attribute and finally outputs the list to a variable that you can use later on (in this case, it is used with the Files attribute of the Zip task).
This is just a simple way of getting things around with MsBuild. To spice a bit more, you can wrap the msbuild command in a batch file, which requires the following two lines
@ECHO ON
C:\Windows\Microsoft.NET\Framework\v3.5\msbuild.exe Master.proj /m:2 /fileLogge
fileLogger will create an output file (msbuild.log) after the build that contains the detail of execution. /m defines that msbuild will run in two process in parallel, this option is pretty good for large solution (for ex, Telerik has plenty of projects in a solution and I found building with msbuild much faster than using the VS IDE), of course having quad core processor with /m:4 will give more boost than in two cores . In my other project which is LinqExtender, I created a script that download source code and setup working base with a click of a button. Therefore, those who check in/out things in codeplex or any other source control environment could find this task useful.
<TFSGet
Server="tfs05.codeplex.com"
Port="443"
Secured="true"
Repository="$/LinqExtender/somthing"
UseUICredentials="false/true"
Username="$(User)"
Password="$(Pass)"
Domain ="snd"
LocalPath="local path to host the source"
UnBind = "true/false"
/>
Here, UseUICredential = true, will bring up the TFS login prompt every time, so if you don't want to type in user/pass every time then just leave as it is and populate the username and password parameter with proper values. Unbind will remove the source control binding after downloading , if set to true.
Last but not the least, there is another task that I want to share with you is called XmlFindReplace, this is useful if you are automating VSI package creation which I did in my LinqExtender project (I hate doing this manually every time for deployment :-)) or want to create a starter project with predefined template.
<XmlFindReplace
FilePath=".\Template\project.xml"
Ns =""
Attribute = "Include"
Element ="Reference"
Text ="#ASSEMBLY#"
ReplaceText="LinqExtender, Version=1.4.0.0, Culture=neutral, processorArchitecture=MSIL"
DestinationFile="$(TempDir)\LINQProvider.csproj"
/>
Here you can do two sets of find and replace. Either replace the attribute value of an element which requires both the Element and its Attribute or just replace the element text in which case you need to provide only the Element name.
So far, you can see that it is possible to do same things with Msbuild that is possible with NAnt, but MsBuild gives more integration to VS environment which sometimes give more power over how the build goes. Earlier, in a post I have shown that at the end of each project file there is a "AfterBuild" and "BeforeBuild" target where you can put your own script to do some special tasks. Also, Msbuild comes as part of .net framework so that you don't ever need to bother about the script runtime and distributes.
Hope all these info are useful for you to get going, I have added the tasks with test classes so that you can play, use or extend it as you like.
Hi AviK
Thanks for the mention !!! Its a pleasure for me :-). | http://weblogs.asp.net/mehfuzh/archive/2008/09/07/deploy-test-and-pack-your-code-part-2-using-msbuild.aspx | CC-MAIN-2014-15 | refinedweb | 1,510 | 54.22 |
The Internet of Things (IoT) is big and upcoming; we see more connected devices around us each day. When typical devices are “connected,” there’s an opportunity to make them smart and introduce new services, insights, and more. As a result, many organisations are looking at how they can leverage these new technologies so they may benefit from them.
Designing and building a new IoT solution is an exciting project that has many facets, ranging from a product design, to an end-user experience test, to building and managing a solution. With hundreds of IoT platforms available — many as a cloud service eg. AWS, Azure, etc. — deciding where and how messages (data) are processed is a key component. Part of the challenge is knowing what data is collected and how sensitive that data is for the customer or the organisation. Are we willing (or allowed) to process and store the captured data in a cloud service?
Scale and secure
Organisations who host their own IoT service are confronted with a typical question; how do I scale and secure my service? Not an uncommon question when we build a typical application like webservers, but is it really that different?
Yes and no. Yes, because a connected device has one (or a few) connections where are long lived (and consists of events) whereas a web client has many connections which are short lived. No, because — just like with any applications — you need to balance the data streams between the servers, secure the connection with SSL/TLS and in some cases, authenticate the device before a connection is setup. This a typical workload for the Citrix NetScaler is also a Secure Event Delivery Controller (S-EDC).
MQTT
A common protocol used in IoT is Message Queueing Telemetry Transport (MQTT), a machine-to-machine (M2M) data transfer protocol. MQTT clients can publish messages and (other) clients can subscribe to receiving messages, clients connect with MQTT brokers which – as the name implies – brokers all traffic between the clients. MQTT brokers (like HiveMQ) are available in clusters, so they scale and are resilient. However, they require a load balancer to spread to load from the clients. Just like you don’t want to expose your application/webserver directly to the internet, but have a reverse-proxy instead; the same goes for your MQTT broker. Especially when SMQTT (MQTT over TLS) is used to securing the IoT data stream the en-/decryption of the TLS connection is resource intensive not to mention the first layer of defense. A second layer of defense is typically an authentication layer, so the server can verify the identity of the client during the TLS handshake, for which X509 client certificates can be used.
Protocol Extensions
What’s interesting with NetScaler 12 is that we didn’t just release support for MQTT, we’ve introduced the Protocol Extensions framework. Meaning it’s now possible to introduce new protocols by writing extension code! Quite awesome if you ask me!
When you’re looking to scale and secure your IoT service, look no further! But do read on as I’ll share with you how I configured my NetScaler to secure the connection with my connected devices.
SMQTT in my lab
In my lab, I’ve got a NetScaler exposed to the internet and two MQTT brokers. I’ve got no intention to provide access to my network for my IoT sensors (no strangers in my network) as that reduces my risk significantly. So, these sensors are using SMQTT to connect to my NetScaler which is configured with a SSL certificate from an authorized certificate authority (CA). The NetScaler in its turn connect via MQTT to the MQTT brokers, as that’s my trusted network.
Requirements
- NetScaler release 12;
- SSL certificate from an authorized certificate authority (CA); Ensure you include the certificate chain when intermediate certificates are used!
- mqtt.lua file (see docs.citrix.com) to /var/download/extensions on your NetScaler (I’m using FileZilla on my Mac)
- MQTT broker(s).
CLI commands
import ns extension local:mqtt.lua mqtt_code add service lb_service_mqtt1 [MQTT_BROKER_1] USER_TCP 1883 add service lb_service_mqtt2 [MQTT_BROKER_2] USER_TCP 1883 add lb vs lb_vserver_MQTT USER_TCP bind lb vs lb_vserver_MQTT lb_service_mqtt1 bind lb vs lb_vserver_MQTT lb_service_mqtt2 add user protocol MQTT -transport TCP -extension mqtt_code add user vs u_vserver_mqtt MQTT [MQTT_VIP] 80 -defaultlb lb_vserver_MQTT
Let’s take a look at the steps
import ns extension local:mqtt.lua mqtt_code
This create a protocol extension with the name mqtt_code with the file mqtt.lua which is found in /var/download/extensions. When you try to load this policy extension in the GUI (via AppExpert > Policy Extensions > Policy Extensions) you’ll get an error “Extension loading error. [Extension mqtt, line 12: attempt to index global ‘client’ (a userdata value)]”. Don’t worry, a bug (BUG0699181) is filed and will be fixed!
add service lb_service_mqtt1 [MQTT_BROKER_1] USER_TCP 1883 add service lb_service_mqtt2 [MQTT_BROKER_2] USER_TCP 1883
Here we create two services pointing to the MQTT Brokers on port 1883 (default port). [MQTT_BROKER_1] refers to the IP address of the first MQTT broker, [MQTT_BROKER_2] to the IP address of the second MQTT broker.
add lb vs lb_vserver_MQTT USER_TCP bind lb vs lb_vserver_MQTT lb_service_mqtt1 bind lb vs lb_vserver_MQTT lb_service_mqtt2
Next we create a Load Balancing Virtual Server with the name lb_vserver_MQTT with the protocol USER_TCP. We binded the two servers (the MQTT brokers) to this Virtual Server.
add user protocol MQTT -transport TCP -extension mqtt_code add user vs u_vserver_mqtt MQTT [MQTT_VIP] 80 -defaultlb lb_vserver_MQTT
We then added a user protocol with the name MQTT using the TCP transport with the protocol extension we just added (mqtt_code). After that we create a User Virtual Server using the MQTT user protocol on port 80 which points to the Load Balancing Virtual Server we created in the step before: lb_vserver_MQTT. [MQTT_VIP] refers to the IP address where the IoT clients will connect to. This is not the SMQTT, so just for reference!
Next we create another user protocol but this time with the name MQTT_SSL which is using the SSL transport. Another User Virtual Server is created with port 8883 that points to the same Load Balancing Virtual Server as before. And of course we disable SSLv3 as that’s no longer considered safe.
Last we bind the SSL certificate to the User Virtual Server using the SSL transport u_vserver_mqtt_ssl where [CERT_KEYNAME] refers to the name of the Server certificate.
Testing
After this my sensors should be able to connect to [MQTT_VIP] via port 8883 and setup a SMQTT session. We can verify this quite easily by using a MQTT application, I’m using MQTT-fx on my Mac.
First, configure a connection profile where you point the broker address to the [MQTT_VIP] you specified. The broker port is the port specified in u_vserver_mqtt_ssl (8883). Select the tab SSL/TLS and check Enable SSL/TLS. Since we’ve used a SSL certificate from an authorized CA we can keep the CA signed server certificate selected.
You can now Connect and the light on the right should turn green with the lock locked (meaning you’re using SMQTT). In the Subscribe tab you can subscribe to messages (plural) or scan for topics collected. In the Publish tab you can “post” messages to a certain topic, which should be visible in the subscribe tab.
References
- Tutorial – Adding MQTT Protocol to the NetScaler appliance by using Protocol Extensions
- Some useful commands for troubleshooting:
sh ns extension sh user protocol sh connectiontable sh persistentSessions sh lb vs lb_vserver_MQTT sh user vs u_vserver_mqtt
Citrix TechBytes – Created by Citrix Experts, made for Citrix Technologists! Learn from passionate Citrix Experts and gain technical insights into the latest Citrix Technologies.
Want specific TechBytes? Let us know! tech-content-feedback@citrix.com | https://www.citrix.com/blogs/2017/12/07/scaling-and-securing-iot-data-streams/ | CC-MAIN-2018-13 | refinedweb | 1,290 | 60.85 |
PICO-8: Retroarch lr-retro8 core installation script"
Wow! It looks like great minds think alike! I have been trying to implement a nicer experience of playing pico8 on retropie too!
I have made a logo for the default EmulationStation skin for pico8:
It comes from this post. I used Aseprite to put a 1 pixel black border around the white version so that the logo will show no matter what color the background is. It would be nice for someone to also make one of those red outline graphics for it too.
The Retro8 core for RetroArch will run many pico8 games but not all. It'd be nice to give people the option to choose between the Retro8 core or the official Pico8 binary if they've placed it on their system just like how people can choose between the ppsspp official binary or the ppsspp retroarch core by holding the "A" button after they select a game in EmulationStation to get that little menu. I'm still not entirely clear on how that all works.
In my opinion:
We need to figure out how to set RetroArch to run the games correctly whether they're
.p8or
.p8.pngor just
.png.. Those are all correct pico8 cart filenames.
Also, always display the cart itself as the image for the cart in the EmulationStation menu. Even if the cart is
.p8-- it should display it as a png image anyway.
My thought was that I'd have a little python script that runs every time you exit Splore which updates the gamelist.xml with whatever new games you added so they'll show in the EmulationStation menu with their little cartridge pictures. I was even trying to make it use picotool to scrape the metadata from the code inside the cartridge but I couldn't get that to work right.
Here's the little Python script I wrote to make a gamelist.xml for pico8 carts:
import argparse, io, os, lxml from lxml.etree import ElementTree from lxml.etree import Element from lxml.etree import SubElement from glob import glob parser = argparse.ArgumentParser(description='Input XML file') parser.add_argument('file', metavar='file', type=str, nargs='+', help='Input XML file') filename = parser.parse_args().file[0] path = os.path.dirname(os.path.abspath(filename)) et = ElementTree.parse(filename).getroot() if os.path.exists(filename) else ElementTree(Element('gameList')) files = glob(os.path.join(path, '*.png')) + glob(os.path.join(path, '*.p8')) + glob(os.path.join(path, '*.p8.png')) for file in files: relativeFile = os.path.join('.', os.path.basename(file)) foundit = False for game in et.getroot().iter('game'): if (game.find('path').text == relativeFile): foundit = True break if (not foundit): game = SubElement(et.getroot(), 'game') gamePath = SubElement(game, 'path') gamePath.text = relativeFile gameName = SubElement(game, 'name') gameName.text = os.path.basename(file).removesuffix('.png').removesuffix('.p8') gameImage = SubElement(game, 'image') gameImage.text = relativeFile lxml.etree.indent(et.getroot(), space='\t') with io.open(filename, "w", encoding="utf-8") as f: f.write(lxml.etree.tostring(et, encoding='unicode', pretty_print=True))
A known issue: I should make the file extensions case insensitive somehow.
There are probably cleaner and faster ways to do this. I'm not real experienced with Python.
Contributions to the project are always appreciated, so if you would like to support us with a donation you can do so here.
Hosting provided by Mythic-Beasts. See the Hosting Information page for more information. | https://retropie.org.uk/forum/topic/30965/pico-8-retroarch-lr-retro8-core-installation-script/12?lang=en-US | CC-MAIN-2021-43 | refinedweb | 575 | 60.51 |
Why does the following not compile:
class A {} protocol P { var a: A { get set } } final class B: A {} final class X: P { // Error: Type 'X' does not conform to protocol 'P' var a: B // Error: Candidate has non-matching type 'B' init(a: B) { self.a = a } }
Class B is a sub-class of class A. Liskov's Substitution Principle says anything that conforms to B, also conforms to A. So... why doesn't this compile?
I can "fix" this by adding:
extension P where Self: X { var a: A { get { self.a } set { self.a = newValue // compiles, but crashes if it actually gets called } } } var x: P = X(a: B()) x.a = B() // crashes!
Why does this compile? Clearly it allows X to satisfy P but now if the setter gets called it will crash anytime
newValue doesn't conform to
B.
It can be worked around with a guard statement in the setter to return if
newValue as? B fails, but since the compiler knows
self.a is of type
B, it seems like this should fail to compile otherwise.
Note: if
P is made to be an
@objc protocol and
a is made to be an optional requirement, now, rather than crashing, the compiler will state that
a is immutable even though it is declared as
get set:
import Foundation class A: NSObject {} @objc protocol P { @objc optional var a: A { get set } } class B: A {} @objcMembers class X: P { var a: B init(a: B) { self.a = a } } var x: P = X(a: B()) x.a = B() // Compiler error: Cannot assign to property: 'x' is immutable | https://forums.swift.org/t/protocols-question-how-can-i-achieve-covariance-on-a-requirements-implementation/49210 | CC-MAIN-2021-25 | refinedweb | 271 | 72.26 |
Speaking of changes to display, it seems useful to easily *not* display units. A lot of pages and templates put the units in a hyperlink, e.g.
has
Area 840.0
km²
where km² is a link to Square_kilometre made with [[Square_kilometre|km²]] .
We can represent the attribute in SMW 0.4 as [[area:=840 km²]], but then the units appear as part of the value display. To not display the units you have to repeat the number, [[area:=840 km²|840]] , which is prone to error.
I'm not sure what the wiki shortcut for not showing units could be. Maybe putting a trailing | with no space could only show the number. Currently it falls through to chopping off "area:" as if it were a namespace, so you see
=840 km²
Some number display is even more complicated, e.g. in
both the number and the units are hyperlinks:
Surface area
6.09
×10
12
km²
the 6.09 part is a hyperlink to the
page that explains this order of magnitude. This conflicts with the SMW tooltip that displays the area in other units. I couldn't represent both at once in wiki source.
People are cramming a lot of info into those hyperlinks, it's almost to the point they need a context menu to document and link to attribute type, unit conversion, order of magnitude, units explanation, etc.
--
=S | https://sourceforge.net/p/semediawiki/mailman/attachment/44739140.7010301@earthlink.net/1/ | CC-MAIN-2016-40 | refinedweb | 233 | 74.39 |
I believe I've found the issue. The __init__() referred to in the error message seems to be that of the class I'm defining. Presumably when I run the path_semigroup() method, it attempts to initialize a new DoubleQuiver for some reason, and includes an input for the "weighted" argument, something that the DiGraph constructor takes but the DoubleQuiver constructor, as written, does not.
Inserting **kwargs into the arguments of __init__ and the call it makes to DiGraph's __init__ appears to fix the problem by allowing for any missing arguments:
def __init__(self, digraph, multiedges=True, **kwargs):, **kwargs)
An aside: Another problem that this revealed, which provides further evidence that path_semigroup() calls __init__ again, is that the resulting path semigroup actually had all the arrows doubled again. I fixed this by checking whether the digraph input to __init__ is already an instance of DoubleQuiver. | https://ask.sagemath.org/answers/49654/revisions/ | CC-MAIN-2022-05 | refinedweb | 145 | 51.52 |
Asked by:
Can a XML Schema Collection refer simpleType/complexType defined in another XML schema collection?
Hi there,
I have several XSDs which all use simpleType/complexType in GlobalTypes XSD. I tried to create a XML Schema Collection called GlobalTypes and created other XML Schema Collections using xs:import to import GlobalTypes but I always get error message - Reference to an undefined name 'XXX' within namespace 'XXX'. But if I copied GlobalTyps XSD into the XSD, I can create the XML Schema Collection.
I wonder if there is a way to refer simpleType/complexType defined in the GlobalTypes XML Schema Collections, otherwise I have to alter all XML Schema Collections if the GlobalTypes changed.
Thanks.
- Edited by wirelessoracle Thursday, December 15, 2011 3:56 PM
Question
All replies
I think this is possible using xsd:import, assuming you have your XML SCHEMA COLLECTIONs stored as .xsd files, and create a routine to import them. SQL Server ( as at 2008 R2 ) does not support xsd:include.
See this link which does something similar:
Hello,
It’s impossible to be sure without some kind of repro. This particular error message can be thrown for several different reasons.
A few of the more likely possibilities:
- It could be that the imported XSD contains an element at the top level that has both a name attribute and a ref. This is not legal.
- It could be complaining about a bad location in the schemaLocation attribute. We had problems with that in older versions of MSXML. I don’t see in the post which version you are using. Is this .Net or Native
- Import will fail if you try to derive by restriction in the second schema, using a base type defined in the first schema
Hope this helps
Terrell-An
Terrell An -MSFT | http://social.msdn.microsoft.com/Forums/sqlserver/en-US/53b4de44-95c7-41a5-92c0-860d31716434/can-a-xml-schema-collection-refer-simpletypecomplextype-defined-in-another-xml-schema-collection?forum=sqlxml | CC-MAIN-2013-48 | refinedweb | 299 | 61.56 |
team is proud to present a release with more than 450 bug fixes and features.
Xtend is a great choice for Android application development because it compiles to Java source code and doesn't require a fat runtime library. With version 2.4 the Android support has been further improved.
Debugging Android applications works now. Previously Xtend supported debugging through JSR-45 only, which is not supported by the Dalvik VM. Now you can configure the compiler to install the debug information in a Dalvik-compatible manner.
There is also a Maven archetype to set up a working Android project easily. If you have installed Maven and the Android SDK you only need the following command to get started:
mvn archetype:generate -DarchetypeGroupId=org.eclipse.xtend \ -DarchetypeArtifactId=xtend-android-archetype \ -DarchetypeCatalog=
The following new features have been added to the Xtend language.
In 2.4.2 we have introduced new (more Java-like) ways to access nested classes and static members. Also type literals can be written by just using the class name.
Here is an example for a static access of the generated methods in Android's ubiquitous R class:
R.id.edit_message // previously it was (still supported) : R$id::edit_message
Type literals can now be written even shorter. Let's say you want to filter a list by type:
myList.filter(MyType) // where previously you had to write (still supported) : myList.filter(typeof(MyType)
If you use the Java syntax (e.g. MyType.class), you'll get an error marker pointing you to the right syntax.
Active Annotations let developers particpate in the translation process from Xtend code to Java source code. The developer declares an annotation and a call back for the compiler where the generated Java code can be customized arbitrarily. This doesn't break static typing or the IDE! Any changes made in an active annotation are completely reflected by the environment. A simple example would be a JavaBeans property supporting the Observer pattern. Here you need a getter and a setter method for each field and also an observer list and the proper code to notify them about changes. In many software systems you have hundreds of these properties. Active Annotation allow you to define and automate the implementation of such patterns and idioms at a single point and let the compiler expand it on the fly. And all this based on lightweight, custom libraries. You do no longer have to write nor read the boiler plate code anymore. Read more...
Xtend now has literals for unmodifiable collections.
val listOfWords = #["Hello", "Xtend"] val setOfWords = #{"Hello", "Xtend"} val mapOfWords = #{1->"Hello", 2->"Xtend"}
Collections created with a literal are immutable. The list literal can be used to natively create arrays, too. If the target type is an array, it will compile to an array initializer.
val String[] arrayOfWords = #["Hello", "Xtend"]
In addition to literals for arrays you can now also easily access and modify arrays as well as create empty arrays of any size.
val String[] arrayOfWords = newArrayOfSize(2) arrayOfWords.set(0, 'Hello') arrayOfWords.set(1, 'Xtend')
Interfaces, enumerations and annotation types can now be declared directly in Xtend.
interface Container
{ def T findChild((T)=>boolean matcher) } enum Color { RED, GREEN, BLUE } @Retention(RetentionPolicy::RUNTIME) @Target(ElementType::TYPE) annotation DependsOn { Class<? extends Target> value val version = "2.4.0" // type 'String' inferred }
Extension methods allow to add new methods to existing types without modifying them.
Consider the omnipresent class
java.lang.String.
If you have to parse a string to a number, you could always write
Integer::parseInt('42')
but what you actually think of is
'42'.parseInt
To make that possible, you simply import the class
Integer as a static extension:
import static extension java.lang.Integer.*
This enables to pass the base of the number as an argument, too:
'2A'.parseInt(16)Extension methods are available in other language such as C# as well, but Xtend can do better. The new Extensions Providers render a former limitiation obsolete: In Xtend 2.4, fields, parameters and local variables can provide extensions, too. Read more...
Lambda expressions now work with interfaces and classes with a single abstract method
(SAM types). For example, the
AbstractIterator
from the Guava library has a single abstract method
computeNext(). A lambda can be used to implement that:
val AbstractIterator<Double> infiniteRandomNumbers = [| Math::random]
Some new operators have been added. In addition to the usual
== and
!=
operators which map to
Object.equals(Object), the operators
=== and
!== respectively can be used to test for identity equality.
if (myObject === otherObject) { println("same objects") }
Also new exclusive range operators have been introduced. In order to iterate over a list and work with the index you can write:
for (idx : 0 ..< list.size) { println("("+idx+") "+list.get(idx)) }
Or if you want to iterate backwards :
for (idx : list.size >.. 0) { println("("+idx+") "+list.get(idx)) }
Being an Eclipse project Xtend has always been designed with IDE integration in mind. The team is proud to announce that the editing support is now almost on par with Java's and in some aspects already even better. A user recently wrote in the newsgroup:
Tooling for Xtend is unlike any other language for the JVM after Java. The IDE support is first class. It will take years for some languages to catch up. Some never will.
The following new IDE features improve the editing experience significantly:
With the new release we have overhauled the Organize imports action. It processes all kinds of imports, asks to resolve conflicts, and shortens qualified names automatically.
New refactorings have been added. You can now extract code into a new local variable
or into a new method.
Follow-up error markers are now suppressed and errors in general are much more local, so it is very easy to spot the problem immediately.
The severity of optional compiler errors can be configured globally as well as individually for a single project. They can either be set explicitly or delegate to the equivalent setting of the Java compiler.
Xtend now offers to create missing fields, methods, and types through quick fix proposals.
The content assist has become much smarter. It now proposes lambda brackets if the method accepts a single function and it offers hints on the parameter types when you are working with overloaded methods.
A configurable formatter which pretty prints and indents code idiomatically is now available.
An Xtend editor now has validation and content assist within JavaDoc comments.
You can use Copy Qualified Name in the editor and the outline view to copy the name of types, fields and methods into the clipboard. | http://www.eclipse.org/xtend/releasenotes.html | CC-MAIN-2014-49 | refinedweb | 1,101 | 57.87 |
There are at least a couple of rapidxml characteristics you should be aware before start working with it.
Rapidxml parsing is destructive. The xml_document::parse() method gets in input a non-constant C-string of characters, that it uses as an its own internal buffer. If you want to keep your XML as it is, you'd better pass in a copy of it.
Preconditions are usually checked with assertions. Exceptions are thrown from the xml_document::parse() method only. Be careful in testing what you are passing to an asserting function (for instance, xml_node::last_node() requires the node to have at least a child (it asserts its first_node is not NULL), and try/catching the parse call.
I have written a test case (using the Google Test framework) that shows how to parse a simple XML and to read the information in it. Notice that I just read a document, without performing any editing on it, this keeps the example simple enough.
#include "rapidxml/rapidxml.hpp" #include <gtest/gtest.h> TEST(RapidXml, simple) { char buffer[] = "<root><first>one</first><second>two</second><third>whatever</third></root>"; // 1 rapidxml::xml_document<char> doc; // 2 ASSERT_NO_THROW(doc.parse<0>(buffer)); // 3 rapidxml::xml_node<char>* root = doc.first_node(); // 4 ASSERT_TRUE(root); ASSERT_STREQ("root", root->name()); // 5 bool fields[4] {}; // 6 for(rapidxml::xml_node<char>* node = root->first_node(); node != NULL; node = node->next_sibling()) // 7 { if(strcmp(node->name(), "first") == 0) // 8 { ASSERT_STREQ("one", node->value()); fields[0] = true; } else if(strcmp(node->name(), "second") == 0) { ASSERT_STREQ("two", node->value()); fields[1] = true; } else if(strcmp(node->name(), "third") == 0) // 9 { fields[2] = true; } else // 10 { fields[3] = true; // unexpected! std::cout << "Unexpected node: " << node->name() << std::endl; } } EXPECT_TRUE(fields[0]); // 11 EXPECT_TRUE(fields[1]); EXPECT_TRUE(fields[2]); EXPECT_FALSE(fields[3]); }1. Remember that rapidxml is going to change this C-string (NULL-terminated array of characters) for its own purposes.
2. The xml_document template class has a template parameter that defaults to char. If you want to save some typing you can rewrite this line without specifying the parameter, and using the char default:
rapidxml::xml_document<> doc3. xml_document::parse() expects an int as template parameter, pass zero to get the default behavior. In your code you should try/catch this call for rapidxml::parse_error exception (it extends the std::exception). Here I assert that it should not throw.
4. xml_document IS-A xml_node, so I call on doc the xml_node::first_node() method to get the first document child. If doc has no child, first_node() returns a NULL pointer, otherwise we have a pointer to that node.
5. I expect the root to be there, so I assert that it is not zero (AKA false), then I get its name and I assert it is as expected. xml_node IS-A xml_base, where we can see that the name() method never returns NULL, if the node has no name, an empty C-string is returned instead.
6. Root has three children. I want to ensure I see all of them and nothing more. This bunch of booleans keeps track of them. They are all initialized to false (through the handy C++ empty list initializer) and then, in the following loop, when I see one of them I set the relative flag to true. There are four booleans, and not three, because I want to flag also the case of an unexpected child.
7. The for-loop is initialized getting the first root child, then we get the next sibling, until we reach the end of the family (a NULL is returned). We should pay attention using xml_node::next_sibling(), since it asserts when the current node has no parent. But here we call next_sibling() on a node that is surely a children of another node.
8. For first and second node, we want to ensure it has a specific value, hence the assertion.
9. The third node could have any value, I just set the flag when I see it.
10. In case an unexpected node is detected, I keep track of this anomaly setting the relative flag.
11. Check if the expectations are confirmed. | http://thisthread.blogspot.com/2013_10_01_archive.html | CC-MAIN-2018-17 | refinedweb | 687 | 63.29 |
Closed Bug 361268 Opened 16 years ago Closed 11 years ago
64 bit operations in libjs depend on hand maintained list of operating systems
Categories
(Core :: JavaScript Engine, defect)
Tracking
()
People
(Reporter: pw-fb, Unassigned)
References
Details
User-Agent: Mozilla/5.0 (X11; U; NetBSD i386; en-US; rv:1.9a1) Gecko/20051031 Firefox/1.6a1 Build Identifier: Mozilla/5.0 (X11; U; NetBSD i386; en-US; rv:1.9a1) Gecko/20051031 Firefox/1.6a1 libjs provides "Portable access to 64 bit numerics" via macros in jslong.h which operate on types JS{,U}Int64 and JS{,U}int32 defined in jstypes.h. The definition of JS{,U}Int64 is based on whether or not JS_HAVE_LONG_LONG is defined. JS_HAVE_LONG_LONG is defined in jsosdep.h according to a manually maintained list of operating systems. This list can't be accurate (e.g., Doesn't NetBSD have 64-bit datatypes? What about an ancient copy of said operating system?), and the autoconf philosopy of testing for features rather than OS names seems to be applicable in this case. Before getting out the history books to write an autoconf 2.13 macro to test for long long and define JS_HAVE_LONG_LONG, is that really the right thing to do? To quote mozilla/configure.in: dnl pass -Wno-long-long to the compiler MOZ_ARG_ENABLE_BOOL(long-long-warning, [ --enable-long-long-warning Warn about use of non-ANSI long long type], So, do we really want (quoting jstypes.h) typedef long long JSInt64; typedef unsigned long long JSUint64; ? or rather typedef int64_t JSInt64 typedef int32_t JSInt32 ? so write autoconf 2.13 equivalents of modern day autoconf's -- Macro: AC_TYPE_INT64_T If `stdint.h' or `inttypes.h' defines the type `int64_t', define `HAVE_INT64_T'. Otherwise, define `int64_t' to a signed integer type that is exactly 64 bits wide and that uses two's complement representation, if such a type exists. and -- Macro: AC_TYPE_UINT64_T ? (We could in fact do all the types, so we would't need JS_BYTES_PER_x either, and typedef int16_t JSInt16 etc is quite readable..) Thoughts? Cheers, Patrick Reproducible: Always
(bug born in bug 361075)
*** Bug 361267 has been marked as a duplicate of this bug. ***
spidermonkey doesn't always use autoconf for building, you can also build it with Makefile.ref. That said, it might be better to require a native 64-bit type, like NSPR and Gecko already do...
Status: UNCONFIRMED → NEW
Ever confirmed: true
Flags: blocking1.9?
(In reply to comment #3) > spidermonkey doesn't always use autoconf for building, you can also build it > with Makefile.ref. It seems that you get libjs from Makefile.ref rather than libmozjs - is that intentional? Is there a difference between them? > That said, it might be better to require a native 64-bit type, like NSPR and > Gecko already do... .. at the moment I am autoconfing this, and wondering about word/dword. At least JSWord = sizeof(void*). (and so came across possible trivial bug 363166)
I think bug 97954 or a morphed-to-use-autoconf-only version of it blocks this bug. /be
(In reply to comment #5) > I think bug 97954 or a morphed-to-use-autoconf-only version of it blocks this > bug. I would rather like to think solving this one inadvertently solves that one too.. Any ideas on the libmozjs vs libjs question? (I'm just doing libjs for now..)
(In reply to comment #3) > That said, it might be better to require a native 64-bit type, like NSPR and > Gecko already do... .. back to this part. Do we want to require a native 64-bit type? If that is the case, is saves me having to do the "int64_t" doesn't exist side of configure.ac.
(In reply to comment #6) > (In reply to comment #5) > > I think bug 97954 or a morphed-to-use-autoconf-only version of it blocks this > > bug. > > I would rather like to think solving this one inadvertently solves that one > too.. Think of it however you like, but that bug has the lower number and the wider scope. There's more to unifying build systems than the 64-bit issue. > Any ideas on the libmozjs vs libjs question? (I'm just doing libjs for now..) If you mean this question: > It seems that you get libjs from Makefile.ref rather than libmozjs - is that > intentional? Is there a difference between them? Certainly it's intentional, for historical reasons. libmozjs is built with JS_THREADSAFE defined, while libjs is not. But with a unified build system, while we still could produce both libraries, perhaps we could take a further step and unify the two. Anyway, I think we could unifdef to require long long native support. That would simplify the code quite a bit. That's yet a different bug topic (a bug may already be on file about it), but fixing it would make this bug moot. /be
(In reply to comment #8) > Anyway, I think we could unifdef to require long long native support. That > would simplify the code quite a bit. Also: the only reason we saw the problem in the the non long long case is because jsosdep.h's list was incomplete. If that is replaced by a test, who would actually end up using the typedef struct { #ifdef IS_LITTLE_ENDIAN JSUint32 lo, hi; #else JSUint32 hi, lo; #endif } JSInt64; case? Wouldn't bit-rot set in?
Finally I have managed to autotool js/src, which was rather involved. The distfile created is avaible at Anyway, I have now come to testing, and seem to still have some sort of 32/64 bit bug left behind. Taking this attachment, and doing the traditional gunzip jsref-1.70.tar.gz tar xvf jsref-1.70.tar cd jsref-1.70 ./configure make make check works for NetBSD-4.99.19/i386, but for SunOS-5.9/sun4u/sparc and IRIX-6.5/IP32 make check fails. By hand I see: % ./js js> print(0x123456 * 0x10) 19088736 js> print(0x1234567 * 0x10) 305419888 js> print(0x12345678 * 0x10) and it hangs using cpu. Any thoughts on where to look before I try to debug this on systems I don't own? (BTW --enable-threadsafety is the equivalent of JS_THREADSAFE for the "in-tree" build...)
On the sun: js> print(0x123456 * 0x10) 19088736 js> print(0x1234567 * 0x10) 305419888 js> print(0x12345678 * 0x10) ^C Program received signal SIGINT, Interrupt. 0x9b520 in __muldi3 (u=0x000000001305d130, v=0x000000004cbae924) (gdb) bt #0 0x9b520 in __muldi3 (u=0x000000001305d130, v=0x000000004cbae924) #1 0x34800 in mult (a=0xd337c, b=0xd245c) at jsdtoa.c:688 #2 0x34924 in pow5mult (b=0xd1348, k=43177) at jsdtoa.c:805 #3 0x36e60 #4 0x37510 in JS_dtostr (buffer=0xffbfe2b0 "", bufferSize=26, mode=DTOSTR_STANDARD, precision=0, d=4886718336) at jsdtoa.c:2796 #5 0x57e14 in js_NumberToString (cx=0xbb998, d=4886718336) at jsnum.c:717 #6 0x86f10 in js_ValueToString (cx=0xbb998, v=779610) at jsstr.c:2665 #7 0x205c0 in JS_ValueToString (cx=0xbb998, v=779610) at jsapi.c:543 #8 0x1d4b0 in Print (cx=0xbb998, obj=0xc1420, argc=1, argv=0xcaacc, rval=0xffbfe4c8) at js.c:691 #9 0x4ae24 in js_Invoke (cx=0xbb998, argc=1, flags=0) at jsinterp.c:1354 #10 0x51934 in js_Interpret (cx=0xbb998, pc=0xcaa96 ":", result=0xffbfe6e0) at jsinterp.c:4043 #11 0x4b3b8 in js_Execute (cx=0xbb998, chain=0xffbfe6c0, script=0xcaa60, down=0x0, flags=0, result=0xffbfe804) at jsinterp.c:1613 #12 0x24c70 in JS_ExecuteScript (cx=0xbb998, obj=0xc1420, script=0xcaa60, rval=0xffbfe804) at jsapi.c:4213 #13 0x1c8f4 in Process (cx=0xbb998, obj=0xc1420, filename=0xcaa60 "", forceTTY=4) at js.c:269 #14 0x1d040 in ProcessArgs (cx=0xbb998, obj=0xc1420, argv=0xffbff978, argc=0) at js.c:494 #15 0x1f748 in main (argc=0, argv=0xffbff978, envp=0xffbff97c) at js.c:3158 and 4886718336 = 0x123456780 which means it is printing the answer which is the problem(!) At least it is a clue..
I now made a new distribution from yesterday's CVS-head (Incidentally, bug 366355 didn't quite remove all vestiges of perlconnect, so the job is completed in this tar file) The printing problem (NB output, not actual computation) still exists on sparc-sun-solaris2.9, but trying to build with Makefile.ref failed completely for me anyway, so I don't think this is a regression, and having js/src autoconf'd means it is much easier to test: % gunzip jsref-1.80.tar.gz % tar xvf jsref-1.80 % cd jsref-180 % ./configure --disable-shared % make % gdb js ... (gdb) run ... js> print(0x123456 * 0x10) 19088736 js> print(0x1234567 * 0x10) 305419888 js> print(0x12345678 * 0x10) ^C (hangs on sparc-sun-solaris2.9, not i386-unknown-netbsdelf4.99.20) Program received signal SIGINT, Interrupt. 0x34a94 in mult (a=0xd4324, b=0xd669c) at jsdtoa.c:688 688 z = *x++ * (ULLong)y + *xc + carry; (gdb) bt #0 0x34a94 in mult (a=0xd4324, b=0xd669c) at jsdtoa.c:688 #1 0x34c04 in pow5mult (b=0xd8330, k=21588) at jsdtoa.c:834 #2 0x370fc #3 0x377ac in JS_dtostr (buffer=0xffbfe2b0 "", bufferSize=26, mode=DTOSTR_STANDARD, precision=0, d=4886718336) at jsdtoa.c:2796 #4 0x580b0 in js_NumberToString (cx=0xbc940, d=4886718336) at jsnum.c:717 #5 0x8895c in js_ValueToString (cx=0xbc940, v=783714) at jsstr.c:2665 #6 0x207b8 in JS_ValueToString (cx=0xbc940, v=783714) at jsapi.c:543 #7 0x1d6a8 in Print (cx=0xbc940, obj=0xc2420, argc=1, argv=0xcba74, rval=0xffbfe4c8) at js.c:700 #8 0x4afe0 in js_Invoke (cx=0xbc940, argc=1, flags=0) at jsinterp.c:1333 #9 0x51b14 in js_Interpret (cx=0xbc940, pc=0xcba3e ":", result=0xffbfe6e0) at jsinterp.c:4026 #10 0x4b584 in js_Execute (cx=0xbc940, chain=0xffbfe6c0, script=0xcba08, down=0x0, flags=0, result=0xffbfe804) at jsinterp.c:1592 #11 0x24e90 in JS_ExecuteScript (cx=0xbc940, obj=0xc2420, script=0xcba08, rval=0xffbfe804) at jsapi.c:4694 #12 0x1ca64 in Process (cx=0xbc940, obj=0xc2420, filename=0xcba08 "", forceTTY=5) at js.c:265 #13 0x1d220 in ProcessArgs (cx=0xbc940, obj=0xc2420, argv=0xffbff978, argc=0) at js.c:515 #14 0x1f940 in main (argc=0, argv=0xffbff978, envp=0xffbff97c) at js.c:3261 Any chance of testing the distribution / folding in the changes?
Patrick: can you provide the additional perlconnect removal changes as a patch to bug 366355, instead of including them here? We need to tackle a bite-sized chunk in each bug or mayhem ensues. :) Thanks
This is a great bug that should get more love (Patrick, I haven't heard from you in a while, are you still interested in it?), but not a blocker.
Flags: blocking1.9? → blocking1.9-
Not even wanted-1.9?
We now use stdint.h. \o/
Status: NEW → RESOLVED
Closed: 11 years ago
Resolution: --- → WORKSFORME | https://bugzilla.mozilla.org/show_bug.cgi?id=361268 | CC-MAIN-2022-40 | refinedweb | 1,767 | 68.67 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.