text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Load The MNIST Data Set in TensorFlow So That It Is In One Hot Encoded Format Import the MNIST data set from the Tensorflow Examples Tutorial Data Repository and encode it in one hot encoded format. < > Code: Transcript: We’ll begin by creating our file. # command line # e stands for emacs # e create-simple-feedforward-network.py We’ll just call it simply create-simple-feedforward-network.py. We’ll begin by importing TensorFlow as tf as a standard. import tensorflow as tf Then we’re going to import a helper function from the tensorflow.examples.tutorials.mnist called input_data. from tensorflow.examples.tutorials.mnist import input_data It helps us load our data. Today, we’re going to be using the MNIST data set which consists of data showing images of different handwritten digits which are numbers from 0 through 9. We’re going to access our data in this lesson by just using the input_data.read_data_sets("MNIST_data/", one_hot=True). mnist = input_data.read_data_sets("MNIST_data/", one_hot=True) So what this does is it says download the data, save it to the MNIST_data folder, and process it so that data is in one hot encoded format. One hot encoded format means that our data consists of a vector like this with nine entries. [1 0 0 0 0 0] This is not nine, obviously. This is just an example. Where this corresponds to 1, 2, 3, 4, 5, 6, 7, 8, 9, 0 and so that our data will be labeled with a 1 corresponding to the column for that label and then 0 otherwise. 1 2 3 4 5 6 7 8 9 0 [ 1 0 0 0 0 0 0 0 0 0 ] The other way, if one hot was false, then our data would just have the y variable as 1 or 2 or 3, or whatever people do with this. [ 1] [ 2 ] [ 3 ] Fist thing we have to do is we have to create a placeholder variable. x = tf.placeholder(tf.float32, shape=[None, 784]) What this does is this is how we have our data enter the TensorFlow graph. So we do this by calling the tf.placeholder function. The most important arguments here are the type which is tf.float32, indicating that we’re going to be using 32-bit floats to represent our data and that the shape is a tensor where the first dimension is unknown, which is why we say it’s none and that’s going to correspond to the number of examples that we have. Then the second dimension is the size of each image, which in this case is 784. We got 784 because 784 is equal to 28x28 and these are 28 by 28 pixel images. # command line # open the python interpreter # ~ > python # 28 * 28 Full Source Code For Lesson # create-simple-feedforward-network.py # # to run # python numpy-arrays-to-tensorflow-tensors-and-back.py # import tensorflow as tf from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("MNIST_data/", one_hot=True) x = tf.placeholder(tf.float32, shape=[None, 784])
https://aiworkbox.com/lessons/load-the-mnist-data-set-in-tensorflow-so-that-it-is-in-one-hot-encoded-format
CC-MAIN-2019-51
refinedweb
512
64
ARDeleteVUI Note You can continue to use C APIs to customize your application, but C APIs are not enhanced to support new capabilities provided by Java APIs and REST APIs. Description Deletes the form view (VUI) with the indicated ID from the specified server. Privileges BMC Remedy AR System administrator. Synopsis #include "ar.h" #include "arerrno.h" #include "arextern.h" #include "arstruct.h" int ARDeleteVUI( ARControlStruct *control, ARNameType schema, ARInternalId vuiId, VUI to delete. vuiId The internal ID of the VUI to delete. Return values status A list of zero or more notes, warnings, or errors generated from a call to this function. For a description of all possible values, see Error checking. See also ARCreateVUI, ARDeleteSchema, ARGetVUI, ARGetListVUI, ARSetVUI. See FreeAR for: FreeARStatusList.
https://docs.bmc.com/docs/ars91/en/ardeletevui-609070998.html
CC-MAIN-2019-35
refinedweb
124
53.68
6340 Re: remove entries from database ... Sure. One day someone will write a CLI tool to explore and change the database using this way. ... Everything is stored in RAM, it should be fast. -- manu@... Oct 30#6340 6339 Re: remove entries from database ... If you have a sync address, is it OK to telnet to the port to do the delete? Also, how expensive is it to do mass deletes? I have 148 thousand entries this Jonathan Siegle Oct 30#6339 6338 Re: remove entries from database ... You can stop miler-greylist, and remove the line from the database file (it is text format). You can also set up an ACL to do it, using the flushaddr manu@... Oct 29#6338 Fetching Sponsored Content... 6337 remove entries from database Hi, how do I remove single entries from the database? Cheers Marcus lists-yahoogroups@... Oct 29#6337 6336 Re: Question about whitelist, greylist and dnsrbl ... I've been looking at flushaddr to solve this problem. The notation would look like: acl blacklist dnsrbl "PSU BLACKLIST" msg "You are on the PSU blacklist. Jonathan Siegle Oct 15#6336 6335 Re: missing res_state ... It is in CVS now. -- Emmanuel Dreyfus manu@... Emmanuel Dreyfus Oct 15#6335 6334 Re: missing res_state ... Thanks for the help. Here are the patches. Is it possible to include them into the next development release please? ... +++ configure.ac 2013-10-15 Bruncsak, Attila Oct 15#6334 6333 Re: Milter-greylist and LDAP ... README has a simple example: ldapconf "ldapi:// ldaps://ldap.example.net" ldapcheck "mytest" "ldap://ldap.example.net/o=example?whitelist?sub?mail=%r" racl manu@... Oct 13#6333 6332 Re: libspf2 and IPv6 ... Yes, I told the maintainers in 2009 (two thousand and nine) about this problem. I've even included the necessary patch. Unfortunately it took them until Matthias Scheler Oct 13#6332 6331 libspf2 and IPv6 Hi MG-folks! I just want to note, that if someone is using milter-greylist in conjunction with libspf2 and is thinking about to turn on IPv6 support, the Johann Klasek Oct 13#6331 6330 Re: Milter-greylist and LDAP ... Is it all described in manpage/readme, or is there anything you would add (or rephrase as a how-to) for ldap-based setups of milter-greylist done nearly Jim Klimov Oct 13#6330 6329 Re: Milter-greylist and LDAP ... I have been using LDAP-stored per-user filtering settings for years, it works very well. -- Emmanuel Dreyfus manu@... manu@... Oct 13#6329 6328 Milter-greylist and LDAP Hello all, Our typical configuration involves a number of files (pieces of greylist.conf which are compiled into the actual config file) which include static Jim Klimov Oct 13#6328 6327 Question about whitelist, greylist and dnsrbl Hello all, Currently my MTAs call various filtering routines in such an order that DNS RBL lookup is performed by the MTA, and hosts which are not instantly Jim Klimov Oct 13#6327 6326 Re: missing res_state ... I think I missed that one. ... Keep it simple, we just need to build the thing. -- Emmanuel Dreyfus manu@... manu@... Oct 9#6326 Fetching Sponsored Content... 6325 Re: missing res_state ... I see this is already in CVS :) ... unlike the fix for res_state or my recent fix (preferably the longer one) for the undef-macros in configure.ac. As for Jim Klimov Oct 9#6325 6324 Re: missing res_state ... It all compiles fine with just a warning: milter-greylist.c:36:8: warning: extra tokens at end of #endif directive Bruncsak, Attila Oct 9#6324 6323 Re: missing res_state ... use in this function) ... only once ... Please add in milter-greylist.c: #ifndef PACKAGE_URL #define PACKAGE_URL "" #endif PACKAGE_URL after #include manu@... Oct 9#6323 6322 Re: missing res_state ... Actually looking at the code it is valid the PACKAGE_URL not to be defined. It is the code in the milter-greylist.c which should be prepared to properly Bruncsak, Attila Oct 9#6322 6321 Re: missing res_state ... I do not have defined. Actually this is what I have in it: /* Packaging metadata: distro contact */ /* #undef PACKAGE_URL */ Bruncsak, Attila Oct 9#6321 6320 Re: missing res_state Weirder and weirder... can you look if this macro is defined (and is not ultimately undefined without redefinition) in the config.h which should be generated Jim Klimov Oct 8#6320 6319 Re: missing res_state ... Both the two patches are seemingly fine, I got cleanly the configure script with no errors. I could run nicely the configure script too. On the other hand Bruncsak, Attila Oct 8#6319 6318 Re: missing res_state ... Right, my bad, I did see that earlier but forgot :-\ Did you try today's patch? Did it help? HTH, //Jim Jim Klimov Oct 8#6318 6317 Re: missing res_state Sorry about all the inconvenience, see if this replacement definition (place into configure.ac:94) would help you? m4_define([__AC_UNDEFINE],[echo "#ifdef $1 Jim Klimov Oct 8#6317 6316 Re: missing res_state ... If you look at this thread earlier my problem was just the missing res_state type. Manu has suggested to make a configure check for that with autoconf. Bruncsak, Attila Oct 8#6316 6315 Re: missing res_state ... While this is a pretty old version too (hey, 5 years), and might be said to be the distribution's problem, I do agree now that it is a problem indeed. I'll Jim Klimov Oct 8#6315 6314 Re: missing res_state ... So I went ahead and installed now on CentOS release 6 the autoconf package for testing. Its version is: autoconf (GNU Autoconf) 2.63 Copyright (C) 2008 Bruncsak, Attila Oct 8#6314 Fetching Sponsored Content... 6313 Re: missing res_state ... Mine on Solaris 10 was installed ages ago (from SunFreeWare ports when they were still open), and is barely newer than yours: $ autoconf --version autoconf Jim Klimov Oct 8#6313 6312 Re: missing res_state ... Sorry, CentOS 5 release. Bruncsak, Attila Oct 8#6312 6311 Re: missing res_state ... No, I do not have if defined. My autoconf is actually the latest in the CentOS 6 production release. Version information: autoconf (GNU Autoconf) 2.59 Bruncsak, Attila Oct 8#6311 View First Topic Go to View Last Topic
http://groups.yahoo.com/neo/groups/milter-greylist/conversations/messages
CC-MAIN-2013-48
refinedweb
1,022
76.62
For C++ programs, it will also look in /usr/include/g++-v3, first. In the above, target is the canonical name of the system GCC was configured to compile code for; often but not always the same as the canonical name of the system it runs on. version is the version of GCC in use. You can add to this list with the -Idir command line option. All the directories named by -I are searched, in left-to-right order, before the default directories.. GCC looks for headers requested with #include "file " first in the directory containing the current file, then in the directories as specified by -iquote options, then in the same places it would have looked for a header requested with angle brackets. For example, if /usr/include/sys/stat.h contains #include "types.h", GCC looks for types.h first in /usr/include/sys, then in its usual search path. `#line' (see Line Control) does not change GCC's idea of the directory containing the current file. You may put -I- at any point in your list of -I options. This has two effects. First, directories appearing before the -I- in the list are searched only for headers requested with quote marks. Directories after -I- are searched for all headers. Second, the directory containing the current file is not searched for anything, unless it happens to be one of the directories named by an -I switch. -I- is deprecated, -iquote should be used instead. -I. -I- is not the same as no -I options at all, and does not cause the same behavior for `<>' includes that `""' includes get with no special options. -I. searches the compiler's current working directory for header files. That may or may not be the same as the directory containing the current file. If you need to look for headers in a directory named -, write -I./-. There are several more ways to adjust the header search path. They are generally less useful. See Invocation.
http://gcc.gnu.org/onlinedocs/gcc-4.4.7/cpp/Search-Path.html
CC-MAIN-2014-52
refinedweb
332
74.39
JBoss Messenger upgrade to 1.4.2. GA - MDBs suddenly want MQMichael Hönnig Feb 13, 2009 4:33 AM We are using JBoss Messenger 1.4.0 for a product under development. I tried to upgrade to 1.4.2 today, but all over the sudden: 21:15:57,719 WARN [ServiceController] Problem starting service jboss.j2ee:ear=hsa.ear,jar=hsar.jar,name=QueueStatusReceiver,service=EJB3 javax.management.InstanceNotFoundException: jboss.mq:service=DestinationManager is not registered. Where QueueStatusReceiver is an MDB. But why does it suddenly ask for a JBoss MQ service? It did work with 1.4.0 and there is nothing about MQ in our config at all and neither in our source code. Any idea where I could start checking what's going on? Thanks ... Michael p.s. I was pointed to this forum as a response to the same question on the JBosse Messenger forum 1. Re: JBoss Messenger upgrade to 1.4.2. GA - MDBs suddenly wanjaikiran pai Feb 13, 2009 5:30 AM (in response to Michael Hönnig) Please post more details including the configuration files that you use. Also please post the console logs. If the QueueStatusReceiver is configured through annotations, then post that code or else post the ejb-jar.xml and jboss.xml for that EJB. 2. Re: JBoss Messenger upgrade to 1.4.2. GA - MDBs suddenly wanMichael Hönnig Feb 13, 2009 7:13 AM (in response to Michael Hönnig) I can hardly post all my JBoss configuration. The point is: It DID WORK with JBoss Messenger 1.4.0 and I just upgraded to 1.4.2. I have not changed my other config at all. But anyway: Here my annotations (... is just a placeholder): @MessageDriven(activationConfig= { @ActivationConfigProperty(propertyName="destinationType", propertyValue="javax.jms.Queue"), @ActivationConfigProperty(propertyName="destination", propertyValue="queue/...Status"), @ActivationConfigProperty(propertyName="acknowledgeMode", propertyValue="Auto-acknowledge") } ) public class QueueStatusReceiver implements javax.jms.MessageListener Here the relevant part of boss-messaging.sar/destinations-service.xml: <mbean code="org.jboss.jms.server.destination.QueueService" name="jboss.messaging.destination:service=Queue,name=...Status" xmbean- <depends optional-jboss.messaging:service=ServerPeer jboss.messaging:service=PostOffice Which other of the vast amount of configs would you need? As I said, there is not a single word of MQ in any config file. Not a single! And yet, some JBoss part suddenly is asking for it. Maybe one hint: I am using sslbisocket, but I also used it with 1.4.0 and it did work (just not with PostgreSQL 8.3 anymore - but that's a different problem). 3. Re: JBoss Messenger upgrade to 1.4.2. GA - MDBs suddenly wanMichael Hönnig Feb 13, 2009 7:16 AM (in response to Michael Hönnig) Also the log of startup is HUGE. Any hints what to look for in the log? 4. Re: JBoss Messenger upgrade to 1.4.2. GA - MDBs suddenly wanjaikiran pai Feb 14, 2009 3:44 AM (in response to Michael Hönnig) Its a WARN message. Is it affecting your application in any way? Also are there any ERROR or exceptions? Around 10 lines before that WARN message might give some hints.
https://developer.jboss.org/thread/34747
CC-MAIN-2018-43
refinedweb
528
53.37
The following form allows you to view linux man pages. #include <wchar.h> size_t mbsnrtowcs(wchar_t *dest, const char **src, size_t nms, size_t len, mbstate_t *ps); Feature Test Macro Requirements for glibc (see feature_test_macros(7)): mbsnrtowcs(): Since glibc 2.10: _XOPEN_SOURCE >= 700 || _POSIX_C_SOURCE >= 200809L Before glibc 2.10: _GNU_SOURCE The mbsnrtowcs() function is like the mbsrtowcs(3) function, except that the number of bytes to be converted, starting at *src, is limited to nms. If dest is not NULL, the mbsnrtowcs() function converts at most nms bytes from the multibyte string *src to a wide-character string start- ing charac- ters, exclud- ing the terminating null wide character, is returned. If dest is NULL, len is ignored, and the conversion proceeds as above, except that the converted wide characters are not written out to mem- ory, and that no destination length limit exists. In both of the above cases, if ps is NULL, a static anonymous state known only to the mbsnrtowcs() function is used instead. The programmer must ensure that there is room for at least len wide characters at dest. Passing NULL as ps is not multithread safe. iconv(3), mbrtowc(3) mbsinit(3), mbsrtowcs(3) webmaster@linuxguruz.com
http://www.linuxguruz.com/man-pages/mbsnrtowcs/
CC-MAIN-2019-04
refinedweb
201
60.85
real *nix devs don't test in IE Saturday, 17. February 2007, 17:14:00 // Cross-browser implementation of element.addEventListener() function addEventListener(element, type, expression, bubbling) { bubbling = bubbling || false; if (window.addEventListener) { // Standard element.addEventListener(type, expression, bubbling); return true; } else if (window.attachEvent) { // IE element.attachEvent('on' + type, expression); return true; } else return false; } Whoever wrote that obviously is confused about event bubbling versus event capturing (luckily it defaults to a sensible false!) but the main problem with this code is the line if (window.addEventListener) { // Standard Um, no, what you are seeing there is not the W3C standardised window.addEventListener1. You're actually checking for the existence of this very function - the one we're inside when we hit that statement. Naturally IE chokes on the next line and no event handlers are added. (If you ask, it should read if(element.addEventListener).) So - a slick, good-looking production site that wasn't tested with IE - what a rarity! Edit - note 1: well, actually W3C didn't specify addEventListener for window in the first place, it remains a Gecko extension like I've complained about earlier so the comment "// Standard" is doubly wrong.. EDIT: and if you hover over the feature screenshots, the magnified version that should pop up right where the mouse is actually appears at the very bottom of the page. Too bad if that part is not shown on the screen at the moment. Hope the program works better than their site. By WildEnte, # 17. February 2007, 18:26:07 The feature screenshots problem is what I actually was investigating. The site tries to position the screenshots by reading "x" and "y" properties from the IMG element you hover. I don't know why IMG elements (only?!) have .x and .y in Firefox, it looks like a Netscape 4 feature they've kept for some reason..? By hallvors, # 18. February 2007, 14:07:48 It would have helped in this case if the global object had a more accurate name. It isn't apparent to beginners that objects created in the global namespace are added as properties to an object named "window". By HeroreV, # 18. February 2007, 21:16:03 Originally posted by WildEnte:Same issue as wordpress. Floated links with big margin-bottom. By xErath, # 19. February 2007, 08:41:42 Also note that IE/Mac doesn't support either addEventListener or attachEvent By crisp, # 19. February 2007, 22:59:13 By tarquinwj, # 1. March 2007, 21:48:48
http://my.opera.com/hallvors/blog/show.dml/761482
crawl-001
refinedweb
416
59.6
Version: V1.0 This is XHTML 1.0 Transitional, an XML reformulation of HTML 4.0 Transitional. Copyright 1998-1999 World Wide Web Consortium, (Massachusetts Institute of Technology, Institut National de Recherche en Informatique et en Automatique, Keio University). All Rights Reserved. Permission to use, copy, modify and distribute the XHTML 1. The XHTML 1.0 DTD is an XML variant based on the W3C HTML 4.0 DTD: This is the driver file for version 1.0 of the XHTML Transitional DTD. Please use this formal public identifier to identify it: "-//W3C//DTD XHTML 1.0 Transitional//EN" Please use this URI to identify the default namespace: "" For example, if you are using XHTML 1.0 directly, use the FPI in the DOCTYPE declaration, with the xmlns attribute on the document element to identify the default namespace: <?xml version="1.0" ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "XHTML1-t.dtd" > <html xmlns="" xml: ... </html> No warnings while parsing. No errors while parsing. Last revised: Thu, Apr 1, at 06:02:42 PM PDTCopyright ©1999 Sun Microsystems, Inc. 901 San Antonio Road, Palo Alto, California, 94303, U.S.A. All Rights Reserved.
http://www.w3.org/TR/1999/xhtml-modularization-19990406/DTD/doc/xhtml1-t.html
CC-MAIN-2017-17
refinedweb
196
62.64
Subject: [boost] [mp_math_v02] Bug in assignment operator From: Mikko Vainio (mikko.vainio_at_[hidden]) Date: 2008-10-03 02:57:27 Hi, Referring to the mp_math_v02 library in the Vault, the assignment operator of class mp_int<> does not quite behave the way it should. The following program #include <iostream> #include <boost/mp_math/mp_int.hpp> using namespace std; int main( int argc, char* argv[] ) { boost::mp_math::mp_int<> a, b(-1); a = -1; cout << a << " == " << b << endl; return 0; } produces the output -4294967295 == -1 It seems that the constructor code behaves as expected. I'm using gcc 4.3.0 on x86 Fedora 9. Cheers, Mikko Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2008/10/142963.php
CC-MAIN-2019-26
refinedweb
125
60.92
Is there a way to open a file (if file) from selected text... For example if I highlight a class name, is there a way to open that file in a new tab? "goto anything" pre-populated with the selected text would be awesome. Ditto! If it's for Python or Magento (PHP) code I've something working. Here's a plug-in I wrote a little while back that does that. It's intended to work like Ctrl+Enter in Borland Delphi: open the thing under the cursor as a file. It will try in the current path first, and if not found, it will look in a list of other paths, and will try each of a list of extensions. For example, I have a library called "GenUtils.py" that I keep in a \projects folder, on different drives between my home and work computers. I can click on "import GenUtils" in a piece of code, and this plugin will search for and open the first file matching "GenUtils.py" or "GenUtils.txt" in any folder named in the list. Copy the code below into your plugin folder as OpenFile.py ...\Sublime Text\Data\Packages\User\OpenFile.py Add a keystroke combination to your user keymap file <binding key="ctrl+enter" command="openFile" /> (Ctrl+Enter is initially assigned to "Add Line.sublime-macro") Make sure to edit the "paths" and "exts" lists below to match your preferences. If you have questions or run into something that doesn't work, please let me know. Todd # OpenFile.py # open filename at cursor in SublimeText # Todd Fiske (toddfiske at gmail) # 2010-12-23 12:17:48 first version # 2011-01-14 13:24:33 added MakeName generator to handle common extensions # 2011-04-04 12:59:19 updated for Sublime Text forum import sublime, sublimeplugin import os paths = ".", "h:\\projects", "c:\\projects"] exts = ".py", ".txt"] def MakeName(paths=], name="stub", exts=]): for p in paths: for e in exts: thisName = os.path.join(p, name + e) yield thisName class OpenFileCommand(sublimeplugin.TextCommand): def run(self, view, args): print "OpenFile: cwd = %s" % os.getcwd() for region in view.sel(): if region.empty(): # no selection, get word at cursor savedWordSeps = view.options().get("wordSeparators") view.options().set("wordSeparators", "") word = view.word(region) lineContents = view.substr(word) view.options().set("wordSeparators", savedWordSeps) else: #- else get selection lineContents = view.substr(region) #- remove any leading comment or space characters if len(lineContents) > 0: while (lineContents[0] in "# "): lineContents = lineContents[1:] #- try to open unmodified input first fileName = lineContents print " trying %s]" % fileName found = os.path.exists(fileName) if not found: for fileName in MakeName(paths, fileName, exts): print " trying %s]" % fileName found = os.path.exists(fileName) if found: break if found: print " opening %s" % fileName sublime.activeWindow().openFile(fileName) else: print " no matching file was found" ###
https://forum.sublimetext.com/t/open-file-from-selected-text/1482/4
CC-MAIN-2017-43
refinedweb
470
58.58
Problem Statement In this problem we are s1 = "abc", s2 = "xya" true Explanation: “ayx” is a permutation of s2=”xya” which can break to string “abc” which is a permutation of s1=”abc”. s1 = "abe", s2 = "acd". Approach A simple approach to this problem is to check each permutation of s1 with each permutation of s2 to find if their exist any pair that satisfy above condition. We can do this thing if the size of the string is small. But here length of the string is very large so it is impossible to create all permutations. Going with the problem statement we want one string to completely cover the second string. Covering in sense that for each character position, the character at one string should be greater than equal to character at the second string( according to alphabetical order). This should be followed by all the characters in string. Now the main observation here is if we want all the string characters to be greater in first string than second then we have to compare smaller character in s1 with smaller character in s2. Similarly greater element with greater one. This permutation will be optimum to check if one breaks another or not. Example s1=”abc” and s2=”xya”. After sorting “xya” it will be higher than “abc” at each point. If we able to make s1 greater than s2 for all characters then we return true. In second case if we able to make s2 greater than s1 then also we return true. Otherwise no one can break other. Algorithm: - If length of s1 is not equal to length of s2 then return false. - Sort both the string in ascending or descending order. - Run a loop along the characters of s1. Check for each character if s1[i]>=s2[i]. If all the characters satisfy this condition then return true. - Now run a loop along the characters of s2. Check for each character if s2[i]>=s1[i]. If all the characters satisfy this condition then return true. - Else return false. Implementation C++ Program for Check If a String Can Break Another String Leetcode Solution #include <bits/stdc++.h> using namespace std; bool checkIfCanBreak(string s1, string s2) { if(s1.length() != s2.length()) return false; sort(s1.begin(),s1.end()); sort(s2.begin(),s2.end()); int i=0; while(s1[i]) { if(s1[i]<s2[i]) break; i++; } if(i==s1.length()) return true; i=0; while(s2[i]) { if(s1[i]>s2[i]) break; i++; } if(i==s2.length()) return true; return false; } int main() { string s1 = "abc"; string s2 = "xya"; if( checkIfCanBreak( s1, s2) ) cout<< "true" ; else cout<< "false"; return 0; } true Java Program for Check If a String Can Break Another String Leetcode Solution import java.util.*; class Rextester{ public static boolean checkIfCanBreak(String s1, String s2) { if(s1.length() != s2.length()) return false; char[] c1=s1.toCharArray(); char[] c2=s2.toCharArray(); Arrays.sort(c1); Arrays.sort(c2); int i=0; while(i<s1.length()) { if(c1[i]<c2[i]) break; i++; } if(i==s1.length()) return true; i=0; while(i<s2.length()) { if(c1[i]>c2[i]) break; i++; } if(i==s2.length()) return true; return false; } public static void main(String args[]) { String s1 = "abc"; String s2 = "xya"; System.out.println(checkIfCanBreak( s1, s2) ); } } true Complexity Analysis for Check If a String Can Break Another String Leetcode Solution Time Complexity O(nlog(n)) : where n is the length of the given string. We sorted the given string and traversed it two times linearly. Hence time complexity will be nlogn. Space Complexity O(1) : we did not used any extra memory. Although for some sorting algorithms space complexity can be greater than O(1).
https://www.tutorialcup.com/leetcode-solutions/check-if-a-string-can-break-another-string-leetcode-solution.htm
CC-MAIN-2021-49
refinedweb
621
67.65
I have a listview... I want to print listview checked items... some or all no matter... If checked item total one pages, there is no problem.. but one checked item costs for two pages... problems begin... I cant tell rightly.... i added my code sample... Please help... FILE View Complete Post Hi, I am goint to kill myself.. I hate e.DrawString class... It's so complicate... Here is my code: Dim in As Integer in = 100 Dim AREA As New SizeF(W, H) Static i As Integer While i < lstgundem.CheckedItems.Count Dim CharCount1, CharCount2, CharCount3, CharCount4 as Integer Dim LineCount1, LineCount2, LineCount3, LineCount4 as Integer Dim a As Integer a = Font.Height Dim Header1 As New RectangleF(Left, Top, W, H) e.Graphics.DrawString("HEADER 1", font, Brushes.Black, Header1, Format) Dim Text1 As New RectangleF(Left, Top + a, W, H) e.Graphics.MeasureString((LV.CheckedItems(i).SubItems(1).Text), font, AREA, Format, CharCount1, lineCount1) e.Graphics.DrawString((LV.CheckedItems(i).SubItems(1).Text), font, Brushes.Black, Text1, Format) Dim Header2 As New RectangleF(Left, Top + a + a * LineCount1, W, H) e.Graphics.DrawString("HEADER 2", font, Brushes.Black, Header2, Format) Dim Text2 As New RectangleF(Left, Top + 2 * a + a * LineCount1, W, H) e.Graphics.MeasureString((LV.CheckedItems(i).SubItems(2).Text), font, AREA, Format, CharCount2, lineCount2) e.Graphics.DrawString((LV.CheckedItems(i).SubItems(2).Text), font, Brushes.Black, Text2, Format) Dim Header3 As New RectangleF(Left, Top + 3 * a + a * LineCount1 + a * LineCount2, W, H) e.Graphics.Dra I have got some texts for print... Header1 text1.. Header2 text2... Header3 text3... theese texts sometimes long, sometimes short. I measured texts how many line fitted. text1= 20 lines (measured, chars and line) header1=1 line text2= 30 lines (measured, chars and line) header3=1 line text3= 45 lines (measured, chars and line) 50 lines per page... How skip next page, and begin print last printed line ? Please help. Dear All, I am having problem with Khmer unicode using DrawString. Some of the characters not shown up correctly. Search on the Web shows that GDIplus had no Khmer support, but that was in 2004. May I know if we have any solution now? Thanks in advance. I'm trying to send khmer script(unicode) string to printer using PrintDocument provided by the .NET framework. Unfortunately it seems to me that the Graphics.DrawString() does not render khmer script correctly. Platform: Windows 7 Ultimate IDE: VS 2010 Ultimate + .NET Framework 4 Here is the full sample code: / using System; using System.Windows.Forms; using System.Drawing; using System.Drawing.Printing; namespace PriintKhmerUnicode { static class Program { static PrintDocument printDoc = new PrintDocument(); static Font font = new Font("Khmer UI", 16); static string text = "??????"; /// <summary> /// The main e Hall of Fame Twitter Terms of Service Privacy Policy Contact Us Archives Tell A Friend
http://www.dotnetspark.com/links/42403-vb-net-edrawstring-and-ehasmorepage.aspx
CC-MAIN-2018-13
refinedweb
471
61.73
Java faqs This section contains collection of frequently asked questions(faqs) in interview or viva of Java Design patterns interview questions2 Design patterns interview questions2  ... or a JSP (through a Java Bean). This Controller takes over the common processing.... These are normal java classes which may have different constructors (to fill in the value Java faqs Java faqs Hello Java Developers, I am beginner in Java and trying to find the best java faqs. Where I can find java faqs? Thanks Hi, Please see the thread java faq Thanks Interview Questions J2EE Interview Questions Question: What is J2EE? Answer: J2EE Stands for Java 2... that run on the client. Java servlet and JavaServer Pages (JSP J2EE - Java Interview Questions J2EE How do we call destroy() in service java - JSP-Interview Questions java hi.. snd some JSP interview Q&A and i wnt the JNI(Java Native Interface) concepts matrial thanks krishna Hi friend, Read more information. are the services does the weblogic server provide in project development OR what are the services we take from the weblogic during project development interview questions page1 J2EE interview questions page1 What is J2EE? J2EE is an environment for developing and deploying enterprise applications. The J2EE platform consists of a set j2ee - Java Interview Questions j2ee What is MVC Architecture..How JTable,JButton,......etc and all other swing components follow MVC Architecture. What is the importance of a MVC architecture in a web application,and in struts. Give me Collection of Large Number of Java Interview Questions! Interview Questions - Large Number of Java Interview Questions Here you will find Job Interview Questions for J2EE technologies. Before appearing in Job... Java Interview Questions Technology What is and FAQs Technology What is and FAQs  ... of Java Programming Language API (Application programming interface) that is very useful to many Java platform programs. It is derived from one of the mostsp - Java Interview Questions Need JSP Interview Questions Hi, I need JSP interview questions.Thanks.   J2EE - Java Beginners J2EE I am a non java programmer.I am interested in learning... consentarate on j2ee(jdbc,servlets,jsp,struts).Then you can start XML.And also... knowledge on core java,and also that in future I want to become a XML java interview - JSP-Interview Questions java interview what type of questions would be asked to a 3 years experience person in java? can anyone please provide list of topics or interview questions for 3 years experience in java interview question - Java Interview Questions interview question hello i want technical interview question in current year Hi Friend, Please visit the following links: J2ee Crystal report integration with java web application.Right now i am using jsp to create reports.Developer cost is very high when making report in jsp(Handling dynamic sql queries,page...J2ee Crystal report integration Hai Sir, I am familiar with crystal J2EE Tutorial In this section we will read about Java J2EE. This section is for those who... or large. Requirements To Develop Java J2EE based application... page etc. J2EE versions Major versions of J2EE are as follows : Java EE hint - Java Interview Questions hint Dear roseindia, i want the java interview question... the following link: Here you will get lot of interview questions and their answers. Thanks thanks for your Java - Java Interview Questions Interview interview, Tech are c++, java, Hi friend, Now get more information. you can lean all type interview question by following link link. j2ee - Java Server Faces Questions j2ee Hi iam swathi.I hope u will answer my question. Observe... in work. Jus i saw your answer. Java is the flexible language in the language family... *******////index.jsp////////////****** welcome in java world. j2ee - JSP-Servlet j2ee What is session link - Java Beginners Interview question link Hi, I want JAVA+J2EE interview question and answer please suggesion me link interview question - Servlet Interview Questions interview question What is Servlet? Need interview questions on Java Servlet Servlet is one of the Java technologies which is used... according to your experience.And for collection of Java Servlet Interview Question you JSP - JSP-Interview Questions are the comments in JSP(java server pages)and how many types and what are they.Thanks inadvance. Hi friend, JSP Syntax XML Syntax... A comment marks text or lines that the JSP container Interview Question JSP Interview Question What is JSP? Describe its concept. Java Server Pages (JSP) is a server-side programming technology that enables the creation of dynamic web pages and applications. A JSP is translated J2EE clients J2EE clients What are types of J2EE clients? Following are the types of J2EE clients: Applets. Application clients. Java Web Start-enabled rich clients, powered by Java Web Start technology. Wireless clients, based java - JSP-Interview Questions java program How to write a Java program java - JSP-Interview Questions java 1. why implicit object "Exception" is difference from other implicit objects? 2. what is the meaning of exception page & exception in jsp directive java - JSP-Interview Questions Java database connectivity and accessing the database Please explain, what is Java database connectivity and how to access the database? Java Database Connectivity in short JDBC is an interface used to access database - JSP-Servlet j2ee how upload a file in database and store the file in a separete directory and can read the file when ever we want(SOURCE CODE java - Java Interview Questions /interviewquestions/ Here you will get lot of interview questions...java hello sir this is suraj .i wanna ask u regarding interview questins in java .wat normally v see in interviews (tech apptitude)is one line interview questions - EJB . If you really want to win the interview then follow the steps.Learn core java http...interview questions in Java Need interview questions in Java ...:// Questions: J2EE Tutorial - Session Tracking Example J2EE Tutorial - Session Tracking Example  ...; // THIS IS A JAVA BEAN. import java.util.*; public class...; The following JSP file is invoked by carter.htm J2EE - Java Beginners J2EE Programming What i need to learn to start programming in J2EE - Java Beginners j2ee class one JSP Interview : JSP Interview Questions -2 JSP Interview : JSP Interview Questions -2 Page of the JSP Interview Questions...? Answer: JavaServer Pages (JSP) technology is the Java platform j2ee - JDBC j2ee how to connect jsp to mysql Hi, Thanks for asking question. I will tell you how you can connection to MySQL from JSP page... to connect to MySQL database: Tutorial - Introduction presentation of what & why of J2EE. (It is better for aspiring Java... security.) J2EE is not the Java , that comprises..., the true essence of Java language. For aspiring J2EE professionals Java interview qeastion Java interview qeastion how i can run java class file into jsp program java - JSP-Interview Questions java whats meant by the following terms as applied in java:- 1.object oriented programming. 2.void 3.private 4.protected thanks homey.  .... These are all fairly fundamental questions, try purchasing any introduction to Java Free J2EE Online Training Free J2EE Online Training The Enterprise Edition of Java popularly known as J2EE has many takers, thanks to its wide spread applications. To give... training. The students are also given training on JSP (Java Server Pages about J2EE. - Java Beginners about J2EE. I know only core Java ... what chapter I will be learn to know about J2EE J2EE Tutorial - Running RMI Example J2EE Tutorial - Running RMI Example  ... most carefully without break!) (continuously). >java... so good. But how about the automatic generation of IDL for non-java end J2EE Interviews Question page8,J2EE Interviews Guide,J2EE Interviews ; What is J2EE ? Java 2 Platform, Enterprise Edition.  ... that run on the server. J2EE components are written in the Java programming.... The difference between J2EE components and "standard" Java classes is that J2EE what is the difference in java and j2ee what is the difference in java and j2ee Hi, Please tell me: 1. What is the difference between Java and Core Java and 2. Difference between Java and J2EE j2ee - Java Beginners j2ee why we are using the j2ee? Hi friend, J2EE Stands for Java 2 Enterprise Edition. J2EE is an environment for developing and deploying enterprise applications. J2EE specification is defined by Sun J2EE Interview Questions -2 J2EE Interview Questions -2 Question...: J2EE Connector Architecture (JCA) is a Java-based technology solution.... Question: What do you understand by JTA and JTS? Answer: JTA stands for Java about J2EE. - Java Beginners about J2EE. I know only core Java ... what chapter I will be learn to know about J2EE. Hi Friend, Please visit the following link: Thanks J2EE Online Training J2EE Online Training Due to increasing significance of Java Enterprise Edition or Java 2 EE, the need for J2EE online training has increased manifold... perfection that include JSP(Java Server Pages), Model1 and Model2 Architectures. j2ee - Java Server Faces Questions J2EE EAR Structure What is the structure for J2EE EAR script - JSP-Interview Questions java script i want that my registration page shud be get poped up when i will clik a on a link on my login page....how can i do it using java script or i shuld use html javascript - JSP-Interview Questions ://','mywindow','width=400,height=200')">< j2ee j2ee what is j2ee Java: Method FAQs Java: Method FAQs Q: Do we always have to write a class name in front of static method calls? A: Yes, but Java allows an exception. If the static method..., it's called "pass by value", which is Java's only way of passing parameters J2EE Interviews Question page3,J2EE Interviews Guide,J2EE Interviews J2EE Interviews Question page3  ... is component contract ? The contract between a J2EE component and its container... for the Java programming language, that is, the fields that would be stored Interview Questions - Large Number of Java Interview Questions J2EE Interviews Question page12,J2EE Interviews Guide,J2EE Interviews J2EE Interviews Question page12 What is JMS client ? A Java language program... ? A messaging system that implements the Java Message Service as well as other J2EE Interviews Question page11,J2EE Interviews Guide,J2EE Interviews J2EE Interviews Question page11  ... is JavaServer Pages (JSP) ? An extensible Web technology that uses static data, JSP elements, and server-side Java objects to generate dynamic content for a client J2EE Interviews Question page10,J2EE Interviews Guide,J2EE Interviews J2EE Interviews Question page10 What is Java Message Service (JMS) ? An API...; What is Java Naming and Directory Interface (JNDI) ? An API Ask Questions? If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for. Ask your questions, our development team will try to give answers to your questions.
http://www.roseindia.net/tutorialhelp/comment/19286
CC-MAIN-2013-20
refinedweb
1,764
56.55
Grace and Hope IN THE BIBLe THE Minor Prophets: Anne Oâ&#x20AC;&#x2122;BrieN Bible Studies on Grace and Hope in The Book of Acts Hosea Joel Amos Obadiah Jonah MicHA NAHUM habakkuk zephaniah haggai Zechariah Malachi Grace and Hope in the Minor Prophets THE BOOK OF HOSEA Hosea was a prophet contemporary with Isaiah. His book describes the unfaithfulness of the Jewish people and prophesises about the coming exile as part of God’s judgment. But praise the Lord, it also shows us how God remains faithful and how he is a God of restoration. In this book we see how Hosea was asked to ‘act out’ a parable (literally), in which Hosea and his unfaithful wife represent God and his people. To emphasise parallels we will look at the story piece by piece. Reading Chapter 1v2 ‘’ 1v3 ‘’ 1v6 ‘’ 1v8&9 Chapter 2v5 and v14-16 Chapter 3v1 ‘’ 3v2 Hosea and Gomer Hosea married a prostitute, a parable about God and errant Israel They had a son who was to be called Jezreel – meaning ‘to be scattered’ (paralleling the prophecy that Israel would be scattered. God said call your daughter, Lo- Ruhamah, ‘unloved’ (or ‘no mercy’). A third child, another son, was to be called LoAmmi, ‘not mine’. (This child was probably not fathered by Hosea). He possibly represents the gentiles. Gomer returns to her old life, thinking it will be better. But Hosea pursues her. He still shows his commitment. God and the Israelites The meaning for us as Christians? This marriage God made a new represented the covenant with us – the Covenant that God had gentiles, through Jesus. with the Israelites A loving relationship. The Israelites had not Are there times when followed God’s ways we don’t feel settled? It – there would be a could be that God allows scattering, some to it. Maybe it is the Lord’s Assyria and further way of bringing us back afield. Others to Babylon into his will. (but they would return) Would this be likely to Sadly, God would happen under the New withdraw his loving Covenant? Think of the hand from errant Israel reasons why not. – leaving them to their own devices and to suffer the consequences. Sadly, God was rejecting God extends his grace, even to us as gentiles, Israel. even when we backslide However, in the same – there is a way back to breath he promises him. Read Romans 5v8 future restoration (v11 and 2v1) God cannot be other than faithful – even when our faith fails. He always offers us hope when we put our trust in him. Read 2 Timothy 2v13 Hurt and jealous, Hosea God has a jealous love What grace is shown was to seek out his wife for his people and longs to us, even when we deserve punishment! and show her love again. for them to return to him. He never breaks the Read 1 John 1v9 covenant with them. God promised to redeem We are redeemed, paid Gomer was ‘owned’ by for, with the blood of the other man, so Hosea Israel through the had to buy her back. Messiah. To redeem is to Jesus Christ. What an act Hosea was willing to pay buy back what is already of Grace! What total love! Read 1 Peter 1v18-19 yours. the price. Israel had broken their covenant vows with God. But hope is offered. (Achor = trouble) Trouble will become a door of hope. Page 2 Chapters 4-10 Chapter 6v1-3 And verse 6 Chapter 11 Chapter 14v9 These chapters are not part of Hosea’s parable, but his words of prophesy to Israel about their sins and the judgment of being exiled. Chapter 4v1-3 The Israelites had broken all of the 10 commandments, the basis of the Old Covenant. Think about the 10 commandments – how many of them are being kept by people in our society today? Should we be worried? Will we be judged? The Israelites were full of empty words and promises, merely going through the motions of religion If their faith was worth anything (v6), it would show itself in love to God and love to others. We are to sow righteousness (see note on next page) In verses 3, 4 & 8 we see the heart of God – just how much he loved Israel. He had to exercise ‘tough love’ in order to bring them back to himself. Wisdom is following God’s ways Jesus said the fulfilment of the Law is to love God with all our hearts and to love our neighbours as ourselves. Read Luke 10v25-28 Again, God promises restoration ultimately. (Read chapter14v11) There is always grace, healing and restoration when we turn to him. Read 1 Peter 5v10 Praise God, His Grace is for each one of us. Again and again and again. Read Hebrews 4v16 As Hosea loved Gomer, so God loved Israel. Gomer and her children were restored into the family – once again loved and cherished. Read Chapter 10v12: This verse shows us what God really wants us to live like. The Hebrew gives a fuller meaning. ‘Sow righteousness for yourselves’ Righteousness should read ‘acts of lovingkindness’. Righteousness merely means doing what is right but the Hebrew word ts’dakah means that we should do more than is required. E.g. Loving our neighbour as ourselves; going the extra mile; giving good measure with no thought of return; giving more than our tithe to God; and all to be done with love and grace. A tall order!!! ‘Reap the fruit of unfailing love’ And the verse goes on to say that we can reap the fruit of unfailing love. In other words, we will reap what we sow (also in Galatians 6v7). Getting something back should not be our motivation but when we help others out of love for the Lord, he will reward us. ‘Break up your fallow ground’ Prepare your heart to receive from The Lord. Spend time with Him getting right with him; confess any sin or wrong attitude. ‘It us time to seek the Lord, until he comes and teaches you righteousness’ The time is always now! We need to seek God because he is the root of all righteousness. Being righteous in our own strength is merely doing the right things. The righteousness that God imparts helps us to truly show his loving kindness to all. This was the lesson that Israel had to learn – but surely it applies to us all! The Outcome Israel did not turn away from their sin and they were taken by the Assyrians. The Judean’s were more obedient to God and God stayed his hand for a further hundred years, after which they were exiled to Babylon. God’s promise that he would restore them came about after 70 years of exile, and they were allowed to return to Jerusalem to rebuild. Despite judgment there would always be a remnant who would be restored, because God cannot forsake his people – he has never broken his covenant with them. His name is “Faithful God” – he can be no other way. And he is our source of grace and our source of hope. Page 3 THE BOOK OF JOEL In this year, 2020, a plague of locusts has spread across West Africa and is currently devastating crops in Pakistan. Thousands of people have lost 50% of their crops and their income through loss of food and loss of cotton plants. Many are now starving, and the animals are also dying. These locusts can move at a speed of 100 miles a day, destroying all vegetation in their path. Even a small swarm of locusts (1 square kilometre) can consume as much food in one day as 35,000 people. They work like an army destroying everything in their path. Read Joel chapter 1v1-4 Israel had suffered a plague of locusts as described above. All the crops had been destroyed – completely. What the larger locusts had left the smaller ones had eaten. It was complete devastation. They had questions like, “Why had God not given the harvest? Why had he let them down?” Joel, the prophet (approx. 800 BC before the Assyrian invasion) spoke God’s words in answer to their question. · “Hear this” (v2) They were to listen carefully to what God was saying – and what the locust invasion meant for them. They were to remember it and tell it to their children. In other words, it would be good if they would learn from their mistakes! · “Wake up … and weep” (v5) There’s no food or wine, they cannot make their offerings to God. Their hope and their joy had gone (v12). They would be dependent on other nations because there was no harvest to look forward to. · “Repent … cry out to the Lord” (v13,14) The leaders were called to encourage the people to repent of their backsliding, and to cry out to the Lord for forgiveness and help, because the Lord had allowed this. There was no joy, no food, no sacrifices, and no pasture for the sheep (v17-20). The invasion of locusts was allowed by God, as a warning of what was to come upon Israel if they did not repent. The locusts symbolised Israel’s enemies, who were about to invade. Read Joel chapter 2v6-9 This is a description of Israel’s enemies - the Assyrian empire, and an imminent invasion. These verses (very cleverly written) could describe the locust invasion, but in fact they describe a military takeover of the country which was about to happen because they had not turned back to the Lord. But Joel, despite it being the eleventh hour, pleads with Israel again to repent. Read Joel chapter 2v12-14 Joel knows that God is a God of his word, He is upright and fair and just. He has foretold judgment, but even yet – he could relent if the people repent of their sin. These verses show us how God is just waiting for us to return so that he can pour out his blessings upon us. Q. What does God want to see in the Israelites (and in us)? When was the last time we did any of these things mentioned in verses 12 and 13? Which words remind us of God’s great grace? Read Joel chapter 2v22-26 The promise of eventual restoration: just as nature will restore itself, so God promises to restore those who return to him. The trees bearing fruit are a symbol of joy (9v22). The showers of rain are a symbol of refreshing (v23). The grain and wine are a symbol of communion with God (v24). GRACE is giving us more than we deserve, and it is shown in verses 25,26. We sing a song with the words, “He gives and takes away, my heart will choose to say, blessed be the Name of the Lord.” Sometimes our experience is that God does allow the locusts – he does take away, and we wont always know why. But we can trust that he has a reason, and that he will ultimately repay us for what we have lost (the years the locusts have eaten). For example, Job’s experience was that, at the end of his suffering he was twice as blessed as before. God never “owes us one”. We will see his hand of blessing when we keep close to Him and trust in Him. At this point Joel’s prophecy moves forward to the time of Jesus and the early church. Read Joel chapter 2v28-32 We know that this is Messianic prophecy because Peter quotes it in Acts chapter 2, after the Lord Jesus Page 4 ascended to Heaven and, as promised, poured out his Spirit on those gathered in the Upper Room. But, it wasn’t just for then – it is for all those who love the Lord for all time. The “latter rains” would be for all – Jew and gentile alike. They usher in a great “end time” harvest. Chapter 3v1 and 17&18 The Valley of Jehoshaphat means The Valley of Judgment – where all people of all nations will be judged. Those who have harmed Israel – or any of God’s children will be judged. Those who have not repented will be judged. For those who have lived before the coming of Jesus, their faith will be credited to them as righteousness (Gal 3v6&7). For those who have never heard the gospel, they will be judged according to their conscience (Romans 2v14-16). Read Romans and Galatians and you will see that God judges people according to their hearts, according to their faith and according to how they act towards others. We could never decide how people should be judged but we can put our trust in the all-righteous Lord. Read chapter 3v17,18 Zion is a synonym for Jerusalem – it is where God chooses to dwell. In Psalm 48 it is referred to as... Mount Zion, the city of the Great King and the joy of the whole earth. Why joy? Isaiah 35v10 gives us the answer: And the ransomed of the Lord shall return and come to Zion with singing; everlasting joy shall be upon their heads. Joel 2v32 also explains why: Everyone who calls on the name of the Lord will be saved; for on Mount Zion and in Jerusalem there will be deliverance. Close to Zion, Abraham received the promise of God for all nations. At the end time, Christ will reign from Mount Zion and everyone will have that last opportunity to turn to the Lord, before the judgment. God is not willing that any should perish (2 Peter 3v9). He extends his hand of grace and hope to the very last minute. Praise His name! _____________________________________________________________________ THE BOOK OF AMOS Having read the prophets so far (Isaiah, Jeremiah, Ezekiel, Daniel, Hosea and Joel), you will be getting the picture. From the end of Solomon’s reign, to the beginning of the exile, was a period of about 250 years where God is increasingly disappointed with the Israelites on many counts, but most of all because they have gone away from Him. So, there had been prophet after prophet bringing warning after warning for 6 or 7 generations. But the people would not listen. And so, these prophetic books are God’s words of judgment and love, where we see his sorrow because of the Israelites’ failure to keep the Covenant. Yes, these books are about judgment, but so much more about God’s GRACE and willingness to forgive if they repented and returned to Him. AMOS means ‘burden’ or ‘burden bearer’. God had placed a burden on him for the people of Israel. Sometimes he places a burden on us, to pray for a certain person or people, or even our nation. Chapter 1 Chapter 1 begins with impending judgment on all the nations surrounding Israel. These nations are like the nations today that persecute, torture and kill Christians in the name of their religion or ideology (like Communism). People often say, “Why doesn’t God intervene?”. Here, we see that eventually, he does. Read verses 1&2: The Lord roars from Zion – God was not pleased! And we see his judgment on the surrounding nations: Verses 3-5 ………. SYRIA … because she threshed Gilead (destroyed the harvest). Verses 6-8 ………. PHILISTIA … because she sold captive Israelites to Edom. Verses 9,10 ………. TYRE … because she sold whole communities of captives to Edom Verses 11,12 ………. EDOM … because he bought Israelites and slaughtered them. Verses 13-15 ……… AMMON … he carried out ethnic cleansing and took the land. We can take comfort from the fact that God sees every injustice committed against his people and he will deal with it in his own way and in his perfect timing. Page 5 Chapter 2 The Israelites were no doubt pleased that their enemies were to be judged. It was the furthest thing from their minds. But … now it is their turn! And in this chapter God sets out his case against Israel and Judah. Judah’s sins: Read 4-5 They had rejected and broken God’s Law, on which the covenant was based. They had been led astray by the false gods of the other nations. Israel’s sins: Read verses 6-8 They were guilty of horrendous treatment to the poor and vulnerable. They sold them into slavery for money (v6). They denied them any justice (v7a). They committed perverted sexual acts (v7b). They committed sacrilege and got drunk in the sanctuary of the temple and the altars of God (v8). Q. Is our country any better than this today? How or what can we do about it? Chapter 3 Read verses 7&8 and 10&11 God reminded them of all the warnings that had been given by the prophets. The lion has roared – this would make most of us afraid, and sit up and take notice! Not so Israel. Verse 11 describes the coming invasion of the Assyrian army who will overrun the land (like a plague of locusts, as described in the Book of Joel). The prophecies against Israel were to the rich who had exploited the poor and vulnerable. Read verse 15. It is they who will be judged more severely, for living in their “ivory palaces”, whilst others were starving and destitute. Chapter 4 continues the charges against Israel, and their pending judgment. Chapter 5 Read verses 21-24 – What God wants from a nation Not religion – but true love for God. God is not interested in people who say and do what they think God wants to hear. He is not impressed with church attendance and piety that is not from the heart. He hated the sacrifice of animals in his name, when they were not accompanied by repentance. Being sorry is not enough for God, we ought to show that we want to start behaving in the right and just way. Saying we feel for the poor and vulnerable is not enough. If we have more than they do, then we should be helping them. “Let justice roll on like a river” (v24) Read James 1v26&27 We cannot sit in our “ivory palaces” and comment on the sins of Israel, without first looking at ourselves to see if we pass the test! Chapter 6 Read verses 4-7: To really hammer home the point, Amos continues in the same vein. Fine lotions and fatted calves were more important to Israel than their coming ruin. If we have a nicely decorated house which is warm in the winter, and we have enough food for each day, then we are better off than at least 815 million people, which is 10% of the world’s population. Over half a million people in Britain are now reliant on food banks, just in Britain. God will judge our land for turning away from him, but also by the way we treat the poor and vulnerable. Praise God, most of the food banks and many charities helping the homeless are run by churches. As Christians, it is something we should all be involved in, whether it be giving money, praying, or helping at the point of need. Chapters 7-9 Amos has 5 visions: 7v1-3 Locusts … Judgment pronounced but intercession can make a change! 7v4-5 Fire … Again, intercession can cause God to relent! 7v7-9 Plumb-line … Israel didn’t measure up. Amos could not intercede. 8v1-3 Basket of ripe fruit … Israel was ripe for judgment. 9v1 and 8-10 The ruined Temple … A shaking and a shifting of the people. Amos reminds them that God has the power and reason to bring judgment. Page 6 Israel’s promised restoration – Read verses 11-15 – THE AGE OF GRACE “I will restore through the remnant of David’s line” (i.e. through Jesus and a New Covenant). There will be “new wine” and blessings – the Holy Spirit, for one. There will be restoration for the exiles extended to all people. The day of grace will bring the eventual restoration of Israel’s land ready for the coming of the New Jerusalem. God may very often be disappointed in his people, but he never breaks his side of the covenant. He is forever faithful, waiting to pour out his grace on all who turn to him. _____________________________________________________________________ THE BOOK OF? Why is there no grace shown to Edom? It doesn’t seem to fit with the underlying grace of God in the rest of the Books of the Prophets.. Esau was a forefather of Amalek who was a thorn in the Israelites side for hundreds of years to come (Amalekites). 2. Edom/Esau’s sin was pride in their land and hatred of God’s people – but it was God who created all that it was. (v3&4) 3. When Moses was leading the Israelites out of the wilderness, he asked for permission to cross the land of Edom but was denied. In fact the Edomites ‘ Page 7. ___________________________________________________________________ JONAH This is an unusual book of prophesy in that the book is all about Jonah, rather than what he said. Another striking thing, which reflects God’s grace, is that despite Jonah’s reluctance, everybody he meets (that is, the sailors and the Ninevites) all come to know the Lord God! There are many parallels with our Christian walk with God; not least that God has a plan for each one of us and he is in control of all the circumstances in our lives. Read verses 1-3: Jonah’s Commission God said, “Go to Nineveh and preach against their wickedness”. We don’t know how God spoke to Jonah, but it must have been very real, judging by his reaction. The problem was that usually Jonah was asked to prophesy to Israel – this time it was different. Nineveh was some 700 miles east of Israel and it was not a very nice place. Q. How would you feel if God asked you to go to a communist country or an Islamic country and preach the gospel? Would you be any different? Nineveh At that time Nineveh was an established and significant city, part of the powerful Assyrian Empire which later became a threat to Israel. Nineveh was the very opposite of Israel – its goal was power and wealth at any cost. It was morally and spiritually corrupt (The description in Nahum 3v1-7 makes disturbing reading). It would seem that it was ripe for God’s judgment, and yet ... in his grace, God was willing to give the people of Nineveh an opportunity to repent and be saved. Amazingly, God is doing similar things today in places like Iran, which apparently has ‘the worlds fastest growing underground church’. (Gateway News). Read chapter 1v1&2 So, Jonah rejected God’s commission and ran away! It probably seemed the easiest option at the time! He Page 8 boarded a ship to Tarshish which was certainly in the opposite direction to Nineveh – Jonah could well have got as far as Spain and the Atlantic Ocean in order to encounter a fish as big as a whale. Jonah chose to take himself outside of God’s will. Q. Can we ever really escape the presence of the Lord? (See Psalm 139) Stormy waters In verses 4-7: Following Jonah’s disobedience, we see that the Lord sent a storm, not out of anger or retribution, but in order to bring Jonah to the place where he wanted him. Q. Can you think of any times in your life when God has used a time of difficulty to move you into his will, or to keep you from going out of his will? Everyone’s life was in danger and Jonah knew it was his fault. He tried to ‘opt out’ of the problem by hiding and sleeping. At this point the captain seems to have more faith and sense than Jonah! Q. How was Jonah feeling? How do we feel if we have not been a good witness for The Lord? “They cast lots to find out who was responsible”. Proverbs 16v33 says: The lot is cast ... but its every decision is from the Lord. When the lot fell on Jonah they began to interrogate him. Read verses 8-12: Who was responsible? The sailors recognised a greater power was responsible for the storm. In response to their questions Jonah answered (in a general sense) that he was a Hebrew and a worshipper of God, and that he was running away from God. He took the blame. Q. What effect did Jonah’s answer have on the sailors (v.10)? Jonah’s sacrifice in leaving the ship brought them deliverance. The actions of the sailors (v.15) resulted in a calm sea and an easy passage for them, which resulted in their acknowledgement of God’s Sovereignty, as they sacrificed and made vows to him. Read Matthew 12v40,41 Jesus referred to Jonah, and this shows us several things · This was a true event, not a made-up story · This event portrayed the work of Jesus · Jonah sacrificed his life for the sailors, Jesus also sacrificed his life for all mankind · Jonah possibly died in the great fish (Jonah 2v2) and was resurrected. Jesus was also in the realm of the dead for 3 days before he was resurrected. However, that’s where the parallel with Jesus ends – Jonah still remained stubborn, reluctant and badtempered! It’s ironic that God still worked through Jonah, even in his disobedience! Read Chapter 3 From rebellion to obedience Jonah’s near-death experience had obviously changed him – we can assume he was repentant because he had a real change of heart and direction (literally!). This time God said, “Go” and Jonah obeyed (v3). God won’t call us all to do what Jonah had to do. This was Jonah’s specific calling – to be God’s prophet. God calls us all to different ministries, but the principle of obedience is the same. God will bless our work if we are obedient and acknowledge his sovereignty, rather than wanting the control to do it our way. Q. If God knew what Jonah was like, why do you think God chose him, rather than another prophet? Aren’t you pleased that God, in his grace, gives us second chances to get things right? It is significant that the people’s hearts were ready to hear the Word of the Lord, and on hearing the word they repented and believed (what an amazing miracle - nearly lost through disobedience!). Possibly God had already prepared their hearts in a dream (as is the case when many Muslims accept Christ as Saviour). Even the King believed and prayed for God’s deliverance; almost certain judgment was averted (v10). God was in control of the Ninevites too and had prepared them to hear Jonah’s message. Q. Is there someone God has prepared who is ready to hear his message through you maybe? Chapter 4 Read chapter 3v10 and 4v1: What amazing grace!! Page 9 It seemed wrong to Jonah that Israel’s enemies, who were wicked people (until their repentance) escaped God’s judgment. In fact, it made him absolutely furious! But would we feel the same? It was ironic that God had turned away his own anger, but Jonah’s was building up inside him! Q. How does anger stop us from seeing things as God sees them? Sometimes our world-view (the ideas we form as we grow up) doesn’t synchronise with God’s view and the teaching in the Bible. Jonah knew what God was like – gracious, compassionate, slow to anger, abounding in love – and he didn’t like it! Jonah was stuck in the mindset that evil-doers should be punished and not given another chance. He didn’t seem to (or didn’t want to) understand how repentance could change the past or bring forgiveness. He couldn’t cope with his concept of God and justice being challenged, and so he wanted to die (a bit like a petulant child!). Like many people, he believed we should be judged according to what we have done. But God challenged this view, “Is it right for you to be so angry?” Q. What does this passage tell us about God’s grace – to Nineveh, and to Jonah, and to each of us? Read verses 5&6: Jonah runs away! Once again, Jonah removes himself physically, this time to the east of Nineveh. Now, God could have punished Jonah because of his misplaced anger. Instead he deliberately showed Jonah kindness by providing a plant to shelter him. Why? Was it because God wanted Jonah to experience the same grace that he had shown to the Ninevites? Was it because this story is just as much about Jonah and individuals as it is about Nineveh and wicked cities? Or was it because God wants us to understand more about his grace? Read verses 6-9: In a way God turned the tables on Jonah. If Jonah didn’t think grace was right for the Ninevites, God would take away his grace (symbolically) from Jonah. So he caused the plant to die, resulting in “punishment” for Jonah; and so Jonah suffered for his rebellion. Once again Jonah was in despair, “let me die” he said. Q. We can see that Jonah was being irrational – but can we be the same? Contrasts A contrast is drawn between: · the loss of a plant and the loss of a group of people; · the anger of Jonah and the love of God; · Jonah’s selfish desire for comfort and God’s altruistic love; · Jonah’s stubborn ideas and God’s grace; · Jonah’s ideas about who and what should live or die, and God’s love for all that he created. The phrase “those who don’t know their right hand from their left” is telling us that the Ninevites were ignorant of the things of God and shouldn’t be judged without being given that chance to hear and repent. It’s the same message today. And we should not be judging nations who don’t follow God’s ways; rather, we should be praying for them, giving to missions, and seeking to tell them about Jesus – whatever God is calling us to do. Lord, help us to share the grace you have shown us with others, so that they may come to know you. Amen _____________________________________________________________________ MIC: Page 10 Micah 1v7: Idols, temple bribes, icons and political prostitution were common. Micah 2v2: The people were guilty of coveting, and stealing. Their society was rotten from the top down – politically, religiously and socially. Micah 6v6-8: God did not want their sacrifices (including implied child sacrifices – v7). He wanted them to show love and mercy. Q. This is held to be one of the most significant verses in the Bible. Why? Read Micah 6v5: Micah asks the people to remember two things. Balaam: (Numbers chapter 22-24) Balaam was a prophet (a false prophet – a wanderer who made money from foretelling the future). He cursed military enemies and pronounced blessing on the one who paid him. Using a donkey and an angel God spoke to him so that he could only say the words God gave him, which were, that he (Balaam) could not curse Israel because God had promised to bless them. The apostle Peter also uses Balaam as an example of a false teacher. So Micah says: Remember not to listen to false prophets. God has determined blessing for Israel if they keep covenant with him. Remember, that also means that God has determined blessing for you! Despite judgment, God still wants to bless.. Remember, God has determined our inheritance through Jesus Christ for eternity. Q. How often do we take time out to remember what God has done for us and the promises he has given us? Why is it good to remind ourselves? THE PRESENT Read Micah 6v10-16 and 7v2: Judgment was imminent because the people would not repent. Judgment ultimately came on Israel (within 20 years of Micah’s prophecy, in 722 by Sargon 11) when the Assyrian army took Samaria and carried the people of Israel away as exiles. A further judgment came on Judah about a hundred years later when the Babylonians took Judah captive. Read Micah 7v7: But Israel was not without hope. Micah, Isaiah and those who were upright were the remnant who knew God would hear them and bring them through. Q. When things around are bad, is our response the same as Micah’s? Verse 7 is a good verse to learn! THE FUTURE The remnant would return. There would be a new start. Read Micah 4v10: The Judean exiles to Babylon will return and the Temple would be rebuilt, making possible sacrifice and renewal of vows (see Ezra). The Word of the Lord (Torah) was found and read out to all the people by Ezra. Read Micah 7v11: Despite opposition, there would be rebuilding and restoration (see Nehemiah). First the walls around the City of Jerusalem, and then housing. Read Micah 5v2: 700 years before the coming of Jesus Christ the Saviour/Messiah is promised, and will be born in Bethlehem (the burial place of Jacob/Israel’s beloved wife Rachel – Gen 48v7). This prophecy was confirmed by Matthew in Chapter2v6. Read Micah 4v1-5: In the Last Days God will establish His reign. “The mountain of the Lord (v1) could be interpreted literally or figuratively. Whichever you choose – it is BIG! It is a place of worship for people from all around the world, and therefore inclusive (v2). It is a place where God will speak and where he will judge (v2&3). It will be a place of Peace. Each person will have an inheritance (v4). And it will be for all who walk in the name of the Lord – for ETERNITY. Page 11 Q. Why do we shy away from death when we are promised such a wonderful inheritance? Read Micah 7v18-20: Although God in his infinite justice would bring judgment, he would also forgive and restore the remnant that trust in him. God is faithful throughout the ages to those who are in covenant relationship with him (Abraham, Jacob, Moses etc. and US!). There is grace and hope oozing out of this passage! · Our God is a mighty God like no other · He is a pardoning God, there is hope for the sinner · He delights to show us mercy, grace, forgiveness · He has compassion on us · He gets rid of our sin completely · He is a faithful God throughout the ages We can learn and remember from the past, we can trust God in the present, and we can know we have a secure future in God. NAHUM In a way, the Book of Nahum is a short and disappointing sequel to the Book of Jonah. Because – like Jonah – Nahum prophesies to Assyria. At least 100 or more years before, Jonah had taken God’s word to the Ninevites and they had repented of their sin; and avoided God’s judgment. But the successive generations had gone away from God, and Nahum’s message foretold the fall of Nineveh to the Babylonians (which history tells us, occurred in 612BC). Nahum’s message is about two things: · Bad news – Judgment on Nineveh and Assyria · Good news – Blessing and relief for Israel Nineveh The first mention of Nineveh is in Genesis 10v6-11 when it was called Nimrod. Today it is known as Mosul in Iraq. It has often been a troublesome place. In Nahum’s time we get a picture of the severity of their sin. Read chapter 3v19 Their leaders were unbearably cruel! Asshur-banipal put a dog-chain through the jaw of a defeated king, and made him live in a dog-kennel, like an animal. He also had his defeated foes hanging from the city walls. Chapter 3 is full of Assyria’s crimes: lies, theft, slavery, exploitation, and war against Israel. Read chapter 1v3 God is slow to anger, but his ultimate judgments are always righteous and just. Read chapter 1v8 Nahum predicted a flood would be the end of Nineveh which is what actually happened. · The bad news – For Assyria, was that God defeated them once and for all. The flood meant that the Babylonian army were able to defeat the Assyrians. They engulfed the Assyrian Empire to enlarge their own expanding Empire. Assyria was no more (as prophesied by Daniel). · The Good News – For the Israelites – and for us, is that God always defeats sin and brings freedom and salvation to those who put their trust in him. Read chapter 1v15 A note of hope to finish – the coming of the Messiah – GOD’S ULTIMATE GRACE - was on the horizon. Page 12 HABAKKUK The Babylonians were God’s way of dealing with the people of Nineveh and the Assyrians. But … now Habakkuk is telling us that the Babylonians will also be God’s instrument to bring his judgment on Judah. The ten northern tribes of Israel had already been scattered by the Assyrians and now, approximately 100 years later, Judah would be exiled by the Babylonians. But, Habakkuk has two questions which he asks in the form of complaints. First Complaint – Read chapter 1v2-4 Here Habakkuk is crying out to God for his people, who have allowed their society to become violent and unjust; and his question/complaint is, “How long before you do something, Lord?” How often do we say these words? Maybe, his answer isn’t always what we want to hear!). , God’s Answer – Read chapter 2v3&4 God has the answer to our prayer already determined for a set time. So, we are to wait for it with a sense of trust and assurance, by faith. For it will surely come. Read chapter 3v2 Page 13 Habakkuk’s Response – Read chapter 3v2 Habakkuk stood in awe of God’s deeds, he was determined to trust God. It is an effort of our will. Read chapter 3v17-19 What amazing and encouraging words.. _____________________________________________________________________ ZEPHANIAH The name Zephaniah means “The Lord hides” – in the sense of “protects”. Zephaniah prophesied after the Fall of Samaria (Israel) and before the Fall of Judah in the south. But although Zephaniah’s message is for Judah and the surrounding nations, like most prophecy it also speaks to us today in these uncertain times. Key verse: Chapter 2v3 – Seek the Lord ... A very apt verse for the times in which we find ourselves! Who would be the channel of God’s judgment? The Babylonians were expanding their empire. God would use them to bring judgment, both on Judah and also on the surrounding countries who had helped in Judah’s downfall. As we saw last week, Habakkuk dealt with the whole issue of why God would use something bad (Babylon) to bring about ultimate good for his people. CHAPTER 1 Why Judgment? What was Judah guilty of? Read chapter 1 verses 4-6: There is quite a list:Baal worship (v4), idolatrous priests (v4), astrology (v5), worship of Molek (the Philistine god) (v5), superstitions (v9), rejection of God, no communion with God. Read chapter 1v12: They were complacent. Like most people in our country today they thought the Lord would do nothing, either good or bad. They didn’t believe that judgment would come. They were spiritually dead or at least, very much asleep! Perhaps a catastrophic event would get their attention. But we have already seen, in the Book of Joel, that even a plague of locusts didn’t wake them up to what God was saying to the nation. Q. Why is it easy to become complacent? We start to take things for granted, we have plenty and do not consider others less well off. Some even begin to exploit the foreigner or the poor and vulnerable. (Had any scam phone calls lately?!) Once we are complacent, we don’t even react to the wake up calls that God gives us. For example – a worldwide pestilence/ virus. Read chapter 2v3. Chapter 2 -. What does God call them in verse 1? This is the opposite of glorious – they were meant to reflect God’s glory to the world. The Surrounding countries:. Page 14 CHAPTER 3 – Grace and Hope for Jerusalem? JERUSALEM – Read chapter 3v1-5: Even in the place where God’s presence filled the Holy of Holies in the Temple the people were guilty and the unrighteous “knew no shame”. They were like lions and wolves exploiting the people. Even the prophets and priests were unholy (v4). Q. Should these verses make us, as a country, feel uncomfortable? What would you define as foreign gods? Where was there hope for Judah, and ultimately for us? Our hope is in being humble and not proud. (Zephaniah 2v3) Even when we question God, we are challenging his authority and therefore not giving him the rightful place in our thoughts – not showing absolute trust in Him. Our hope is in God’s covenant-based promises. For Judah these were based in the covenant that God made with Abraham to bless his seed. God would always save a remnant so that he would keep his promise: Abraham, Isaac, Jacob, Joseph, Moses, Judah, David, Mary and Joseph – all trusted in the promises of God which were fulfilled in Jesus Christ. Read chapter 3v12&13. God’s desire is to bless, because he delights in those who trust in him. When we humbly follow the Lord, we don’t avoid his judgment on our country, but we will continue to know his blessing and grace in our lives. Even in exile in Babylon, many of the Jewish believers were not only blessed by God but, used by God in national affairs. God’s ways are far higher than our ways. Read chapter 3v14-20 Here God offers a promise of restoration, and reaffirms his love for us (v17). He promised a time of help and deliverance to the soon-to-be exiles. Those who walked with him would see deliverance and renewal of the covenant when they eventually returned to Jerusalem (Ezra and Nehemiah).? As Christians, we can all intervene in prayer. · We can pray for healing · We can pray for our families and friends and the wider circle of people need to heed the wake-up call. · We can be repentant and pray that others might be humble and repentant also · We can pray for revival and renewal in our land. GOD HEARS AND ANSWERS PRAYER HAGGAI The prophecies of Habakkuk and Zephaniah had come true, just as they had predicted: the Babylonians had taken Judah and exiled its people – and it was a time of great sorrow for God’s people. Read Psalm 137v1-4 o They no longer had a king (The signet ring, referred to in Haggai, was God’s seal of authority on the king’s leadership) Jeremiah 22v24-27 o They no longer had their land o They no longer had the Temple where they could offer worship and sacrifice (it had been sacked by the Babylonians). They had been in exile for 70 years when, once again, God used another world power to work out his purpose. Page 15 King Cyrus, the King of Persia (for the Persian Empire had now superseded the Babylonian Empire) gave them permission to return to their homeland. Read Ezra 1v1&2 (The Bible is factual - You can still see the actual clay cylinder today; it is on display in the British Museum) or look it up on the internet. And so, the Jews returned to their homeland. But Jerusalem was now under a Persian governor. The Jews would no longer have their own king. However, the amazing thing was that, under the Persians, they not only had permission to return, but King Cyrus gave them all the Temple goods that had been confiscated 70 years before. This was nothing short of a miracle.. The people were building their own very fine (v3) houses first. Read Haggai 1v1&2 Haggai’s first sermon The charges against the people: Read verses 3-6 The people lived in “paneled houses” (i.e. luxury homes. Amos called them ivory palaces) while God’s House was still unbuilt. They had finished the foundations and built a temporary altar, but that was all. Their priorities were wrong – they were putting themselves first. And yet, they were still never satisfied (v6). Isn’t this true of us today? We do not appreciate all that we have until it is in short supply. Read verses 7-11 God said that the drought was a result of their failure to put God first. They had their priorities all wrong and were twice told, “Give careful thought to your ways.” Read verses 12-15 The people heard God (really listened to what he had to say); they determined to obey God; and then God stirred them up spiritually; so that, within 3 weeks they were rebuilding the Temple. Q. If this is a parable where the Temple is God’s church today, how can we apply this to ourselves? Think about the period of isolation during the Coronovirus – We are currently in “exile” – isolated from church and our Christian brothers and sisters. What are our priorities? Are we using this time to reflect on what God is saying to us and how he might use us to “build his Temple”? Haggai’s second sermon Encouragement new can be in our hearts and not just in one place. See Heb.3v6 In difficult times, God says to us, “Do not be afraid” (v5) Fear is the opposite of faith, so we overcome fear by exercising our faith, staying close to God, worshipping, reading Scripture, and not being negative. Haggai’s Third Sermon “Give careful thought” – a promise to bless.. They were merely ‘looking good on the outside’ but rotten on the inside. Page 16? (Think about the foundation of your faith – what can God build on?) Haggai’s Fourth Sermon The Lord’s signet ring. Signet rings were used to denote authority and honour when they were bestowed upon a person. By calling Zerubbabel his ‘signet ring‘ God was giving Zerubbabel and Israel a hope of what was yet to come in Jesus – the one who would share all of God’s authority and honour. Everlasting hope is found in Christ alone and through the Grace of God. In his grace, God had not finished with the Israelites. And he is never finished with us – I’m so thankful for that! Lord, As part of your Living Temple, help us to reflect on your grace and glory. Help us not to be afraid. And help us to be willing to be a part of building your church. Thank you for your great love. Amen ZECHARIAH – PART 1 Zechariah was contemporary with Haggai, at a time when the Israelites were returning to Jerusalem after having been exiled in Babylon. It is the longest book of the prophets. The first part centres around 8 visions which came to the prophet in the space of one night, and which speak to the newly returned exiles. To keep it simple we will look at: What Zechariah saw, what was the likely interpretation of the vision, and what it could mean for us today THE FIRST VISION – Read chapter 1v8-16 What did Zechariah see? He saw a man on a red horse, with 3 other horses too; they were in a steep valley with myrtle trees and an angel was the mediator. What did it mean? The Jews had returned home but were not free (trapped as in a ravine) because they were still under Persian governors. The myrtle tree symbolises the beauty and presence of God with them. The horsemen probably represent God’s angelic messengers – God’s eyes on the earth. What could it mean for us? When we are in a valley, feeling hemmed in on all sides, we can remember that God’s presence is with us, and his angels are watching over us. God sees all, he knows all that we are going through, and he has promised never to leave us. Where do you see God’s grace in this? THE SECOND VISION – Read chapter 1v18-21 What did Zechariah see? He saw four horns and four craftsmen. What did it mean? In the Bible the number 4 signifies north, east, south and west – the four corners of the earth. The horns, in Page 17 the Bible, are symbolic of powers and strength. So, the four horns are Israel’s enemies and God was angry with them (ch 1v15). These horns were to be cut off and reshaped by the four craftsmen. God would do it – he is the one who is powerful above all others. He is in control. What could it mean for us? Jesus was the craftsman – he made things out of wood. But, more than that, he can take a person with a damaged life and craft them into something beautiful for God. He even has the power to change countries. He is still the God who is in control of world powers and events over which we have no control. Do you find hope in this thought? THIRD VISION – Read chapter 2v1-5 What did Zechariah see? He saw a man with a tape measure, and a city without walls (which is odd, because it begs the question, “What was he measuring?”) What did it mean? The people had built their own homes but had not rebuilt the Temple. The measure was a symbol of building. They were to rebuild both the Temple and the City walls and then the people would grow and prosper. God had directed Nehemiah to build the walls of the City, so that it could protect itself from enemies. So why were there no walls in the vision? Firstly it was to get the people to see that ultimately their trust should be in God and not in walls. Secondly, the vision looked forward to the day when God’s kingdom would not need walls – there would be no boundaries or hindrance to people coming in – as they put their trust in Jesus. What could it mean for us? We are also asked to build God’s kingdom without walls. God has promised to protect us – we don’t need “walls”. This is God’s promise in verse 5: I will be a wall of fire around it – I will be its glory within. We can know God’s fire (the Holy Spirit) protecting us and bringing glory to himself. the burning fire. And then he re-clothed Joshua in clean garments. This was a new start for the Jews. Remember, Israel had not been able to atone for their sins for over 70 years in exile. They had no Temple, no Ark, no Holy place where the High priest could enter once a year to make atonement. They needed rescue and renewal. What could it mean for us? Joshua was a “type” of Jesus – both names mean Saviour. Jesus, our Great High priest, put on our “dirty garment”, when he took our sin upon himself on the Cross. He made himself unclean so that he deserved our punishment in the flames. But God rescued him and raised him up and exalted him to the highest place, so that he is above all things. Just as the work of Joshua was to bring atonement for Israel – and a new start – so, the work of Jesus brings atonement for us - a good point to pause and praise our Saviour! FIFTH VISION – Read chapter 4v2-4 What did Zechariah see? He saw a 7 branched lampstand – the kind used in the Tabernacle – a reservoir of oil, and two olive trees. What did it mean? Light always characterises the presence of God – I guess that is the reason why they light candles in some churches. More importantly, when God is present there is light. The reservoir of oil speaks of the Holy Spirit, perpetually sustaining the light. Zerubbabel and Joshua (symbolising the ministry of the King and the High priest) were the two olive trees. The promise to the people was that God’s presence, his light and his Spirit, was with them. What could it mean for us? Jesus is our King, and our High Priest, and the one who sends the light of the Holy Spirit into our lives. The promise in v 6 is for us too as we seek to follow our Lord. “Not by might, nor by power, but by my Spirit” says the Lord. Page 18 SIXTH VISION – Read chapter 5v1-4 What did Zechariah see? He saw a flying scroll, 30’x15’ – the exact dimensions of the Holy place in the tabernacle. On one side was written ‘Theft’, and on the other side ‘Lies’. What did it mean? The measure of the Holy Tabernacle represented the standard of absolute purity with which God measures sin. The sin of the people was represented by the two words – Theft and Lies. These two words can be said to sum up the ten commandments What could it mean for us? ‘Theft’ equates to us not giving God his due reverence and service. ‘Lies’ equates to hidden sin that we do not admit to. Praise the Lord that Jesus has taken away our sin, but it is always worth us considering what we might be holding back from God. See Romans 3v23 SEVENTH VISION – Read chapter 5v5-11 What did Zechariah see? He saw a woman (under a heavy lead cover) in a measuring basket, which was carried in the sky with storks wings. What did it mean? The measuring basket meant that the people had been weighed and found wanting. The woman was the personification of Israel’s sin (so bad it had to be held down). The destination was Babylon, synonymous with sin in the Bible. Babylon, mentioned over 350 times in the Bible, begins with Babel in Genesis and ends in Revelation where it is representative of the Antichrist. What could it mean for us? When we ask God to forgive us, he will literally take away our sin. It has been dealt with on the Cross. Babylon’s sin will be dealt with at the End Times – see Revelation chapters 17&18. EIGHTH VISION – Read chapter 6v1-5 What did Zechariah see? He saw 4 chariots, drawn by different coloured horses, coming between two mountains of bronze. What did it mean? It is difficult to say. It may not have happened yet – possibly apocalyptic and referring to the valley of Armageddon. What could it mean for us? It reminds me that God is Sovereign over all the earth. One day there will be judgment tempered with grace. Our hope is in the Branch (Read verse 12) – Jesus our Saviour, who will build his Living Temple and will have the ultimate victory. Praise his name! ZECHARIAH – PART 2 Grace and Hope in Zechariah – Part 2 Some two years after Zechariah’s visions, God gives him important messages for Israel’s future. They refer to a time after the rebuilding of the Temple. They are prophecies which refer to Jesus the Messiah and The End Times. Chapter 7 J. Pelican once wrote: “Tradition is the living faith of the dead – but traditionalism is the dead faith of the living.” That is: going through the outward motions with no engagement of our spirit. In this chapter, the people ask about mourning and fasting (v3), but God replies that he is not looking for outward show (traditionalism), but for love and justice which come from the heart. Read verses 8-10 Q. This is a timeless request, how can we apply it to ourselves? How does God use the difficult times to test the reality of our faith? Page 19 Chapter 8 This chapter deals with the restoration of Jerusalem, when God would again be with his people. Read verse 3. Those who were scattered by their enemies would return (and be known as the remnant). Read verses 8-12. At a time when Israel had been tested (no wages, no safety), God promises restoration and renewal and bountiful crops. Notice, verse 8 says, “I will bring them back”. At the time of writing, we are being tested by the Covoid-9 virus. This is a good time to ask the Lord to bring back all those who have backslidden from the faith. Chapter 9 This chapter clearly speaks of Jesus. Read verses 9&10. This is a prophecy of how Jesus did in fact enter Jerusalem on a foal of a donkey (Palm Sunday) – and Matthew made a point of recording it in his gospel – as if to say, “Look, this prophecy is coming true.” Matt 21v5. And at that point many Jews recognised Jesus as their Messiah. And by doing the impossible (riding on an unbroken colt), Jesus was also showing his Lordship over all creation. Q. Does Jesus have Lordship over the Covoid-9 virus? Chapter 10 Read verses 4-9. These verses are about God giving his people strength and hope. Jesus is the cornerstone of verse 4, meaning: · He is the stone that provides the standard for the building and keeps it strong. When we trust in him we have a sure foundation that will never let us go. · Just as the cornerstone joins walls going off at two different angles, Jesus joins together believers from the Old Covenant and the New. · The cornerstone speaks of immovable strength and will stay forever. Q. When we cannot literally attend church, what is the cornerstone that upholds our faith? Chapter 11 Read verses 7-14. Zechariah speaks in the first person, as if he is doing the things, but he is representative of Jesus. And so Zechariah draws a picture of the good shepherd who got rid of bad shepherds. This good shepherd had two staves (shepherd’s rods) called “Favour” and “Union”. The breaking of these rods became symbolic: · The rod called Favour was broken, because when Jesus came, Israel’s favoured position came to an end. The act of Judas (see verse 12) was representative of the value that the Jews put on Jesus’ life – so, not surprising that they lost favour with God. · The rod called Union was broken to show that when Jesus came the Jews would no longer automatically be in union with God – they, along with the gentiles, must come to Him for salvation through Jesus Christ. There is sadness in these verses – and I am sure, in God’s heart, over the lack of faith by his chosen people whom he loves so much. Chapter 12 Read verses 10&11. Possibly, one of the saddest verses in the Bible -There will come a day when the Jews will realise the responsibility they bear for “the one they pierced”, and they will be devastated and grieve bitterly in deep mourning and repentance. But they will find on that day (Verse 3 suggests Armageddon) that God’s grace is still there for them: “I will pour out a Spirit of Grace”. Oh, the overwhelming, never ending grace of God – how wonderful is He? There is no-one who cannot receive God’s grace if they turn to him in repentance. His grace never runs out, because God never ceases to love us. Chapter 13 Read verse 1. It is not begrudging grace, but a FOUNTAIN of grace pouring out with forgiveness and cleansing. On that Day, the Jews will understand the full meaning of the Temple and the blood sacrifices, when they see how they are fulfilled in Jesus. They will be sanctified and purified as they re-covenant themselves to God (see verse 9). Page 20 We already have this knowledge and understanding, and know God’s grace, as he works in our lives – Praise His Name. Chapter 14 The Last Days: · The lord will fight against the nations (v3) · He will stand on the Mount of Olives which will split in two, to form a valley (v4) · It will be an everlasting day (v7) suggesting the end of the world as we know it and the beginning of a new era. · Living water will flow continuously out of Jerusalem (v8) · The LORD will be king over the whole earth (v9) · Jerusalem will be inhabited and secure (v10,11) · A terrible fate will afflict those who rebel against God (suggestive of a nuclear bomb’s effect) (v12-14) · All of Jerusalem and Judah will be Holy to the Lord God Almighty (v the temple, but all things and all people in this New Living Temple. Every aspect of life will be holy and consecrated to the Lord. God has great plans for us! He longs to make us holy and he longs to pour his grace into our lives. What a wonderful Saviour we have! MALACHI It would be nice to finish the last Book in the Old Testament on a “high”, in the knowledge that God’s people were committed to bringing honour to the Lord. But the Bible is not a mere story book – it’s an account of man’s condition. More than anything it is true and it is real. Sadly, despite the rebuilding of the Temple and many ‘second chances’, God’s people were still found wanting. What is amazing is that God doesn’t wash his hands of them, but – in his grace - he gives them a way forward. He challenges them to think about 6 issues. And he promises another Elijah (John the Baptist) who will proclaim the coming of Jesus. · First Issue – God’s Love - Read Chapter 1v1-5 When God declared his love, the people replied, “How have you loved us?” Their answer was unkind. But we all know people today who might say that. Jacob represented the special covenant that God had made with his people, calling Jacob by the name of Israel as the head of the Israelites. Esau had already rejected his birthright (his inheritance), he had no desire to follow God’s ways. When we become sons and daughters of God, we are born again into God’s family and we are given a birthright – the right to share in his inheritance, like Jacob. God wants to show us his love and give us the assurance of eternal life with him. Note: When verse 3 uses the word ‘hate’ this is Hebrew hyperbole, God does not literally hate anyone – although he might hate what they do. It is like this: Just because my husband has a unique relationship with me in marriage and loves me, it does not follow that he hates all the other women in the church! God had a special love for Israel – but he still loves everyone. God is Love. He can be nothing other than love. If, like the Israelites, we are not feeling his love we need to come back to the covenant he made with us through the sacrifice of Jesus – a Covenant made in grace and love. Q. It begs the question – How much do we love God? · Second Issue – Respect for God’s Name – Read chapter 1v6-11 The lack of blessing was not, as we have seen, a lack of love on God’s part – but a lack of respect on the part of Page 21 God’s people. They were proud and contemptuous, even to the point of offering impure, maimed sacrifices. They were carrying out rituals, going through the motions – but without love and respect for the Lord. Verse 11 leads us to the time when Jesus would make it possible for all men to find salvation in Him – and ultimately when every knee will bow before the Lord. This should lead us to consider how we define our worship of God? Q. Do we give God our very best, or second best? Are we always honouring to His Holy Name? What is our purpose behind going to a worship service? · Third Issue – Faithfulness in relationships Read chapter 2v10-16 Basically – and it is also true today in many cases – the people could not see what their attitude to sex and marriage had to do with their relationship with God. Many types of relationships in Britain today are common practice and even lawful – but they remain un-Biblical. Why does the inconsistency exist? Well, God created us to be in a three-fold relationship with Himself - man, wife and The Lord – a perfect environment for the raising of children in a godly environment. How should we reflect on this? If we have not obeyed the Lord’s word in these things, is it possible to make things right with God again? God, in his grace, is willing to forgive when we come in repentance to him. 1 John 1v9 Q. If God wants us to be like him, how does that impact on our faithfulness in our relationships? Are we kind and faithful in the way that God is kind and faithful to us? · Fourth Issue – Testing God’s patience - Read chapter 2v17 to 3v4 How had the Israelites tested God’s patience? They were saying that wrong is right, sin is not sin at all, anything goes as long as no-one gets hurt (sound familiar?). What could God do for people who had effectively made up their own religion? Would he wash his hands of them? Would he bring punishment on them? How could he do that if he was a God of love and faithfulness? What he did was to provide them with a Saviour – a new chance. But not only for them, but for the whole world. “I will send a messenger” and “The Lord will come to his Temple” (3v1) And that is exactly what Jesus did, as Jesus revealed himself in the Temple. Read Luke 2v25-32; Luke 2v41-52; John 7v28-29; John 8v42-47 The question posed here is, “Who will stand at the day of his coming?” (3v2&3) Those who bring offerings in righteousness will be the ones who stand – those who have considered the previous verses in Malachi and are following the Lord in true love and worship – those who are made righteous through the blood of Jesus. Aren’t we so blessed to be living in this Age of Grace? · Fifth Issue – Tithing, giving freely to God – Read Malachi 3v6-12 The people asked God, “How do we return?” (v7). This passage is about tithing, but it is also about giving of our all to God – not holding anything back, because in doing so we would be robbing God. God makes 2 promises in this passage: · If you return to me I will return to you (v7) · If you are faithful in tithing, I will pour out blessings on you Tithes were, and are, important because. They are a Biblical principle (Deut 14v22-29, 2 Cor. 8v7-12, Matt 23v23) They are to be part of worship, in fellowship with each other in God’s name. They are significant of obedience and sacrifice. They are for the support and encouragement of the fellowship. They are for the support of God’s workers. They are for the storehouse (v10), now considered to be the local church. They are a symbol of our attitude towards giving to God. · Sixth Issue - God’s Desire Read chapter 3v17 and 4v2 We see the loving, father-heart of God. Israel will be spared – and because we have been grafted in – we can be spared too. With the coming of Jesus, all people could have the opportunity for forgiveness and reconciliation and healing from the God who loves them. In his grace God was to provide a way forward. Chapter 1v9 tells us to plead with God for his grace. Q. Should this not be our prayer for the world today in the grip of Coronovirus? God alone is the answer to our every need and he is waiting to pour out his grace on us. Page 23
https://issuu.com/ashingdonelim/docs/20200715_graceandhope_minorprophets
CC-MAIN-2022-27
refinedweb
11,406
80.72
In the previous article, we have discussed about C++11 : Start thread by member function with arguments. Let us learn how to Multithreading – Part 8 in C++ Program. Multithreading – Part 8: std::future, std::promise and Returning values from Thread The std :: future item can be used with asych, std :: packaged_task and std :: promise. By considering this article we will focus on using std :: future with std ::promise object . Many times we encounter a situation where we want the cord to return the result. Now the question is how can you do this? Let’s take an example, Suppose in our app we created a thread that will compress a given folder and we want this thread to retrieve a new zip file name and its size as a result. Now to do this we have 2 ways, Old Way : Share data among threads using pointer : Transfer the pointer to a new thread and this thread will set the data on it. Until there is a big thread keep waiting using the flexibility. When a new thread sets the data and signs the status quo, a large cable will wake up and retrieve the data from that identifier. To make it easier we use variable, mutex and pointer i.e. 3 items to hold the returned value. Now let’s say we want this thread to bring back 3 different values at a different time when the problem will be more severe. Could there be a simple solution to recover value from threads. The answer is yes using the std :: future, checkout solution following it. C++11 Way : Using std::future and std::promise std :: future A category template and its object saves the future value. Now what the hell is this future value. In fact the std :: object of the future inside holds the value to be allocated in the future and also provides a way to achieve that value i.e. using the get member function (). But if someone tries to reach this corresponding future value by using the get () function before it is available, then the get () function will block until the value is not found. std :: promise is also a class template and its object promises to set a value in the future. Each std ::promise item has a std ::future compatible item that will provide a value once set by the std :: promise:. The object of std :: promise is sharing the details with its corresponding std ::future . Let’s see step by step, Create a std ::promise object that promises to Thread1. std::promise<int> promiseObj; Currently this promise item has no corresponding value. But it does promise that someone will put a value on it again if its set we can get that value by using the corresponding object std :: future. But now let’s say that Thread 1 created this promise thing and passed it on to Thread 2. Now how can Thread 1 know that when Thread 2 will set a price on this promise thing? Answer of this statement is using std::future object. Everything std :: promise has a corresponding object called std :: future, where others can claim the value set by the promise. Therefore, string 1 will create std ::promise object and download std :: future to it before transferring std ”” which promises string 2 i.e. std::future<int> futureObj = promiseObj.get_future(); Now thread 1 will pass the promise Object to Thread 2. After that Thread 1 will fetch the value set by Thread 2 to std :: promise by std :: future’s get function. int ob = futureObj.get(); But if the value is not set by string 2 then this call will be blocked until string 2 sets the value in the promised item i.e. promiseObj.set_value(51); See full flow in the given drawing, Let’s see a complete example of std :: future and std ::promise, #include <iostream> #include <thread> #include <future> void demo(std::promise<int> * promItem) { std::cout<<"The inside Thread is:"<<std::endl; promItem->set_value(51); } int main() { std::promise<int> promiseItem; std::future<int> futureItem = promiseItem.get_future(); std::thread th(demo, &promiseItem); std::cout<<futureItem.get()<<std::endl; th.join(); return 0; } Output : The inside Thread is: 51 If the std ::promise object is used before setting the value of the call function get () in relation to the std ::future object the next item will discard the alternative. The part from this, if you want your thread to retrieve multiple values at a different time then simply transfer a lot of std :: promise items and download a lot of retrieval from many std :: future related items.
https://btechgeeks.com/cpp11-multithreading-part-8-stdfuture-stdpromise-and-returning-values-from-thread/
CC-MAIN-2022-27
refinedweb
772
70.02
Hi Andreas After deploying the app on a device, I looked in the settings and the app size reported by iOS was less than 100MB. But I did not know if this was caused by compression or not. It is good to know that the linker should pick only the necessary symbols and architectures. And it seems to work, judging by the final app size on the device. Best regards, Wilhelm Hi Wilhelm, The final link step should remove all unnecessary symbols and architectures, leading to a much more expected app size. But we keep investigating how much we can remove before building the .nupkg. Best regards, Andreas NuGet ios happend error: "Package com.wikitude.xamarin.component 7.2.0 is not compatible with xamarinios10 (Xamarin.iOS,Version=v1.0). Package com.wikitude.xamarin.component 7.2.0 supports: monoandroid80 (MonoAndroid,Version=v8.0)" Best regards, Wilhelm Hi Wilhelm, With our latest release this week, we also provide NuGet packages for Xamarin. Unfortunately they are not available from our download page at the moment so here are download links for iOS and Android. You should be able to install them from your local file system if you define a new, 'local' NuGet source. Since they are new, please let me know if you experience any problems with them. Best regards, Andreas Wilhelm Oks Hello. I am trying to use the Wikitude Xamarin SDK for iOS but could not get it to work. The main issue seems to be that the SDK is a Xamarin component with the file extension .xam. That was used to install components before Xamarin has been bought by Microsoft and has been integrated into Visual Studio. And before Xamarin Studio on Mac has been replaced by Visual Studio for mac. So i could not add the sdk directly. Not on Windows and not on Mac. I also tried to unzip the content of the .xam File and add the ios dll to the Project as a reference, but that did now word either because the dll has a yellow exclamation mark on it, indicating that something is wrong with ist and the "using Wikitude.Architect;" can not resolve the Wikitude namespace. I tried nuget but Wikitude but no results for "Wikitude" on nuget. I tried to install it as an old Xamarin component by creating a Xamarin account but the old Xamarin Service seems to not exist anymore because I get an Error 503 from visual studio when trying to log in to the Xamarin Components Manager. I also tried the Example project inside the .xam archive but it cannot run because it is 32bit. Any hints on if and how it works on current environments? Best regards, Wilhelm
https://support.wikitude.com/support/discussions/topics/5000086443?sort=popularity
CC-MAIN-2020-29
refinedweb
453
64.71
#include <SoftwareSerial.h>SoftwareSerial btSerial(2, 3); // RX, TX PINString bt_rx;int ledpin = 11;int data = 0;int writeData = 0;int received = 0;void setup() { Serial.begin(9600); pinMode(ledpin, OUTPUT); btSerial.begin(9600);}void loop() { if (btSerial.available()) { bt_rx = btSerial.readStringUntil("*"); //read signal up to delimiter Serial.println("Received:"); received = bt_rx.toInt(); //convert to int data = bt_rx.toInt()%256; //separate the data from the pin number writeData = (data*data)/255; //scale the data to get smoother fading Serial.println(bt_rx); Serial.println(writeData); } analogWrite((received - data)/256 + 9, writeData);} ledpin = (received - data)/256 + 9; //calculate the pin number analogWrite(ledpin, writeData); Received:37151Received:36143Received:35840Received:34833Received:34027 Received:74*79*82*89*94*110*120*138*145*163*178*184*21 analogWrite((received - data)/256 + 9, writeData); When a slider of the app moves, it sends a string like 74*76*82*84*94*102*105*112*148*150*153*156*where the numbers indicate the positions of the slider until it stops moving Can you please tell/show me with an example data/string item (that is coming from your BT) how does the 1st argument (received - data)/256 + 9) of the following code (taken from your sketch) evaluate to Dpin-11? Either wait until the slider stops and then send the value for that slider position ORsend values at pre-determined time intervals - for example every 200 millisecs. I'll try to explain: "received" is the int derived from the string sent before the first *. "data" is defined as received % 256, so if I subtract received - data, I get either 0 (first fader), 256 (second fader) or 512 (third fader). If I divide that by 256 and add 9, I get the pin number that I want. Oh, I see what you mean! I actually forgot to declare the other two pins (9+10) in setup, and after doing that, the sketch works in the second version as well, even without using "float". I'm afraid I still don't understand what exactly is going on here and why it worked in the other version without declaring the other two pins. But for now I feel I can safely build on that and continue my project. Thanks a lot!
https://forum.arduino.cc/index.php?topic=605313.0;prev_next=next
CC-MAIN-2020-40
refinedweb
372
54.52
Hey everyone, I feel like I've asked this question before, and I searched through all my posts and couldnt find it, so I may just be going crazy, and if not I apologize for the redundancy. So, I'm trying to use nested for loops to created a pyramid of X's, as in: ......X ....XXX ..XXXXX XXXXXXX etc...the dots are there to be abe to format the triangle, I don't need them in the program... and I have the following code, but I have absolutely no idea how to insert the X's and I was looking for a push in the right direction NOT THE ACTUAL CODE...thanks a lot! <Helpless Chap<Helpless Chap Code://Chapter 3 //Exercise 5 #include <iostream> using namespace std; int main() { const char X = 'X'; const char SPACE = '.'; unsigned int i; for (i = 0; i < 21; i++) { for (int spaces = 0; spaces < 21; spaces++) { cout << SPACE; if (spaces == 10) { cout << X; //I want to use a for loop to print X to screen } } cout << endl; } return 0; }
https://cboard.cprogramming.com/cplusplus-programming/62854-nested-loops.html
CC-MAIN-2018-05
refinedweb
177
75.54
Now you know all necessary things about GLCD from introduction post, lets start working on it. As described earlier GLCD requires 14 IO connections named as follows; DB0-DB7: Data Pins CS1 – CS2: Chip Select RST: Reset GLCD E: Enable R/W: Read-Write Selection RS: Register Select These are the data and control pins which are connected to the processing device such as microcontroller. Other than these are; VCC: Power Input GND: Ground VO: Contrast Input -Vout: -ve Voltage for Contrast LED+: Backlight +ve Power Input LED-: Ground Step 1 This step included the hardware interfacing of GLCD with PIC16F876. We will follow the connections as given below; GLCD – PIC16F876A DB0 -> RB0 . . DB7 -> RB7 CS1 -> RA0 CS2 -> RA1 RS -> RA2 RW-> RA3 E -> RA4 RST -> RA5 Pin RA4 is an open-collector output in PIC16F876, which means it will not output Logic high but instead will go to high-z state. So as to utilize the pic, we will use a pull-up resistor for its connection with GLCD. You must have noticed that 4 connections are still left which are LED+, LED-, VO and -Vout. LED+ and LED- are the Anode and Cathode of backlight led. A series resistor fo 10 Ohms to 22 Ohms is usually used. -Vout is used for contrast setting and it outputs -ve voltage. VO (or Vee) is the input for contrast voltage. A potentiometer is used here to vary contrast. All the connections are shown below. Note that VO is referred as Vee The hardware interfacing is complete and now move on to coding. Step 2 The following program will display Creative Electron in the extreme left corner of the display. The code is as follows; #include <16F876A.h> #include <CE_GLCD.c> #include <graphics.c> #fuses HS,NOWDT,NOPROTECT,NOLVP #use delay(clock=20000000) char text[]=”Creative Electron“; void main () { glcd_init(ON); delay_ms(10); glcd_text57(0, 0, text, 1, ON); } // End of Program There are two new files included in the program. The file “CE_GLCD.c” contains the pin configuration information of the GLCD along with the timings and data output functions. The “Graphics.c” includes the functions used for making fonts, characters and graphics to be used in GLCD. “glcd_init(ON)” initializes the GLCD and clears the screen. After initializing we need to output the string with the “glcd_text57″ function. Its syntax is glcd_text57(x,y,string,size,color). The x and y are the start coordinates for text. “string” is the data you want to display. “size” is the fontsize while color means whether you need to ON the pixels or OFF. A detailed help of the GLCD functions will be posted later. In the meantime you can view list of functions and their usage in “CE_GLCD.c” file. After running the program in Proteus the output will be as following ; Step 4 Download the ZIP file which contains ; Source Code: GLCD01.C (CCS C Compiler) GLCD Driver: CE_GLCD.C (CCS C Compiler) Hex File: GLCD01.HEX Proteus Simulation: GLCD01.DSN (Proteus ISIS) Download: Source Files – Interfacing 128×64 Pixel GLCD with PIC16F876 Filesize: 20 kB Step 5 Check other functions listed in CE_GLCD.C driver file, to draw graphics such as rectangles, lines and circles. It also has many other useful functions. Step 6 Enjoy GLCD InterfacING =)
http://creativeelectron.net/blog/2009/09/glcd-interfacing-128x64-pixel-glcd-ks0108-with-pic16f876/
CC-MAIN-2020-10
refinedweb
548
74.59
Contents go/dynamic_suite_codelab The Chrome OS version of Autotest introduces a new type of suite, known as a dynamic suite. Dynamic suites allow for the jobs in a suite to be sharded over a pool of DUTs, and the dynamic suite infrastructure takes care of all of the device imaging and test scheduling/sharding details. Different tests in a suite may require different specific features of a DUT (for instance a certain type of cellular modem, or an attached servo board). These requirements can be specified as test DEPENDENCIES, so that the test in question will only be scheduled on DUTs that have the required labels. In addition, DEPENDENCIES can be specified at the suite level, causing all tests invoked through the suite to inherit any additional suite level DEPENDENCIES. DEPENDENCIES In this codelab, we will: git repo cbuildbot This codelab will involve touching or changing code in two git repositories within the ChromiumOS repo. Our suite will be named peaches, so we will start by creating a repo branch named peaches, associated with the two git repos we will be modifying: user@host:~/chromiumos$ repo start peaches src/third_party/autotest/files src/third_party/chromiumos-overlay If this succeeds, then you should be able to see your newly created branch. user@host:~/chromiumos$ repo branch * peaches | in: src/third_party/autotest/files src/third_party/chromiumos-overlay Test suites are defined by Autotest control files (Made up of Python with some meta variables), similar to the control files used to define tests themselves. The suite control files live in the Chromium OS source tree it the src/third_party/autotest/files/test_suites directory. Poke around and take a look at some of them to see their basic structure. src/third_party/autotest/files/test_suites Once you are satisfied, create a new file in this directory named control.peaches, with the contents given below. Caution: copy-pasting from Google Docs has been known to convert consecutive whitespace characters into unicode characters, which will break your control file. Using CTRL-C + CTRL-V is safer than using middle-click pasting on Linux. control.peaches # Use of this source code is governed by a BSD-style license that can be # found in the LICENSE file. AUTHOR = "Chrome OS Team" NAME = "peaches" PURPOSE = "A simple example suite." CRITERIA = "All tests with SUITE=peaches must pass." TIME = "SHORT" TEST_CATEGORY = "General" TEST_CLASS = "suite" TEST_TYPE = "Server" DOC = """ This is an example of a dynamic test suite. @param build: The name of the image to test. Ex: x86-mario-release/R17-1412.33.0-a1-b29 @param board: The board to test on. Ex: x86-mario @param pool: The pool of machines to utilize for scheduling. If pool=None board is used. @param check_hosts: require appropriate live hosts to exist in the lab. @param SKIP_IMAGE: (optional) If present and True, don't re-image devices. @param file_bugs: If True your suite will file bugs on failures. @param max_run_time: Amount of time each test shoud run in minutes. """ import common from autotest_lib.server.cros.dynamic_suite import dynamic_suite dynamic_suite.reimage_and_run( build=build, board=board, name=NAME, job=job, pool=pool, check_hosts=check_hosts, add_experimental=True, num=num, file_bugs=file_bugs, skip_reimage=dynamic_suite.skip_reimage(globals())) The suite control file’s TEST_TYPE is Server. This indicates simply that the suite control file is meant to run server side. This restriction does not apply to tests contained in the suite, the suite can contain both Client and Server side tests regardless of this line in the suite control file. TEST_TYPE Server Tests can declare themselves to be part of any number of suites. This is done by listing the suite in the test control file’s SUITE variable. To put a test into multiple suites, simply use a comma separated list. In this codelab, we will add two existing control files and two new control files to our suite. Let's start with two new dummy tests control files. Create the file src/third_party/autotest/files/client/site_tests/peaches_DummyPass/control with the contents below. Caution: copy-pasting from Google Docs has been known to convert consecutive whitespace characters into unicode characters, which will break your control file. Using CTRL-C + CTRL-V is safer than using middle-click pasting on Linux. src/third_party/autotest/files/client/site_tests/peaches_DummyPass/control # Use of this source code is governed by a BSD-style license that can be # found in the LICENSE file. AUTHOR = "Chrome OS Team" NAME = "peaches_DummyPass" PURPOSE = "Dummy test that passes immediately." SUITE = "peaches" TIME = "SHORT" TEST_CATEGORY = "General" TEST_CLASS = "peaches" TEST_TYPE = "client" DOC = """ Example test for peaches suite. """ job.run_test('dummy_Pass') In the same directory, create another file, control.bluetooth with the same contents, but with the test NAME changed to peaches_DummyPass_BT and with a line added near the other declarations at the top specifying: control.bluetooth NAME peaches_DummyPass_BT . . . DEPENDENCIES = "bluetooth" . . . This label tells the dynamic suite scheduler that this job may only run on DUTs with the bluetooth label. Let's also add some existing tests to the peaches suite. For instance, edit the SUITE lines of src/third_party/autotest/files/client/site_tests/login_LoginSuccess/control and .../login_BadAuthentication/control files to include peaches. If you need to add a test to multiple suites to accomplish this, you can use a comma separated list of the form SUITE = "suite1, suite2, suite3" SUITE src/third_party/autotest/files/client/site_tests/login_LoginSuccess/control .../login_BadAuthentication/control peaches SUITE = "suite1, suite2, suite3" To verify that we have added these 4 tests to our suite, we can use the suite_enumerator utility, as follows: suite_enumerator user@host:~/chromiumos/src/third_party/autotest/files$ site_utils/suite_enumerator.py peaches -a . ./client/site_tests/login_BadAuthentication/control ./client/site_tests/peaches_DummyPass/control ./client/site_tests/login_LoginSuccess/control ./client/site_tests/peaches_DummyPass/control.bluetooth Earlier, we added two new test control files for client-side tests. In order for the new test files to be available to the DUT at test time, they must be included in the appropriate overlay ebuild file. This procedure is explained in more detail in a separate codelab -- writing a client side test (not yet published). Open the file src/third_party/chromiumos-overlay/chromeos-base/autotest-tests/autotest-tests-9999.ebuild. Near the bottom of the file, at the bottom of the long list of IUSE_TESTS entries, add the following: src/third_party/chromiumos-overlay/chromeos-base/autotest-tests/autotest-tests-9999.ebuild IUSE_TESTS . . . +tests_peaches_DummyPass +tests_peaches_DummyPass_BT . . . We need to commit our changes to two separate git repositories. The changes that we made to the ebuild are required in order for the suite to run properly on a DUT, so we need to make the Autotest repo changes depend on the ebuild changes. First, from the chromiumos-overlay directory, create a commit with git commit -a. Write a commit message that suits your fancy. git commit -a Find the Change-ID for the commit you just created, using git show --stat. This will be some string similar in form to I515b9c4775f518b7b000f964a00df9845ed0c6f6, in the commit message for the commit you just created. You should also see in the output of that command that you have changed 1 file. git show --stat Change directory to the Autotest repository, and create another commit, but this time including in your commit message a line CQ-DEPEND=CL:*****, pasting in the Change-ID of the first commit. This tells the build system that in order to apply our patch to the Autotest repository, it must first apply our patch to the chromiumos-overlay repository. You should see that 5 files have changed. If not, you may have forgotten to add your new control files to the git repo! Run git add . and git commit -a --amend to fix that. CQ-DEPEND=CL:***** git add . git commit -a --amend Now, upload both your changes to gerrit with repo upload . -d from each directory, or run repo upload --br=<branch> -d and uncomment the directories to upload. The -d flag here marks our upload as a draft, so no prying eyes will see our dirty hacking. repo upload . -d repo upload --br=<branch> -d -d Once repo upload has finished its work, you will see links to your two new changes on chromium-review.googlesource.com. Determine the Change-ID for your Autotest changes. Then, submit your patch to be built remotely: user@host:~/chromiumos/chromite/bin$ ./cbuildbot --remote -g ***** lumpy-release where you have pasted in the Change-ID of the changes to the Autotest repository. The output of this command should give you a buildbot link where you can follow your build progress. The build will take about 90 minutes, so now is a good time to go have lunch. Once the building step in the previous section has concluded, you should receive an email to this effect from cros.tryserver@chromium.org. Follow the link in this email to your build results page, then drill down to the "Report stdio" link, and pull out the build number (which will be a string similar to "trybot-lumpy-release/R26-3556.0.0-b683"). Now, run your suite with the command user@host:~/chromiumos/src/third_party/autotest/files/site_utils$ ./run_suite.py -s peaches -b lumpy -i ***** -p try-bot where ***** is the build number you just extracted. Point your browser at, and you should soon see your suite job appear in the job list. After the suite job has started to run, it will spawn sub-jobs for all the individual tests. Note that different tests may end up running on different DUTs. One of the tests we created in the codelab, peaches_DummyPass_BT, made use of a DEPENDENCY to require that the test could only run on DUTs with the bluetooth label. In addition to specifying dependencies at the test level, they can also be specified at the suite level. When a suite with suite level dependencies is run, all the jobs kicked off by the suite will have any suite dependencies added in addition to the test level dependencies. To add a suite level dependency, edit the control file for the suite. Add a named argument to the call to dynamic_suite.reimage_and_run, of the form suite_dependencies=’servo’, for example. Now, when the suite is run, all jobs will inherit an additional dependency on servo. The string can contain multiple dependencies as a comma separated list. Suite-level dependencies can be useful when you want to run several closely related suites consisting of the same tests, but with slightly different dependencies. For example, if you want to run a suite focused on network3g connectivity, but separately on devices configured for different cellular carriers. See for instance Suites can be scheduled to run in the test lab automatically, either triggered by build events or at regular timed intervals. To add your suite to the schedule, edit suite_scheduler.ini in the root directory of the Autotest repo. Following in the footsteps of the other suites already in the file, it should be easy to add your suite. To add peaches as a suite that runs nightly, add the following to suite_scheduler.ini . . . [PeachesDaily] run_on: nightly suite: peaches branch_specs: >=R21 pool: suites num: 2 . . . The fields above specify when suite runs should be triggered, which suite should be run, which branches should trigger the suite to run, which machine pool the suite should be assigned to, and the number of DUTs that the suite should attempt to use. For more information on what pool to select refere to What pool should I select. If you have added a new suite to suite_scheduler.ini, one for which a suite control file did not exist before, you need to pay attention to the branch_specs attribute. Suite control files are picked up from the build artifacts (unlike other server-side control files). You can either backport your new suite control file to older maintained branches, or avoid scheduling this suite against those branches by using branch_specs to set a cutoff. There are some subtleties in the num parameter, with respect to test dependencies. You must ensure that the num parameter is greater than or equal to the number of unique dependency sets over all the jobs in your suite. So, for instance, if you have a suite (like peaches) that has some jobs with no dependencies, and some jobs with 1 dependency (bluetooth), you must make use num >= 2, otherwise the suite will fail immediately on running. There’s a handy sanity check script to make sure you’re satisfied this: ./site_utils/suite_scheduler/suite_scheduler.py --sanity This sanity check will also run as a pre-submit hook, so even if you forget to run it yourself, you will be warned on repo upload that you have not fulfilled the num criteria.
http://www.chromium.org/chromium-os/testing/test-code-labs/dynamic-suite-codelab
CC-MAIN-2014-35
refinedweb
2,104
54.73
Dartle A simple build system (or task runner, really) written in Dart. Dartle is designed to integrate well with pub and Dart's own build system, but help with automation tasks not covered by other tools. It is inspired by Gradle and, loosely, Make. How to use Add dartle to your dev_dependencies: pubspec.yaml dev_dependencies: dartle: Write a dartle build file dartle.dart import 'package:dartle/dartle.dart'; final allTasks = [ Task(hello, argsValidator: const ArgsCount.range(min: 0, max: 1)), Task(bye, dependsOn: const {'hello'}), Task(clean), ]; main(List<String> args) async => run(args, tasks: allTasks.toSet(), defaultTasks: {allTasks[0]}); /// To pass an argument to a task, use a ':' prefix, e.g.: /// dartle hello :joe hello(List<String> args) => print("Hello ${args.isEmpty ? 'World' : args[0]}!"); /// If no arguments are expected, use `_` as the function parameter. bye(_) => print("Bye!"); clean(_) => deleteOutputs(allTasks); Run your build! In dev mode, use dart to run the build file directly: dart dartle.dart Notice that all dev_dependenciescan be used in your build! And all Dart tools work with it, including the Observatory and debugger, after all this is just plain Dart! Once you're done making changes to the build file (at least for a while), run it with dartle instead: - Activate dartle(only first time) pub global activate dartle - Run the build dartle This will execute the default tasks in the build file, dartle.dart, which should be located in the working directory, after compiling it to native using dart2native (if available, otherwise it will use dart --snapshot) whenever necessary (i.e. every time a change is made to the build file or pubspec.yaml). Selecting tasks In the examples above, the defaultTasks ran because no argument was provided to Dartle. To run specific task(s), give them as arguments when invoking dartle: dartle hello bye Output: 2020-02-06 20:53:26.917795 - dartle[main] - INFO - Executing 2 tasks out of a total of 4 tasks: 2 tasks selected, 0 due to dependencies 2020-02-06 20:53:26.918155 - dartle[main] - INFO - Running task 'hello' Hello World! 2020-02-06 20:53:26.918440 - dartle[main] - INFO - Running task 'bye' Bye! ✔ Build succeeded in 3 ms Notice that the dartleexecutable will cache resources to make builds run faster. It uses the .dartle_tool/directory, in the working directory, to manage the cache. You should not commit the .dartle_tool/directory into source control. To provide arguments to a task, provide the argument immediately following the task invocation, prefixing it with :: dartle hello :Joe Prints: 2020-02-06 20:55:00.502056 - dartle[main] - INFO - Executing 1 task out of a total of 4 tasks: 1 task selected, 0 due to dependencies 2020-02-06 20:55:00.502270 - dartle[main] - INFO - Running task 'hello' Hello Joe! ✔ Build succeeded in 1 ms Declaring tasks The preferred way to declare a task is by wrapping a top-level function, as shown in the example above. Basically: import 'package:dartle/dartle.dart'; final allTasks = {Task(hello)}; main(List<String> args) async => run(args, tasks: allTasks); hello(_) => print("Hello Dartle!"); This allows the task to run in parallel with other tasks on different Isolates (potentially on different CPU cores). If that's not important, a lambda can be used, but in such case the task's name must be provided explicitly (because lambdas have no name): import 'package:dartle/dartle.dart'; final allTasks = {Task((_) => print("Hello Dartle!"), name: 'hello')}; main(List<String> args) async => run(args, tasks: allTasks); A Task's function should only take arguments if it declares an ArgsValidator, as shown in the example: Task(hello, argsValidator: const ArgsCount.range(min: 0, max: 1)) ... hello(List<String> args) => ... A Task will not be executed if its argsValidator is not satisfied (Dartle will fail the build if that happens). Task dependencies and run conditions A Task can depend on other task(s), so that whenever it runs, its dependencies also run (as long as they are not up-to-date). In the example above, the bye task depends on the hello task: Task(bye, dependsOn: const {'hello'}) This means that whenever bye runs, hello runs first. Notice that tasks that have no dependencies between themselves can run at the same time - either on the same Isolateor in separate Isolates(use the -pflag to indicate that tasks may run in different Isolates when possible, i.e. when their action is a top-level function and there's no dependencies with the other tasks). A task may be skipped if it's up-to-date according to its RunCondition. The example Dart file demonstrates that: Task(encodeBase64, description: 'Encodes input.txt in base64, writing to output.txt', runCondition: RunOnChanges( inputs: file('input.txt'), outputs: file('output.txt'), )) The above task only runs if at least one of these conditions is true: output.txtdoes not yet exist. - either input.txtor output.txtchanged since last time this task ran. - the -for --force-tasksflag is used. If a RunCondition is not provided, the task is always considered out-of-date. To force all tasks to run, use the -zor --reset-cacheflag. For more help, run dartle -h. Proper documentation is going to be available soon! Libraries - dartle - A simple build system written in Dart. [...] - dartle_cache - A library exposing the mechanism used by dartle to cache resources and intelligently determine which tasks must run, and which tasks may be skipped. [...]
https://pub.dev/documentation/dartle/latest/
CC-MAIN-2020-45
refinedweb
903
57.98
Separating Design from UI Elements How can I create an architecture similar to that of CSS where I can skin my application fairly easily and change the look/feel with a simple global attribute file? Thanks, Kyle - DenisKormalev You can create component with bunch of properties and use them in all places where you need customization. So you can change all style-aware things in one place. There is no CSS like file yet? Wouldn't that be difficult to set up menu-wise? I'm not sure there is a need for a css concept. Like Denis said , you create the components in separate files, and if you want to change the look and feel you just change that file. For example you have a file Button.qml and then in you application you create Button {} components. If you want to change the look of the button you just change the Button.qml file. According to the Components talk on the dev days, this is a work in progress. It will probably not take the shape of using CSS, but of a set of modifyable components that keep their API but allow you to change the appearance. For me, that is unfortunate really. While the QML way to do things seems quite powerful, it is yet another way of styling to learn. I thought the style sheet approach that has been introduced into Qt is quite powerful, though still a bit incomplete here and there. I think a CSS for QML would be very nice, and would fit the declarative aproach quite well. However, I do see the difficulties associated with it. On a second thinking I think it will be a great idea, specially when Qt components will be introduced. because you will probably have a QMLPushButton and you will need something like this @QMLPushButton{ class: redClass }@ and @QMLPushButton{ class: blueClass }@ and then define the classes separate from the Components I think an idea that would be interesting for QML is the approach used by EFL (from Enlightenment) with its Edje library (the first declarative language for design of applications' interface I've known). They have a program (edje_cc) that takes an interface file written in Edje and all image files referenced by it, and generates a single binary file (a .edj file) that you can distribute independently of the application itself, import in your application, and of course change from one .edj file to another, easily changing your application's appearance. With QML, on the other hand, if our application's interface is spplited among several .qml files and we want our application to be skinnable, we have to deliver every single .qml file and every single image file, and tell the application which .qml file is the main file to be loaded. So, I thing the Edje approach is better in this sense of delivering one single binary file. I've added a (very basic) "QML Styling": wiki page with some of the common approaches to styling in QML, if you are interested in having a look. - Fuzzbender Excellent starting point, mbrasser! Minor nag: last example has a typo on line 19 (syle vs. style). :) [quote author="Fuzzbender" date="1288252672"]Minor nag: last example has a typo on line 19 (syle vs. style). :)[/quote] Thanks, fixed now. Michael Thanks for that page, mbrasser! Another small typo: Approach 3 instead of Approach 2 ;) [quote author="skolibri" date="1288359094"]Another small typo: Approach 3 instead of Approach 2 ;)[/quote] Also fixed now, thanks! Michael [quote author="mbrasser" date="1288139056"]I've added a (very basic) "QML Styling": wiki page with some of the common approaches to styling in QML, if you are interested in having a look.[/quote] Hi Michael, Thank you for your wiki entry on styling. Is there a way to dynamicly change styling with this methode? author="mbrasser" date="1294965822"]] Yes, one could use a C++ class interface with a provided method that is called from the QML code if you want to change your style (or similar implementation in javascript). Another approach, which i've been thinking of, is to overload your custom components in your application by changing the current path to the component location. For example, you have in ../components/Button.qml: @Rectangle { id: rect property alias buttonColor: rect.color property alias buttonLabel: label.text property alias buttonWidth: rect.width property alias buttonHeight: rect.height property alias buttonRadius: rect.radius color: "grey" Text { id: label anchors.centerIn: parent } @ and you have in ../styles/blue/Button.qml @ import "../../components" Button { buttonColor: "blue" buttonRadius: 5 } @ in use: @ import "current" Rectangle { width: 320 height: 120 Button { id: button1 buttonWidth: 140 buttonHeight: 48 buttonLabel: "Button1" } @ Suppose the "current" from the import statement points somehow to the ../components. If you "bend" it to the ../style/blue you will get a styled button. However, you have to restart your application to make the changes visible... In this way you could have several different styles of existing components defined in another location and just "point" the current import statement to the required one. Michael, what do you think? would appriciate your opinion (and others as well, of course :))
https://forum.qt.io/topic/1432/separating-design-from-ui-elements
CC-MAIN-2017-34
refinedweb
859
64.51
There are 365 days in a year, 366 in leap year, and most of them are crap and should be put on QVFD. However, some of these days are absolutely golden, especially for piss taking. However, not only admins can make these front page mods, you can too with the super good magical powers of Uncyclopedia! Haven't played Black Ops but heard the same cotpmainls from FPS war game type connoisseurs who were hoping for more game play through real-life conflicts in history instead of some gimmicky future crap. Step Two Anyway, I don't want all of these sections to be too short, so I'll just remind you of stuff. Remember to choose your theme well. Uncyclopedia being a satire site, try to make it as satirical as possible. On this step, you should think of an article title. These don't work in the format of "Subject X", put them in the Babel namespace, so you get Babel:XX, like Babel:Aa for AAAAAAAAAAAAA day. Step Three For more in-depth reskins, copy and paste the templates used on the Main Page and edit as appropriate. Step Four There's not much left to do now, just edit away! Remember to not only theme it, but make it satirical. You can't get away with an unfunny Uncyclopedia article. Before you can actually put it on the front page, you have to ask an admin because they are the only ones who can edit the source of the front page. Unfunny articles will NOT get away. How it All Works NOTE: The next section of this HowTo is for advanced users. If you didn't understand the instructions above, then don't bother reading this bit. On AAAAAAAA day, Uncyclopedia saw its first ever CSS reskin. You probably didn't notice much of a change, but there was one. Splarka used a little JavaScript hack to change the logo to this for the day by using this handy bit of JavaScript: if((document.title.indexOf("AAAAAAAAA!") == 0)||(document.title.indexOf("Babel:Aa - ") == 0)) { document.write('<style type="text/css">/*<![CDATA[*/ @import "/index.php?title=User:Splaka/aaaa.css&action=raw&ctype=text/css"; /*]]>*/</style>'); } and this: ( function() { var xPathResult = document.evaluate ( './/text()[normalize-space(.) != ""]', document.body, null, XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null ); var textNode; for (var i = 0, l = xPathResult.snapshotLength; i < l; i++) { textNode = xPathResult.snapshotItem(i); textNode.data = textNode.data.replace(/[a-zA-Z0-9]/g, 'A'); } } )() When {{Aa:}} was put on the main page (when admins reskin the main page, they don't use the source code of the reskin, they use it like a template), the Main Page started using aaaa.css (now a redlink) in addition to Monobook.css (Uncyclopedia.css since the MediaWiki 1.5 upgrade). Other good examples of reskins for anniversaries are Memory Alpha's Birthday, Plain Text Day and Nintendorulez's unbanning. Now, the system has changed. Now the skin's CSS is (generally) kept at MediaWiki:Skin/Name_Space:Page_Name.css, so for the Wookieepedia reskinning, the bit in the Javascript looked like this (obviously, since then more have been added): /* New reskin parser Instructions: 1) Add the page title and namespace exactly ("Name space:Page name") as new skin, use spaces *NOT* underscores: ("Main Page": "", should be first line). The next parameter is optionally an existing "MediaWiki:Skin/"-prefixed file (in which case you can skip step 2). 2) Create MediaWiki:Skin/Name_Space:Page_Name.css and place reskin CSS content there. */ skin = { "Main Page": "", "Babel:Gbs": "" } var re = RegExp("(.*) - Uncyclopedia"); var matches = re.exec(document.title); var skinName; if (matches) { if (skin[matches[1]] != undefined) { skinName = (skin[matches[1]].length > 0) ? skin[matches[1]] : matches[1] + '.css'; document.write('<style type="text/css">/*<![CDATA[*/ @import "/index.php?title=MediaWiki:Skin/' + skinName + '&action=raw&ctype=text/css"; /*]]>*/</style>'); } } So the CSS for the page would be at MediaWiki:Skin/Babel:Gbs.css. This makes all site-wide css files editable only to to Administrators. However, a regular user can use the above code to build a .css for a reskin in their userspace. For example: if(document.title.indexOf("User:You/Blah") == 0) { document.write('<style type="text/css">/*<![CDATA[*/ @import "/index.php?title=User:You/blah.css&action=raw&ctype=text/css"; /*]]>*/</style>'); } Then you simply need to create User:You/Blah and User:You/blah.css with the appropriate code. Once complete you can ask an administrator to apply it (by copying it to MediaWiki:Skin/pagename.css) to all users for that one page.
http://uncyclopedia.wikia.com/wiki/HowTo:Reskin_Uncyclopedia?diff=next&oldid=5081890
CC-MAIN-2014-52
refinedweb
759
59.3
The wcsftime() function is defined in <cwchar> header file. wcsftime() prototype size_t wcsftime( wchar_t* str, size_t count, const wchar_t* format, const tm* time ); The wcsftime() function takes 4 arguments: str, count, format and time. The date and time information pointed to by time is converted to a null-terminated wide character based on the value of format and is stored in the wide array pointed to by str. At most count bytes are written. wcsftime() Parameters - str: Pointer to the first element of the wide character array to store the result. - count: Maximum number of wide character to write. - format: Pointer to a null-terminated wide character string specifying the format of conversion. The format string consists of conversion specifier (beginning with % and optionally followed by E or O) and other ordinary wide characters. The ordinary wide characters including the terminating null wide character are copied as it is to the output wide string. - time: The date and time information to convert. wcsftime() Return value - On success, the wcsftime() function returns the number of wide character written into the wide character array pointed to by str not including the terminating L'\0'. - If count was reached before the entire string could be stored, 0 is returned and the contents are undefined. Example: How wcsftime() function works? #include <ctime> #include <cwchar> #include <iostream> using namespace std; int main() { time_t curr_time; tm * curr_tm; wchar_t date_string[100]; wchar_t time_string[100]; time(&curr_time); curr_tm = localtime(&curr_time); wcsftime(date_string, 50, L"Today is %B %d, %Y", curr_tm); wcsftime(time_string, 50, L"Current time is %T", curr_tm); wcout << date_string << endl; wcout << time_string << endl; return 0; } When you run the program, the output will be: Today is April 21, 2017 Current time is 14:42:45
https://cdn.programiz.com/cpp-programming/library-function/cwchar/wcsftime
CC-MAIN-2021-04
refinedweb
289
51.48
On 15 September 2007, Michael Niedermayer wrote: > Hi > >. Anyway, that 50%/50% estimation was used just as an example, it does not change the fact that branches are very slow and multiplications are fast (just checking a value and branching over the some piece of code takes 4 cycles, the same number of cycles could be used to do 4 multiplications instead). Multiple branches just over small chunks of code here and there are not an option. And as I already said before, I'll check if splitting code into 'idct_row_full' (row[0] to row[7] are all nonzero) and 'idct_row_partial' (rows[4] to row[7] are zero) will prove to be useful. Though alternative paths for these two cases really do increase code size somewhat. > > > > +/* > > > > + * Enforce 8 byte stack alignment if it is not provided by ABI. Used > > > > at the beginning + * of global functions. If stack is not properly > > > > aligned, real return address is + * pushed to stack (thus fixing > > > > stack alignment) and lr register is set to a thunk + * function > > > > 'unaligned_return_thunk_armv5te' which is responsible for providing + > > > > * correct return from the function in this case. > > > > + */ > > > > + .macro idct_stackalign_armv5te > > > > +#ifndef DWORD_ALIGNED_STACK > > > > + tst sp, #4 > > > > + strne lr, [sp, #-4]! > > > > + adrne lr, unaligned_return_thunk_armv5te > > > > #endif > > > > + .endm > > > > > > the compiler has to maintain an properly aligned stack and if needed > > > has to align it on entry to libavcodec (avcodec_decode_video() and > > > such) its not acceptable to realign the stack in the inner loop calling > > > the idct > > > > The compiler has to maintain ABI specified stack alignment (either OABI > > or EABI for ARM). If ABI specifies 4-byte alignment (OABI case), we have > > to either insert some code to manually align stack, or not use this > > function at all. Stack alignment at 8-byte boundary can be performed > > really fast with 3 instructions (3 cycles) on entry and one instruction > > at exit (5 cycles) in the case when stack alignment was needed. The > > overhead of manual stack alignment for OABI is 3+5/2 on average and is > > equal to 5.5 cycles (ARM9E). That's quite cheap. > > as ive said, this code does NOT belong in the inner loop,)? I have tracked this mailing list for a long time already and have seen quite a number of posts about windows related improper stack alignment breakages. I must say, I would not like to see something like this happening to ARM as well. In addition, you seem to care about portability. A random compiler X does not have to provide an option to support arbitrary stack alignment specified by user. There is a reason why such a thing as ABI was invented. >.
http://ffmpeg.org/pipermail/ffmpeg-devel/2007-September/039131.html
CC-MAIN-2014-41
refinedweb
432
69.01
This chapter describes the key concepts that are related to the software components of the Sun Cluster environment. The information in this chapter is directed primarily to system administrators and application developers who use the Sun Cluster API and SDK. Cluster administrators can use this information in preparation for installing, configuring, and administering cluster software. Application developers can use the information to understand the cluster environment in which they work. This chapter covers the following topics: Administrative Interfaces High-Availability Framework Quorum and Quorum Devices Developing New Data Services Using the Cluster Interconnect for Data Service Traffic Resources, Resource Groups, and Resource Types Support for Solaris Zones on Sun Cluster Nodes Service Management Facility Data Service Project Configuration Public Network Adapters and Internet Protocol (IP) Network Multipathing SPARC: Dynamic Reconfiguration Support You can choose how you install, configure, and administer the Sun Cluster software from several user interfaces. You can accomplish system administration tasks either through the Sun Cluster Manager, formerly SunPlexTM Manager, graphical user interface (GUI), or through the command-line interface. On top of the command-line interface are some utilities, such as scinstall and clsetup, to simplify selected installation and configuration tasks. The Sun Cluster software. Time between all nodes in a cluster must be synchronized. Whether you synchronize the cluster nodes with any outside time source is not important to cluster operation. The Sun Cluster software employs the Network Time Protocol (NTP) to synchronize the clocks between nodes. node, you have an opportunity to change the default time and date setting for the node. node), that establishes a peer relationship between all cluster nodes. One node is designated the “preferred” node. Nodes are identified by their private host names and time synchronization occurs across the cluster interconnect. For instructions about how to configure the cluster for NTP, see Chapter 2, Installing Software on the Cluster, 8, Administering the Cluster, in Sun Cluster System Administration Guide for Solaris OS. The Sun Cluster software makes all components on the “path” between users and data highly available, including network interfaces, the applications themselves, the file system, and the multihost devices. In general, a cluster component is highly available if it survives any single (software or hardware) failure in the system. The following table shows the kinds of Sun Cluster component failures (both hardware and software) and the kinds of recovery that are built into the high-availability framework.Table 3–1 Levels of Sun Cluster Failure Detection and Recovery Sun Cluster software's high-availability framework detects a node or zone failure quickly and creates a new equivalent server for the framework resources on a remaining node or zone in the cluster. At no time are all framework resources unavailable. Framework resources that are unaffected by a crashed node or zone are fully available during recovery. Furthermore, framework resources of the failed node or zone or zone failure. The applications cannot detect that the framework resource server has been moved to another node. Failure of a single node is completely transparent to programs on remaining nodes by using the files, devices, and disk volumes that are attached to this node. This transparency exists if an alternative hardware path exists to the disks from another node. An example is the use of multihost devices that have ports to multiple nodes. Sun Cluster software also tracks zone membership by detecting when a zone boots up or halts. These changes also trigger a reconfiguration. A reconfiguration can redistribute cluster resources among the nodes and zones in the cluster.. See About Failure Fencing for more information about how the cluster protects itself from partitioning into multiple separate clusters. The failfast mechanism detects a critical problem in either the global zone or in a non-global zone on a node. The action that Sun Cluster takes when failfast detects a problem depends on whether the problem occurs in the global zone or a non-global zone. If the critical problem is located in the global zone, Sun Cluster forcibly shuts down the node. Sun Cluster then removes the node from cluster membership. If the critical problem is located in a non-global zone, Sun Cluster reboots that non-global zone. If a node loses connectivity with other nodes, the node attempts to form a cluster with the nodes with which communication is possible. If that set of nodes does not form a quorum, Sun Cluster software halts the node and “fences” the node from shared storage. See About Failure Fencing for details about this use of failfast. If one or more cluster-specific daemons die, Sun Cluster software declares that a critical problem has occurred. Sun Cluster software runs cluster-specific daemons in both the global zone and in non-global zones. If a critical problem occurs, Sun Cluster either shuts down and removes the node or reboots the non-global zone where the problem occurred. When a cluster-specific daemon that runs in a non-global zone fails, a message similar to the following is displayed on the console. When a cluster-specific daemon that runs in the global zone fails and the node panics, a message similar to the following is displayed on the console. After the panic, the node might reboot and attempt to rejoin the cluster. Alternatively, if the cluster is composed of SPARC based systems, the node might remain at the OpenBootTM PROM (OBP) prompt. The next action of the node is determined by the setting of the auto-boot? parameter. You can set auto-boot? with the eeprom command, at the OpenBoot PROM ok prompt. See the eeprom(1M) man page. The yourself. Each file contains a checksum record to ensure consistency between nodes. Updating CCR files yourself. The Sun Cluster software. The Sun Cluster software then redirects the access to that path. Sun Cluster global devices include disks, CD-ROMs, and tapes. However, the only multiported global devices that Sun devices, assigns each device a unique major and a minor number that are consistent on all nodes of the cluster. Access to the global devices is performed by using the unique device ID instead of the traditional Solaris device IDs, identify a multihost disk as c1t2d0, and Node2 might identify the same disk completely differently, as c3t2d0. The DID driver assigns a global name, such as d10, that the nodes use instead, giving each node a consistent mapping to the multihost disk. You update and administer device IDs with the cldevice command. See the cldevice(1CL) man page.. This section describes device group properties that enable you to balance performance and availability in a multiported disk configuration. Sun or highest in priority on the node list becomes secondary. The desired number of secondary nodes can be set to any integer between one and the number of operational nonprimary provider nodes in the device group. If you are using Solaris Volume Manager, you must create the device group before you can set the numsecondaries property to a number other than the default. The default desired number of secondaries for device services is one. The actual number of secondary providers that is maintained by the replica framework is the desired number, unless the number of operational non. The node for each metadevice or volume in that disk set or disk group. In the Sun Cluster software, each device node in the local volume manager namespace is replaced by a symbolic link to a device node. The cluster file system has the following features: File access locations are transparent. A process can open a file that is located anywhere in the system. Processes on all nodes can use the same path name to locate a file. When the cluster file system reads files, it does not update the access time on those files. Coherency protocols are used to preserve the UNIX file access semantics even if the file is accessed concurrently from multiple nodes. Extensive caching is used along with zero-copy bulk I/O movement to move file data efficiently. The cluster file system provides highly available, advisory file-locking functionality by using the fcntl command interfaces. Applications that run on multiple cluster nodes can synchronize access to data by using advisory file locking on a cluster file system. File locks are recovered immediately from nodes that leave the cluster, and from applications that fail while holding locks. Continuous access to data is ensured, even when failures occur. Applications are not affected by failures if a path to disks is still operational. This guarantee is maintained for raw disk access and all file system operations. Cluster file systems are independent from the underlying file system and volume management software. Cluster file systems make any supported on-disk file system global. You can mount a file system on a global device globally with mount -g or locally with mount. Programs can access a file in a cluster file system from any node in the cluster through the same file name (for example, /global/foo). A cluster file system is mounted on all cluster members. You cannot mount a cluster file system on a subset of cluster members. A cluster file system is not a distinct file system type. Clients verify the underlying file system (for example, UFS). In the Sun Cluster software, all multihost disks are placed into device groups, which can be Solaris Volume Manager disk sets,. You can mount cluster file systems as you would mount file systems: about how to use the HAStoragePlus resource type. You can also use the HAStoragePlus resource type to synchronize the startup of resources and device groups on which the resources depend. For more information, see Resources, Resource Groups, and Resource Types. You can use the syncdir mount option for cluster file systems that use UFS as the underlying file system. However, performance significantly improves if you do not specify syncdir. If you specify syncdir, the writes are guaranteed to be POSIX compliant. If you do not specify syncdir, you experience the same behavior as in NFS file systems. For example, without syncdir, you might not discover an out of space condition until you close a file. With syncdir (and POSIX behavior), the out-of-space condition would have been discovered during the write operation. The cases in which you might have problems if you do not specify syncdir are rare.. The current release of Sun Cluster software supports disk path monitoring (DPM). This section provides conceptual information about DPM, the DPM daemon, and administration tools that you use to monitor disk paths. Refer to Sun Cluster System Administration Guide for Solaris OS node or to all nodes in the cluster. See the cldevice(1CL) man page for more information about command-line options. command. See the syslogd(1M) man page. All errors that are related to the daemon are reported by pmfd. All the functions from the API return 0 on success and -1 for any failure. The DPM daemon monitors the availability of the logical path that is visible through multipath drivers such as Sun StorEdge Traffic Manager, Sun StorEdge 9900 Dynamic Link Sun Cluster Manager, formerly SunPlex Manager, graphical user interface (GUI). Sun Cluster Manager provides a topological view of the monitored disk paths in your cluster. The view is updated every 10 minutes to provide information about the number of failed pings. Use the information that is provided by the Sun Cluster Manager GUI in conjunction with the cldevice command to administer disk paths. See Chapter 12, Administering Sun Cluster With the Graphical User Interfaces, in Sun Cluster System Administration Guide for Solaris OS for information about Sun. Sun Cluster Manager enables you to perform the following basic DPM administration tasks: Monitor a disk path Unmonitor a disk path View the status of all monitored disk paths in the cluster Enable or disable the automatic rebooting of a node when all monitored disk paths fail The Sun Cluster Manager online help provides procedural information about how to administer disk paths. You use the clnode set command to enable and disable the automatic rebooting of a node when all monitored disk paths fail. You can also use Sun Cluster Manager to perform these tasks. This in the other partition. Amnesia occurs when the cluster restarts after a shutdown with cluster configuration data older than. Reservations. device from Network Appliance, Incorporated A quorum server process that runs on the quorum server machine A replicated device is not supported as a quorum device. In a two–node configuration, you must configure at least one quorum device to ensure that a single node can continue if the other node. The that the clustered application might run faster and is more highly available.. The Sun Cluster software supplies a set of service management methods. These methods run under the control of the Resource Group Manager (RGM), which uses them to start, stop, and monitor the application on the cluster nodes or in the zones. These methods, along with the cluster framework software and multihost devices, enable applications to become failover or scalable data services. The RGM also manages resources in the cluster, including instances of an application and network resources (logical host names and shared addresses). In addition to Sun Cluster software-supplied methods, the Sun Cluster software also supplies an API and several data service development tools. These tools enable application developers to develop the data service methods that are required to make other applications run as highly available data services with the Sun Cluster software. If the node or zone on which the data service is running (the primary node) fails, the service is migrated to another working node or zone without user intervention. Failover services use a failover resource group, which is a container for application instance resources and network resources (logical host names). Logical host names are IP addresses that can be configured on one node or zone, and at a later time, automatically configured down on the original node or zone and configured on another node or zone. For failover data services, application instances run only on a single node or zone. If the fault monitor detects an error, it either attempts to restart the instance on the same node or zone, or to start the instance on another node or zone (failover). The outcome depends on how you have configured the data service. The scalable data service has the potential for active instances on multiple nodes or zones. or zones within the cluster. This shared address enables these scalable services to scale on those nodes or zones. A cluster can have multiple shared addresses, and a service can be bound to multiple shared addresses. A scalable resource group can be online on multiple nodes or zones simultaneously. As a result, multiple instances of the service can be running at once. However, a scalable resource group that uses a shared address to balance the service load between nodes can be online in only one zone per physical node. All scalable resource groups use load balancing. All nodes or zones that host a scalable service use the same shared address to host the service. The failover resource group that hosts the shared address is online on only one node or zone at a time. Service requests enter the cluster through a single network interface (the global interface). These requests are distributed to the nodes or zones, based on one of several predefined algorithms that are set by the load-balancing policy. The cluster can use the load-balancing policy to balance the service load between several nodes or zones. Multiple global interfaces can exist on different nodes or zones that host other shared addresses. For scalable services, application instances run on several nodes or zones simultaneously. If the node or zone that hosts the global interface fails, the global interface fails over to another node or zone. If an application instance that is running fails, the instance attempts to restart on the same node or zone. If an application instance cannot be restarted on the same node or zone, and another unused node or zone is configured to run the service, the service fails over to the unused node or zone. Otherwise, the service continues to run on the remaining nodes or zones, Sun or zone or zone. Sun as follows: First, such a service is composed of one or more server instances. Each instance runs on a different node or zone. Two or more instances of the same service cannot run on the same node or zone. Sun and lockf to achieve the synchronization that you want. software provides the following to make applications highly available: Data services that are supplied as part of the Sun Cluster software A data service API A development library API for data services A “generic” data service The Sun Cluster Data Services Planning and Administration Guide for Solaris OS describes how to install and configure the data services that are supplied with the Sun Cluster software. software provides a “generic” data service. Use this generic data service to quickly generate an application's required start and stop methods and to implement the data service as a failover or scalable service. A cluster must have multiple network connections between nodes, forming the cluster interconnect. Sun or zones, an application must use the private host names that you configured during the Sun Cluster installation. For example, if the private host name for node1 is clusternode1-priv, use this name to communicate with node Sun Cluster installation, the cluster interconnect uses any name that you choose at that time. To determine the actual name, use the scha_cluster_get command with the scha_privatelink_hostname_node argument. See the scha_cluster_get(1HA) man page. Each node or zone is also assigned a fixed per-node address. This per-node address is plumbed on the clprivnet driver. The IP address maps to the private host name for the node or zone: clusternode1-priv. See the clprivnet(7) man page. If your application requires consistent IP addresses at all points, configure the application to bind to the per-node address on both the client and the server. All connections appear then to originate from and return to the per-node address. Sun Cluster Data Services Planning and Administration Guide for Solaris OS. Sun Cluster Data Services Planning and Administration Guide for Solaris OS for more information about this process. Appendix B, Standard Properties, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS. The RGM extension properties provide information such as the location of application binaries and configuration files. You modify extension properties as you configure your data services. The set of extension properties is described in the individual guide for the data service. Solaris zones provide a means of creating virtualized operating system environments within an instance of the Solaris 10 OS. Solaris zones enable one or more applications to run in isolation from other activity on your system. The Solaris zones facility is described in Part II, Zones, in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones. When you run Sun Cluster software on the Solaris 10 OS, you can create any number of zones on a physical node. You can use Sun Cluster software to manage the availability and scalability of applications that are running in Solaris non-global zones on cluster nodes. Sun Cluster software provides support for applications that are running in non-global zones as follows: Directly through the RGM Through the Sun Cluster HA for Solaris Containers data service On a cluster where the Solaris 10 OS is running, you can configure a resource group to run in the global zone or in non-global zones. The RGM manages each zone as a switchover target. If a non-global zone is specified in the node list of a resource group, the RGM brings the resource group online in the specified zone on the node. Figure 3–8 illustrates the failover of resource groups between zones on Sun Cluster nodes in a two-node cluster. In this example, identical zones are configured on each node to simplify the administration of the cluster. A failover resource group can fail over to a zone on another node or on the same node. However, if the node fails, the failing over of this resource group to a zone on the same node does not provide high availability. Nonetheless, you might find this failing over of a resource group to a zone on the same node useful in testing or prototyping. You can configure a scalable resource group (which uses network load balancing) to run in a non-global zone as well. However, do not configure a scalable resource group to run in multiple zones on the same node. In Sun Cluster commands, you specify a zone by appending the name of the zone to the name of the physical node, and separating them with a colon, for example: phys-schost-1:zoneA You can specify a zone with several Sun Cluster commands, for example: clreslogicalhostname(1CL). For information about how to configure support for Solaris zones directly through the RGM, see the following documentation: Guidelines for Non-Global Zones in a Cluster in Sun Cluster Software Installation Guide for Solaris OS Zone Names in Sun Cluster Software Installation Guide for Solaris OS Configuring a Non-Global Zone on a Cluster Node in Sun Cluster Software Installation Guide for Solaris OS Sun Cluster Data Services Planning and Administration Guide for Solaris OS Individual data service guides The Sun Cluster HA for Solaris Containers data service manages each zone as a resource that is controlled by the RGM. Use the Sun Cluster HA for Solaris Containers data service if any of following criteria is met: You require delegated root access. The application is not supported in a cluster. You require affinities between resource groups that are to run in different zones on the same node. If you plan to use the Sun Cluster HA for Solaris Containers data service for an application, ensure that the following requirements are met: The application is supported to run in non-global zones. The application is integrated with the Solaris OS through a script, a run-level script, or a Solaris Service Management Facility (SMF) manifest. The additional failover time that is required to boot a zone is acceptable. Some downtime during maintenance is acceptable. For information about how to use the Sun Cluster HA for Solaris Containers data service, see Sun Cluster Data Service for Solaris Containers Guide. The Solaris Service Management Facility (SMF) enables you to run and administer applications as highly available and scalable resources. Like the Resource Group Manager (RGM), the SMF provides high availability and scalability, but for the Solaris Operating System. Sun node. The SMF uses the callback method execution model to run services. The SMF also provides a set of administrative interfaces for monitoring and controlling services. These interfaces enable you to integrate your own SMF-controlled services into Sun. Sun-configured policies. The services that are specified for an SMF proxy resource can reside in a global zone or in a non-global zone. However, all the services that are specified for the same SMF proxy resource must be located in the same zone. SMF proxy resources work in any zone. System resources include aspects of CPU usage, memory usage, swap usage, and disk and network throughput. Sun Cluster enables you to monitor how much of a specific system resource is being used by an object type. An object type includes a node, zone, disk, network interface, or resource group. Sun Cluster also enables you to control the CPU that is available to a resource group. Monitoring and controlling system resource usage can be part of your resource management policy. The cost and complexity of managing numerous machines encourages the consolidation of several applications on larger servers. Instead of running each workload on separate systems, with full access to each system's resources, you use resource management to segregate workloads within the system. Resource management enables you to lower overall total cost of ownership by running and controlling several applications on a single Sun Sun Cluster. For information about configuring these services, see Chapter 9, Configuring Control of CPU Usage, in Sun Cluster System Administration Guide for Solaris OS. or zones that have the necessary resources and choose the node or zone, Sun Cluster monitors this telemetry attribute on all objects of that type in the cluster. Sun, Sun Cluster changes the severity level of the threshold to the severity level that you choose. Each application and service that is running on a cluster has specific CPU needs. Table 3–4 lists the CPU control activities that are available on different versions of the Solaris OS.Table 3–4 CPU Control If you want to apply CPU shares, you must specify the Fair Share Scheduler (FFS) as the default scheduler in the cluster. Controlling the CPU that is assigned to a resource group in a dedicated processor set in a non-global zone offers the strictest level of control. If you reserve CPU for a resource group, this CPU is not available to other resource groups. You can view system resource data and CPU assignments by using the command line or through Sun.. Data OS to manage workloads and consumption within your cluster. You can perform this configuration if you are using Sun Cluster on the Solaris 9 OS or on the Solaris 10 OS. Using the Solaris management functionality in a Sun Cluster environment enables you to ensure that your most important applications are given priority when sharing a node or zone with other applications. Applications might share a node or zone if you have consolidated services or because applications have failed over. Use of the management functionality described herein might improve availability of a critical application by preventing lower-priority applications from overconsuming system supplies such as CPU time. The Solaris documentation for this feature describes CPU time, processes, tasks and similar components as “resources”. Meanwhile, Sun Cluster documentation uses the term “resources” to describe entities that are under the control of the RGM. The following section uses the term “resource” to refer to Sun Cluster entities that are under the control of the RGM. The section uses the term “supplies” to refer to CPU time, processes, and tasks. This section provides a conceptual description of configuring data services to launch processes on a specified Solaris OS. Bringing the resource group online. To configure the standard Resource_project_name or RG_project_name properties to associate the Solaris project ID with the resource or resource group, use the -p option with the clresource set and the clresourcegroup set command. Set the property values to the resource or to the resource group. See Appendix B, Standard Properties, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS for property definitions. you configure data services to use the controls provided by Solaris in a Sun Sun Cluster Data Services Planning and Administration Guide for Solaris OS. Multiported Device Groups. For procedural information, see “How To Change Disk Device Properties” in Administering Device Groups in Sun Cluster System Administration Guide for Solaris OS. For conceptual information about node configuration and the behavior of failover and scalable data services, see Sun Cluster System Hardware and Software Components. If you configure all cluster nodes or zones identically, usage limits are enforced identically on primary and secondary nodes or zones. or a zone on phys-schost-1 but could potentially be switched over or failed over to phys-schost-2 or phys-schost-3 or a zone on either of these nodes. The project that is associated with Application 1 must be accessible on all three nodes (phys-schost-1, phys-schost-2, and phys-schost-3) or zones on these nodes. Project database information can be a local /etc/project database file or can be stored in the NIS map or the LDAP directory service. The Solaris Operating System enables for flexible configuration of usage parameters, and few restrictions are imposed by Sun Sun or zones. Identical limits can cause a ping-pong effect when an application reaches its memory limit and fails over to a secondary node or zone with an identical memory limit. Set the memory limit slightly higher on the secondary node or zone. or zone. In the following examples the resources are not shown explicitly. Assume that each resource has only one application. Failover occurs in the order in which nodes or zones or in the zone and the number of shares that are assigned to each active application. In these scenarios, assume the following configurations. All applications are configured under a common project. Each resource has only one application. The applications are the only active processes on the nodes or in the zones. The projects databases are configured the same on each node of the cluster or in each zone. You can configure two applications on a two-node physical host (phys-schost-1) as the default master of one application. You can configure the second physical host (phys-schost-2) as the default master for the remaining two applications. Assume the following example projects database file on every node. or zone. Meanwhile, the default master is running in the cluster. During failover, the application that fails over is allocated resources as specified in the configuration file on the secondary node or zone. In this example, the project database files on the primary and secondary nodes. Clients make data requests to the cluster through the public network. Each cluster node cluster node has its own Internet Protocol (IP) Network Multipathing configuration, which can be different from the configuration on other cluster nodes. node. nodes. The same multipathing group on a node can host any number of logical host name or shared address resources. For more information about logical host name and shared address resources, see the Sun Cluster Data Services Planning and Administration Guide for Solaris OS. The design of the Internet Protocol . Sun Cluster 3.2 support for the dynamic reconfiguration (DR) software feature is being developed in incremental phases. This section describes concepts and considerations for Sun Cluster 3.2 support of the DR feature. All the requirements, procedures, and restrictions that are documented for the Solaris DR feature also apply to Sun Cluster DR support (except for the operating environment quiescence operation). Therefore, review the documentation for the Solaris DR feature before by using the DR feature with Sun Cluster software. You should review in particular the issues that affect nonnetwork IO devices during a DR detach operation. The Sun Enterprise 10000 Dynamic Reconfiguration User Guide and the Sun Enterprise 10000 Dynamic Reconfiguration Reference Manual (from the Solaris 10 on Sun Hardware collection) are both available for download from.. Sun. Sun. If. If. If.
http://docs.oracle.com/cd/E19787-01/819-2969/6n57kl13e/index.html
CC-MAIN-2015-32
refinedweb
5,133
52.9
Contents - Versions, Patches, and Support - Compiler Compatibility - Coding and Diagnostics - Library Compatibility - Compile-Time Performance - Run-Time Performance and libCrun.so.1? - How can I dependably identify the C++ compiler in each new release? Every compiler predefines some macros that identify it. Compiler vendors tend to keep these predefined macros stable from release to release, and we in particular document them as a stable public interface. A good way to find out what compiler you have is to write a small program that tests for predefined macros and outputs a string suitable for your intended use. You can also write a pseudo-program and compile it with -E (or the equivalent for other compilers). See 'macros' in the index of the C++ User's Guide for a list of predefined C++ compiler macros. In particular, the value of __SUNPRO_CC, which is a three-digit hex number. The first digit is the major release. The second digit is the minor release. The third digit is the micro release. For example, C++ 5.9 is 0x590. (R) architecture #endif #ifdef __sparcv9 generate code for 64-bit SPARC architecture #endif #ifdef __i386 generate code for x86 architecture #endif #ifdef __amd64 generate code for x64 - How can I tell which C++ compiler versions are compatible? First, a definition: "Upward compatible" means that object code compiled with an older compiler can be linked with code from a later compiler, as long as the compiler that is used in the final link is the latest compiler in the mix. The C++ 4.0, 4.1, and 4.2 compilers are upward compatible. (There are some "name mangling" issues among the compiler versions that are documented in the C++ 4.2 manuals.) The current C++ compiler through the 5.0 compiler are upward compatible with the 4.2 compiler in compatibility mode (-compat) . The actual object code from the 4.2 compiler is fully compatible with the object code from the current version through version 5.0, but debugging information (stabs) emitted by later compilers is not compatible with earlier debuggers. Code compiled in the default standard mode by the current C++ compiler through version 5.0 as listed here at. headers and library have overloads for types float and long double as well. To avoid an ambiguous call you might need to add explicit casts when calling these functions with integer arguments. For example:#include <math.h> extern int x; double z1 = sin(x); // now ambiguous double z2 = sin( (double)x ); // OK float z3 = sin( (float)x ); // OK long double z4 = sin( (long double)x ); // OK The Solaris patches listed below provide full ANSI C++ <cmath> and <math.h> library support as implemented in the libm patch for Solaris 8 and 9. - Solaris 9 sparc (PatchId 111722-04) - Solaris 9 i386 (PatchId 111728-03) - Solaris 8 sparc (PatchId 111721-04) - Solaris 8 i386 (PatchId 112757-01) -), you can use the -xlang={f90|f95|f77} option. This option tells the driver to figure out exactly which libraries need to be on the link line and to figure out the order in which they need to appear. The -xlang option is not available for the C compiler. To mix C and Fortran routines, you must compile them with cc and link them using the Fortran linker. C. Coding and Diagnostics - Why do I get errors and warnings involving file foo.cc when I'm not compiling or including foo.cc in my program? - Why do I get "duplicate definition" error messages when I compile the foo.i file that is generated from the -P preprocessing option? - Why am I getting an error when I link a SPARC V9 archive library into a dynamic library? It worked in Sun Studio 8. - What causes this message: "SunWS_cache: Error: Lock attempt failed for SunWS_cache"? - Why do I get the following warning from the linker: "ld: warning: symbol 'clog' has differing types"? - Why does my multi-threaded program using STLport crash when I compile it with -xarch=v8plus or -xarch=v8plusa? - Why does the compiler now say that a call to abs() is ambiguous? - When do temporary objects get destroyed? - Why does the compiler report an ambiguity for the standard exception class? - Why does C++ 5.3 and later emit errors about throw specifications on my derived virtual functions? - Why do template instances turn up missing when I link my program? The instances seem to be in the template cache. - Why do I get a warning about a function not being expanded when I use +w2 and not when I use +w2 +d? - Can I use the -ptr option to have multiple template repositories, or to share repositories among different projects? If not, what can I do? - Why does fprintf("%s",NULL) cause a segmentation fault? - Depending on how I call sqrt(), I get different signs for the imaginary part of the square root of a complex number. What's the reason for this? - A friend function in a class template does not get instantiated and I get link-time errors. This worked with C++ 5.0, why doesn't it work now? - Why does the compiler say a member of an enclosing class is not accessible from a nested class? - What causes the "pure virtual function call" message at run time? - Why does the compiler say that a derived-class virtual function hides a base-class virtual function with a different signature? My other compiler doesn't complain about the code. - Why do I get errors and warnings involving file foo.cc when I'm not compiling or including foo.cc in my program? Searching" for details.. However, that option disables all searches for separate template definitions. The C++ standard library implementation relies on the compiler finding the separate definitions. You must then include all template definitions explicitly in your code so you cannot use the definitions-separate model. Refer to the C++ User's Guide sections 5.2.1 and 5.2.2 for further discussion of the template definitions model or refer to the index of the C++ User's Guide for pointers to descriptions of the definitions separate and definitions included models. - Why do I get "duplicate definition" error messages when I compile the foo.i file that is generated from the -P preprocessing option? code model is not usable within dynamic libraries. There are two solutions to the problem. - Recompile the object files with -xcode=pic13 or -xcode=32. This method is preferred, and nearly always the right thing. - Recompile the object files with -xcode=abs64. This method results in dynamic libraries that are not sharable. Each process must rewrite the library as it is copied into separate areas of memory. The method is useful for applications that run for a very long time under tight performance constraints and low system sharing. - What causes this message: "SunWS_cache: Error: Lock attempt failed for SunWS_cache"? There are two main causes for the "lock attempt failed" error message about the template cache: - Sometimes a compilation aborts or is killed in such a way that it does not release the lock it is holding on the cache. This situation could occur in old compiler versions. Newer versions and current patches to older compilers ensure that the lock is released no matter how the compiler exits. You could remove just the lock file, but the cache is probably corrupted, and will cause further problems. The safest fix is to delete the entire template cache. The template cache must be writable by the compiler process. Refer to the umask(1) man page for more information. In particular, you must be sure that the umask of a process that creates the cache or files in it allows writing by other processes that need to access the same cache. If the directory is mounted on an NFS file system, the system must be mounted for read/write. - Why do I get the following warning from the linker: "ld: warning: symbol 'clog' has differing types"? by specifying -library=iostream, you get the buffered standard error stream 'clog' in the global namespace. (Standard iostreams does not have this conflicting symbol.) We have adjusted headers and libraries to silently rename each of these 'clog' symbols so that you can use both in one program. However, we must retain the original symbol spellings as weak symbols in each of the libraries, so that old binaries looking for the original symbols can continue to link. Be sure to get iostream and math declarations by including the appropriate system headers rather than declaring any of these entities yourself. - Why does either the latest version of the compiler or with a patched earlier release before being linked into new programs. - Why does the compiler now say that a call to abs() is ambiguous? The C++ Standard in section 26.5 requires the following overloads of the abs function: - In <stdlib.h> and <cstdlib>int abs(int); long abs(long); <math.h> and <cmath>float abs(float); double abs(double); long double abs(long double); Solaris headers and libraries now comply with the C++ standard regarding math functions. If you include, for example, <math.h> but not <stdlib.h>, and invoke abs with an integer argument, the compiler must choose among the three floating-point versions of the functions. An integer value can be converted to any of the floating-point types, and neither conversion is preferred over the others. Reference: C++ standard section 13.3.3. The function call is therefore ambiguous. You will get an ambiguity error using any compiler that conforms to the C++ Standard. If you invoke the abs function with integer arguments, you should include standard header <stdlib.h> or <cstdlib> to be sure you get the correct declarations for it. If you invoke abs with floating-point values, you should also include <math.h> or <cmath>. Here's a simple recommended programming practice: if you include <math.h> or <cmath>, also include <stdlib.h> or <cstdlib>. Similar considerations apply to other math functions, like cos or sqrt. Solaris headers and libraries now comply with the C++ Standard, supplying float, double, and long double overloaded versions of the functions. If you invoke, for example, sqrt with an integer value, the code formerly compiled because only one version of sqrt was available. With three floating-point versions available, you must cast the integer value to the floating-point type that you want.double root_2 = sqrt(2); // error double root_2 = sqrt(2.0); // OK double x = sqrt(int_value); // error double x = sqrt(double(int_value)); // OK - When do temporary objects get destroyed? The compiler creates a temporary object sometimes for convenience, and sometimes because the language rules require it. For example, a value returned by a function is a temporary object, and the result of a type conversion is a temporary object. The original C++ rule was that the temporary object ("temp") could be destroyed at any time up until the end of the block in which it was created. Sun C++ compilers. - Why does the compiler report an ambiguity for the standard exception class? On Solaris, standard header <math.h> has a declaration for a struct "exception", as required by standard Unix. If you bring the C++ standard exception class into global scope with a using-declaration or using-directive, it creates a conflict.// Example 1 #include <math.h> #include <exception> using namespace std; // using-declaration exception E; // error, exception is ambiguous // Example 2: #include <math.h> #include <exception> using std::exception; // using-directive exception E; // error, multiple declaration for exception Name resolution is slightly different for using-declarations compared to using-directives, so the error messages are not quite the same. Workarounds: - Use <cmath> instead of <math.h>. On Solaris, <cmath> contains only the declarations specified by the C and C++ standards. If you need Unix-specific features of <math.h>, you can't use this workaround. - Don't write using std::exception; when you also use <math.h>. Write std::exception explicitly, or use a typedef, to access the standard exception class as in this example:#include <math.h> #include <exception> std::exception E; // OK typedef std::exception stdException; // OK stdException F; // OK - Don't write using namespace std;. The C++ namespace std contains so many names that you are likely to have conflicts with application code or third-party libraries when you use this directive in real-world code. (Books and articles about C++ programming sometimes have this using-directive to reduce the size of small examples.) Use individual using-declarations or explicitly qualify names. - Why does C++ 5.3 and later emit errors about throw specifications on my derived virtual functions? A C++ rule newly enforced by the C++ compiler since version 5.3 is that a virtual function in a derived class can allow only the exceptions that are allowed by the function it overrides. The overriding function can be more restrictive, but not less restrictive. Consider the following example:class Base { public: // might throw an int exception, but no others virtual void f() throw(int); }; class Der1 : public Base { public: virtual void f() throw(int); // ok, same specification }; class Der2 : public Base { public: virtual void f() throw(); // ok, more restrictive }; class Der3 : public Base { public: virtual void f() throw(int, long); // error, can't allow long }; class Der4 : public Base { public: virtual void f() throw(char*); // error, can't allow char* }; class Der5 : public Base { public: virtual void f(); // error, allows any exception }; This code shows the reason for the C++ rule:#include "base.h" // declares class Base void foo(Base* bp) throw() { try { bp->f(); } catch(int) { } } Since Base::f() is declared to throw only an int exception, function foo can catch int exceptions, and declare that it allows no exceptions to escape. Suppose someone later declared class Der5, where the overriding function could throw any exception, and passed a Der5 pointer to foo. Function foo would become invalid, even though nothing is wrong with the code visible when function foo is compiled. - Why do template instances turn up missing when I link my program? The instances seem to be in the template cache. Starting with C++ 5.5, the compiler does not use a template cache by default. Therefore we recommend that when you upgrade to C++ 5.5 or newer, you recompile your code by specifying -instances=global in order to use the default template compilation model. The template cache maintains a list of dependencies between the object files that the compiler generates, and the template instances in the cache. Note, however, that the compiler now only uses the template cache when you specify -instances=extern. If you move or rename object files, or combine object files into a library, you lose the connection to the cache. Here are two alternatives: - The link lines also need -instances=extern. - Generate object files directly into the final directory. The template cache will be in that same directory. Do not do this:example% CC -c -instances=extern f1.cc example% mv f1.o /new/location/for/files Do this instead:example% CC -c -instances=extern f1.cc -o /new/location/for/files/f1.o You can encapsulate the process in makefile macros. - You can create intermediate archive (.a) files using CC -xar. Each archive then contains all the template instances used by the objects in the archive. You then link those archives into the final program. Some template instances are duplicated in different archives, but the linker keeps only one of each.example% CC -c -instances=extern f1.cc f2.cc f3.cc example% CC -xar f1.o f2.o f3.o -o temp1.a example% CC -c -instances=extern f4.cc f5.cc f6.cc example% CC -xar f4.o f5.0 f6.0 -o temp2.a example% CC -c -instances=extern main.cc example% CC main.o temp1.a temp2.a -o main - Why do I get a warning about a function not being expanded when I use +w2 and not when I use +w2 +d? The C++ compiler has two kinds of inlining: C++ inline function inlining, which is done by the parser, and optimization inlining, which is done by the code generator. The C and Fortran compilers have only optimization inlining. (The same code generator is used for all compilers on a platform.) The C++ compiler's parser attempts to expand inline any function that is declared implicitly or explicitly as inline. If the function is too large, the parser emits a warning only when you use the +w2 option. The +d option prevents the parser from attempting to inline any function. This is why the warning disappears when you use +d. (The -g option also turns off the inlining of C++ inline functions.) The -xO options do not affect this type of inlining. The optimization inlining does not depend on the programming language. When you select an optimization level of -xO4 or higher, the code generator examines all functions, independent of how they were declared in source code, and replaces function calls with inline code wherever it thinks the replacement will be beneficial. No messages are emitted about optimization inlining (or its failure to inline functions). The +d option does not affect optimization inlining. - Can I use the -ptr option to have multiple template repositories, or to share repositories among different projects? If not, what can I do? The -ptr option is not supported in versions 5.0 through the current version.. - Why does fprintf("%s",NULL) cause a segmentation fault? Some applications erroneously assume that a null character pointer should be treated the same as a pointer to a null string. A segmentation violation occurs in these applications when a null character pointer is accessed. There are several reasons for not having the *printf() family of functions check for null pointers. These include, but are not limited to the following reasons: - Doing so provides a false sense of security. It makes programmers think that passing null pointers to printf() is OK. - It encourages programmers to write non-portable code. ANSI C, XPG3, XPG4, SVID2, and SVID3 say that printf("%s", pointer) needs to have pointer point to a null terminated array of characters. - It makes debugging harder. If the programmer passes a null pointer to printf() and the program drops core, it is easy to use a debugger to find which printf() call gave the bad pointer. However, if printf() hid the bug by printing "(null pointer)," then other programs in a pipeline are likely to try interpreting "(null pointer)" when they are expecting some real data. At that point it may be impossible to determine where the real problem is hidden. If you have an application that passes null pointers to *printf, you can use a special shared object /usr/lib/0@0.so.1 that provides a mechanism for establishing a value of 0 at location 0. Because this library masks all errors involving the dereference of a null pointer of any type, you should use this library only as a temporary workaround until you can correct the code. - Depending on how I call sqrt(), I get different signs for the imaginary part of the square root of a complex number. What's the reason for this? The implementation of this function is aligned with the C99 csqrt Annex G specification. For example, here's the output from the following code example : complex sqrt (3.87267e-17, 0.632456) float sqrt (3.87267e-17, -0.632456) - Example using libcomplex in compatibility mode:#include <iostream.h> #include <math.h> #include <complex.h> int main () { complex ctemp(-0.4,0.0); complex c1(1.0,0.0); double dtemp(-0.4); cout<< "complex sqrt "<< sqrt(ctemp)<<endl; cout<< "float sqrt "<< sqrt(c1*dtemp)<<endl; } - Example using libCstd in standard mode:#include <iostream> #include <math.h> #include <complex> using namespace std; int main () { complex<double> ctemp(-0.4,0.0); complex<double> c1(1.0,0.0); double dtemp(-0.4); cout<< "complex sqrt "<< sqrt(ctemp)<<endl; cout<< "float sqrt "<< sqrt(c1*dtemp)<<endl; } - The sqrt function for complex is implemented using atan2. The following example illustrates the problem by using atan2. The output of this program is:c=-0.000000 b=-0.400000 atan2(c, b)=-3.141593 a=0.000000 b=-0.400000 atan2(a, b)=3.141593 In one case, the output of atan2 is negative and in the other case it's positive. It depends on whether -0.0 or 0.0 gets passed as the first argument.#include <stdio.h> #include <math.h> int main() { double a = 0.0; double b = -0.4; double c = a*b; double d = atan2(c, b); double e = atan2(a, b); printf("c=%f b=%f atan2(c, b)=%f\n", c, b, d); printf("a=%f b=%f atan2(a, b)=%f\n", a, b, e); } -? The C++ compiler versions 5.7 and newer, in default standard mode,(-compat=5) allow nested classes to access private members of the enclosing class. Compilers prior to version 5.7 follow the standard and nested classes have no special access to the enclosing class. Consider the following exampel } }; }; - What causes the "pure virtual function call" message at run time? A "pure virtual function called" message always arises because of an error in the program. The error occurs in either of the following two ways: - You can cause this error by passing the "this" parameter from a constructor or destructor of an abstract class to an outside function. During construction and destruction, "this" has the type of the constructor's or destructor's own class, not the type of the class ultimately being constructed. You can then wind up trying to call a pure virtual function. Consider the following example:class Abstract; void f(Abstract*); class Abstract { public: virtual void m() = 0; // pure virtual function Abstract() { f(this); } // constructor passes "this" }; void f(Abstract* p) { p->m(); } When f is called from the Abstract constructor, "this" has the type "Abstract*", and function f attempts to call the pure virtual function m. - You can also cause this error by trying to call a pure virtual function that has been defined without using explicit qualification. You can provide a body for a pure virtual function, but it can be called only by qualifying the name at the point of the call, bypassing the virtual-call mechanism.class Abstract { public: virtual void m() = 0; // body provided later void g(); }; void Abstract::m() { ... } // definition of m void Abstract::g() { m(); // error, tries to call pure virtual m Abstract::m(); // OK, call is fully qualified } - Why does the compiler say that a derived-class virtual function hides a base-class virtual function with a different signature? My other compiler doesn't complain about the code. The C++ rule is that overloading occurs only within one scope, never across scopes. A base class is considered to be in a scope that surrounds the scope of a derived class. Any name declared in a derived class therefore hides, and cannot overload, any function in a base class. This fundamental C++ rule predates the ARM. If another compiler does not complain, it is doing you a disservice, because the code will not behave as you probably expect. Our compiler issues a warning while accepting the code. (The code is legal, but probably does not do what you want.) If you wish to include base-class functions in an overloaded set, you must do something to bring the base-class functions into the current scope. - What library configuration macros can I modify to get different features from the C++ compiler runtime libraries? - When do I need to use -I and -L options? -? - What library configuration macros can I modify to get different features from the C++ compiler runtime libraries? Do not attempt to define, undefine, or modify any of the library configuration macros. The library headers must match the way the libraries were built. Otherwise, your program might not compile, might not link, and probably will not run correctly. - When do I need to use -I and -L options? Specify the -I option to point to directories that contain your project header files when these header files are not in the same directory as the files that include them, or to point to directories that contain header files for third-party libraries that you acquire. Specify the -L options to point to directories that contain libraries that you build, or to third-party libraries that you acquire. Never use -I to point into /usr/include or into the compiler installation area. Never use -L to point into /lib, /usr/lib, or into the compiler installation area. The CC compiler driver knows the location of the system headers and libraries and follows the correct search order. You can cause the compiler to find the wrong headers or libraries by using -I or -L options that point into system directories. -. - What standard library functionality is missing from libCstd? The standard library was originally (in C++ 5.0) built without support for features which required member template and partial specialization in the compiler. Although these features have been available since C++ 5.1, they cannot be turned on in the standard library because they would compromise backward compatibilitiy. The following is a list of missing functionality for each disabled feature. - Disabled feature: member template functions - In class complex in <complex>:template <class X> complex<T>& operator= (const complex<X>& rhs) template <class X> complex<T>& operator+= (const complex<X>& rhs) template <class X> complex<T>& operator-= (const complex<X>& rhs) template <class X> complex<T>& operator*= (const complex<X>& rhs) template <class X> complex<T>& operator/= (const complex<X>&) - In class pair in <utility>: template<class U, class V> pair(const pair<U, V> &p); - In class locale in <locale>: template <class Facet> locale combine(const locale& other); - In class auto_Ptr in <memory>: auto_ptr(auto_ptr<Y>&); auto_ptr<Y>& operator =(auto_ptr<Y>&); template <class Y> operator auto_ptr_ref<Y>(); template <class Y> operator auto_ptr<Y>(); - In class list in <list>: Member template sort. - In most template classes: Template constructors. - Disabled feature: member template classes In class auto_ptr in <memory>: template <class Y> class auto_ptr_ref{}; auto_ptr(auto_ptr(ref<X>&); - Disabled feature: overloading of function template arguments that are partial specializations In <deque>, <map>, <set>, <string>, <vector> and <iterator> the following template functions (non-member) are not supported: - For classes map, multimap, set, multiset, basic_string, vector, reverse_iterator, and istream_iterator: bool operator!= () - For classes map, multimap, set, multiset, basic_string, vector and reverse_iterator: bool operator> () bool operator>= () bool operator<= () - For classes map, multimap, set, multiset, basic_string, and vector: void swap() - Disabled feature: partial specialization of template classes with default parameters In <algorithm>, the following template functions (non-member) are not supported: count(), count_if() In <iterator>, the following templates are not supported: template <class Iterator> struct iterator_traits {} template <class T> struct iterator_traits<T*> {} template <class T> struct iterator_traits<const T*>{} template typename iterator_traits ::difference_type distance(InputIterator first, InputIterator last); - What are the consequences of the missing standard library functionality? Some code that is valid according to the C++ standard will not compile. The most common example is creating maps where the first element of the pair could be const but isn't declared that way. The member constructor template would convert pair<T, U> to pair<const T, U> implicitly when needed. Because that constructor is missing, you get compilation errors instead. Since you are not allowed to change the first member of a pair in a map anyway, the simplest fix is to use an explicit const when creating the pair type. For example, instead of pair<int, T> use pair<const int, T>; instead of map<int, T> use map<const int, T>. - Is there a version of tools7 library that works with standard streams? Will there be a tools8 available soon? Beginning with C++ 5.3, we supply a version of Tools.h++ v7 that works with libCstd. Use the option -library=rwtools7_std to compile and link with this library. Note: You cannot use -library=iostream with rwtools7_std. RogueWave now supplies Tools.h++ only as part of their SourcePro product. There is no Tools.h++ version 8. E. Compile-Time Performance - I recently upgraded to the Sun Open Network Environment (Sun ONE) Studio 8 C++ 5.5 compiler and noticed a significant increase in link-time during compilation. Why did this happen and is there a workaround? -? - Can a single compilation process be distributed onto multiple processors? More generally, does a multiprocessor (MP) system always have better compile-time performance? -. - How come I'm not seeing an improvement in compile time even though I've started using the precompiled header facility of the C++ compiler? Using precompiled headers does not guarantee faster compile times. Precompiled headers impose some overhead that is not present when you compile files directly. To gain a performance advantage, the precompiled headers must have some redundancy that precompilation can eliminate. For example, a program that is highly likely to benefit from precompilation is one that includes many system headers, iostreams, STL headers, and project headers. Those files contain conditionally-compiled code. Some headers are included multiple times, and the compiler must scan over the entire file if only to discover there is nothing to do in the redundant includes. System headers typically have hundreds of macros to expand. Using a precompiled header means opening one file instead of dozens. The multiple includes that do nothing are eliminated, as are comments and extra white space. The macros in the headers are pre-expanded. Typically, these savings add up to a significant reduction in compile time. - Why does a large file take so much longer to compile than a shorter one? The size of the file is probably not the issue so here are three likely causes for the delay. - The size of functions in the file and the level of optimization Large functions at high optimizations take a long time to process, and can require lots of memory. If the code uses large macros extensively, a function that looks small might become very large after macro expansion. Try compiling without any optimization (no -xO? or -O? option). If the compilation completes quickly, the problem is probably one or more very large functions in the file, and the time and memory necessary to optimize it. In addition, make sure the computer used for compilation has plenty of physical memory for the compilation run. If you don't have enough memory, the optimization phase can cause thrashing. - Inline functions Inline functions (in C and C++) act like macros where compilation time is concerned. When a function call is expanded inline, it can turn into a lot of code. The compiler then is dealing with one large function instead of 2 or more small functions. Compilations often proceed more quickly when you disable function inlining. Of course, the resulting code will probably run more slowly. See the description of -xinline and "Using Inline Functions" in the C++ User's Guide for more information. C++ class templates C++ templates cause the compiler to generate code based on the templates invoked. One line of source code can require the compiler to generate one or more template functions. It's not that templates themselves slow down compilation significantly, but that the compiler has more code to process than is apparent by looking at the original source code. For example, if it were not for the standard library already having the functions, this line through the current version. current compiler through version 5.2 have added further improvements. In many cases, the binary decreases from 25% to over 50% in size. The improvements show up mostly for code using namespaces, templates, and class hierarchies with many levels of inheritance. - Can a single compilation process be distributed onto multiple processors? More generally, does a multiprocessor (MP) system always have better compile-time performance? The compiler itself is not multithreaded. You can expect better performance with MP systems, because the computer always has many other processes running at the same time as any one compilation. If you use dmake (one of the tools that ships with the compiler), you can run multiple compilations simultaneously. C++ compiler option -sync_stdio=no at link time to fix this problem or add a call to the sync_with_stdio(false) function and recompile. The major performance problem with stdlib 2.1.1 is that it synchronizes C stdio with C++ streams by default. Each output to cout is flushed immediately. If your program does a lot of output to cout but not to stdout, the excess buffer flushing can add significantly to the run-time performance of the program. The C++ standard requires this behavior, but not all implementations meet the standard. The following program demonstrates the synchronization problem. It must print "Hello beautiful world" followed by a newline:#include <iostream> #include <stdio.h> int main() { std::cout << "Hello "; printf("beautiful "); std::cout << "world"; printf("\n"); }If cout and stdout are independently buffered, the output could be scrambled. If you cannot recompile the executable, specify the new C++ compiler option -sync_stdio=no at link time. This option causes sync_with_stdio( ) to be called at program initialization before any program output can occur. If you can recompile, add a call to the sync_with_stdio(false) function before any program output thereby specifying that the output does not need to be synchronized. Here is a sample call:#include <iostream> int main(int argc, char** argv) { std::ios::sync_with_stdio(false); } The call to sync_with_stdio should be the first one in your program. See the C++ User's Guide or the C++ man page CC(1) for more information on -sync_stdio. - Does C++ always inline functions marked with inline keyword? Why didn't I see functions inlined even though I wrote them that way? Fundamentally, the compiler treats the inline declaration as a guidance and attempts to inline the function. Compiler versions 5.1 through the current version utilize a revamped inlining algorithm to make it understand more constructs. However, there are still cases where it will not succeed. The restrictions are: - Some rarely executed function calls are not expanded in the current compiler through version 5.2. This change helps achieve a better balance of compilation speed, output code size, and run-time speed. For example, expressions used in static variable initialization are only executed once and thus function calls in those expressions are not expanded. Note that the inline function func might not be expanded when called in an initialization expression of static variables, it could still be inlined in other places. Similarly, function calls in exception handlers might not be expanded, because those code is rarely executed. - Recursive functions are inlined only to the first call level. The compiler cannot inline recursive function calls indefinitely. The current implementation stops at the first call to any function that is being inlined. - Sometimes even calls to small functions are not inlined. The reason for this is that the total expanded size may be too large. For example, func1 calls func2, and func2 calls func3, and so forth. Even if each of these functions is small and there are no recursive calls, the combined expanded size could be too large for the compiler to expand all of them. Many standard template functions are small, but have deep call chains. In those cases, only a few levels of calls are expanded. - C++ inline functions that contain goto statements, loops, and try/catch statements are not inlined by the compiler. However, they might be inlined by the optimizer at the -xO4 level. - The compiler does not inline large functions. Both the compiler and the optimizer of the C++ compiler place a limit on the size of inlined functions. This limitation is our general recommendation. For special cases, please consult with technical support to learn about the internal options that raise or lower this size limitation. - A virtual function cannot be inlined, even though it is never redefined in subclasses. The reason is that the compiler can not know whether a different compilation unit contains a subclass and a redefinition of the virtual function. Note that in some previous versions, functions with complicated if-statements and return-statements could not be inlined. This limitation has been removed. Also, the default limitation on inline function size has been raised. With some programs, these changes will cause more functions to be inlined and can result in slower compilations and more code generation. To completely eliminate the inlining of C++ inline functions, use the +d option. Separately, the optimizer inlines functions at higher optimization levels (-xO4) based on the results of control flow and so forth. This inlining is automatic and is done irrespective of whether you declare a function "inline" or not. Copyright © 2007 Sun Microsystems, Inc., All rights reserved. Use is subject to license terms.
http://developers.sun.com/sunstudio/documentation/ss12/mr/READMEs/c++_faq.html
crawl-001
refinedweb
6,136
57.16
.Net Framework Source Code to be Available with Visual Studio 2008 Along with the release of Visual Studio 2008 later this year, Microsoft is making the source code of parts of of the .NET Framework Libraries under the Microsoft Reference License. The set of libraries initially includes the Base Class Libraries (System namespace, IO, Text, Collections, CodeDom, Regular Expressions, etc), ASP.NET, WinForms, and WPF. Microsoft will add to this list as time goes on. When Visual Studio 2008 ships, developers will be able to accept to the terms of the agreement, and download a package of the shared libraries. Additionally, Visual Studio 2008 Integration supports debugging the framework source code using symbols published via a web server. [via: Scott Guthrie's Blog] Enjoy!
http://blogs.microsoft.co.il/bursteg/2007/10/03/net-framework-source-code-to-be-available-with-visual-studio-2008/
CC-MAIN-2014-15
refinedweb
124
64.41
How to capture package version from SCM: git¶ The Git() helper from tools, can be used to capture data from the git repo where the conanfile.py recipe lives, and use it to define the version of the conan package. from conans import ConanFile, tools def get_version(): git = tools.Git() try: return "%s_%s" % (git.get_branch(), git.get_revision()) except: return None class HelloConan(ConanFile): name = "Hello" version = get_version() def build(self): ... In this example, the package created with conan create will be called Hello/branch_commit@user/channel. Note that the get_version() returns None if it is not able to get the git data. This is necessary, when the recipe is already in the conan cache, and the git repository might not be there, a None value makes conan get the version from the metadata. How).
https://docs.conan.io/en/1.6/howtos/capture_version.html
CC-MAIN-2021-39
refinedweb
135
64.1
Concerning Marshalling Discussion in 'Ruby' started by christophe.poucet@gmail Chunking Marshalling with JAXB, Apr 13, 2006, in forum: Java - Replies: - 0 - Views: - 607 Re: Misuse of XML namespaces; call for help in marshalling argumentsPeter Flynn, Aug 6, 2004, in forum: XML - Replies: - 2 - Views: - 454 - Peter Flynn - Aug 9, 2004 Help Needed!! Marshalling MessageTerence, Nov 11, 2003, in forum: C++ - Replies: - 2 - Views: - 371 - Frank Schmitt - Nov 12, 2003 Marshalling in COMdev_chandok, Oct 26, 2004, in forum: C++ - Replies: - 1 - Views: - 718 - red floyd - Oct 26, 2004 what does marshalling means ?gk, May 23, 2006, in forum: Java - Replies: - 8 - Views: - 10,458 - Kent Paul Dolan - May 24, 2006
http://www.thecodingforums.com/threads/concerning-marshalling.824683/
CC-MAIN-2014-52
refinedweb
110
69.65
As cool as neural networks are, the first time that I felt like I was building true AI was not when working on image classification or regression problems, but when I started working on deep reinforcement learning. In this article I would like to share that experience with you. By the end of the tutorial, you will have a working PyTorch reinforcement learning agent that can make it through the first level of Super Mario Bros (NES). This tutorial is broken into 3 parts: - Introduction to reinforcement learning - The Super Mario Bros (NES) environment - Building an agent that can get through this environment You can run the code for free on the ML Showcase. By the end of this tutorial, you will have built an agent that can do this: Pre-requisites - Have a working knowledge of deep learning and convolutional neural networks - Have Python 3+ and a Jupyter Notebook - Optional: be comfortable with PyTorch Bring this project to life What is reinforcement learning? Reinforcement learning is the family of learning algorithms in which an agent learns from its environment by interacting with it. What does it learn? Informally, an agent learns to take actions that bring it from its current state to the best (optimal) reachable state. I find that examples always help. Examine the following 3×3 grid: This grid is our agent's environment. Each square in this environment is called a state, and an environment will always have a start and end state which you can see highlighted in green and red, respectively. Much like a human, our agent will learn from repetition in a process called an episode. At the start of an episode an agent will begin at the start state, and it will keep making actions until it arrives at the end state. Once the agent makes it to the end state the episode will terminate, and a new one will begin with the agent once again beginning from the start state. Here I've just given you a grid, but you can imagine more realistic examples. Imagine you're in a grocery store and you look at your shopping list: you need to buy dried rosemary. Your start state would be your location when you enter the store. The first time you try to find rosemary you might be a bit lost, and you probably won't move the most direct way through the store to find the "Herbs and Spices" aisle. But on each subsequent visit you'll get better and better at finding it, until you reach the point where you can move directly to the correct aisle when you walk in. When an agent lands on a state it accumulates the reward associated with that state, and a good agent wants to maximize the accumulated discounted reward along an episode (I'll explain what discounted means later). Suppose our agent can move vertically, horizontally, and diagonally. Based on this information, you can see that the best way for the agent to make it to the end state is to move diagonally (directly towards it), since it would accumulate a reward of -1 + 0 = -1. If the agent would move in any other way towards the end state, it would accumulate a reward less than -1. For example, if the agent were to move right, right, up, and then up once more, it would get a reward of -1 + (-1) + (-1) + 0 = -3, which is less than -1. Moving diagonally is therefore called the optimal policy π*, where π is a function which takes in a state and outputs the action the agent will take from that given state. You can logically deduce the best policy for this simple grid, but how would we solve this problem using reinforcement learning? Since this article is about deep q-learning, we first need to understand state-action values. Q-learning We mentioned that for every state in the above grid problem, the agent can move to any state that is touching the current state; so our set of actions for every state is vertical, horizontal, and diagonal. A state-action value is the quality of being on a particular state and taking a particular action off of that state. Every single state and action pair, except for the end state, should have a value. We denote these state-action values as Q(s, a) (the quality of the state-action pair), and all state-action values together form something called a Q-table. Once the Q-table is learned, if an agent is on a particular state s, it will take the action a from s such that Q(s, a) has the highest value. Mathematically, if an agent is on state s, it will take argmaxaQ(s, a). But how are these values learned? Q-learning uses a variation of the Bellman-update equation, and is more specifically a type of temporal difference learning. The Q-learning update equation is: Q(st, at)←Q(st, at) + α(rt+1 + γmaxaQ(st+1, a) - Q(st, at)) Essentially, this equation says that the quality of being on state st and taking action at is not just defined by the immediate reward that you get from taking that action, but also by the best possible move you can take on the state st+1 after landing on it (the maxaQ(st+1, a) term). The γ parameter is called the discount factor, and is a value between 0 and 1 which defines how important future states should be. The value α is called the learning rate, and tells us how large to make our Q-updates. This should make you recall when I mentioned that the goal of a reinforcement learning agent is to maximize the accumulated discounted reward. If we re-rewrite this equation as: Q(st, at)←Q(st, at) + αδ You will notice that when δ≈0 the algorithm converges, since we are no longer updating Q(st, at). This value, δ, is known as the temporal difference error, and the job of Q-learning is to make that value go to 0. Now, let's use Q-learning to solve the grid problem in Python. The main function to solve the grid problem is: def train_agent(): num_episodes = 2000 agent = Agent() env = Grid() rewards = [] for _ in range(num_episodes): state = env.reset() episode_reward = 0 while True: action_id, action = agent.act(state) next_state, reward, terminal = env.step(action) episode_reward += reward agent.q_update(state, action_id, reward, next_state, terminal) state = next_state if terminal: break rewards.append(episode_reward) plt.plot(rewards) plt.show() return agent.best_policy() print(train_agent()) The idea is really simple. Given a state, the agent takes the action that has the highest value, and after taking that action, the Q-table gets updated using the Bellman equation above. The next state then becomes the current state, and the agent continues using this pattern. If the agent lands on the terminal state then a new episode starts. The Q-table update method is simply: class Agent(): ... def q_update(self, state, action_id, reward, next_state, terminal): ... if terminal: target = reward else: target = reward + self.gamma*max(self.q_table[next_state]) td_error = target - self.q_table[state, action_id] self.q_table[state, action_id] = self.q_table[state, action_id] + self.alpha*td_error ... After running for 1000 episodes, I got the final policy to be: You may notice that some of the arrows don't make sense. For example, if you were at the top left of the grid, shouldn't the agent want to move right instead of down? Just remember, Q-learning is a greedy algorithm; the agent does not make it to the top left enough times to figure out what the best policy is from that position. What matters is that from the start state, it figured out the best policy is to move diagonally. Double Q-Learning There is one major issue with Q-learning that we need to deal with: over-estimation bias, which means that the Q-values learned are actually higher than they should be. Mathematically, maxaQ(st+1, a) converges to E(maxaQ(st+1, a)), which is higher than maxa(E(Q(st+1, a)), the true Q-value (I won't prove that here). To get more accurate Q-values, we use something called double Q-learning. In double Q-learning we have two Q-tables: one which we use for taking actions, and another specifically for use in the Q-update equation. The double Q-learning update equation is: Q*(st, at)←Q*(st, at) + α(rt+1 + γmaxaQT(st+1, a) - Q*(st, at)) where Q* is the Q table that gets updated, and QT is the target table. QT copies the values of Q* every n steps. Below are a few code snippets to demonstrate the changes: class AgentDoubleQ(): ... def q_update(self, state, action_id, reward, next_state, terminal): state = state[0]*3 + state[1] next_state = next_state[0]*3 + next_state[1] if terminal: target = reward else: target = reward + self.gamma*max(self.q_target[next_state]) td_error = target - self.q_table[state, action_id] self.q_table[state, action_id] = self.q_table[state, action_id] + self.alpha*td_error def copy(self): self.q_target = copy.deepcopy(self.q_table) ... def train_agent_doubleq(): ... while True: action_id, action = agent.act(state) next_state, reward, terminal = env.step(action) num_steps += 1 if num_steps % agent.copy_steps == 0: agent.copy() episode_reward += reward agent.q_update(state, action_id, reward, next_state, terminal) state = next_state if terminal: break ... Below is a plot of normalized rolling average reward over 1000 episodes. From this plot, it might be hard to tell what the advantage of double Q-learning is over Q-learning, but that's because our state space is really small (only 9 states). When we get much larger state spaces, double Q-learning really helps to speed up convergence. Super Mario Bros (NES) Now that you have a brief overview of reinforcement learning, let's build our agent that can make it through the first level of Super Mario Bros (NES). We will be using the gym-super-mario-bros library, built on top of the OpenAI gym. For those not familiar with gym, it is an extremely popular Python library that provides ML enthusiasts with a set of environments for reinforcement learning. Below is the code snippet to instantiate our environment and view the size of each state, as well as the action space: import gym_super_mario_bros env = gym_super_mario_bros.make('SuperMarioBros-1-1-v0') print(env.observation_space.shape) # Dimensions of a frame print(env.action_space.n) # Number of actions our agent can take You will see that the observation space shape is 240 × 256 × 3 (240 and 256 represent the height and width respectively, and 3 represents the 3 color channels). The agent can take 256 different possible actions. In double deep Q-learning, reducing the state and action space sizes speeds up convergence of our model. A nice part of gym is that we can use gym's Wrapper class to change the default settings originally given to us. Below I have defined a few classes that will help our agent learn faster. def make_env(env): env = MaxAndSkipEnv(env) env = ProcessFrame84(env) env = ImageToPyTorch(env) env = BufferWrapper(env, 4) env = ScaledFloatFrame(env) return JoypadSpace(env, RIGHT_ONLY) This function applies 6 different transformations to our environment: - Every action the agent makes is repeated over 4 frames - The size of each frame is reduced to 84×84 - The frames are converted to PyTorch tensors - Only every fourth frame is collected by the buffer - The frames are normalized so that pixel values are between 0 and 1 - The number of actions is reduced to 5 (such that the agent can only move right) Building an agent for Super Mario Bros (NES) Let's finally get to what makes deep Q-learning "deep". From the way we've set up our environment, a state is a list of 4 contiguous 84×84 pixel frames, and we have 5 possible actions. If we were to make a Q-table for this environment, the table would have 5×25684×84×4 values, since there are 5 possible actions for each state, each pixel has intensities between 0 and 255, and there are 84×84×4 pixels in a state. Clearly, storing a Q-table that large is impossible, so we have to resort to function approximation in which we use a neural network to approximate the Q-table; that is, we will use a neural network to map a state to its state-action values. In tabular (table-based) double Q-learning, recall that the update equation is: Q*(st, at)←Q*(st, at) + α(rt+1 + γmaxaQθ(st+1, a) - Q*(st, at)). rt+1 + γmaxaQθ(st+1, a) is considered our target, and Q*(st, at) is the value predicted by our network. Using some type of distance-based loss function (mean-squared error, Huber loss, etc.), we can optimize the weights of our deep Q-networks using gradient descent. Before getting to the details of how we train our agent, let's first build the DQN architecture that we will use as a function approximator. class DQNSolver(nn.Module):) Our DQN is a convolutional neural net with 3 convolutional layers and two linear layers. It takes two arguments: input_shape and n_actions. Of course, the input shape we will provide is 4×84×84, and there are 5 actions. We have chosen to use a convolutional neural net because they are ideal for image-based regression. Now that we have our neural net architecture set up, let's go through the "main function" of our code. def run(): env = gym_super_mario_bros.make('SuperMarioBros-1-1-v0') env =, exploration_max=0.02, exploration_min=0.02, exploration_decay=0.99) num_episodes = 10000 env.reset() total_rewards = [] for ep_num in tqdm(range(num_episodes)): state = env.reset() state = torch.Tensor([state]) total_reward = 0 while True: action = agent.act(state) state_next, reward, terminal, info = env.step(int(action[0])) total_reward += reward state_next = torch.Tensor([state_next]) reward = torch.tensor([reward]).unsqueeze(0) terminal = torch.tensor([int(terminal)]).unsqueeze(0) agent.remember(state, action, reward, state_next, terminal) agent.experience_replay() state = state_next if terminal: break total_rewards.append(total_reward) print("Total reward after episode {} is {}".format(ep_num + 1, total_rewards[-1])) num_episodes += 1 It looks almost identical to the main function of the grid problem, right? The only differences you might see are the remember and experience_replay methods. In typical supervised learning, a neural network uses batches of data to update its weights. In deep Q-learning the idea is the same, except these batches of data are called batches of experiences, where an experience is a (state, action, reward, next_state, terminal) tuple. Instead of throwing away experiences like we did in the grid problem, we can store them in a buffer to use later with the experience_replay method, the agent just has to sample a batch of experiences and use the double Q-update equation to update the network weights. Now, let's go over the most important methods of our agent: remember, recall, and experience_replay. class DQNAgent: ... def remember(self, state, action, reward, state2, recall(self): # Randomly sample 'batch size' experiences idx = random.choices(range(self.num_in_queue), k=self.memory_sample_size) STATE = self.STATE_MEM[idx].to(self.device) ACTION = self.ACTION_MEM[idx].to(self.device) REWARD = self.REWARD_MEM[idx].to(self.device) STATE2 = self.STATE2_MEM[idx].to(self.device) DONE = self.DONE_MEM[idx].to(self.device) return STATE, ACTION, REWARD, STATE2, DONE def experience_replay(self): if self.step % self.copy == 0: self.copy_model() if self.memory_sample_size > self.num_in_queue: return STATE, ACTION, REWARD, STATE2, DONE = self.recall() self.optimizer.zero_grad() # loss = self.l1(current, target) loss.backward() # Compute gradients self.optimizer.step() # Backpropagate error ... ... So what's going on here? In the remember method, we just push an experience onto the buffer so that we can use that data for later. The buffer has a fixed size, so it has a deque data structure. The recall method just samples a batch of experiences from memory. In the experience_replay method, you will notice that we have two Q-networks: target net and local net. This is analogous to how we had the target and local Q-tables for the grid problem. We copy the local weights to be the target weights, sample from our memory buffer, and just apply the Double Q-learning update equation. This method is what will allow our agent to learn. Running the code You can run the full code for free on the ML Showcase. The code has options to allow the user to run either deep Q-learning or double deep Q-learning, however for comparison, here are a few plots that compare the DQN performance to the DDQN performance: You will notice that the DQN at 10,000 episodes has the same performance as the DDQN in just 1,000 episodes (look at the average reward plot on the left). The code for the single DQN is available just for educational purposes; I highly recommend you stick to training the DDQN. Conclusion Congratulations! If you're anything like me, seeing Mario consistently make it through the level will give you such a rush that you'll want to jump to new deep reinforcement learning projects. There are topics that I did not cover in this post, such as value iteration, on- vs off-policy learning, Markov Decision Processes, and much, much more. However, this article was meant to get people excited about the awesome opportunities that deep reinforcement learning has to offer. Take care, and see you in my next post! Add speed and simplicity to your Machine Learning workflow today
https://blog.paperspace.com/building-double-deep-q-network-super-mario-bros/
CC-MAIN-2022-27
refinedweb
2,931
63.09
doctests allow the use of directives. One "powerful" directive is the ELLIPSIS directive. Quoting from the documentation: When specified, an ellipsis marker (...) in the expected output can match any substring in the actual output. This includes substrings that span line boundaries, and empty substrings, so it's best to keep usage of this simple. Complicated uses can lead to the same kinds of "oops, it matched too much!" surprises that .* is prone to in regular expressions.Unfortunately, I encountered a case where the ellipsis marker did not allow enough matching! Consider the following situation: I have a program (Crunchy!) that saves the user's preferences (including the language) in a configuration file each time its value is changed. It also gives some feedback to the user whenever this happens. At the end of the test, I want to restore the original value.At the end of the test, I want to restore the original value. >>> original_value = crunchy.language >>> set_language('en') # setting this value for some standardized tests Language has been set to English >>> set_language(original_value) #doctest: +ELLIPSISHere I want the ellipsis (...) to match the string that is going to be printed out in the original language as I have no idea what this string will look like. The problem is that the ellipsis in this case is thought to be a Python (continuation) prompt and not a string that is "matched". One workaround that I had been using was to modify set_language to add a parameter ("verbose") that was set to True by default but that I could turn off when running tests. While this is simple enough that it surely would never (!) introduce spurious bugs, it does not feel right; one should not modify functions only for the purpose of making them satisfy unit tests. ... According to the documentation, register_optionflag(name)') This is great ... except that I want to used doctest.testfile() which does not allow me to specify a subclass of OutputChecker to use instead of the default. Also, I wanted to use as much of possible of the existing doctest module, with as little new code as possible. This is where monkeypatching comes in. After a bit of work, I came up with the following solution: from doctest import OutputChecker original_check_output = OutputChecker.check_output import doctest IGNORE_ERROR = doctest.register_optionflag("IGNORE_ERROR") class MyOutputChecker(doctest.OutputChecker): def check_output(self, want, got, optionflags): if optionflags & IGNORE_ERROR: return True return original_check_output(self, want, got, optionflags) doctest.OutputChecker = MyOutputChecker failure, nb_tests = doctest.testfile("test_doctest.rst") print "%d failures in %d tests" % (failure, nb_tests) And here's the content of test_doctest.rst Test of the new flag: >>> print 42 42 >>> print 2 # doctest: +IGNORE_ERROR SPAM! This yields a test with no failures. There might be a more elegant way of doing this; if so, I would be very interested in hearing about it.
https://aroberge.blogspot.com/2008_06_01_archive.html
CC-MAIN-2017-09
refinedweb
469
57.77
A grunt task to compile your Hull.io widgets This is a grunt task that allows to build Hull.io widgets from a source directory. src: The root path for the widgets. Defaults to widgets dest: The root path for the built widgets namespace: The namespace in which the templates will be registered. It will be located at runtime as Hull.templates.%namespace%. The default value is _default before: An array of tasks to be executed before the compilation begins. These tasks will occur between the cleaning of destand the first internal task of bu after: An array of tasks to be executed after the building is done. ##License MIT
https://www.npmjs.com/package/grunt-hull-widgets
CC-MAIN-2015-22
refinedweb
110
66.44
Hi all, trying out Dynamo to solve some issues again. As the title states, I’m having difficulty extracting only the element ID from a list with null values and element IDs. I’ve tried the element ID code block but it doesn’t produce the results I want because of null values. In the list, I want to retain the position of the null value in the list but still extract the element IDs from the list. Is there a way? I’ve been googling but I can’t seem to find a way. Does anyone have a link or guide where i can refer to? It’ll really help me to understand what codes i should use. import clr clr.AddReference(‘ProtoGeometry’) from Autodesk.DesignScript.Geometry import * Import RevitAPI clr.AddReference(‘RevitAPI’) import Autodesk #The inputs to this node will be stored as a list in the IN variable. dataEnteringNode = IN elements = [] for i in IN[0]: elements.append(UnwrapElement(i)) elementIds, idString, guid = [], [], [] for i in elements: elementIds.append(i.Id) idString.append(i.Id.ToString()) guid.append(i.UniqueId) #Assign your output to the OUT variable OUT = elementIds, idString, guid Thanks alot!
https://forum.dynamobim.com/t/retrieving-element-id-from-a-list-with-null-values/26468
CC-MAIN-2022-21
refinedweb
196
60.01
I have seen people writing Code Behind a lot while they are working in the MVVM framework. Huge lines of code in their xaml.cs file actually creates problem to the designers to change anything present in the XAML. Why is this so? Because, lots of people don’t know how to use Triggers to call the MVVM methods to complete their business need. Michael Washington has a nice article on DataTrigger to reduce the Code behind file. It definitely reduces the problem of MVVM that world faces generally. Also, we need to do some additional things to reduce the code behind.. Let us modify the View with some simple controls inside it. Let’s add one TextBox, where we can insert Employee name and one button which will have a click event to show some message. Don’t add any click event there right now. We will add them later. TextBox Employee Once your above view is ready, jump into the ViewModel to add some properties. Let’s add an EmployeeName of type string. Add one simple method to show a MessageBox with some text. EmployeeName string MessageBox Here is our sample ViewModel: using System.Windows; namespace MVVMEventTriggerDemo.ViewModels { public class MainViewModel { public string EmployeeName { get; set; } public string Country { get; set; } public void HandleShowMessage() { MessageBox.Show("Hello " + EmployeeName + ", Welcome to EventTrigger for MVVM."); } } } Once your View & ViewModel is ready, build & run the application to see the UI that we are going to demonstrate. Import the Samples Interaction inside the Event Trigger and you will see, there are plenty of methods available to do several operations inside the XAML itself. You can call DataMethod inside a ViewModel, you can change the state of the FrameworkElement to another valid state, you can invoke DataCommand, pause media, Show Message, etc. Later, I will share the whole code for you to copy. Also, the whole solution is attached for you to download. Ok, come into the actual topic. Open your code behind file. There you will see the file is totally empty. (confused!!!) Yes, it is totally empty. It has only the constructor and a call to InitializeComponent() method, which is always necessary for any Silverlight page. Hence, we can consider it as empty class. You will see that there is no extra code written to raise and implement the Button Click event. InitializeComponent() A very neat & clean code behind file, right? Now build the solution and run the application by pressing F5. You will see the application loaded inside the browser window. Enter a name inside the TextBox and click on the button “Show Message”. OMG!!! The button is firing the event to the ViewModel and the MessageBox has been. Click “OK”. Woo, another message box!!! Yes, this is the message box that we added just now in the XAML page with the exact caption and message string. The message box is not present in our code behind nor in the viewmodel. It is the default one provided by the library with customized text. viewmodel So, what do you think? We can only call a data method for MVVM using EventTrigger!!! If we want to change some property of the UI Element, how can we do that? In such case, do we have to write inside the CodeBehind or do we have to create a property inside view model and bind it to the UI? behind!!! So simple right? Then why are you writing code in the xaml.cs file? Stop it immediately and move into the proper MVVM pattern. Here is the whole XAML code for your reference: <UserControl x:Class="MVVMEventTriggerDemo.Views.MainView" xmlns="" xmlns:x="" xmlns:viewModel="clr-namespace:MVVMEventTriggerDemo.ViewModels" xmlns:i="clr-namespace:System.Windows.Interactivity; assembly=System.Windows.Interactivity" xmlns:si="clr-namespace:Expression.Samples.Interactivity; assembly=Expression.Samples.Interactivity" Height="132" Width="250"> <UserControl.Resources> <viewModel:MainViewModel x: </UserControl.Resources> <Grid x: <TextBox Text="{Binding EmployeeName, Mode=TwoWay}" Width="200" Height="30" Margin="28,24,22,78" /> <Button Content="Show Message" Width="100" Height="25" Margin="128,70,22,37"> <i:Interaction.Triggers> <i:EventTrigger <si:CallDataMethod <si:ShowMessageBox <si:SetProperty </i:EventTrigger> </i:Interaction.Triggers> </Button> </Grid> </UserControl> The whole solution is also available as a downloadable zip file. Try it on your end and create some samples by yourself. After doing this, you will be familiar with the MVVM pattern and then stop writing code inside your xaml.cs file. Move all to ViewModel and use the properties to bind the data to the view. Call the viewmodel method right from the XAML and also raise necessary events from the view. Hope this will help you to understand the event triggering in MVVM way. Please don’t stop to share your feedback, Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Articles/125188/Using-EventTrigger-in-XAML-for-MVVM-No-Code-Behind?fid=1595334&df=90&mpp=10&sort=Position&select=3851438&noise=3&prof=True&view=Expanded
CC-MAIN-2014-35
refinedweb
812
67.45
There are many application areas for data science and analytics in the retail space. Their application in this sector has enabled players in the sector to serve their customers better as well as increase profit. In this piece, we’ll look at some of these application areas. At the tail end of this writing, we’ll comb through a dataset to see some of the application areas. Product Recommendation E-commerce has taken the world by storm. What this means is that these online retailers have the purchasing history of their customers. Using this data, online retailers are increasing their sales by recommending new products to customers. That’s why you’ll see recommendations such as customers who viewed this also viewed or customers who bought this also bought. For example, by determining the similarity index of customers, they can recommend certain terms to one customer when the other customer buys them. Market Basket Analysis In this analysis, retailers work to figure out the relationship between different items that are purchased by their customers. This is done using association rules. By determining which items are frequently bought together, the retailer can make informed decisions on merchandising. This informs the layout of the store by, for example, placing items that are frequently bought together close to each other. Delivery Online retailers such as Amazon are using drones to make delivery to their customers quicker. These drones run a bunch of machine learning models in order to ensure the safety of the drone. The drones have autonomous flight systems that enable them to land at the customers’ locations. Review Analysis Negative sentiment spreads like wildfire, especially if it’s not addressed immediately. However, with the rise of social media, it has become extremely difficult to determine negative and positive sentiment from the sea of reviews that are left on them. Retailers are using social media monitoring tools coupled with sentiment analysis to determine a review’s polarity. Negative reviews can be discovered faster and dealt with immediately. Stock Prediction Most of the items in retail stores and pharmacies have near-term expiration dates. Overstocking them would mean that they would lose money because they can’t sell expired products. Understocking would lead to customers visiting the stores and not getting the items they need. Therefore, an optimal stock capacity is crucial. This can be achieved by weaving state of the art machine models with the retailers’ sales history to determine the best stock capacity. Shopping Assistants Finding a shop or an item in a large shopping mall or a large retail store is not a walk in the park. Retail stores are using shopping assistants to enable their customers in navigating their stores. They are also using chatbot applications to help customers get answers to frequently answered questions. Augmented Reality With augmented reality, online retailers are able to allow customers to visualize items before they purchase them. For example, using the visual fitting rooms, one can visualize how different clothes would fit them. One can also visualize how a piece of furniture would look in their living room before making a purchase. Warehouse Robots Online retailers use warehouses to store their products. The items can become so plentiful in the warehouse that it becomes close to impossible for a human being to move from one section to another looking for an item. Robots are being deployed in these warehouses because they can quickly determine an item’s location in the warehouse. They then pick up the item in preparation for shipping. This can obviously work only in the combination of proper recording of all items and their location in the warehouse. Theft Prevention Computer vision can be used to detect faces of known shoplifters when they enter a store. This will definitely help the physical stores reduce losses that result from shoplifting. It’ll also make work easier for staff — in the event of well-known shoplifters — because they don’t have to constantly monitor the security cameras. Sales Projection By determining how long a customer will stay with a certain retailer, the retailer can determine the customer’s lifetime value. Using this information, they can predict how much profit they are likely to make from an individual customer during their lifetime. This can then be used for projecting sales which help the retailer in future planning. Store Location Analyzing the population and income levels of an area can help in determining the best place to open a new store. It can also help in price optimization as well as determining the type of products to stock in the store. Analyzing the population and their living standards is very crucial because physical stores rely majorly on customer walk-ins. Let’s now see how we can do sales forecasting for a retail store using Prophet. Prophet is a library built by Facebook for forecasting time series data. It works well with shifts in trend, outliers, and missing data. The dataset we’ll use is available on the UCI Machine Learning Portal. It contains sales from 01/12/2010 and 09/12/2011 for a UK-based and registered non-store online retail. If you don’t already have Prophet installed, you’ll start by installing it. pip install fbprophet Next, we import Prophet and Pandas. from fbprophet import Prophet import pandas as pd Let’s now import the dataset and check its head. df = pd.read_csv(‘Online Retail.csv’) df.head() Prophet expects us to have two columns. y for the item to be projected and ds for the timeframe. We, therefore, have to transform the data frame to be in that format. We start by computing the y the column. df[‘y’] = df[‘Quantity’] * df[‘UnitPrice’] After this, we compute the date column. We start by splitting the date column in order to remove the time. Let’s write a simple function to do that. def getDate(date): x = date.split(‘ ‘) return x[0] The function splits the date column by the space and returns the first part of the split list. We then use it to create the new ds column. df[‘ds’] = df[‘InvoiceDate’].apply(getDate) We can now select the two columns that interest us and save them in a new data frame. sales = df[[‘ds’,’y’]] sales.head() However, since the dates are repeated, we’ll aggregate them in order to get the total sales per day. sales = sales.groupby(‘ds’)[‘y’].sum().reset_index() Now we are ready to fit Prophet to the dataset. The first step is to create an instance of Prophet. Since the dataset is from the United Kingdom, we also add the in-built UK holidays. Doing this will add visualization of holidays when we perform the visualization later. Finally, we fit the model to our sales data frame. model = Prophet() model.add_country_holidays(country_name=’UK’) model.fit(sales) Next, let’s make a year’s worth of predictions in the future. In order to do this, we have to create a data frame with those future dates. Prophet provides a make_future_dataframe function to enable this. future = model.make_future_dataframe(periods=365) future.tail() Once the new dates are ready, we can make our forecasts. As you can see below, Prophet’s usage is similar to Scikit-Learn’s implementation. forecast = model.predict(future) Let’s check out these predictions. forecast.head() However, the columns that are of most interest to us are ds, yhat, yhat_lower, and yhat_upper. yhat represents the predicted sales. The rest are its upper and lower boundaries. forecast[[‘ds’, ‘yhat’, ‘yhat_lower’, ‘yhat_upper’]].tail() Prophet also allows us to visualize our model. The black dots represent our dataset while the blue line represents the predicted sales. plot1 = model.plot(forecast) Prophet also enables us to quickly plot the time series seasonality. This includes the trend, weekly and yearly seasonality. plot2 = model.plot_components(forecast) We can now check the performance of the model by comparing the predicted results with the historical data. This is done via cross-validation. This function requires us to pass in our model and the forecast horizon. from fbprophet.diagnostics import cross_validation df_cv = cross_validation(model,horizon = ’50 days’) df_cv.head() We can now check the performance metrics. from fbprophet.diagnostics import performance_metrics df_p = performance_metrics(df_cv) df_p.head() Prophet also allows us to visualize these metrics as shown below. from fbprophet.plot import plot_cross_validation_metric fig = plot_cross_validation_metric(df_cv, metric=’rmse’) Check out this [VIDEO] to see how I completed the analysis. Final Thoughts In this article, we’ve covered a couple of application areas of data science in the retail sector. We’ve seen how it can be applied in sales projection, determining a store’s location, preventing theft, and executing drone delivery — just to mention a few. We have also combed through a case study that I hope has shed some light on some of the application use cases. Guest post: Derrick Mwiti Stay up to date with Saturn Cloud on LinkedIn and Twitter. You may also be interested in: Evolving the Transportation Industry with Machine Learning.
https://www.saturncloud.io/s/datascienceretail/
CC-MAIN-2020-24
refinedweb
1,507
56.96
Introduction: Studying Orientation With Raspberry Pi and MXC6226XU Using Python Noises are simply a portion of working a vehicle. The hum of a very much tuned vehicle motor is a magnificent sound. Tire treads murmur against the road, the wind shrieks as it goes around mirrors, plastic bits, and pieces in the dashboard produce little squeaks as they rub together. The vast majority of us don't see these innocuous notes before long. Yet, a few commotions aren't so harmless. An unusual noise can be seen as an early attempt by your vehicle to let you know that something isn't right. What if we use instrumentation and techniques to identify Noise, vibration, and harshness (NVH) including rig squeak and rattle tests, etc. That's worth looking into. Innovation is one of the important force of the future without bounds; it is changing our lives and molding our future at rates remarkable ever, with significant ramifications which we can't start to see or get it. Raspberry Pi, the micro, single board Linux computer, gives a cheap and moderately simple base for hardware ventures. As computer and electronics enthusiasts, we've been learning a lot with the Raspberry Pi and decided to blend our interests. So what are the conceivable results that what we can do on the off chance that we have a Raspberry Pi and a 2-axis Accelerometer close by ? In this task, we will check the acceleration on 2 perpendicular axes, X and Y, Raspberry Pi and MXC6226XU, a 2-axis accelerometer. So we should see on this, to make a framework analyze the 2-dimensional acceleration. Teacher Notes Teachers! Did you use this instructable in your classroom? Add a Teacher Note to share how you incorporated it into your lesson. Step 1: Equipment We Require The issues were less for us since we have a tremendous measure of stuff lying around to work from. Regardless, we know how it's troublesome for others to store up the right part in impeccable time from the supportive spot and that is shielded paying little notice to each penny. So we would help you. Follow the accompanying to get a complete parts list. 1. Raspberry Pi The initial step was getting a Raspberry Pi board. The Raspberry Pi is a single-board Linux based PC. This little PC packs a punch in computing power, utilized as a part of gadgets activities, and straightforward operations like spreadsheets, word preparing, web scanning and email, and games. You can purchase one at almost. 2-Axis accelerometer, MXC6226XU The MEMSIC MXC6226XU Digital Thermal Orientation Sensor (DTOS) is (was ;) the world's first fully-integrated orientation sensor. We acquired this sensor from DCUBE Store 4. Connecting Cable We acquired the I2C Connecting cable from DCUBE Store 5. Micro USB cable The littlest dazed, yet most stringent to the degree power need is the Raspberry Pi! The simplest approach to arrangement is by the utilization of the Micro USB cable. GPIO pins or USB ports can likewise be utilized to give abundant power supply. 6. Web Access is a Need INTERNET kids NEVER sleep! Get your Raspberry Pi connected with an Ethernet (LAN) cable and interface it to your system network. Elective, scan for a WiFi connector and use one of the USB ports to get to the remote network. It's a sharp choice, basic, little and easy! 7. HDMI Cable/Remote Access The Raspberry Pi has an HDMI port which you can interface particularly to a screen or TV with an HDMI cable. Elective, you can use SSH to take up with your Raspberry Pi from a Linux PC or Mac from the terminal. Moreover, PuTTY, a free and open-source terminal emulator sounds like a not too bad option. Step 2: Connecting the Hardware Make the circuit according to the schematic appeared. In the diagram, you will see the diverse parts, power segments and I2C sensors taking after I2C communication protocol. Imagination is more important than Knowledge. Connection of the Raspberry Pi and I2C Shield Most importantly else, take the Raspberry Pi and spot the I2C Shield on it. Press the Shield carefully over the GPIO pins of Pi and we are done with this step as straightforward as pie (see the snap). Connection of the Raspberry Pi and Sensor support the utilization of the I2C cable as it refutes the need for analyzing pin outs, securing, and inconvenience achieved by even the humblest botch. With this crucial connection and play cable, you can introduce, swap out contraptions, or add more devices to an application viable. This encourages the work weight up to a huge The Python Code for the Raspberry Pi and MXC6226XU Sensor is accessible accompanying is the python code and you can clone and change the code in any capacity you slant toward. # Distributed with a free-will license. # Use it any way you want, profit or free, provided it fits in the licenses of its associated works. # MXC6226XU # This code is designed to work with the MXC6226XU_I2CS I2C Mini Module available from dcubestore.com # import smbus import time # Get I2C bus bus = smbus.SMBus(1) # MXC6226XU address, 0x16(22) # Select detection register, 0x04(04) # 0x00(00) Power up bus.write_byte_data(0x16, 0x04, 0x00) time.sleep(0.5) # MXC6226XU address, 0x16(22) # Read data back from 0x00(00), 2 bytes # X-Axis, Y-Axis data = bus.read_i2c_block_data(0x16, 0x00, 2) # Convert the data xAccl = data[0] if xAccl > 127 : xAccl -= 256 yAccl = data[1] if yAccl > 127 : yAccl -= 256 # Output data to screen print "Acceleration in X-Axis : %d" % xAccl print "Acceleration in Y-Axis : %d" % yAccl Step 4: The Portability of the Code Download (or git pull) the code from Github and open it in the Raspberry Pi. Run the commands to Compile and Upload the code in the terminal and see the yield on Screen. Taking after a couple of minutes, it will demonstrate each one of the parameters. In the wake of ensuring that everything works easily, you can utilize this venture each day or make this venture a little part of a much bigger task. Whatever your needs you now have one more gadget in your collection. Step 5: Applications and Features Manufactured by MEMSIC Digital Thermal Orientation Sensor (DTOS), the MXC6226XU is a fully Integrated Thermal Accelerometer. The MXC6226XU is appropriate for Consumer applications like Cell Phones, Digital Still Cameras (DSC), Digital Video Cameras (DVC), LCD TV, Toys, MP3 and MP4 Players. With patented MEMS-thermal technology, it’s useful in Household safety applications like Fan Heaters, Halogen Lamps, Iron Cooling and Fans. Step 6: Conclusion On the off chance that you've been pondering to investigate the universe of the Raspberry Pi & I2C sensors, then you can astound yourself by making utilized of the electronics fundamentals, coding, planning, binding and so forth. In this procedure, there may be a few tasks that might be simple, while some may test you, challenge you. Be that as it may, you can make a way and immaculate it by altering and making a creation of yours. For example, You can start with the idea of a prototype to Measure Noise and Vibration (N & V) characteristics of vehicles, particularly cars and trucks using the MXC6226XU and Raspberry Pi along with microphone and force gauges. In the above task, we have utilized fundamental computations. The ideas are to look for tonal noises i.e. engine noise, road noise or wind noise, normally. The resonant systems respond at characteristic frequencies looking like on any one spectrum, their amplitude varies considerably. We can check that for varying amplitudes and create a noise spectrum for that. For e.g. the x-axis can be in terms of multiples of engine speed while the y-axis is logarithmic. Fast Fourier transforms and Statistical Energy Analysis (SEA) can be approached to create a pattern. So you could utilize this sensor in various ways you can consider. We will attempt to make a working rendition of this prototype sooner rather than later, the configuration, the code, and modeling works for structure borne noise and vibration analysis. We believe all of you like it! For your comfort, we have a charming video on YouTube which may help your examination. Trust this endeavor motivates further exploration Trust this venture motivates further exploration. Start where you are. Use what you have done. Do what you can. Be the First to Share Recommendations Discussions
https://www.instructables.com/id/Studying-Orientation-With-Raspberry-Pi-and-MXC6226/
CC-MAIN-2020-10
refinedweb
1,408
62.88
15 June 2012 08:11 [Source: ICIS news] By Jenny Jin SINGAPORE (ICIS)--Monoethylene glycol (MEG) prices in east China have declined by 11% over the past two weeks, tracking falls in crude oil prices and amid weak demand from downstream polyester sector, industry sources said on Friday. MEG spot prices declined to yuan (CNY) 6,330-6,380/tonne ($994-$1,001/tonne) ex-tank east ?xml:namespace> Panic-selling in the MEG market ensued following sharp falls in crude prices. Some traders have been dumping cargoes to prevent further losses, while others are staying out of the market. MEG buyers, on the other hand, are not too keen to procure cargoes. On Friday, crude prices were trading at above $84/bbl, down by about $2/bbl from the start June. “Though prices decreased to the low level from 2011, I am still hesitating to purchase lots of cargoes considering the uncertain economic environment,” a trader said. Downstream polyester producers in Operating rates at MEG inventories at Chinese ports have declined to around 750,000 tonnes this week from 800,000-850,000 a month earlier, with a number of Middle Eastern and Taiwanese producers that export MEG to China are conducting maintenance at their plants between May and June. Chinese petrochemical major Sinopec, meanwhile, is still considering running its MEG plants at full capacity as production margins have improved with the softening prices of feedstock ethylene, market sources said. ($1 = CNY6.37)
http://www.icis.com/Articles/2012/06/15/9569788/east-china-meg-falls-11-in-two-weeks-on-crude-falls-poor-demand.html
CC-MAIN-2014-42
refinedweb
244
54.56
: XSLProcessorVersion.java,v 1.49 2004/02/26 04:00:47 zongaro Exp $18 */19 package org.apache.xalan.processor;20 21 /**22 * Administrative class to keep track of the version number of23 * the Xalan release.24 * <P>See also: org/apache/xalan/res/XSLTInfo.properties</P>25 * @deprecated To be replaced by org.apache.xalan.Version.getVersion()26 * @xsl.usage general27 */28 public class XSLProcessorVersion29 {30 31 /**32 * Print the processor version to the command line.33 *34 * @param argv command line arguments, unused.35 */36 public static void main(String argv[])37 {38 System.out.println(S_VERSION);39 }40 41 /**42 * Constant name of product.43 */44 public static final String PRODUCT = "Xalan";45 46 /**47 * Implementation Language.48 */49 public static final String LANGUAGE = "Java";50 51 /**52 * Major version number.53 * Version number. This changes only when there is a54 * significant, externally apparent enhancement from55 * the previous release. 'n' represents the n'th56 * version.57 *58 * Clients should carefully consider the implications59 * of new versions as external interfaces and behaviour60 * may have changed.61 */62 public static final int VERSION = 2;63 64 /**65 * Release Number.66 * Release number. This changes when:67 * - a new set of functionality is to be added, eg,68 * implementation of a new W3C specification.69 * - API or behaviour change.70 * - its designated as a reference release.71 */72 public static final int RELEASE = 6;73 74 /**75 * Maintenance Drop Number.76 * Optional identifier used to designate maintenance77 * drop applied to a specific release and contains78 * fixes for defects reported. It maintains compatibility79 * with the release and contains no API changes.80 * When missing, it designates the final and complete81 * development drop for a release.82 */83 public static final int MAINTENANCE = 0;84 85 /**86 * Development Drop Number.87 * Optional identifier designates development drop of88 * a specific release. D01 is the first development drop89 * of a new release.90 *91 * Development drops are works in progress towards a92 * compeleted, final release. A specific development drop93 * may not completely implement all aspects of a new94 * feature, which may take several development drops to95 * complete. At the point of the final drop for the96 * release, the D suffix will be omitted.97 *98 * Each 'D' drops can contain functional enhancements as99 * well as defect fixes. 'D' drops may not be as stable as100 * the final releases.101 */102 public static final int DEVELOPMENT = 0;103 104 /**105 * Version String like <CODE>"<B>Xalan</B> <B>Language</B> 106 * v.r[.dd| <B>D</B>nn]"</CODE>.107 * <P>Semantics of the version string are identical to the Xerces project.</P>108 */109 public static final String S_VERSION = PRODUCT+" "+LANGUAGE+" "110 +VERSION+"."+RELEASE+"."111 +(DEVELOPMENT > 0 ? ("D"+DEVELOPMENT) 112 : (""+MAINTENANCE));113 114 }115 Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ |
http://kickjava.com/src/org/apache/xalan/processor/XSLProcessorVersion.java.htm
CC-MAIN-2017-04
refinedweb
475
52.15
a Flutter app that can do Firestore CRUD Operations with ListView widget. Related Posts: – How to integrate Firebase into Flutter App – Android Studio – Flutter Navigator example – Send/Return data to/from new Screen – Flutter ListView example with ListView.builder Firebase Database: Flutter Firebase Database example – Firebase Database CRUD with ListView Contents Flutter Firestore Example Overview We will build a Flutter App that supports showing, inserting, editing, deleting Notes from/to Cloud Firestore Database with ListView: Firebase Console for Firestore will be like: You can find how to: – use Flutter ListView at: Flutter ListView example with ListView.builder – send/receive data between screens at: Flutter Navigator example – Send/Return data to/from new Screen Cloud Firestore Add Firestore to Flutter App We’ve already had a Tutorial, please visit: How to integrate Firebase into Flutter App – Android Studio. Initialize & Reference Create Assume that our database structure is like: Using set() to create or overwrite a single document: Read – get all Documents from Collection: Update Practice Set up Project Follow these steps to add Firestore to the Project. Project Structure Data Model lib/model/note.dart Firebase Firestore Data Service lib/service/firebase_firestore_service.dart UI List of Items Screen lib/ui/listview_note.dart Item Screen lib/ui/note_screen.dart Source Code flutter_firebase_firestore 7 thoughts on “Flutter Firestore example – Firebase Firestore CRUD with ListView” Hi, first of all, awesome post, something which I was looking for. I just have one query, how can I manage(CRUD) a field of type array in firestore. I want to store dates for user for a particular year, and feel arrays should be the datatype to use, do you think there another way of storing this information? Thanks for the post, awaiting your response. All throughout the instructions the direction is to put the darts under /lib/… ## List of Items Screen lib/ui/listview_note.dart ## but in the dart files the imports statements reference them otherwise: ## import ‘package:flutter_firebase/model/note.dart’; import ‘package:flutter_firebase/ui/note_screen.dart’; ## while i have put the code under /lib/ and changed the import references, the call in main.dart to: ## ‘package:flutter_firebase/lib/ui/listveiw_note.dart’; ## will not compile for me .. and i cannot get it to?? Feedback from AS suggest it knows nothing about a package called ‘flitter_firebase/lib’ ??? It works like a charm! Thanks for sharing! I refactored this solution a bit so it’s easier to re-use in other projects: Hi Frank Paepens, We are so happy when you can make the world better 🙂 Best Regards, grokonez. thanks, u help me a lot what is the function of noteSub?.cancel(); thank you for this
https://grokonez.com/flutter/flutter-firestore-example-firebase-firestore-crud-operations-with-listview
CC-MAIN-2019-26
refinedweb
438
64.51
I've loosened this up a little for b2. The thing about fully qualifying nested view/model/controller/store/profile classes is that although it's correct, it doesn't optimize for the most common use case. As of b2 you'll be able to locally qualify nested classes so long as you follow the simple convention that top-level namespaces are capitalized and packages are lowercase. The docs are once again updated for b2 to give examples of how this worksExt JS Senior Software Architect Personal Blog: Twitter: Github: Is what you call 'namespace' the alias to a package path? (namespaces are typically embodied in packages in java-like languages) 'MyPath.MyClass' as a shorthand reference to 'MyApp.path.to.MyClass', while having used setAlias('MyApp.path.to', 'MyPath')? Discussion on why to remove the requirement of having MyApp as first item of a fully qualified path name would lead to various benefits moved to discussion (as this was a bug report marked as fixed). If the uppercase convention is about imposing some folders in the package space to start with uppercase, then it may make the code more arbitrary. The absence of documents describing the language logic (at high levels) makes it difficult for persons without prior extJS experience to find out about these sencha-specific code conventions. Even if this gets documented, the information often only exists at class level and finding out about such idiosyncrasies too often turns into a 'find Wally' type of exercise. It's been common for me to follow the conventions of the framework Im building with. So with Ext (Sencha etc.) I tend to follow their lead. Which is documented here: I like to package my classes with a top-level domain of sorts so it's easier to integrate others' libraries into mine that may have the same class names. Furthermore, it's fairly common to have a folder structure that relates to the class's namespace (at least when I build the main app). Success! Looks like we've fixed this one. According to our records the fix was applied for TOUCH-1554 in a recent build.
http://www.sencha.com/forum/showthread.php?176429-PR4-Custom-folder-structure-for-MVC-no-longer-supported/page3
CC-MAIN-2015-14
refinedweb
358
61.16
Message-ID: <2137222506.8501.1418892267990.JavaMail.haus-conf@codehaus02.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_8500_114739695.1418892267989" ------=_Part_8500_114739695.1418892267989 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location: Access to models generated with the Eclipse Model= ing Framework is easy in Groovy with the help of Groovy Beans and GPath= .=20 As an example we use the EMF tutorial. This model contains three= classes Book, Writer and Library and an enumeration BookCategory. From thi= s model EMF generates Java code. There are two special classes: a package c= lass and a factory class. We need the package class for reading the model. = We have to instantiate it and load a file with data as following.=20 LibraryPackage.eINSTANCE def resource =3D new XMIResourceImpl(URI.createURI('hardboiled.library')) resource.load(null) Library library =3D (Library) resource.contents\[0\] // get the root= element=20 Now we are able to query the model using standard Groovy. For example=20 for ( book in library.books ) { println book.author.name + ', ' + book.title + ', ' + book.category + '= , ' + book.pages=20 }=20 prints out all books. We can print out all the books with less than 240 = pages with the following statement.=20 println library.books.grep { it.pages < 240 }.title.join(", ")=20 All the objects in an EMF model are constructed with methods from a fact= ory (LibraryFactory in this example). The Groovy EMF Builde= r provides an interface for constructing models and model elements. It = takes an EMF factory as an argument. In the following snippet three objects= are created in the model: a Library, a Writer and a Book.=20 def builder =3D new EMFBuilder(LibraryFactory) def writer def library =3D builder.Library( name : 'Hardboiled Library') { writers { writer =3D Writer( name : 'Raymond Chandler') =20 } books {=20 Book ( title: 'The Big Sleep', pages: 234, category: BookCateg= ory.MYSTERY_LITERAL, author: writer) =20 } }=20 The braces indicate the containment relationships writers and books of t= he class Library. See the homepage of the Groovy EMF = Builder for further details.
http://docs.codehaus.org/exportword?pageId=1605636
CC-MAIN-2014-52
refinedweb
345
51.14
MMAP2(2) Linux Programmer's Manual MMAP2(2) mmap2 - map files or devices into memory #include <sys/mman.h> void *mmap2(void *addr, size_t length, int prot, int flags, int fd, off_t pgoffset);). On success, mmap2() returns a pointer to the mapped area. On error, -1 is returned and errno is set appropriately.). mmap2() is available since Linux 2.3.31. This system call is Linux-specific.. getpagesize(2), mmap(2), mremap(2), msync(2), shm_open(3) This page is part of release 4.16 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. Linux 2017-09-15 MMAP2(2) Pages that refer to this page: mmap(2), remap_file_pages(2), syscalls(2)
http://www.man7.org/linux/man-pages/man2/mmap2.2.html
CC-MAIN-2018-51
refinedweb
129
69.07
Unit testing is a highly effective verification and validation technique in software engineering. You can use unit testing to improve code quality. In this article, we discuss unit testing using the UnitTest++ tool. We explore how to decipher code coverage using lcov, and then move on to valgrind to check for memory leaks. Prerequisites You need to install UnitTest++, lcov and valgrind. The compilation and installation process mainly needs GCC, g++ and Perl on your system. I have successfully installed these tools under Fedora 10 and RHEL5. In Ubuntu 9.10, I had to manually install g++. If you get into dependency trouble during installation, I would recommend that you use the package management software of your distribution to install these packages and their dependencies. The commands for those would be: yum install <packagename> (Redhat/Fedora); apt-get install <packagename> (Debian/Ubuntu); zypper install <packagename> (OpenSuSE); and urpmi <packagename> (Mandriva). Let’s assume that you are going to install the tools in your $HOME/tools directory, and that your source and test code is in the $HOME/src and $HOME/test directories, respectively. If this is not the case, use the paths that are specific to your system, while setting up the environment variables below. Note: Your login account should have sudo privileges to execute make install, as in the command snippets below. Alternately, you can run the commands as the root. 1. Export the paths to your folders as environment variables: bash> export TOOLSROOT=$HOME/tools bash> export SRCROOT=$HOME/src bash> export TESTROOT=$HOME/test 2. Download (see the Links section at the end for source URLs) and extract the tools: bash> cp lcov-1.8.tar.gz unittest-cpp-1.4.zip valgrind-3.5.0.tar.bz2 $TOOLSROOT bash> cd $TOOLSROOT bash> tar -xvzf lcov-1.8.tar.gz bash> unzip unittest-cpp-1.4.zip bash> tar -xvjf valgrind-3.5.0.tar.bz2 3. Configure and build UnitTest++: bash> cd $TOOLSROOT/UnitTest++/ bash> make 4. Configure, build and install lcov: bash> cd $TOOLSROOT/lcov-1.8 bash> make bash> sudo make install 5. Configure, build and install valgrind: bash> cd $TOOLSROOT/valgrind-3.5.0 bash> ./configure bash> make bash> sudo make install Getting started with unit testing Here’s a snippet of source code that compares two integers. Leave the commented lines as they are; we will uncomment them later in the article. bash> cat $SRCROOT/test.c 1 #include <stdio.h> 2 #include <stdlib.h> 3 int compare_function(int a, int b) 4 { 5 int result = 0; 6 int *p; 7 if ( a > b ) { 8 result = 1; 9 } else if (a < b ){ 10 result = -1; 11 } 12 13 // p = malloc(sizeof(int) * 10); 14 // free(p); 15 return result; 16 } bash> cat $SRCROOT/test.h int compare_function(int a, int b); Unit testing generally includes a data generation part that feeds test data to the code that is being tested, and a set of logically related test cases that are grouped into one or more test suites. The following test program is explained below the code snippet. bash> cat $TESTROOT/testUT.cpp #include <UnitTest++.h> #include <TestReporterStdout.h> #ifdef __cplusplus extern “C” { #endif #include <stdio.h> #include “test.h” extern int compare_function(int,int); #ifdef __cplusplus } #endif class dataFixture { public : dataFixture() {} ~dataFixture() {} int getGreaterElt(int a) { return (a+1) ; } int getLesserElt(int a) { return (a-1) ; } }; SUITE(TestUtSuite) { TEST(TestUTCompareGreater) { int result; result = compare_function(2,3); CHECK(result == -1); } TEST_FIXTURE(dataFixture, TestUTCompareGreaterFixture) { int result; result = compare_function(2,getGreaterElt(2)); CHECK_EQUAL(result,-1); } TEST(TestUTCompareEqual) { int result; result = compare_function(2,2); CHECK_EQUAL(result,0); } TEST_FIXTURE(dataFixture, TestUTCompareLesser) { int result; result = compare_function(2,getLesserElt(2)); CHECK_EQUAL(result,1); } } int main() { return UnitTest::RunAllTests(); } In the code above, the dataFixture class generates a number that is one higher or lower than the passed number (this is the data generation part). The SUITE(<suitename>) macro embeds the set of test cases into a single suite. The TEST_FIXTURE(<datafixture>, <testname>) macro uses the data fixture class to obtain the data to be used in the test. The TEST(<testname>) macro is used for simple tests. CHECK or CHECK_EQUAL macros are used for comparing the results. UnitTest++ also provides macros for boundary checking, condition assertions, a timed constraint test and exception checking: UNITTEST_TIME_CONSTRAINT, UNITTEST_TIME_CONSTRAINT_EXEMPT, CHECK_CLOSE, CHECK_THROW, CHECK_ARRAY_EQUAL, CHECK_ARRAY_CLOSE, and so on. You can explore the UnitTest++/docs directory for more information. Here is the Makefile that we will use to build the test_ut.bin binary, which is linked with UnitTest++ and the gcov library. The various make targets provide for testing builds as well as release builds, prior to distributing the application to users. The compilation options -fprofile-arcs and -ftest-coverage are needed for code coverage checking, which we will discuss in the next section. bash> cat $TESTROOT/Makefile DEFAULT : all CC=gcc CXX=g++ CROSS_COMPILE=arm-linux- TCC=${CROSS_COMPILE}${CC} RM = rm release.o : ${TCC} -c ${SRC_ROOT}/test.c ${SRC_ROOT}/main.c -I ${SRC_ROOT} release_cov_valgrind.o : ${CC} -fprofile-arcs -ftest-coverage -c ${SRC_ROOT}/test.c ${SRC_ROOT}/main.c -I ${SRC_ROOT} test_cov.o : ${CC} -g -fprofile-arcs -ftest-coverage -c ${SRC_ROOT}/test.c -I ${SRC_ROOT} test.o : ${CC} -g -c ${SRC_ROOT}/test.c -I ${SRC_ROOT} unittest : test.o ${CXX} test.o ${TEST_ROOT}/testUt.cpp -o test_ut.bin -I ${SRC_ROOT} -I ${TOOLS_ROOT}/UnitTest++/src/ -L${TOOLS_ROOT}/UnitTest++ -lUnitTest++ unittest_cov : test_cov.o ${CXX} test.o ${TEST_ROOT}/testUt.cpp -o test_ut.bin -I ${SRC_ROOT} -I ${TOOLS_ROOT}/UnitTest++/src/ -L${TOOLS_ROOT}/UnitTest++ -lUnitTest++ -lgcov release : release.o ${TCC} test.o main.o -o release.bin -I ${SRC_ROOT} release_cov_valgrind : release_cov_valgrind.o ${CC} test.o main.o -o release_cov_valgrind.bin -I ${SRC_ROOT} -lgcov all : unittest_cov clean: -@${RM} *.o *.bin *.html *.gcda* *.gcno* *.info* *.png *.bin *.css 2>/dev/null Let’s compile the code and run the test: bash> cd $TEST_ROOT bash> make unittest bash> ./test_ut.bin Success: 4 tests passed. Test time: 0.00 seconds. You can play around with conditional assertions to get different results. Viewing code coverage lcov is an extension of gcov, a GNU test coverage tool. lcov code coverage is used to examine the parts of the source code that are executed—the branches taken, and so on. It also gives us an execution count for each line of the source code. To get the code coverage, in the Makefile, we added the fprofile-arcs option to instrument the program flow, and thus record how many times each function call, branch or line is executed. During the program run (execution of the test binary test_ut.bin), the generated information is saved in a .gcda file. The ftest-coverage option we added in the Makefile generates the test coverage note files .gcno files for coverage analysis. The geninfo command converts the coverage data files into trace files, which are encoded ASCII text files containing information about the file location, functions, branch coverage, frequency of execution and so on. The genhtml command can then convert these to a readable HTML output (it creates a file named index.html): bash> make unittest_cov bash> geninfo . bash> genhtml test.gcda.info A quick look at index.html shows the frequency of the source code statements’ execution. This can suggest test or source code enhancements: 1 : #include <stdio.h> 2 : #include <stdlib.h> 3 : int compare_function(int a, int b) 4 4 : { 5 4 : int result = 0; 6 : int *p; 7 4 : if ( a > b ) { 8 1 : result = 1; 9 3 : } else if (a < b ){ 10 2 : result = -1; 11 : } 12 : 13 : //p = malloc(sizeof(int) * 10); 14 : //free(p); 15 4 : return result; 16 : } A closer look at coverage can unearth redundant code, unexpected branches taken, functions that are not executed, potential bugs, and so on. Let’s take a separate code coverage example to illustrate a simple case: 1 1: int var = 4; 2 1: if(var = 5) { 3 1: printf(“ 5 “); 4 1: } In the example above, the if statement on Line 2 was intended to compare the value of the variable var with the numeric value 5. Due to a typo, it instead accidentally assigns the value of 5 to the var variable. The coverage shows that the branch is taken at line Number 2, which is an unexpected path. On investigating the reason, it becomes obvious that a typo has occurred, and it can then be corrected. Tip: In your code, suppose a function a() calls b(), and b() in turn calls c(), always try to call the function a() in your unit testing, providing it with the necessary data. This adds practical value to unit testing and coverage analysis. Memory leak checking Valgrind is a powerful binary-level debugging and profiling tool for executables, which is employed to check for memory leaks, cache profiling, deadlock detection and so on. You need to compile your code with the -g option to generate more debugging information in your executables. Memory leaks and similar errors occur when free() or delete is used inappropriately on allocated data, if a double free is done, or allocated data is not freed. The generic valgrind invocation syntax is as follows: bash> valgrind –tool=toolname <program arguments> To see valgrind in action, uncomment lines 13 and 14 in test.c, which allocate and free memory. Rebuild the test binary (run make again). Proceed to run test_ut.bin again, as shown below, to view a sample of memory leakage detection: bash> make unittest_cov bash> valgrind ./test_ut.bin ==14817== Memcheck, a memory error detector ==14817== Copyright (C) 2002-2009, and GNU GPL’d, by Julian Seward et al. ==14817== Using Valgrind-3.5.0 and LibVEX; rerun with -h for copyright info ==14817== Command: ./test_ut.bin ==14817== Success: 4 tests passed. Test time: 0.07 seconds. ==14817== ==14817== HEAP SUMMARY: ==14817== in use at exit: 160 bytes in 4 blocks ==14817== total heap usage: 5 allocs, 1 frees, 512 bytes allocated ==14817== ==14817== LEAK SUMMARY: ==14817== definitely lost: 160 bytes in 4 blocks ==14817== indirectly lost: 0 bytes in 0 blocks ==14817== possibly lost: 0 bytes in 0 blocks ==14817== still reachable: 0 bytes in 0 blocks ==14817== suppressed: 0 bytes in 0 blocks ==14817== Rerun with –leak-check=full to see details of leaked memory ==14817== ==14817== For counts of detected and suppressed errors, rerun with: -v ==14817== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 15 from 8) The leak summary shows that there was definitely a memory leak. You can use the options –tool=memcheck –leak-check=full to obtain a more detailed output. Refer to the documentation in the valgrind-3.5.0/docs directory to get more information. You might wonder what’s the use of running valgrind on test binaries rather than source binaries? It’s handy when the source is cross-compiled to run on different target architectures, and thus not realistic to test it frequently. In our test.c example, the release executable is intended for the ARM architecture: bash>make release bash>file release.bin release.bin: ELF 32-bit LSB executable, ARM, version 1 (ARM), for GNU/Linux 2.6.4, dynamically linked (uses shared libs), for GNU/Linux 2.6.4, not stripped Note: A discussion of cross-compilation is beyond the scope of this article, but I’ve mentioned a reference at the end of the article that will provide you with more information, if you’re interested. You can also try the code coverage and valgrind check on the source binary without building it into the unit testing target—you can build with the release_cov_valgrind make target: bash>make release_cov_valgrind Useful definitions Native compilation: Building executables for the same platform as the one on which the compiler is run, to compile the code. Cross-compilation: Creation of executables for a target platform other than the one on which the compiler is run (which is the build platform) Here are a few exciting ideas before signing off: 1. You can implement the idea of unit testing in coding contests, to validate and compare the submitted results. Evaluation is made easier by automating the testing of the submitted code. 2. This could be helpful in examinations—for teachers who are assessing students’ program submissions. 3. A logical mix of unit testing, code coverage, memory leak and error checking is a valuable validation, verification and code quality measurement, especially in corporate projects. Finally, I would like to acknowledge all the people who contributed to the UnitTest++, gcov, lcov, and valgrind open source projects, and thank them for their efforts.
http://www.opensourceforu.com/2010/08/thinking-beyond-unit-testing/
CC-MAIN-2014-49
refinedweb
2,081
56.55
Lets look at what is happening on a few of the lines here: First: from coin_toss import coin_toss imports the module coin_toss and sets the local variable coin_toss to the value of coin_toss in the module coin_toss. Second: coin_toss = coin_toss() calls the function bound to the name coin_toss and assigns the result to coin_toss. Now this appears to be what you want (and run outside a function it would work as you intend, the first time). However, within a function, a different namespace exists, and Python sees that you are assigning to coin_toss, and thus uses the local version of that variable everywhere else within the function. As such, Python is attempting to call the local variable coin_toss, which has not yet been assigned too. While not really the "correct" solution, you may want to look into the global keyword for more information about your problem. Chris On Sun, Sep 27, 2009 at 8:53 PM, pylearner <for_python at yahoo.com> wrote: > Python version = 2.6.1 > IDLE > Computer = Win-XP, SP2 (current with all windows updates) > > --------------------------------------------------------------- > > Greetings: > > I have written code for two things: 1) simulate a coin toss, and 2) > assign the toss result to a winner. Code for the simulated coin toss > is in a file named "coin_toss.py." Code for the assignment of the > toss result is in a file named "toss_winner.py." Each file has one > simple function: 1) coin_toss(), and 2) toss_winner(), respectively. > The code for each file is listed below. > > Problem: > > I am getting an error when I run "toss_winner.py." The error message > is listed below. > > Question #1: > > Why am I getting this error? > > Explanation: > > As I understand it, the first statement of the toss_winner() function > body -- i.e. "coin_toss = coin_toss()" -- causes four things to > happen: 1) the coin_toss() function is called, 2) the coin_toss() > function is executed, 3) the coin_toss() function returns the value of > its local "coin_toss" variable, and 4) the returned value of the "coin > toss" variable that is local to the coin_toss() function is assigned > to the "coin toss" variable that is local to the toss_winner() > function. > > Given this understanding, it seems I should NOT be getting a > "referenced before assignment" error, involving the "coin_toss" local > variable of "toss_winner()." > > Note: > > I am new to programming and Python. I'm currently self-studying > "Python Programming: An Intro to Computer Science" by Zelle. > > Thanks! > > ------------------------------------------------------------------- > > Traceback (most recent call last): > File "<pyshell#2>", line 1, in <module> > toss_winner() > File "C:/Python26/toss_winner.py", line 7, in toss_winner > coin_toss = coin_toss() > UnboundLocalError: local variable 'coin_toss' referenced before > assignment > > --------------------------------------------------------------- > > # toss_winner.py > > from coin_toss import coin_toss > > def toss_winner(): > > coin_toss = coin_toss() > > if coin_toss == "Heads": > print 'From "toss_winner" function >>', > print "Toss print 'From "toss_winner" function >>', > print "Toss > print 'From "coin_toss" function >>', > print "Toss > print 'From "coin_toss" function >>', > print "Toss result = " + str(coin_toss) > > return coin_toss > -- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: <>
https://mail.python.org/pipermail/python-list/2009-September/552934.html
CC-MAIN-2014-15
refinedweb
475
60.55
Aros/Platforms/68k support/Developer/Exec Contents ArosBootStrap[edit] Can I use the maproom on the blizzard if i change the address from - Main ROM (0xf80000 - 0xffffff) ROMLOC_rom := 0x0f80000 to 0x4FF80000 - 4FF8ffff Would i need to tell arosbootstrap or does this follow that pointer? Arosbootstrap is not compatible with any kind of external address remapping. That address is used to build normal 2x512K ROM images, arosbootstrap uses relocatable elf-image. Note that rom detects arosbootstrap mode and automatically uses MMU (if available) to remap "ROM" to fast RAM if arosbootstrap originally loaded it in chip ram (happens when available fast RAM is not compatible, for example Blizzard A1200 accelerators' fast RAM has this problem because it is not autoconfig) Check the log for MMU messages. It apparently has to be UBYTE m68060[0x12]; Guess mc68060 conflicts with some other compiler variable when compiling for 68060 or something like that. That union is only used to reserve space for largest FPU stack frame (68882), variable names are not important. (Perhaps prefix them with fpu_?) Noticed a peculiar quirk of InternalLoadSeg_ELF. It was doing Seek() calls on the BPTR passed into it. Now, this in itself wasn't too strange, except for when the funcarray[] has an override for Read (see workbench/c/AddDatatypes.c) which operated on an in-memory data structure instead of a file. As I also needed the in-memory seeking capability for loading GZIP compressed ELF files into RAM (don't ask), I modified InternalLoadSeg and friends to use a 4th funcarray member (overriding Seek) to provide this capability. PPC maintainers: Please double check my work to your files, and also, see if they can be merged back to rom/dos/internalloadseg_elf.c Shutdown[edit] Is Exec/ShutdownA(SD_ACTION_POWEROFF) defined for any Amiga machine? If so, what do I need to do to make that machine power off? There is no soft power hardware in any Amiga model. For other machines, what is the quickest way to make the screen go all black? (This is probably a copper list thing, right?). AFAIK each graphics driver installs a reset handler hook that blanks the screen. Reset handlers are probably not called upon ShutdownA(SD_ACTION_POWEROFF), but maybe they should be. Opening up windows seems to take a LOT of memory. Is AROS allocating a full bitmap for each window?!? Or am I opening them wrong? Some magic OpenWindow() tag that says 'no backing store'? Only Smartrefresh windows should use extra window bitmaps (at least on AOS). Turns out that Deallocate() NULL was the problem child. I've already committed a fix to make Deallocte() of NULL a no-op. I'll revert the UAEGfx change tonight. Exec[edit] what priv mode is task->tc_Launch(), task->tc_Switch(), and Exec/Exception() supposed to run in? I would assume in User mode, but given that core_Dispatch can be called from Switch(), wouldn't that mean that tc_Launch() could be executed in Supervisor mode? Other important missing part seems to be expansion.library autoconfig board handling (and tricky extra: exec/supervisor stack relocation to fast ram if fast ram detected.) Will be simpler than the Exec/Dispatch() and Exec/Supervisor() implementations! However, you may want to look at the exec/child* family of calls, as it looks like they are casting in a funny way, i.e.: child = FindChild((ULONG)tid); I don't know if this is safe for x86_64, I don't know your tid implementation. A typical 68k asm coded PutChProc function does: move.b d0,(A3)+ rts A3 = PutChData (typically string buffer) and the function relies on getting back the modified A3 (pointing to next byte/char to be poked), the next time the PutChProc is called. Example: "Hello", A3 = 0x100000 call PutChProc("H", 0x100000) call PutChProc("e", 0x100001) call PutChProc("l", 0x100002) call PutChProc("l", 0x100003) call PutChProc("o", 0x100004) So A3 is basically an input+output param, not just input param. You can also try getting exec in real autoconfig fast ram (after ConfigChain() call) if you still have too much free time. Are you talking about moving the "Boot Task" stack, or moving all of Exec out of ROM and into RAM? No, I meant execbase only. Official 2.0+ ROMs move execbase to real autoconfig fast (instead of 0xc00000 slow ram) ram if it exists. This is done because Chip RAM and "slow ram" (which actually has exactly same speed as chip ram) are relatively slow on real Amigas. That's pretty easy to do then. I'll add it as an arch function to either rom/expansion or rom/strap. About dynamic relocation: I am attempting to load relocatable 1M rom image to end of available fast RAM. Technically it already works but there is problem with autoconfig ram boards that disappear during reset. (UAE Z3 board does not, at least in WinUAE, A3000/A4000 motherboard ram is also "safe") 1: Let original KS (which is in ROM) do the autoconfig stuff and add coolcapture/kicktag hack that copies ConfigDevs to AROS expansion list. - aros autoconfig implementation is unused. This won't help use to find any bugs. - KS expansion behavior may have undocumented features that can break the copy-phase.. 2: Put coldcapture/kicktag to chipram that points to aros expansion.library (also in chip ram). It runs autoconfig first phase (enabling the ram board where "rom" is located), stores the data in chip ram, jumps to aros rom which detects this situation and only collects the autoconfig data without rerunning autoconfig again. - is it possible to have separate relocatable file that only contains expansion.library? (Jason?) - can't use any exec routines, some patching needed.. (just use absolute chip ram addresses, no one cares, they are temporary anyway) (yes, I know that chipram also temporarily gets replaced by rom when reset is executed but this can be worked around, even on a 68000 without crashing. One game even used this as part of copy protection..) 3: just copy current autoconfig data and jump to rom image in ram without reset. Works only once (any reset kills it), any boot rom boards are not visible to aros rom. Too stupid for my tastes :) Option 2 is not easy but it would be compatible with all Amiga models (as long as it has at least 1M of fast ram). Testing is too difficult for "normal" users as long as it needs specific hardware or eprom burner. (Unfortunately you have autoconfig ALL boards, you can't choose specific RAM board, unless that ram board is first board in autoconfig chain but I don't think you can assume that..) How does WHDLoad do it? Can we generate a ROM image/relocation map WHDLoad can use? It uses .RTB files that are also some kind of relocation files (afaik they were originally used by some rom image loader). But personally I'd prefer everything in single file, it is too easy to mix different versions (perhaps even the loader should be included, titanics cruncher like "pseudo-overlay" file is easy to create). Anyway, I don't really care much until I have working expansion.library autoconfig hack (+ Gayle IDE driver port). Removed the softint check (r36842) in m68k-amiga Disable() and moved all processing to Cause() because m68k Disable() and Enable() should be as short and as fast as possible. Softints are "automatic" when using Paula interrupts, there is no need to check them in each Enable() call. (m68k-amiga interrupt processing probably should be completely in assembly, it needs to be really optimized if this thing is going to be useful on A500.. But it is much too early now.) The crash occurs when sprintf() is called. Isn't sprintf() in arosc.library? Of course, you could always replace that with a call to RawDoFmt() and remove the arosc.library dependency. I have half a mind to do that for all of workbench/c anyway (remove the arosc.library dependency) The AOS 3.1 iPrefs utility patches into RawDoFmt() for localization, and doesn't understand the AROS 'magic' constants: - RAWFMTFUNC_STRING - RAWFMTFUNC_SERIAL - RAWFMTFUNC_COUNT Would anyone mind if I made those 'magic constants' point to real m68k functions on AROS m68k, for better 3.1 support? It changes the magic constants for m68k to point to functions, but continues to support the 'NULL == RAWFMTFUNC_STRING' assumption of AOS 4.x and Morphos. Load up locale.library, have it patch AROS' RawDoFmt and then patch it too, with a wrapper that properly translates the special codes into real functions. Before exiting SetPatchAROS remember to unpatch RawDoFmt before unloading locale.library. I like that more than a pop-up. It'll be a little hacky in Exec/SetFunction, but doable. So WB locale.library SetFunction()'s RawDoFmt() and then AROS programs that need non-AOS extensions (RAWFMTFUNC_STRING and others) stop working, right? Extending RawDoFmt() was bad idea. RawDoFmt() should only do what original Autodocs say and all AROS programs should use VNewRawDoFmt() Solution: replace or add wrapper macro that wraps all AROS RawDoFmt() calls with VNewRawDoFmt(). Well, while that does fix the WB on AROS issue, it makes AROS userspace on AOS ROM pretty much impossible (out of room in the AOS exec.library vector space), unless SetPatchAROS relocates and extends exec.library. In that case, I might as well make an external 'exec.library' that replaces the AOS one. The Facts: - AROS RawDoFmt() accepts 'special' PutChFunc vectors 0, 1, and 2 - AOS RawDoFmt() assumes that PutChFunc always points to a valid function - AOS locale.library uses SetFunction() to update Exec/RawDoFmt() to one with AOS PutChFunc conventions APTR realRawDoFmt; AROS_UFP4(fixupRawDoFmt, blah, blah) { If PutChProc is a magic vector, make it a real function. Call realRawDoFmt; } ... Exec/SetFunction (AROS) ... if (library_to_patch == SysBase && function_to_patch == LVO_RawDoFmt) { realRawDoFmt = vector_of_patch; vector_of_patch = fixupRawDoFmt; } Set library_to_patch -> function_to_patch = vector_of_patch That's actually even better than what I had in mind, since it doesn't depend on locale.library, but anything that attempts to patch RawDoFmt() will get the wrapper around it. If you don't consider hackish patching SetFunction() itself. What happens now when AROS' locale.library patches RawDoFmt? It goes through the magic to real function translator fixup too, which adds 16 more m68k instructions to every call. We will move all the m68k specific stuff in rom/exec/setfunction.c to arch/m68k-all/exec/setfunction.c, that way it'll actually end up as a 'cleanup' for the other architectures. Should these AOS compatibility hacks be clearly marked or put inside some ifdefs? (They should be easily found or someone will sooner or later forget about them completely and it will get quite confusing..) Putting extensive comments into arch/m68k-all/exec/setfunction.c Only one problem I can think of: - Someone who tries to replace RawDoFmt will remove the chain instead and get unconverted values in his replacement. - Someone who happens to chain RawDoFmt to catch exactly the same values will have to actually receive them But from the call, we can't tell the chaining apart from the replacement, so how do we know if we have to add a converter patch to that patch as well? Is a very minor issue, so it's a matter of whether we want guaranteed full compatibility, or just a good approximation. Plus, I don't think there's any unpatching. I'm not sure whether that is going to cause notable problems. But if VNewRawDoFmt uses those constants, doesn't it have to check against them? For comparison it wouldn't matter that they are really static function pointers. And being symbolical, comparison is the only operation allowed for recognising them. Since AOS doesn't have the extended task structures, this code sets acpd = NULL->iet_acpd Fix arosc not to use private fields in task structure. You can use AVL trees for association (OS3.5+). Or duplicate these functions in arosc statically. Remember also that arosc.library relies on some other AROS extensions like NewAddTask() and NewStackSwap(). ACPI. Kernel/exec init is tricky, it perhaps can't be done at once. Exec is already initialized twice (and even thrice, if we count kernel.resource pickup). Kernel.resource should be initialized at 127 priority because in future even AllocMem() won't work without it (it will work on top of kernel's page allocator). SAD debug[edit] For debugging before PrepareExecBase. For this purpose you have own bug() macro in kernel_debug.h. For very early debugging in kernel.resource you can use kernel's own bug() definition which statically calls KrnBug(). It doesn't use exec in any way. And exec's facilities are up and running after calling exec's init code, which fills in KernelBase. Note that no other code than kernel.resource's startup code can be run before PrepareExecBase(). Remember also KrnPutChar() and internal krnPutC(). How do I enable SAD early? I currently have a (poorly licensed) m68k-gdbstub I'm using to provide debugging to my ROM that has to go when I commit. Implement KrnPutChar() and KrnMayGetChar() in your kernel.resource and it will work. Note that Alert() will not call it because current alert routine is very basic and does not process any input. This is done because there's no universal input hardware on PC. In fact this needs to be worked on. Perhaps alert needs to take over the screen, print information on it, then take over input and ask for some command from it. RFC, I wrote about debug channels is one small step towards implementing this mechanism. For list review. This patch enables the '--with-paranoia' ./configure option, and gives an example usage in rom/exec. Semantics: ./configure => No paranoia ./configure --with-paranoia => PARANOIA_CFLAGS=-Wall -W -Werror ./configure --with-paranoia=-Wmega => PARANOIA_CFLAGS=-Wmega This allows (a) no changes to the build process, (b) devs to enable paranoia *for themselves* and (c) devs to enable paranoia *only* on targets they think are clean. This way, once all the -Wall issues on a library are cleared, it will stay that way. People use different compiler version(*) and those versions report different warnings - for example right now the 4.4 series reports *tons* of strict aliasing problem when compiled without debugging. We would probably end up in situation that a certain module build for 9 out of 10 people but the unlucky 1 person is not capable or not inclined to do the fixes. Whilst agreeing with some of the warnings in -Wall. Some of them can be wrong (e.g. "variable x may not be initialised"), or try to enforce a particular coding style (e.g. "consider using parentheses around assignment used as truth value"). We're aiming to avoid the use of -fno-strict-aliasing for performance reasons. I think USER_CFLAGS is overused in the mmakefiles; IMO it should only be used in special occasions when a certain symbol needs to be defined etc. Most programs/libs should be compiled with the default CFLAGS as generated by configure. BRA := "\(" KET := "\)" TST := "test$(BRA)test$(KET) test" USER_CFLAGS := -DDEFINE=\"$(TST)\" or USER_CFLAGS := -DDEFINE=\"test\(test\)\ test\" Is the backslash after the last bracket really necessary though? I thought it was only needed to specify that the following character was a special case? Yes, it's needed to escape the space and make the whole thing one command-line argument. Additionally you would not need to duplicate debugging functions. And one more, about GDB stubs. May be you should consider integrating it into existing SAD somehow? Ability to debug any machine remotely with gdb would be very nice. Unfortunately, gdb stubs are very machine specific. I believe they would need to be rewritten for every port. I plan to remove the GDB stubs once I get to the point where SAD works. Unfortunately, gdb stubs are very machine specific. I believe they would need to be rewritten for every port. I plan to remove the GDB stubs once I get to the point where SAD works. Well, SAD really needs face-lift then. It's very old thing. This message is addressed mainly to Jason and Toni. If you look at arch/all-<cpu>/include/aros/<cpu>/cpucontext.h, i've written CPU context definitions for all architectures except m68k. A pointer to such a structure will be used in two places: - It is passed as third argument to kernel exception handlers (added using KrnAddExceptionHandler()). - It is passed as second argument to exec trap handler (tc_TrapCode). The primary purpose of this is to unify and extend crash handling code, and provide possibilities for third party developers to write debugging tools which can catch exceptions and analyze task state. PowerPC context is binary compatible with AmigaOS4. I expect m68k context to be binary compatible with m68k AmigaOS. I know that on m68k tc_TrapCode gets the whole context frame on the stack, this should be the only difference to other ports. I. e. on m68k we should take context pointer as follows: void MyTrapHandler(ULONG trapCode) { struct ExceptionContext *regs = (struct ExceptionContext *)(&trapCode + 1); ... process exception here ... } Also you'll need to write m68k-specific cpu_init.c, KrnCreateContext() and PrepareContext(). Please look at other architectures for examples. Short explanation: - kb_ContextSize needs to be set to the total size of your context. - kb_ContextFlags can be used for any purpose you want. i386 and ARM use it to specify FPU type. Perhaps you won't need it at all because on m68k you have SysBase->AttnFlags. These two variables are set by cpu_init.c, which performs startup-time CPU probe. KrnCreateContext() allocates context frame and sets some initial data (if needed). The common use is to create initial FPU frame. Common part of your CPU context should be sizeof(struct AROSCPUContext). This is needed for hosted ports because on hosted you need to store some host-specific private data as part of CPU context. If you look at hosted CPU definitions you'll see struct ExceptionContext in the beginning of struct AROSCPUContext. On native you are expected just to: #define AROSCPUContext ExceptionContext Optional data (like FPU context) follow struct AROSCPUContext in the same block. PrepareContext() is not changed, i just expanded all macros. Since the context is unified, you don't need to define the same macros for every port any more. The only legacy macros still needed in kernel_cpu.h are GET_PC and SET_PC. They are used by exec's crash handler. They will disappear after some time. PRINT_CPU_CONTEXT in fact prints useless crap, so you may safely remove it. Debug() will work without it. should remember that BPTRs don't really exists on (all?) other ports.. > struct ExceptionContext This is public form of AROS-side context. This is what AROS exception handlers except to get. It is identical on all architectures using the same CPU. > regs_t This is raw stack frame produced by CPU. On hosted AROS it's an alias to host OS context structure. On native it can be identical to ExceptionContext. > struct AROSCPUContext ExceptionContext + private part. Makes sense on hosted (where you save host-specific stuff). On native it is expected to be identical to ExceptionContext. struct AROSCPUContext contains struct ExceptionContext in the beginning. > ucontext UNIX name of context. regs_t is an alias of it on UNIX-hosted. > I'm trying to get m68k to be similar to all the rest of the architectures, but there's been a lot of churn in the trap/exeception area, and too little documentation (or, at least, I don't know where the documentation is). This is newly designed thing. I provided some comments in the source code, i hope this is enough. I'm sorry, i currently have too little of time and even can't read the mailing list actively. The main idea of what is done is unification of CPU context format per single CPU. So struct ExceptionContext is the same on the same CPU, no matter if it's hosted or native system. Yes, i studied AmigaOS exec trap handling, i know about the quirk. I would suggest to use asm stub for it. I commented this in the code. Please put the indicator FIXME or TODO in your comment for things that need attention later. It makes it easier to find it back later and not forget about it. f.ex. /* fetch Task pointer before function call because A6 can change inside initialPC * (FIXME: temporary hack) */ Add MoveExecBase() that m68k-amiga port can use to move exec from chip/slow ram to autoconfig real fast. Out of curiosity: Does this give any improvements on WinUAE or is it targeted at real hardware? Real hardware, execbase or any other commonly accessed system structure in chip ram (or "slow" ram) can cause noticeable slowdown compared to real (accelerator board) fast ram, especially on accelerated OCS/ECS Amigas. 16-bit OCS/ECS chip RAM vs accelerator 32-bit fast ram speed difference can be huge, also chip ram is not cacheable. KS 2.0 was first official ROM that introduced this exec transfer to fast ram feature. (This will get really tricky if we want working reset proof programs) Scheduler[edit] You need to use the existing kernel.resource. It already has complete scheduler, you just need to write CPU-specific code and you're done. Look at the core_* code in rom/kernel/kernel_schedule.c. Note that in future there can be better scheduler (remember about KrnSetScheduler() function). One more note: startup code (start.c) in boot directory should IMHO better be in kernel directory because it is actually a part of kernel.resource. Look at Windows-hosted and UNIX-hosted ports. They are the most recent and they are engineered using the latest model. x86-64 and PPC ports are just older, they don't use common code, but they served as a base for my implementation. I just didn't rewrite them because i don't have these machines and can't test it. In fact boot directory contains an external bootstrap program, which is supposed to load the kickstart image into machine's RAM and execute it. On Amiga kickstart is in ROM, and it doesn't need any bootloader (well, it might have one if you leave an option not to reflash the ROM but reckick the Amiga programmatically, in this case kickstart swapper will be your bootstrap). I wrote it when I finished kernel.resource rewrite. It is still incomplete in places and lacks porting HOWTO. Make sure task switches are done only when returning from supervisor to user mode, but not when returning from supervisor to supervisor (interrupt inside interrupt). x86 native had a similar disappearing task problem looooong ago, caused by buggy "do we return to usermode or not" check in exitintr handling. The code in arch/m68k-amiga/kernel/amiga_irq.c only calls core_ExitInterrupt() when returning to user mode. Of course, I could be wrong. I would appreciate a second set of eyes to look at my arch exec and kernel code, now that I more closely conform to the standard conventions. Syscalls are handled in amiga_irq.c (via the F-Line trap) and only for User Mode, all other interrupts either trap to the debugger or (for Paula IRQs) go through the Paula handler in amiga_irq.c Under a slow processor it is more visible and task switching must as fast as possible. It had nothing to do with scheduler, problem was too low handler process priorities. Too low priority dos packet handlers and someone using all CPU time = console and filesystem io crawls. You can easily confirm it on AOS by changing all handler processes' priorities to zero or lower :) Sysbase[edit] In the expansion.library and scsi.device, there appear to be specially crafted 16-bit illegal instructions, that the 'original' Exec would look for and set D0 appropriately for after trapping them. Is there any documentation anywhere for these 'illegal instruction' traps? You must be using Amiga Forever 3.x ROM(s), they have at least one special illegal instruction to make it incompatible with real Amigas (UAE has a rom-check hack that fixes this..). AFAIK it was part of license, rom images must not work on real Amigas. Make sure you have real original kickstart rom image. The solution is to make sure that when you have that 'SysBase' is the global, not a local copy. SysBase = PrepareExecBase(...) One small question: why do you think PrepareExecBase() should not set global SysBase? I remember once i also came up with such an idea, just because i thought that it doesn't look good. After changing this i realized how wrong am i. Many things in unexpected places may rely on global SysBase. I remember i had early debug output somewhere, and this broke it. Perhaps that output is even not there any more, but this proved that the idea was bad. Global SysBase should be set up as early as possible. This means - before PrepareAROSS. misled by how PrepareExecBase() was returning SysBase, and all of its callers where using it as 'SysBase = PrepareExecBase()'. It just looked like a typo that PrepareExecBase() was missing a local 'struct ExecBase *SysBase'. Maybe PrepareExecBase() should return void, so that its callers have to *explicitly* pick up the global SysBase? Also, is goUser() supposed to drop down to user privs always, or restore the privs that were there before goSuper()? It switches to user mode. There's also goBack() definitions which jumps to previous mode remembered by goSuper(). One last thing - what priv mode is task->tc_Launch(), task->tc_Switch(), Launch and Switch callbacks are called directly from within supervisor, and i believe it's okay, and i think original Amiga does the same. Anyway you are not going to do something long-running in these callbacks. and Exec/Exception() supposed to run in? Exception() (unlike in AmigaOS) is called when all arch-specific preparations are already done, and you are in usermode in task's context. This code just checks signals and calls appropriate routines, it does not contain any arch-specific code. In order to process an exception correctly you need to save your task's context (which cpu_Dispatch() is going to jump to) somewhere, and then adjust the context so that your task jumps to exception handler. The handler should call exec's Exception(), then it should pick up the original task's context (one that you saved in your cpu_Dispatch()) and jump to it. The result looks like your task just calls its exception handler. You may look at Windows-hosted implementation as an example of working one. UNIX-hosted implementation does not work (at least on PPC). On native ports exceptions also don't work. Just note that on m68k-native you 100% know what's going on your stack so you don't need to that trick with passing context to exception handler. You may save it right on task's stack instead (this is what UNIX-hosted version does and this is why it doesn't work). Interrupts[edit] With my last change (where I remembered to re-enable the hardware interrupts before going to CPU idle state in cpu_Dispatch()), I appear to be able to get to the KickStart 1.3 Intuition idle loop. However, it appears I am missing something, because (other than a pretty white screen) I get nothing more. I *think* I need to call either KS's "strap" or "romboot.library", or AROS's "dosboot.library", but I'm not quite sure *where* to call those from. FWIW, AROS's strap is called as a result of it being in the resident list (at priority -50). See rom/boot/strap.c. The strap module does the disk block read in a real rom before dos initialization, it does not appear to be very well documented. Semaphores[edit] I'm having some structure layout issues, since I am getting corruption of the MH free lists when I am trying to init the KickStart 3.0 libraries from AROS exec. (Oddly enough, they seem to init quite a bit. The memory corruption occurs in expansion.library after a call to Exec/InitSemaphore, and another that seems near the Graphics/Screen OpenScreen when called by Intuition - probably both the same structure)> A good source for amiga os 3.1 are the .i includes are a good reference for the byte-level layout of AmigaOS structures? (i.e. SignalSemaphore, etc.). Does your compiler use WORD (2 byte) alignment for LONGs etc. as expected by AOS? If not you may need to have the headers like in AOS4/MOS where they surround stuff with: #pragma pack(2) [...] #pragma pack() NewAddTask wants to align the stack to 16 bytes. I've turned that off in AROS_FLAVOUR_BINCOMPAT for now. This alignment is need for e.g. PPC ports. Of course other ports might not need it. I've noticed on KS 3.1 that some callers of Exec/InitStructure set bit 16 of the size field to 1. Don't know why. For now, I have to mask that out (which limits InitStructure to only being able to handle structures up to 64k in size). That's likely a case of a function where the original function only looks at the lower 16 bits even if the library prototype for the param says LONG or ULONG. So the upper 16 bits may contain trash and some other code may rely on that (== actually have trash in upper 16 bits when calling the function). One can see this also in other places, like graphics.library. There in some functions we use x = (WORD)x (FIX_GFXCOORD macro) to kill trash in upper 16 bits. E.g. if the prototype is LONG, the intended value width is WORD and the passed trash is < 0, then why should the typecast be sufficient? x = (WORD) (x & 0x0000FFFF) to be sure that you only get the lower 16 bit? I tried on x86 (gcc 4) and 68k (gcc 2.95) where it works. #include <stdio.h> void func(int param) { int fixed = (short)(param); printf("%d (%x) %d (%x)\n", param, param, fixed, fixed); } int main(void) { func(0xF234fffe); } -231407618 (f234fffe) -2 (fffffffe) You are right - when converting from (unsigned or signed) long to (unsigned or signed) short the C rule seems to be to preserve the low-order word. Personally I do prefer the "&" notation, as it shows exactly what is done, instead of keeping in mind what the compiler is known to do by some implicitely defined convention. Read here or here or here Memory[edit] During fixing up i386-pc exec initialization i again came across SysBase->MaxLocMem. AmigaOS 3.x NDK describes it as "Max ChipMem amount". If so, why can't it be set by examining MemList and summing up all MEMF_CHIP memory size? Why is there some magic dance with addresses? What is exactly its value? And the same question about MaxExtMem. I'd like to try to implement it correctly once for all platforms. MaxLocMem is amount of chip ram (equals last address + 1 of chip ram on AOS) MaxExtMem is last address + 1 (not size!) of "slow"/"ranger" RAM (0x00C00000) only. No slow RAM = NULL. It never includes any other local RAM regions. I don't think these were designed to support multiple memory regions. Devices[edit] Documentation (and prototype files) say return type is BYTE but many programs (including WB1.3 system/setmap) assume LONG return type. Both KS1.3 and 3.1 OpenDevice() fetch io_Error and then extends it (EXT.W D0 + EXT.L D0) to LONG before returning it. Which one is wrong, documentation or implementation? (or neither? :)) (m68k-amiga port already has hack that extends OpenDevice() return code but it gets overwritten by dos lddemon hook) in rom/dos/endnotify.c: /* get the device pointer and dir lock. The lock is only needed for * packet.handler, and has been stored by it during FSA_ADD_NOTIFY */ iofs.IOFS.io_Device = (struct Device *) notify->nr_Handler; iofs.IOFS.io_Unit = (APTR)notify->nr_Reserved[0]; The 'iofs.IOFS.io_Unit' is a pointer, and notify->nr_Reserved[0] is ULONG. This code location and FSA_ADD_NOTIFY need to handle 64-bit pointers. No workaround at present. In rom/dos/deviceproc.c: /* all good. get the lock and device */ SetIoErr(dvp->dvp_Lock); res = dvp->dvp_Port; The problem is that dvp->dvp_Lock is a BPTR, which can't be (safely) cast to a LONG under x86_64. As his function is marked as obsolete, I'm thinking that it should just throw a ERROR_DEVICE_NOT_MOUNTED on x86_64. I'm not sure how workable this would be, probably we did lots of assumptions throughout the whole codebase to make this an easy feat, but one idea would be to consider a BPTR as an opaque handler: on x86 it would just be a pointer alright, for speed purposes, whilst on other architectures that don't allow such an "optimization" it could be a key into a dictionary of some sort (implemented with an hash table, a binary tree, or something like that). Would there be any harm in making pr_Result2 an SIPTR instead? this is our "old" (not used) USB stack in trunk/contrib/necessary/USB: classes/HID classes/MassStorage stack/stubs. Current stack is in rom/poseidon. The old one is used in ppc-efika, though, that port isn't working at the moment. On the other hand I hope to make some progress for the sam port. Does anyone happen to know what the IECLASS_TIMER tick rate of input.device on AmigaOS was? (Google and the Autodocs have been very unhelpful - they only say that IECLASS_TIMER is a 'timer event', but don't say at what rate the events occur). It's set to once very 100 milliseconds in AROS. Is that correct? Or should it be the VBlank rate? Or slower? Trackdisk.device[edit] MFM decoding and some simple hardware poking (Paula and CIA) needed. For cia, the skeleton should remain in rom/cia afaik - you should just add the amiga specific files into arch/m68k-amiga/cia and use the %build_archspecific macro to override initial files with architecture specific. disk.resource GetUnit() never enables disk dma finished interrupt. It only "worked" because serial transmit and disk interrupt use same interrupt level.. It is trackdisk.device that enables interrupt. Error appears if I add arch/m68k-amiga/cia directory and put my amiga-specific files there (seticr.c and ableicr.c for example) Target is amiga-m68k. arch/m68k-amiga/cia/mmakefile.src is simple: include $(TOP)/config/make.cfg FILES := seticr ableicr USER_CFLAGS := -I$(SRCDIR)/rom/cia %build_archspecific \ mainmmake=kernel-cia maindir=rom/cia arch=amiga-m68k \ files=$(FILES) modulename=cia trackdisk.device should be located in... m68k-amiga/devs/ and HIDDs in... m68k-amiga/hidd/ and resources on the other hand go directly to m68k-amiga - believe this maps the layout of AROS/rom directory "Retro"-specific question: how are we supposed to configure ROM for "compatible" mode/model specific configuration, for example unexpanded A500? ROM must not use too much RAM for advanced stuff (for example trackdisk.device must not allocate DMA buffer for HD drives, 15k vs 30k is huge difference in chip ram usage. Currently I do this dynamically, reallocate HD sized buffer only if HD disk is inserted, note that HD drives report being DD unless HD disk is inserted). Not too important today but free ROM that can boot most A500 games and demos is my goal :) (full compatibility is of course impossible, some very old games even jump directly to ROM..) I also noticed (previously I didn't know it was that bad) that M68k code produced is really terrible, inefficient and looooonng.. (hopefully only because of options used?) Code that reads drive IDs (from my disk.resource implementation): void readunitid_internal (struct DiscResource *DiskBase, LONG unitNum) { volatile struct CIA *ciaa = (struct CIA*)0xbfe001; volatile struct CIA *ciab = (struct CIA*)0xbfd000; UBYTE unitmask = 8 << unitNum; ULONG id = 0; int i; ciab->ciaprb &= ~0x80; // MTR ciab->ciaprb &= ~unitmask; // SELx ciab->ciaprb |= unitmask; // SELX ciab->ciaprb |= 0x80; // MTR ciab->ciaprb &= ~unitmask; // SELx ciab->ciaprb |= unitmask; // SELX for (i = 0; i < 32; i++) { ciab->ciaprb &= ~unitmask; // SELx id <<= 1; if (!(ciaa->ciapra & 0x20)) // RDY id |= 1; ciab->ciaprb |= unitmask; // SELX } if (unitNum == 0 && HAVE_NO_DF0_DISK_ID && id == 0) id = 0xffffffff; DiskBase->dr_UnitID[unitNum] = id; } result is this: (start and end removed) 00FE8AB6 206f 0010 MOVEA.L (A7, $0010) == $0000ee78,A0 00FE8ABA 7208 MOVE.L #$00000008,D1 00FE8ABC 2008 MOVE.L A0,D0 00FE8ABE e1a9 LSL.L D0,D1 00FE8AC0 1039 00bf d100 MOVE.B $00bfd100,D0 00FE8AC6 0200 007f AND.B #$7f,D0 00FE8ACA 13c0 00bf d100 MOVE.B D0,$00bfd100 00FE8AD0 1039 00bf d100 MOVE.B $00bfd100,D0 00FE8AD6 1401 MOVE.B D1,D2 00FE8AD8 4602 NOT.B D2 00FE8ADA c002 AND.B D2,D0 00FE8ADC 13c0 00bf d100 MOVE.B D0,$00bfd100 00FE8AE2 1039 00bf d100 MOVE.B $00bfd100,D0 00FE8AE8 8001 OR.B D1,D0 00FE8AEA 13c0 00bf d100 MOVE.B D0,$00bfd100 00FE8AF0 1039 00bf d100 MOVE.B $00bfd100,D0 00FE8AF6 0000 ff80 OR.B #$80,D0 00FE8AFA 13c0 00bf d100 MOVE.B D0,$00bfd100 00FE8B00 1039 00bf d100 MOVE.B $00bfd100,D0 00FE8B06 c002 AND.B D2,D0 00FE8B08 13c0 00bf d100 MOVE.B D0,$00bfd100 00FE8B0E 1039 00bf d100 MOVE.B $00bfd100,D0 00FE8B14 8001 OR.B D1,D0 00FE8B16 13c0 00bf d100 MOVE.B D0,$00bfd100 00FE8B1C 327c 0020 MOVEA.W #$0020,A1 00FE8B20 7000 MOVE.L #$00000000,D0 00FE8B22 1639 00bf d100 MOVE.B $00bfd100,D3 00FE8B28 c602 AND.B D2,D3 00FE8B2A 13c3 00bf d100 MOVE.B D3,$00bfd100 00FE8B30 d080 ADD.L D0,D0 00FE8B32 1639 00bf e001 MOVE.B $00bfe001,D3 00FE8B38 0803 0005 BTST.L #$0005,D3 00FE8B3C 6604 BNE.B #$00000004 == $00FE8B42 00FE8B3E 7601 MOVE.L #$00000001,D3 00FE8B40 8083 OR.L D3,D0 00FE8B42 1639 00bf d100 MOVE.B $00bfd100,D3 00FE8B48 8601 OR.B D1,D3 00FE8B4A 13c3 00bf d100 MOVE.B D3,$00bfd100 00FE8B50 5389 SUBA.L #$00000001,A1 00FE8B52 b2fc 0000 CMPA.W #$0000,A1 00FE8B56 66ca BNE.B #$ffffffca == $00FE8B22 No address relative CIA addressing, move to register, do operation, write it back.. Can't get any worse. Why does it use address registers as counters (subaq.l #1,a1; cmpa.w #0,a1? You have used volatile keyword, so every operation on ciaprb is executed! Consider: ciab->ciaprb &= ~0x80; // MTR ciab->ciaprb &= ~unitmask; // SELx to be replaced with ciab->ciaprb &= ~(0x80 | unitmask); // MTR, SELx and the generated code will be shorter... ;) Change "int i" to "UBYTE i", since you're counting from 0 to 31 only. Otherwise you force gcc to allocate a 32 bit register for you : I meant why it didn't create simple and short: and.b #$7f,$bfd100 and.b d0,$bfd100 (or even better, put bfd100 in some address register and do and.b #$7f,(a0)) Many data registers are totally unused. Does volatile force totally-completely-as-unoptimized-as-possible code? :) sure! It forbids any excessive optimizations, since the state of variable can always change in unpredicted way. Register described by volatile keyword will be accessed as many times, as your code suggests :P Handlers[edit] The con-handler steers the Shell's I/O to the console.device which draws output in a window and reads input from the keyboard. The con-handler used to open the window and pass it to the console.device, but now the console.device does it. The con-handler still handles name-completion, command history, etc., but has been changed considerably. The console.device opens the display, reads the keyboard, handles the display history, multiple consoles in the one window (tabs), the menu, etc. It has been restructured and largely rewritten. I managed to get dos packets working yesterday (mainly functions that are needed at startup,like open/lock, examine, getdeviceproc, assignlock) Lock and Open are using FileLock and FileHandle structures. UAE FS boots until first executable is run, also CON needs to be converted soon. But reason I posted this: NIL handler. It is not possible to create NIL handler because CreateProcess() needs NIL: which needs CreateProcess() and so on... Original NIL "handler" is nothing more than Open() checking for "NIL:" and returning FileHandle with fh_Type = NULL (packet port). Question is: does this cause issues with other ports? If yes, how to solve it? (some kind of special CreateProcess () needed?) There is a NULL handler filesystem on the Aminet but I haven't looked at it in years. could be a possibility since it comes with source. The difference between NULL: and NIL: is that NIL: is a dummy filesystem while the NULL: filesystem is not. Can't we just get rid of NIL handler? Would that be a compatibility issue? If there are existing AROS specific programs that expect (possible accidentally) "full" NIL handler (or NIL listed in DosList as DLT_DEVICE). Actually this is non-issue, real NIL: handler can be started from Dosboot manually (overriding pseudo NIL) easily if needed. DOS packet conversion is advancing nicely, UAE FS boots now (and is faster for some reason). Until Open(CON:) is called. Next problem: console handler, it is quite difficult to test disk based commands without seeing anything on screen.. It does not appear to be as simple conversion as NIL handler (I converted it before I noticed it can't be used..) Do we have some older real dos packet based version hidden somewhere? Nope, the very first checkin (in 1998) of console.handler used the FSA_* API. diff --git a/arch/m68k-amiga/devs/filesys/console_handler/con_handler.c b/arch/m index 2f0bc85..38c3bb9 100644 --- a/arch/m68k-amiga/devs/filesys/console_handler/con_handler.c +++ b/arch/m68k-amiga/devs/filesys/console_handler/con_handler.c @@ -297,10 +297,10 @@ static void startread(struct filehandle *fh) } #if (AROS_FLAVOUR & AROS_FLAVOUR_BINCOMPAT) - - /* SegList points here, must be long aligned */ - __attribute__((aligned(4))) - + /* We use the GCC trick of .balign, which + * will pad us with NOPs + */ +asm (" .text\n.balign 4\n"); #endif LONG CONMain(void) Why does this problem not occur with Amiga compilers? Just luck, or do they automatically align functions to 4 bytes? If the latter, maybe the same should be done in our 68k cross-compiler. That is horribly bad code that only works accidentally.. There is nothing in documentation that says input handlers have extra scratch registers. Do we need to save D2 and D3 only or do other programs modify (illegally) other non-scratch registers too? Equally horrible was Titanics Cruncher decruncher that calls dos functions using A5 as a base register, A6 pointed to something else than dosbase, it only worked (accidentally again) because dos packets can be sent without dosbase... i think all registers need save, but maybe its possible to add a GCC compiler command that after the call of eventhandler, all registers are invalid, so later the code use no registers. but when you look in software interrupts, docu here stand "dont trash a6" and on interrupts stand "(D0/D1/A0/A1/A5/A6) may be used as scratch registers by an interrupt handler" And when there do not stand anything what registers can use in inputhandler, i think code in AOS is written that all registers can use from inputhandler. Don't assume something like that is allowed. Show us the documentation which confirms your assumption. And since InputHandlers are chained in input.device as struct Interrupt, I assume the same rules as by software interrupts applies. IMO the rule is scratch registers are D0-D1/A0-A1, do not touch other registers unless otherwise specified in documentation. Do not just think, modify your test program to confirm it :) (see what registers can be changed without crashing AOS input handler, I am quite there is at least one address register that can't be modified without crashing it). or maybe you take a look, why screenswitcher awin do not work on AROS.maybe i am wrong, but to find out, a version of AROS that save all registers is usefull.all this programs crash aros total. awin is written from kas1e, maybe he know what can go wrong maybe commodity handlers have same problems, they do not accept if a register is change. CON[edit] Original reason is for CON: is compiler/autoinit/stdiowin.c which always sets input and output streams as "CON://///AUTO/CLOSE". Which forces open console window if program does any read or write from Input()/Output(). This is correct. The behavior is copied from libnix. I suggest first to test on AmigaOS whether reading from this file really opens console window. Perhaps only output opens it, and input does not. The window is not opened immediately, but upon first access. While writing this i understood the origin. CreateNewProc() takes care about this, but when started from Workbench, process' input/output are both NIL:. Startup code takes over both of them, and we end up with Input() not containing any pre-injected data. I would suggest to test the following sequence on AmigaOS: 1. Open CON: with these parameters. 2. Try to read something. 3. Try to write something. 4. Try to read again. If my guess is correct, step (2) will not cause window open, and just return EOF. If so, our console handler needs to be fixed. This didn't happen originally (=wrong behavior) until (I guess) some dos packet related fix was moved to mainline. It's not packet-related, it relates to how AmigaOS handles process' arguments. It injects them into Input(). KS3.1: handle = Open("CON://///AUTO/CLOSE", MODE_OLDFILE) Console window is not yet open Read(handle,buf,1) = Window opens, waits for input or Write(handle,buf,1) = Window opens Resources[edit] There is tiny chicken and egg problem. AOS does this when automounting with 3rd party autoboot rom (UAE hardfile driver is 3rd party autoboot rom): FileSystem.resource is added. Something adds FFS dostype nodes to FileSystem.resource. This happens before dos initializes. (maybe when FileSystem.resource is initialized or when FFS is initialized, according to Guru Book it initializes before dos). Autoboot ROM does its job, checks for partitions, loads filesystem(s) from RDB if installed and compares versions against filesystems in FileSystem.resource, if no RDB FFS, checks FileSystem.resource, adds dosnodes etc... Dos initializes and so on... Issue: AROS non-dospacket AFS.handler requires dos during init. Which means it can't be initialized before dos, so FileSystem.resource nodes can't be added by AFS handler early enough. (I'd be happy to break it because it is wrong but I guess I am in minority :D) But AFS handler version and revision info is needed to populate FileSystem.resource AFS entries properly... (seglist isn't needed because NULL seglist means use rn_FileHandlerSegment which can be set up after dos). Also resident list entries need to be added by something else (I have no idea) than AFS.handler (again, dos needed to do it). I guess the real problem is that AROS FS detection didn't use FileSystem.resource at all. M68k-amiga needs it because 3rd party boot roms expect it to be there and expect it to contain AFS Dostypes if KS2.0+. This remaining problem prevents booting from 3rd party boot rom driver RDB harddrives with FFS partitions without RDB LSEG FFS installed (real Amiga with for example any SCSI adapter or normal UAE hardfile driver) or normal partition hardfiles (UAE only). AROS ata.driver works because it knows how to handle it, so does UAE directory harddrive because it is a filesystem, not a device driver.. That would seem to imply that: - FileSystem.resource initializes - afs.handler inititializes by registering itself with FileSystem.resource - dos.library initializes, looks up the 'afs.handler' entry in FileSystem, and sets rn_FileHandlerSegment to the entry's fse_SegList If AROS's AFS Handler didn't need DOS during init, we'd be fine. Yeah, as long as it works with higher resident priority than dos. If so, that may be the best way to go. All the AFS handler would do (during init) would be to register itself (version and Handler LSEG) with FileSystem.resource?. Can we get rid of AROS_STACK_GROWS_DOWNWARDS? It seems to needlessly complicate a few things, and it does not seem to be consistently used throughout AROS. Do we actually support a 'stack grows up' architecture? Can anyone even think of one that isn't from before the 1980s? It was for PA RISC was I believe (I might be wrong), and that was still moderately common in the mid-90's when this code was written. (I recall at the time it was the damn near the fastest CPU). That said, I think for practical reasons getting rid of it makes sense. I would expect AROS to run on a CPU with register windows before it ever runs on an grows the other way stack.
https://en.wikibooks.org/wiki/Aros/Platforms/68k_support/Developer/Exec
CC-MAIN-2016-40
refinedweb
8,145
66.54
Creating Reusable Custom Widgets in Flutter Learn how to design and create your own custom widgets in Flutter that you can use in any of your projects or share with the world. Version - Dart 2.7, Flutter 1.7, Android Studio 4.0 Everything’s a widget in Flutter… so wouldn’t it be nice to know how to make your own? There are several methods to create custom widgets, but the most basic is to combine simple existing widgets into the more complex widget that you want. This is called composition. In this tutorial, you’ll learn how to compose a custom widget that you can reuse anywhere. These are the specific skills you’ll learn how to do: - Design the widget’s UI - Build your design using existing widgets - Plan and implement how users interact with the widget Getting Started Download the project by clicking the Download Materials button at the top or bottom of the page. This article uses Android Studio, but Visual Studio Code will work fine as well. You’ll make a music-playing app called Classical. It only plays one song, but that’s OK because the song is so great you won’t ever want to listen to anything else. :] Here’s how the app will look when you’re done: The audio player control at the bottom is the custom widget that you’ll make. Open the starter project by navigating to the starter folder and clicking Get dependencies when Android Studio prompts you to do so. The starter project already includes some code so you can finish this project in a single tutorial. If you are curious about the app architecture, check out the article State Management With Provider. Run the app now and you’ll see this: It’s time to start composing your widget so your users can listen to some delightful music. Refactoring UI Layouts As you probably know, Flutter’s UI layout consists of a tree of widgets. Each leaf in the tree is a widget. Each branch of the tree is a widget. The whole UI itself is also just a widget. That’s what composition is all about: widgets made of widgets all the way down to the smallest components. The code for the widget tree can get pretty nested. To make your layout code more readable and maintainable, you should factor out groups of widgets into their own standalone widget classes. Extracting Widgets In the lib folder, open main.dart. Find the MyApp widget, which looks like this: class MyApp extends StatelessWidget { @override Widget build(BuildContext context) { return MaterialApp( ... body: SafeArea( ... ), ); } } Even though MyApp is quite simple already, you can break it down still further. This is a good opportunity to learn about refactoring and using Android Studio’s tools to extract widgets. Put your cursor on Stack and right-click to show the context menu. Then choose Refactor ▸ Extract ▸ Extract Flutter Widget…. This is the body of Scaffold, so name it BodyWidget. class BodyWidget extends StatelessWidget { const BodyWidget({ Key key, }) : super(key: key); @override Widget build(BuildContext context) { return Stack( children: <Widget>[ // ... ], ); } } Android Studio automatically created a new widget from Stack and its descendant widgets. That’s it. You’re finished. Now you know how to make custom widgets in Flutter. Thanks for reading. Come again in the next tutorial for more great content from raywenderlich.com. Just kidding. :] There’s more to this article ahead. But in all seriousness, it really is that easy to create new widgets. You could put BodyWidget into its own file and use it in another part of this app or even another app. While this widget itself isn’t particularly interesting, the audio player widget that you’ll build next is. Stackwidget tree from a method within MyApp. While this is fine, there are a number of advantages to extracting as a widget rather than a method or function. The main advantage for the purpose of this article is that you can reuse extracted widgets. Types of Custom Widgets This article focuses on the easiest way to make custom widgets: composition, or building complex widgets by combining simpler widgets. However, it’s worth mentioning a couple of other ways to make custom widgets. If you can’t get the widget you want by combining other widgets, you can draw it on a canvas that Flutter provides. You do this using CustomPainter. Read Drawing Custom Shapes With CustomPainter in Flutter for a great example of how to do that. If you want to go really low level, it’s also possible to make widgets the same way that the Flutter framework does it: by using RenderObjects. The best way to learn about this is to explore the Flutter source code of a widget similar to the one you want to make. Check out Flutter Text Rendering for a real-life example of making a widget from scratch. It’s time to get down to work. This article will take you step-by-step through everything you need to do to create your own custom widgets. Here are the steps you’ll take: - Design your widget - Decompose the design - Build the basic widget - Customize the look - Determine the user interaction - Define the parameters - Implement the parameters - Test the widget - Share your widget with the world In the next four steps, you’ll determine how the user interface will look. Designing Your Widget It’s helpful to have a visual representation of the widget you want in your mind. Sketch it out on paper or use design software to draw it. You can also borrow design ideas from others. Just because you’re a developer, doesn’t mean you can’t learn to be a great designer as well. If you’re into podcasts, check out Design Advice for Engineers to further develop your skills in this area. For an audio player control widget, MediaElement.js is a good solid place to start: The volume control is not important for this tutorial, so crop it out: Decomposing the Design Once you have the design you want, identify which smaller widgets you can use to build it. You should be able to get something close with IconButton, Slider, Container and a couple of Text widgets. Oh, yes, they’re laid out in a row, so you’ll need a Row widget, too. Building the Basic Widget Create a new file by right-clicking the lib folder and choosing New ▸ File. Name it audio_widget.dart. Then enter the following code: import 'package:flutter/material.dart'; class AudioWidget extends StatelessWidget { @override Widget build(BuildContext context) { return Container( child: Row( children: [ IconButton(icon: Icon(Icons.play_arrow)), Text('00:37'), Slider(value: 0), Text('01:15'), ], ), ); } } Note the Container, Row, Button, Text and Slider widgets. Back in lib/main.dart, scroll to the bottom of the file and delete the line that says TODO delete this line. Then uncomment the line that says TODO uncomment this line. Add the import at the top: import 'audio_widget.dart'; Build and run the app. This gives you the following. If you ignore the fact that it’s sitting on Beethoven’s chest, it already looks a lot like an audio player control. Customizing the Look To make the control look more like the MediaElement.js audio player, you need to make a few adjustments. Open lib/audio_widget.dart again. The first thing to do is give the widget a fixed height so it doesn’t take up the whole screen. Add the following line to the Container widget before its child parameter. height: 60, This is where hot reload shines. Press the yellow Lightning button in Android Studio to get an instant update. That’s better. Now it’s at the bottom where it’s supposed to be: The button looks a little too dark. That’s because it needs a function for its onPressed callback to enable it. Add onPressed: (){}, to IconButton so it looks like this: IconButton( icon: Icon(Icons.play_arrow), onPressed: (){}, ), Do a hot reload. The Play button is brighter now: There’s quite a bit you can customize about the Slider widget. Add the following parameters: Slider( value: 0.5, activeColor: Theme.of(context).textTheme.bodyText2.color, inactiveColor: Theme.of(context).disabledColor, onChanged: (value){}, ), Here are some notes about this code: - A value of 0.5puts the slider thumb in the middle. - Rather than hardcoding the active and inactive colors, getting the colors from the theme makes this widget work in both dark and light modes. That’s a win for reusability. - Giving onChangeda value enables the slider. You’ll add more code here later. Do a hot reload. There’s too much empty space on the right. Slider can be any length, so wrap it with Expanded. With your cursor on Slider, press Option-Return on a Mac or Alt-Enter on a PC. Choose Wrap with widget in the context menu and change widget to Expanded. Expanded( child: Slider(...), ) Do a hot reload. Looks like it needs a little padding on the right. Add SizedBox(width: 16), to the end of the list of Row children like so: IconButton(...), Text(...), Slider(...), Text(...), SizedBox(width: 16), Do a hot reload. Great! That looks pretty good for now. Now that you’ve finished the UI, you need to allow the user to interact with the audio widget. You’ll add these UX features in the next three steps. Determining the User Interaction There are four pieces here: - Play/Pause button: When a user clicks this, it should alternate between a Play and a Pause icon. When the audio reaches the end of the track, it should also revert to the Play icon. That means there needs to be a way to set the button icon, or maybe the play state. - Current time: The app user doesn’t interact with the current time, but the developer needs to have some way to update it based on whatever audio plugin they’re using. - Seek bar: The developer should be able to update the position based on the elapsed time of the audio that’s playing. The user should also be able to drag it to a new location and have that notify a listener. - Total time: The developer needs to be able to set this based on the audio file length. Defining the Parameters Imagine that you’re a developer using this widget. How would you want to set the values? This would be one reasonable way to do it: AudioWidget( isPlaying: false, onPlayStateChanged: (bool isPlaying) {}, currentTime: Duration(), onSeekBarMoved: (Duration newCurrentTime) {}, totalTime: Duration(minutes: 1, seconds: 15), ), Here’s what this code is doing: - isPlaying: This allows you to toggle the Play/Pause button icon. - onPlayStateChanged: The widget notifies you when the user presses the Play/Pause button. - currentTime: By using Durationhere, rather than Stringor Text, you don’t need to worry about setting the current time text and the Sliderthumb position separately. The widget will handle both of these. - onSeekBarMoved: This updates you when the user chooses a new location. - totalTime: Like currentTime, this can also be a Duration. This is the tactic you’ll use in this tutorial. Implementing the Parameters There are a handful of sub-steps necessary to implement your plan above. Converting to StatefulWidget You originally made a stateless widget, but you need to convert it to StatefulWidget because you now have to keep track of the Slider state internally. In lib/audio_widget.dart, put your cursor on the AudioWidget class name. Press Option-Return on a Mac or Alt-Enter on a PC to show the context menu. Choose Convert to StatefulWidget. You’ll see something similar to the following: class AudioWidget extends StatefulWidget { @override _AudioWidgetState createState() => _AudioWidgetState(); } class _AudioWidgetState extends State<AudioWidget> { @override Widget build(BuildContext context) { return Container(...); } } Adding a StatefulWidget Constructor Now, in AudioWidget (not _AudioWidgetState), add a constructor with the parameters you defined above: const AudioWidget({ Key key, this.isPlaying = false, this.onPlayStateChanged, this.currentTime, this.onSeekBarMoved, @required this.totalTime, }) : super(key: key); final bool isPlaying; final ValueChanged<bool> onPlayStateChanged; final Duration currentTime; final ValueChanged<Duration> onSeekBarMoved; final Duration totalTime; Here are some things to note: - The source code of the standard Flutter widgets is very useful to see how other widgets are built. The Slider widget source code is especially helpful here. - All widgets have keys. Watch When to Use Keys to learn more about them. ValueChangedis just another name for Function(T value). This is how you make a parameter with a closure. - It wouldn’t make sense to have an audio player without a total time length. The @requiredannotation is useful to enforce that. Since totalTime is required now, go to main.dart and add an arbitrary Duration to the AudioWidget constructor. return AudioWidget( totalTime: Duration(minutes: 1, seconds: 15), ); You’ll hook AudioWidget up to the view model later to get a real audio duration. Implementing the Play Button You’ll handle the logic for the Play/Pause button next. You aren’t going to add any internal state for this button. The developer can keep track of the play state based on the audio plugin that’s actually playing the music. When that state changes, the developer can just rebuild this widget with a new value for isPlaying. To keep the UI layout code clean, build the Play button in its own method. Go back to lib/audio_widget.dart. In _AudioWidgetState, put your cursor on IconButton, right-click and choose Refactor ▸ Extract ▸ Method from the context menu. This time, you’re extracting as a method rather than a widget so that you can keep everything in the state class. Name the method _buildPlayPauseButton and give it this code: IconButton _buildPlayPauseButton() { return IconButton( icon: (widget.isPlaying) ? Icon(Icons.pause) : Icon(Icons.play_arrow), color: Colors.white, onPressed: () { if (widget.onPlayStateChanged != null) { widget.onPlayStateChanged(!widget.isPlaying); } }, ); } Here are some notes about the code above: IconButtonnow chooses an icon based on isPlaying‘s value. Pressing the button will notify anyone listening to onPlayStateChangedabout the event. - The variables in StatefulWidgetare available to the state class by prefixing them with widget.. For example, in _AudioWidgetStateyou can reference the isPlayingvariable of AudioWidgetby using widget.isPlaying. Do a hot restart. A disadvantage of extracting to a method rather than a widget is that hot reload doesn’t work. Press the Play button now, but there’s no response. That’s because you haven’t hooked up any logic to change the isPlaying value yet. You’ll do that once you’ve implemented all the other widgets. Implementing the Seek Bar Do the seek bar next because the current time label depends on it. Add two state variables at the top of _AudioWidgetState: double _sliderValue; bool _userIsMovingSlider; The slider value can be a double from 0.0 to 1.0. Add a method at the bottom of the _AudioWidgetState class to calculate it: double _getSliderValue() { if (widget.currentTime == null) { return 0; } return widget.currentTime.inMilliseconds / widget.totalTime.inMilliseconds; } Use milliseconds rather than seconds so the seek bar will move smoothly, rather than hopping from second to second. When the user is moving the slider manually, you’ll need a method to calculate the current time based on the slider value. Add the following method at the bottom of the _AudioWidgetState class: Duration _getDuration(double sliderValue) { final seconds = widget.totalTime.inSeconds * sliderValue; return Duration(seconds: seconds.toInt()); } Now you can initialize the state variables. Add the following method above build in _AudioWidgetState: @override void initState() { super.initState(); _sliderValue = _getSliderValue(); _userIsMovingSlider = false; } This method is only called the first time the widget is built. When the user is moving the seek bar at the same time that audio is playing, you don’t want _sliderValue to fight against widget.currentTime. The _userIsMovingSlider flag helps you check for that. Apply the flag by adding the following lines inside build before the return statement. if (!_userIsMovingSlider) { _sliderValue = _getSliderValue(); } Now, extract Slider into a method as you did for IconButton earlier. Put your cursor on Expanded — the parent of Slider — right-click and choose Refactor ▸ Extract ▸ Method from the context menu. Name the method _buildSeekBar and give it the following code: Expanded _buildSeekBar(BuildContext context) { return Expanded( child: Slider( value: _sliderValue, activeColor: Theme.of(context).textTheme.bodyText2.color, inactiveColor: Theme.of(context).disabledColor, // 1 onChangeStart: (value) { _userIsMovingSlider = true; }, // 2 onChanged: (value) { setState(() { _sliderValue = value; }); }, // 3 onChangeEnd: (value) { _userIsMovingSlider = false; if (widget.onSeekBarMoved != null) { final currentTime = _getDuration(value); widget.onSeekBarMoved(currentTime); } }, ), ); } Here are some things to note: - The user is starting to manually move the Sliderthumb. - Whenever the Sliderthumb moves, _sliderValueneeds to update. This will affect the UI by updating the visual position of the thumb on the slider. - When the user finishes moving the thumb, turn the flag off to start moving it based on the play position again. Then notify any listeners of the new seek position. Do a hot restart. The slider moves now, but the label is still not updating. You’ll address that next. Implementing the Current Time Label You can change the current time by changing the constructor value or by moving the slider. Since Slider should always stay in sync with the current time label, use _sliderValue to generate the label string. Add the following method at the bottom of the _AudioWidgetState class: String _getTimeString(double sliderValue) { final time = _getDuration(sliderValue); String twoDigits(int n) { if (n >= 10) return "$n"; return "0$n"; } final minutes = twoDigits(time.inMinutes.remainder(Duration.minutesPerHour)); final seconds = twoDigits(time.inSeconds.remainder(Duration.secondsPerMinute)); final hours = widget.totalTime.inHours > 0 ? '${time.inHours}:' : ''; return "$hours$minutes:$seconds"; } This method is a modification of the Dart Duration.toString() method. Next, extract the current time Text widget to a method. In build, put your cursor on the first Text widget, right-click and choose Refactor ▸ Extract ▸ Method from the context menu. Name the method _buildCurrentTimeLabel and give it the following code: Text _buildCurrentTimeLabel() { return Text( _getTimeString(_sliderValue), style: TextStyle( fontFeatures: [FontFeature.tabularFigures()], ), ); } FontFeature requires the dart:ui library, so add the following import at the top of the file: import 'dart:ui'; Using FontFeature.tabularFigures() ensures that the digits will use a monospaced width. This keeps the Text width from jumping around. Read about Font Features in Flutter to learn more. Do a hot restart. Now, the current time label updates when you move the seek bar thumb. Implementing the Total Time Label Last of all is the total time label on the far right. Extract the total time Text widget to its own method. As before, in build, put your cursor on the last Text widget, right-click and choose Refactor ▸ Extract ▸ Method from the context menu. Name the method _buildTotalTimeLabel and give it the following code: Text _buildTotalTimeLabel() { return Text( _getTimeString(1.0), ); } The total time is when the slider is all the way at the right, which is a slider value of 1.0. Thus, you can use _getTimeString() again to generate the label string. Do a hot restart. It looks the same as before because the totalTime argument is Duration(minutes: 1, seconds: 15), which you set previously in main.dart. Great! You now have your own custom widget composed completely of existing Flutter widgets. In the last two steps, you’ll finalize your widget for production. Testing the Widget Widget testing is an important part of creating custom widgets. To keep this article a manageable size, it won’t cover widget testing, but you should read An Introduction to Widget Testing in the Flutter documentation and Widget Testing With Flutter: Getting Started here on raywenderlich.com. For now, you’ll just test that AudioWidget works by hooking it up to an audio plugin. The view model in the starter project is all ready to communicate with your new widget. In lib/main.dart, delete the entire AudioPlayer class, located at the bottom of the file, then add the following code: class AudioPlayer extends StatelessWidget { @override Widget build(BuildContext context) { return ViewModelBuilder<AudioViewModel>.reactive( viewModelBuilder: () => AudioViewModel(), onModelReady: (model) => model.loadData(), builder: (context, model, child) => AudioWidget( isPlaying: model.isPlaying, onPlayStateChanged: (bool isPlaying) { model.onPlayStateChanged(isPlaying); }, currentTime: model.currentTime, onSeekBarMoved: (Duration newCurrentTime) { model.seek(newCurrentTime); }, totalTime: model.totalTime, ), ); } } AudioWidget is the most important part here. It gets its state from model and rebuilds whenever the values there change. It also updates model when the user presses the Play/Pause button or moves the seek bar. Do a hot reload, press the Play button and enjoy the concert. Here is what it looks like in action: Sharing Your Widget With the World Now that you have a working version of AudioWidget, you or anyone else can use it simply by copying audio_widget.dart into a project. You can make it even easier for other people to use it by sharing it on Pub, the central repository for Flutter and Dart packages. Here are a few general guidelines for adding a Pub package: - Start a new Flutter project in Android Studio and choose Flutter Package for the project type. - Put your custom widget in the lib folder. - Add a folder named example to the project root. In there, add a Flutter app that demonstrates how to use your widget. The example project’s pubspect.yaml imports the widget using path: ../. Find other developers’ examples on GitHub to see how they did it. Most Pub packages have links to their GitHub repos. - Make a GitHub repository for your own project. Make sure all your public methods and parameters have comment documentation. - Read Developing Packages & Plugins and Publishing Packages. - Once you’ve set everything up, running pub publishfrom the project root is how you publish your package. First, however, you should test it with pub publish --dry-run. Where to Go From Here? Download the final project using the Download Materials button at the top or bottom of this tutorial. If you want to improve the widget, here are a few ideas: - Add more constructor parameters to allow developers to further customize how the widget looks and behaves. - Support playing audio from remote sources by allowing the Play button to show a circular progress indicator while the audio file is downloading or buffering. - Replace Sliderwith a CustomPainterwhere you draw the play progress and buffered content separately. Refer to RangeSlider for ideas. Here are a couple good videos to learn more about building custom widgets: If you have any comments or questions, please leave them in the forum discussion below.
https://www.raywenderlich.com/10126984-creating-reusable-custom-widgets-in-flutter
CC-MAIN-2021-49
refinedweb
3,741
57.57
A bit difficult to get this information… Rails’ helper when building select elements for a form is the well-known select_tag. If you are populating the options within a select form with data from your database through a model, you would use the options_from_collection_for_select helper. An example is in fact quite straightforward: options_from_collection_for_select(@extensions, 'ext', 'name') Meaning if your controller has populated the collection “@extensions” using methods such as find, where, etc… the helper will then create a select element using the ‘ext’ field as a value, and the ‘name’ field as text to display in the web form, like so: But, what if you would like to display a different format to the user in the web form? Something like “name (ext)”, where name and ext are to be populated from the database. Simply using options_from_collection_for_select(@extensions, ‘ext’, ‘user(ext)’) will result in an error of course: But the error “undefined method ‘user(ext)’ for #Users gives an important clue. Users happens to be my model class. So it seems rails is looking for a method called user(ext) within the model class. Let’s add a method to the models/users.rb file: class Users < ActiveRecord::Base def userExt self.user + ” (” + self.ext.to_s + “)” end end The above method is simply concatenating strings. Each column from the database is referenced by a method, so self.user refers to the user field of a record, and similarly self.ext refers to the ext field of a record. We then change our original call to options_from_collection_for_select(@extensions, ‘ext’, ‘userExt’) to match our newly defined method and there we go…
http://blog.davidvassallo.me/2013/08/09/rails-of-select_tags-and-options_from_collection_for_select/
CC-MAIN-2016-22
refinedweb
269
61.56
I am a Spring/Java developer (primarily) and an advocate of unit testing. My co-workers have created some great blog posts on unit testing (especially for Java) for those who are also interested in the subject.. Architectural Structure Debate & The Risks Architectural structure is often another source of debate in an enterprise software team setting. - How should the code be organized? - What is the team going to do to keep it that way? - Is an architect going to spend time checking that no team member is violating basic plans for the architecture of all the projects for which they are responsible? - Who makes sure that the System.outcalls are removed before we go to production? - After we have released to prod, but we had to track down that big bug last week where we threw tons of sysoutsat it? - How do we know our cyclomatic complexity has gotten too large and it is time to refactor? - Who makes sure that in the maintenance phase, someone doesn’t add calls from our model objects up to our controllers or views? It is ultimately the responsibility of the architect or team lead. But, as deadlines creep up, we tend to “just get that last thing working.” Hopefully, there is time later to fix the “ugly bits,” but priorities are often not set by the architects or the teams writing the code. You can imagine there are even more rules that are violated once the primary developers and architects move on and there is no direct management of the bug-fix and maintenance phases. Introducing ArchUnit ArchUnit is a great library that does a lot for you to mitigate these risks. It is a Java unit testing framework, so you don’t need to learn a new tool. It has predefined classes that test for common things that go awry, but it is extensible so you can write your own rules and custom business checks that might only pertain to your company or even the specific project. It uses the Java Reflection API to verify the current state of the code in either case. The ArchUnit team have provided great examples in their code on Github. I encourage you to check it out! As an example, here is a common example provided by ArchUnit to catch sneaky layer violators… ArchUnit/archunit-example/src/main/java/com/tngtech/archunit/example/persistence/layerviolation/DaoCallingService.java public class DaoCallingService implements ServiceInterface { public static final String violateLayerRules = "violateLayerRules"; public static final String violateLayerRulesTrickily = "violateLayerRulesTrickily"; ServiceViolatingLayerRules service; void violateLayerRules() { service.doSomething(); } void violateLayerRulesTrickily() { new SomeMediator(service).violateLayerRulesIndirectly(); } } See It In Action Here is a project where I demonstrate how I see ArchUnit’s library working in my projects going forward. Advantages The biggest advantage I see is the ability to build this group of architecture rules and use it again. Even if a new member joins the team, they will learn the rules quickly if they have repeatable unit tests that define the parameters. As you find an edge case or something that had not been covered by testing, you can write tests to cover it. The tests run as you set it up, so you could make sure every check in is verified, nightly builds are verified or just when you move to production. Final Thoughts Over time, this could become an internal common library of architecture rules that you put in place at the beginning of a project to keep everyone on the same page from the beginning. After all, less time spent checking things that can be automated means there is more time for things that cannot. I recommend you give ArchUnit a try. Thanks for sharing !
https://keyholesoftware.com/2018/07/23/unit-testing-your-architecture-with-archunit/
CC-MAIN-2019-04
refinedweb
613
53.31
How to use standard HCI on CYW20719 / CYW920719Q40EVB-01 Evaluation Kit?MaRi_1281436 Jul 11, 2019 3:23 AM Hi. I've read which advices to write an empty app to the board. I've tried this - Re: How to use standard HCI on CYW20819 / CYW920819EVB-02 Evaluation Kit? explains to keep CTS high during reset. I've connected the dev kit to my Mac and try examples of a third-party Bluetooth stack (btstack.org(. If I press the Reset Button (SW2), and start the examples right away, I get this HCI sequence - HCI Reset - HCI Command Complete Event / Reset - HCI Read Local Version Information - HCI Command Complete Event / Local Version Information - HCI Read Local Name - HCI Hardware Error Event, HW Code 0x08 If I press the Reset Button, and wait 5 seconds, I get this - HCI Reset - HCI Hardware Error Event, HW Code 0x08 If I hold Recovery Button (SW1), then press Reset Button, I get this - HCI Reset - HCI Command Complete Event / Reset - HCI Read Local Version Information Nothing happens after this. I've also seen Cypress Bluetooth SoC Programming Guide but I'm not sure how to get regular HCI to work. If anybody wants to try, I'm using btstack/port/posix-h4 at master · bluekitchen/btstack · GitHub for testing. Thanks! 1. Re: How to use standard HCI on CYW20719 / CYW920719Q40EVB-01 Evaluation Kit?AnjanaM_61 Jul 11, 2019 4:32 AM (in response to MaRi_1281436)1 of 1 people found this helpful Hi , The method suggested in - CYW20719 in HCI mode should work for HCI commands testing. Can you please try with our CyBluetool if you have a Windows or linux system CyBluetool (Linux Installer) CyBluetool (Windows Installer) ? 1. First please program from WICED SDK with an empty project with #include "sparcommon.h" APPLICATION_START() {} and try sending commands with CyBluetool. Thanks, Anjana 2. Re: How to use standard HCI on CYW20719 / CYW920719Q40EVB-01 Evaluation Kit?MaRi_1281436 Jul 11, 2019 4:54 AM (in response to AnjanaM_61) I've put the suggested code into a file hci.c: #include "sparcommon.h" APPLICATION_START() {} and have a Makefile with NAME := hci APP_SRC = hci.c then, I've uploaded it with $ ./make snip.bt.hci-CYW920719Q40EVB_01 download Compiling spar_setup.c Compiling hci.c Compiling wiced_platform.c Compiling wiced_platform_pin_config.c Compiling lib_installer.c Linking target ELF OK, made elf. ../../43xxx_Wi-Fi/tools/ARM_GNU/OSX/bin/arm-none-eabi-objdump: section '.ordered' mentioned in a -j option, but not found in any input file ../../43xxx_Wi-Fi/tools/ARM_GNU/OSX/bin/arm-none-eabi-objdump: section '.data' mentioned in a -j option, but not found in any input file ../../43xxx_Wi-Fi/tools/ARM_GNU/OSX/bin/arm-none-eabi-objdump: section '.aon' mentioned in a -j option, but not found in any input file ../../43xxx_Wi-Fi/tools/ARM_GNU/OSX/bin/arm-none-eabi-objdump: section '.pram_rodata' mentioned in a -j option, but not found in any input file Call to hci_spar_crt_setup @ 00215871 OK, made /Projects/Broadcom/WICED-Studio-6.2.1-SDK/20719-B1_Bluetooth/WICED/../build/hci-CYW920719Q40EVB_01-rom-ram-Wiced-release/A_20719B1-hci-rom-ram-spar.cgs. MD5 sum is: 51b74b98f13dbe141f0396550673a97c ../build/hci-CYW920719Q40EVB_01-rom-ram-Wiced-release/A_20719B1-hci-rom-ram-spar.cgs -------------------------------------------------------------------------------- Patch code starts at 0x00270400 (RAM address) Patch code ends at 0x0027ACD8 (RAM address) Patch RW/ZI size 2936 bytes Application starts at 0x00215768 (RAM address) Application ends at 0x00215870 (RAM address) Patch code size 43224 bytes Application RAM footprint 264 bytes ------ Total RAM footprint 3200 bytes (3.1kiB) -------------------------------------------------------------------------------- Converting CGS to HEX... Conversion complete Creating OTA images... Conversion complete OTA image footprint in NV is 50218 bytes Detecting device... Device found Downloading application... Download complete Application running. Now. I start CyBluetool on Windows and connect to the controller successfully. 07/11/19 13:50:36.452 com2 -- Transport opened com2@115200 07/11/19 13:50:36.452 com2 -- Protocol set to HCI com2@115200 Then, I execute an HCI Reset 07/11/19 13:51:22.788 com2@115200 c> Reset HCI Command com2@115200 [03 0C 00 ] opcode = 0x0C03 (3075, "Reset") 07/11/19 13:51:22.803 com2 <e Hardware Error HCI Event com2@115200 [10 01 ]: 00 event = 0x10 (16,"Hardware Error") Hardware_Code = 0x0 (0, "UART Parsing Error") 07/11/19 13:51:22.834 com2 <e Vendor Specific HCI Event com2@115200 [FF 08 ]: 1B 04 01 00 00 79 01 00 event = 0xFF (255,"Vendor Specific") Event_Sub_Code = 0x1B (27, "DBFW Dump") Dump Type = 0x4 (4, "DBFW TraceDump2") Nof T = 0x1 (1) Trace Status = 0x0 (0, "") TRACE-1 = "00 79 01 00" I did not expect to get the Hardware error. What did I do wrong? 3. Re: How to use standard HCI on CYW20719 / CYW920719Q40EVB-01 Evaluation Kit?MaRi_1281436 Jul 13, 2019 1:21 PM (in response to MaRi_1281436) After successfully following the Run CYW20706 in HCI Mode guide with the 20706 dev kit, I've tried the same approach with the 20719: disabled the trace log by wiced_set_debug_uart(WICED_ROUTE_DEBUG_NONE) in the hello_sensor demo application and keep CTS high during RESET - the USB UART is closed when pressing RESET button. With this, the 20719 is mostly working over HCI. Could somebody explain the difference between an empty application and a full Bluetooth application? Is this correct? During startup, I get a few 'HCI Event Command Complete - Command Disallowed" for HCI Reset, HCI Read Local Name, HCI Set Event Mask, ... more or less all configuration commands. Any ideas about this? 4. Re: How to use standard HCI on CYW20719 / CYW920719Q40EVB-01 Evaluation Kit?AnjanaM_61 Aug 9, 2019 1:12 AM (in response to MaRi_1281436)1 of 1 people found this helpful Hi , Not sure if your issue is resolved or not. Whenever there is a full bluetooth application loaded, there will be HCI transaction going on from the application. The reason why its recommended to have an empty application is , it avoids any unwanted HCI transaction and will respond to the HCI commands sending from the host connected. I had tested 20719 downloaded with empty application as suggested here: CYW20719 in HCI mode and was able to successfully communicate with CyBluetool several times. Please make sure the CyBluetool settings are correct ( flow control enabled, HCI UART connected with baud rate 115200bps). Make sure HCI uart port is not connected to any other console when you try communicating with CyBluetool. Thanks, Anjana
https://community.cypress.com/message/202169
CC-MAIN-2019-43
refinedweb
1,058
55.95
Opened 3 years ago Closed 11 months ago #5086 enhancement closed fixed (fixed) Accept IPv6 address literals in IReactorUDP.listenUDP Description (last modified by habnabit) Similarly to #5084, Twisted should provide low-level support for IPv6 UDP servers. IReactorUDP.listenUDP implementations should accept IPv6 address literals and set up a UDP port bound to that address. (see #5084) from its getHost implementations. The protocol will be connected to an IUDPTransport implementation which also returns IPv6Address instances from its getHost implementation. When a datagram is delivered to the server, the datagramReceived method will be invoked with the address returned by socket.recvfrom (nominally a 4 tuple giving address, port, flow info, and scope id), just as is the case for IPv4 (where only address and port are present in the tuple). The IUDPTransport.write implementation in this case will also accept a 4 tuple of this sort. Sending to IPv6 addresses in this way is supported, but only if an IPv6 address literal was passed to listenUDP in the first place. IUDPTransport.getHost is documented as returning IPv4Address instances. This should probably change. Once this is resolved, Twisted servers should be able to bind UDP ports on IPv6 addresses, eg ::1 or ::, as well as send datagrams to IPv6 address from such ports. Addresses that include an embedded scope id will be supported after #6647 is resolved. Attachments (10) Change History (61) comment:1 Changed 3 years ago by thijs comment:2 Changed 17 months ago by marto1_ - Keywords review added comment:3 Changed 17 months ago by exarkun - Keywords review removed - Owner set to marto1_ Thanks for your work on this issue. Can you also write unit tests for this functionality? All changes and new code needs to have complete automated test coverage before it can be applied to trunk. comment:4 Changed 17 months ago by marto1_ - Keywords review added - Owner marto1_ deleted Changed 16 months ago by marto1_ Polished tests in twisted.test.test_udp Changed 16 months ago by marto1_ Here multicast works for v6 as far as loopback testing goes. comment:5 Changed 16 months ago by exarkun - Owner set to exarkun - Status changed from new to assigned comment:6 Changed 16 months ago by exarkun - Keywords review removed - Owner changed from exarkun to marto1_ - Status changed from assigned to new Thanks. - twisted/test/test_udp.py - we're trying not to write tests in this style anymore. All new tests for reactor functionality: - should be added in `twisted/internet/test/ - should be based on ReactorBuilder, which allows the test to be run against all reactor implementations instead of only one - Please split the multicast changes out into a separate ticket - The flowInfo and scopeId parts of this change look incomplete. They're also not necessary to resolve this ticket: the plan is to start off with the bare minimum necessary for IPv6, which just involves binding and connecting to IPv6 addresses. More advanced uses that involve flowInfo and scopeId can come later. I suggest removing them from this patch and contributing them as part of a separate ticket (along with documentation about what they're for and more test coverage that demonstrates the values we supply for them are correct). This should mean you won't need to make any changes to twisted/internet/address.py, I think - but I guess I'll also mention that the _bwHack changes added by this patch definitely shouldn't be added: that's backwards compatibility support code for IPv4Address. IPv6Address is not encumbered by those compatibility requirements because it never had the deprecated behavior _bwHack is in support of. Note also that if you did want to add a deprecation to IPv6Address, you'd want to make sure the warning mentions the correct API (this one still mentions IPv4Address) and have the correct version (the version the warning first appears in, which is usually the *next* released version of Twisted - that would be 13.2 at this point, since the 13.1 release process has already started - not 11.0 as the warning says). - twisted/internet/abstract.py, covertIPv6ToInteger: - there's a typo in the name :) - also, this seems unused, so I think it doesn't need to be added to the patch - in twisted/internet/udp.py - avoid adding new public interfaces like setAddressFamily that are really just implementation details. If necessary, add such methods as private - eg, _setAddressFamily. - It's usually a mistake to raise RuntimeError (Twisted code makes this mistake a lot). Also, this isn't a documented or tested way in which listenUDP can fail, so we probably want to avoid it. At the moment the TCP IPv6 code doesn't even try to deal with this case, so I think it's alright to follow that code's lead, but it raises an interesting question that's probably worth addressing in a future ticket. - The TCP IPv6 code also avoided needing to change the resolver code in base.py by introducing the _requiresResolution flag. That might not be ideal, but I think we should either: - follow that code's lead and leave base.py alone - or decide that resolve is going to be smart about this and remove the workaround that's in the TCP IPv6 code, since it will no longer be necessary. - Please also include a news fragment Of 5.3.1 and 5.3.2, I'd prefer the former for this ticket, although the latter is nicer in the long term and could be done separately either before or after finishing this ticket. Thanks again! Looking forward to the next revision of this patch. Changed 16 months ago by marto1_ With the exception of the two FIXMEs this should work comment:7 Changed 16 months ago by marto1_ - Keywords review added - Owner marto1_ deleted comment:8 Changed 16 months ago by marto1_ Interestingly enough this works quite fine . In the worst of scenarios, with callLatters commented out, it just hangs, again, expected, but in the test case it just seems to 'ignore' the last two callbacks, also calling cbClientSend only once. comment:9 Changed 16 months ago by marto1_ Forgive my ignorance, apparently you must execute self.runReactor(reactor) so the thing could actually work in test enviroment. Changed 16 months ago by marto1_ Changed 16 months ago by marto1_ Got the patch wrong, here is without the old stuff comment:10 Changed 15 months ago by habnabit - Branch set to branches/ipv6-listenUDP-5086 comment:11 Changed 15 months ago by habnabit comment:12 Changed 15 months ago by habnabit comment:13 Changed 15 months ago by habnabit comment:14 Changed 15 months ago by habnabit Hi marto1_! I had to make some changes to make your patch cleanly apply, but it was clean at the time you submitted it. I think everything is good now, though. This will make it easier on the reviewer. I myself don't feel comfortable reviewing something that uses ReactorBuilder. comment:15 Changed 15 months ago by habnabit comment:16 Changed 15 months ago by habnabit What a beautiful buildbot result. It passes on windows now! comment:17 Changed 15 months ago by exarkun - Owner set to exarkun - Status changed from new to assigned comment:18 Changed 15 months ago by exarkun - Keywords review removed - Owner changed from exarkun to habnabit - Status changed from assigned to new Thanks! - twisted/internet/test/test_udp.py - The docstring test_getHostIPv6 could be improved. The sentence structure is a bit jumbled and the content covers more ground than the test itself actually does (there's no ports dealt with anywhere in the method). - test_bindToIPv6Interface appears to be a near-exact duplicate of test_getHostIPv6. - test_writeToIPv6Interface and test_connectedWriteToIPv6Interface look like they provide good test coverage of the relevant functionality. However, it looks like they'll fail in annoying ways - probably by timing out after a custom (and uncustomizable) timeout expires. I think that ReactorBuilder-style tests that gather a bunch of information while the reactor is running, then stop it, then make assertions about the gathered information a somewhat nicer. These tend to complete more quickly in more cases and provide test failures by raising exceptions rather than logging errors. - I think I can guess the motivation for the explicit test for /no/ warning being logged in these two tests, but if I didn't know about that I'd definitely find these checks to be weird inclusions. What about warnings that are blamed on other functions? And what if something changes so that the current behavior of these tests triggers some /other/ warning? I think that everywhere else in the project, we content ourselves with looking at trial output and seeing that no unexpected warnings appear. That might be the way to go here as well. - twisted/internet/udp.py / twisted/internet/iocpreactor/udp.py - Looks like iocpreactor doesn't handle the magic "<broadcast>". Can you file a ticket for fixing this? - As long as you're changing some lines directly above a raise X, y style exception, can you change those to raise X(y)? - What happens to _setAddressFamily if the address family is neither AF_INET nor AF_INET6? - The ticket summary and description explicitly calls for support of embedded scope ids, but I don't see that feature implemented here. It seems like it may be fine to delay this work until a later ticket, but please file that ticket and link to it from somewhere relevant (I hope it will be resolved before the next release, otherwise the discrepancy between TCP and UDP support of IPv6 will be a little annoying). Thanks again! comment:19 Changed 15 months ago by habnabit comment:20 Changed 15 months ago by habnabit comment:21 Changed 15 months ago by habnabit - Point by point: - Will fix this. - Will look into consolidating the two. - That does sound like a better structure. What will do the timeout if not the test itself, though? It seemed like the trial test runner does not have a configurable timeout and defaults to 300 seconds. - Makes sense. I'll remove the assertions. - Also point by point: - #6647. comment:22 Changed 15 months ago by habnabit comment:23 Changed 15 months ago by habnabit comment:24 Changed 15 months ago by habnabit comment:25 Changed 15 months ago by habnabit - Keywords review added - Owner habnabit deleted - Status changed from assigned to new comment:26 Changed 15 months ago by habnabit shows the changes made in the last iteration. The buildbot has some failures which don't look related to these changes, though I'm slightly suspicious. comment:27 Changed 14 months ago by rwall - Keywords review removed - Owner set to habnabit Thanks habnabit and marto1, This is looking good. I'm excited about being able to run an IPv6 Twisted DNS server. Notes: - Merges cleanly - I did some functional testing as follows and it seemed to work perfectly. - Server: [richard@zorin ipv6-listenUDP-5086]$ twistd -n dns --port=10053 --interface='::1' --recursive 2013-08-24 15:14:16+0100 [-] Log opened. 2013-08-24 15:14:16+0100 [-] twistd 13.1.0 (/usr/bin/python 2.7.5) starting up. 2013-08-24 15:14:16+0100 [-] reactor class: twisted.internet.epollreactor.EPollReactor. 2013-08-24 15:14:16+0100 [-] DNSServerFactory starting on 10053 2013-08-24 15:14:16+0100 [-] DNSDatagramProtocol starting on 10053 2013-08-24 15:14:16+0100 [-] Starting protocol <twisted.names.dns.DNSDatagramProtocol object at 0x165e350> 2013-08-24 15:14:20+0100 [DNSDatagramProtocol (UDP)] DNSDatagramProtocol starting on 55008 2013-08-24 15:14:20+0100 [DNSDatagramProtocol (UDP)] Starting protocol <twisted.names.dns.DNSDatagramProtocol object at 0x165ecd0> 2013-08-24 15:14:20+0100 [-] (UDP Port 55008 Closed) 2013-08-24 15:14:20+0100 [-] Stopping protocol <twisted.names.dns.DNSDatagramProtocol object at 0x165ecd0> - Client from twisted.internet.task import react from twisted.names import dns def main(reactor): p = dns.DNSDatagramProtocol(controller=None) reactor.listenUDP(0, p, interface='::0') d = p.query( ('::1', 10053), [dns.Query('open.nlnet.nl', dns.AAAA)]) def printResults(res): print 'ANS', res.answers print 'AUTH', res.authority print 'ADD', res.additional d.addCallback(printResults) return d react(main) Points: - source:branches/ipv6-listenUDP-5086/twisted/internet/iocpreactor/udp.py - {{{73 raise ValueError(self.interface, 'is not an IPv4 or IPv6 address.')}}} results in the slightly unusual error message {{{ValueError: ('foo.bar', 'is not an IPv4 or IPv6 address.')}}}. Should it be a single string? I suppose it's useful to be able to access the bad address in isolation. - {{{190 if not isIPAddress(addr[0]) and not isIPv6Address(addr[0]):}}} looks like its followed by a deprecation warning which is now over five years old. Perhaps now is the time to disallow hostnames - especially since the gai hostname endpoint just got merged. - {{{224 raise ValueError("please pass only IP addresses, not domain names") }}} It would be useful to include the offending hostname in the error message. - 282 def getHost(self):: Needs an @return and @rtype annotation. - source:branches/ipv6-listenUDP-5086/twisted/internet/udp.py - {{{249 @param addr: A tuple of (I{stringified dotted-quad IP address}, }}}: docstring needs updating. - 268 if (not abstract.isIPAddress(addr[0]): See previous comment about removing the following deprecation warning. - Does "<broadcast>" have any IPv6 equivalent? - {{{299 raise ValueError("please pass only IP addresses, not domain names") }}}: consider adding the offending hostname to the error message. - {{{364 Returns an L{IPv4Address} or L{IPv6Address}. }}}: Also add an @return and @rtype annotation. - source:branches/ipv6-listenUDP-5086/twisted/internet/test/test_udp.py - Use standard docstrings - even for nested fake classes - {{{255 self.assertEqual(packet[1][:2], (cAddr.host, cAddr.port)) }}}: It's not clear what the slice is for. Perhaps add an explanatory comment mentioning what is being omitted. - Same comment for test_connectedWriteToIPv6Interface - source:branches/ipv6-listenUDP-5086/twisted/internet/interfaces.py - Add full docstrings to IReactorUDP.listenUDP including info about IPv6 - modelled on listenTCP - Tested Coverage: All the changes seem to be covered, but there is quite a lot of uncovered adjacent code. Consider improving coverage of various udp write errors [richard@zorin ipv6-listenUDP-5086]$ coverage run --branch --source twisted.internet ./bin/trial twisted.internet.test.test_udp [richard@zorin ipv6-listenUDP-5086]$ coverage html - Are there any IPv6 specific socket errors that need to be handled? - Add some documentation (however terse) to explaining how to use the interface parameter to force IPv6. Please answer or address the numbered points above and re-submit for review with a link to clean build results. Thanks. -RichardW. comment:28 Changed 14 months ago by habnabit - Point by point. - Yes, I think it's valuable to be able to index the exception to get the thing out instead of having to do string munging. - Replaced with a ValueError raise. - Done. - Done. - Point by point. - Done. - Also replaced with a ValueError raise. - I don't believe IPv6 has the concept of a broadcast address at all. - Done. - Done. - All done. The slice was removed as part of the regression fix discussed on IRC. - Isn't this already done? - I think this should be ticketed separately. I can file a separate ticket unless you strongly disagree. - I checked through the sendto(2) documentation for windows/linux/OS X/freebsd/openbsd, but didn't see anything IPv6-specific. - Done. comment:29 Changed 14 months ago by rwall Those changes look good: - branches/ipv6-listenUDP-5086/twisted/internet/udp.py - Create a ticket about the "flow and scope ID" issue and add a link alongside your new comment. - We talked about what happens if you attempt to write to an IPv4 address from an IPv6 socket (or vice versa). Should there be an extra check for this in the write method and raise a ValueError in that case? Or in the case of <broadcast> keyword being used with an IPv6 port? - "please pass only IP addresses, not domain names" it would be nice all the error messages were consistent. Later there's yet another message "self.interface, 'is not an IPv4 or IPv6 address.')" and this time the offending address / host is args[0] - (nit) "from which I am connecting" I think that style of documentation is frowned upon these days. (first person?) instead something like "the source address from which datagrams will be sent" - branches/ipv6-listenUDP-5086/twisted/internet/test/test_udp.py - (nit) "Writing to a connected IPv6 UDP socket on the loopback interface succeeds." isn't a great test docstring. Perhaps instead "An IPv6 address can be passed as the C{interface} argument to L{listenUDP}. The resulting Port accepts IPv6 datagrams." - You didn't say anything about the remaining deprecation warnings in listenUDP? Do you think it's worth removing those at the same time as removing them from the write() method? It's a +1 from me if you address or answer the points above. Might be worth getting a second opinion from someone else though. comment:30 Changed 14 months ago by rwall A few more things... - The new IPv6 documentation probably shouldn't talk about "connections", just sending and receiving IPv6 datagrams. - The new documentation is a bit too server specific. Make it clear that the same thing applies if you just want to send IPv6 datagrams. - Here's the Bert Hubert blog post for posterity. Maybe the trick he uses to get the destination address of received datagrams can be added to Twisted some day (its a valid use of the sendmsg api right?): comment:31 Changed 14 months ago by exarkun raise ValueError(self.interface, 'is not an IPv4 or IPv6 address.') Yes, I think it's valuable to be able to index the exception to get the thing out instead of having to do string munging. Please don't introduce index-based interfaces. Always provide documented attributes, instead. comment:32 Changed 12 months ago by satis - Cc satis added I worked a bit on habnabit's branch and implemented the remarks from rwall and exarkun: - I replaced the ValueError with a new InvalidAddressError, which has an address attribute and an optional message. Where it made sense I removed value errors and replaced it with this one, which is in my opinion a clearer interface. However, this is not 100% backwards compatible, when a hostname is passed to listenUDP, even for v4 previously a ValueError was thrown. The other places where I introduced this are either new or were deprecation warnings before. - The documentation is now about sockets/ports and sending/receiving datagrams, I avoided words like connections and servers. - I added checks to the write methods as rwall suggested and added tests for this. You now cannot write IPv4 when the socket is IPv6 and vice-versa. - I took a look for remaining deprecations but the only one I can see is in the loseConnection, which is not IPv6-related. I tested this locally and saw no failures (centos machine), but I couldn't test on windows so the IOCP changes might still have issues, though they should be an almost identical copy from the generic changes. Changed 12 months ago by satis patch based on ipv6-listenUDP-5086 branch and review comments comment:33 Changed 12 months ago by satis - Keywords review added - Owner habnabit deleted Changed 12 months ago by satis Variation of last patch where InvalidAddress now inherits from ValueError comment:34 Changed 12 months ago by satis To give an alternative for the backwards incompatibility I mentioned before, I made a second patch where InvalidAddressError is derived from ValueError. I'm not 100% convinced this is semantically correct, but it won't break code that explicitly catched ValueError before. comment:35 Changed 12 months ago by habnabit I think that the semantics of deriving InvalidAddressError from ValueError are fine—ValueError means that there was an "inappropriate argument value", which seems to be exactly the case here. comment:36 Changed 12 months ago by habnabit The patch you posted appears to be based on trunk and not the ipv6-listenUDP-5086 branch. I've applied it anyway, but in the future, please submit patches against the current branch. comment:37 Changed 12 months ago by habnabit comment:38 Changed 12 months ago by habnabit comment:39 Changed 12 months ago by habnabit comment:40 Changed 11 months ago by rwall - Owner set to rwall - Status changed from new to assigned Reviewing... comment:41 Changed 11 months ago by rwall - Keywords review removed - Owner changed from rwall to habnabit - Status changed from assigned to new Thanks habnabit, satis and everyone else involved in this branch. The new consistent exception looks great except that the version in the branch doesn't actually inherit from ValueError (see below). There are one or two other issues and suggestions in the notes below. Points: - branches/ipv6-listenUDP-5086/twisted/internet/error.py - InvalidAddressError - I thought it was supposed to inherit from ValueError? - Looks like the wrong patch was applied. - Missing method docstrings in - The @ivar docstrings should be references to docstrings in init - See "It is not necessary to have a second copy" in - branches/ipv6-listenUDP-5086/twisted/internet/test/test_udp.py - nit C{InvalidAddressError} should be L{} ...but pydoctor doesn't look at the test docstrings so it doesn't really matter. - test_writeToIPv6Interface - nit The two assertions could be combined into one by putting all the expected and actual values in tuples....which would make a test failure easier to debug if it happened to fail. - It's a style that exarkun encourages. - Same applies to test_connectedWriteToIPv6Interface - It seems a shame that the legacy IPv4 and IPv6 UDP write tests are now split. - twisted.test.test_udp.UDPTestCase.test_sendPackets (and related tests) could be moved to the new ReactorBuilder testcase. - It might also be an idea to write a test builder that automatically performs certain tests using both IPv4 and IPv6 (sendPacket, connectionRefused, bindError, rebind etc) - Raise a ticket to consolidate the UDP tests. - test_writingToIPv6OnIPv4RaisesInvalidAddressError - nit It would be clearer if we specified an interface address explicitly. - Should be two blank lines between these test methods. - branches/ipv6-listenUDP-5086/twisted/internet/iocpreactor/udp.py - Consider adding a @ivar docstring for addressFamily - 191 if not isIPAddress - For consistency, this should also check for '<broadcast>' like the posix UDP port - branches/ipv6-listenUDP-5086/twisted/internet/udp.py - # Remove the flow and scope ID from the address tuple, - As noted in previous review, this comment should be marked as TODO and needs a ticket reference. - The same reference should also be added to IOCP udp probably - setAddressFamily - It would be nice if this function could be shared with the IOCP port - it's identical. - branches/ipv6-listenUDP-5086/twisted/internet/interfaces.py - getHost: Consider adding an @return and @rtype I'm keen to see this branch land and it's been through 5 rounds of review by three reviewers. So please merge after: - fixing the inheritance of InvalidAddressError - merging forward - addressing or answering the numbered points above - and checking for clean build results. -RichardW. comment:42 Changed 11 months ago by habnabit comment:43 Changed 11 months ago by habnabit comment:44 Changed 11 months ago by habnabit - Branch changed from branches/ipv6-listenUDP-5086 to branches/ipv6-listenUDP-5086-2 comment:45 Changed 11 months ago by habnabit comment:46 Changed 11 months ago by habnabit comment:47 Changed 11 months ago by habnabit - Fixed. Not sure how I ended up applying the wrong patch. :( - Point by point. - Done. Even added the @ivar to t.i.udp.Port. - Point by point. - Done. Forced a build which looks good so far. comment:48 Changed 11 months ago by exarkun Thanks for the work on this. I have a few more comments: - Please update the doc formatting to comply with the standard being set out in #6537 - The __str__ for InvalidAddressError looks unnecessarily confusing. Why should this exception be formatted in this unusual way? - The type information for the initializer arguments to InvalidAddressError are undocumented. - The extra blank line at the beginning of *some* tests in this branch isn't dictated by the coding standard. The inconsistency is a little annoying. Personally I don't see how this blank line helps readability and would prefer not to see it. - example.com is the canonical example domain (even better, perhaps, is example.invalid the canonical invalid domain). eggs.com is a real domain name that someone owns and hosts some content. I would hate for a bug in the implementation to start sending traffic there when the tests are run. - It doesn't look like there's a test for the ValueError compatibility that's being provided by the new exception. Perhaps this isn't necessary... but I'm not sure. Thanks again, all. comment:49 follow-up: ↓ 50 Changed 11 months ago by habnabit comment:50 in reply to: ↑ 49 Changed 11 months ago by rwall Okay, I've addressed all of the points above. I'm slightly unsure about [40787], but I think it's good. If it looks fine to everyone else, I'll go ahead and merge. Yeah, I think those changes look fine. And it's nice to split out the subclass tests. Maybe it would be nice to have an assertIsSubclass method, which gave a useful failure message, but that can be done another time. I might have re-written those test docstrings as "x is a *subclass* of y" but that's a real nitpick. Who knows which is better. Thanks habnabit. Please merge. comment:51 Changed 11 months ago by habnabit - Resolution set to fixed - Status changed from new to closed . Fixed patch, so all tests run, although twisted.scripts.test.test_tap2rpm fail sometimes.
http://twistedmatrix.com/trac/ticket/5086
CC-MAIN-2014-42
refinedweb
4,269
55.24
Hi, I often implement an object namespace (visible to the application at run-time) for primary objects managed by the RTOS. This is invaluable in distributed applications as it allows the implementations to be more effectively decoupled (I really can't imagine any other way of supporting such applications that wouldn't be incredibly brittle!). But, the RTOS I'm currently writing doesn't export any objects beyond the local host. And, I can let threads pass handles directly for objects of interest so there is no need to provide the formal namespace (a luxury that space can't afford). But, being able to "tag" key objects with a suitable "name" often helps with debugging (you can inspect the memory image associated with the object to "see" it's name embedded therein). This has very minimal impact (depending on how much space the developer wants to waste on these tags). However, without the formality of the (active) namespace manager, there is nothing to guarantee these tags are meaningful, unique, etc. And, at (production) run-time, they would be completely useless (since I see no reason to add support for querying them via the API). I.e., they seem like they *might* only have value at DEBUG time. Is this sort of hack helpful? Or, just a silly decoration that distracts rather than assists?? Named RTOS objects Started by ●April 6, 2010
https://www.embeddedrelated.com/showthread/comp.arch.embedded/116677-1.php
CC-MAIN-2020-29
refinedweb
231
52.7
Alfred V. Aho Mentioned 58. What concepts in Computer Science do you think have made you a better programmer? My degree was in Mechanical Engineering so having ended up as a programmer, I'm a bit lacking in the basics. There are a few standard CS concepts which I've learnt recently that have given me a much deeper understanding of what I'm doing, specifically: Language Features Data Structures Algorithms Obviously, the list is a little short at the moment so I was hoping for suggestions as to: I find it a little funny that you're looking for computer science subjects, but find wikipedia too academic :D Anyway, here goes, in no particular order: As a recent graduate from a computer science degree I'd recommend the following: As mentioned in various posts Big O notation OO Design Data structures & Algorithms (can't remember the exact title of the book I used will update if i remember) Operating Systems NP Problems Some of the OS concepts ( memory, IO, Scheduling, process\Threads, multithreading ) [a good book "Modern Operating Systems, 2nd Edition, Andrew S. Tanenbaum"] Basic knowledge of Computer networks [a good book by Tanenbaum OOPS concepts Finite autometa A programming language ( I learnt C first then C++) Algorithms ( Time\space complexity, sort, search, trees, linked list, stack, queue ) [a good book Introduction to Algorithms] So I found out that C(++) programs actually don't compile to plain "binary" (I may have gotten some things wrong here, in that case I'm sorry :D) but to a range of things (symbol table, os-related stuff,...) but... Does assembler "compile" to pure binary? That means no extra stuff besides resources like predefined strings, etc. If C compiles to something else than plain binary, how can that small assembler bootloader just copy the instructions from the HDD to memory and execute them? I mean if the OS kernel, which is probably written in C, compiles to something different than plain binary - how does the bootloader handle it? edit: I know that assembler doesn't "compile" because it only has your machine's instruction set - I didn't find a good word for what assembler "assembles" to. If you have one, leave it here as comment and I'll change it. Let's take a C program. When you run 'gcc' or 'cl' on the c program, it will go through these stages: In practice, some of these steps may be done at the same time, but this is the logical order. Note that there's a 'container' of elf or coff format around the actual executable binary. You will find that a book on compilers(I recommend the Dragon book, the standard introductory book in the field) will have all the information you need and more. As Marco commented, linking and loading is a large area and the Dragon book more or less stops at the output of the executable binary. To actually go from there to running on an operating system is a decently complex process, which Levine in Linkers and Loaders covers. I've wiki'd this answer to let people tweak any errors/add information. know that one of the differences between classes and structs is that struct instances get stored on stack and class instances(objects) are stored on the heap. Since classes and structs are very similar. Does anybody know the difference for this particular distinction? How the compiler and run-time environment handle memory management has grown up over a long period of time. The stack memory v.s. heap memory allocation decision had a lot to do with what could be known at compile-time and what could be known at runtime. This was before managed run times. In general, the compiler has very good control of what's on the stack, it gets to decide what is cleaned up and when based on calling conventions. The heap on the other hand, was more like the wild west. The compiler did not have good control of when things came and went. By placing function arguments on the stack, the compiler is able to make a scope -- that scope can be controlled over the lifetime of the call. This is a natural place to put value types, because they are easy to control as opposed to reference types that can hand out memory locations (pointers) to just about anyone they want. Modern memory management changes a lot of this. The .NET runtime can take control of reference types and the managed heap through complex garbage collection and memory management algorithms. This is also a very, very deep subject. I recommend you check out some texts on compilers -- I grew up on Aho, so I recommend that. You can also learn a lot about the subject by reading Gosling. I'm reading through the dragon book and trying to solve an exercise that is stated as follows Write regular definitions for the following languages: - All strings of digits with no repeated digits. Hint: Try this problem first with a few digits, such as { 0, 1, 2 }. Despite having tried to solve it for hours, I can't imagine a solution, beside the extremely wordy d0 -> 0? d1 -> 1? d2 -> 2? d3 -> 3? d4 -> 4? d5 -> 5? d6 -> 6? d7 -> 7? d8 -> 8? d9 -> 9? d10 -> d0d1d2d3d4d5d6d7d8d9 | d0d1d2d3d4d5d6d7d9d8 | ... Hence having to write 10! alternatives in d10. Since we shall write this regular definition, I doubt that this is a proper solution. Can you help me please? So the question didn't necessarily ask you to write a regular expression, it asked you to provide a regular definition, which I interpret to include NFA's. It turns out it doesn't matter which you use, as all NFA's can be shown to be mathematically equivalent to regular expressions. Using the digits 0, 1, and 2, a valid NFA would be the following (sorry for the crummy diagram): Each state represents the last digit scanned in the input, and there are no loops on any of the nodes, therefore this is an accurate representation of a string with no repeated digits from the set {0,1,2}. Extending this is trivial (although it requires a large whiteboard :) ). NOTE: I am making the assumption that the string "0102" IS valid, but the string "0012" is not. This can be converted to a regular expression (although it will be painful) by using the algorithm described here. I've been given a job of 'translating' one language into another. The source is too flexible (complex) for a simple line by line approach with regex. Where can I go to learn more about lexical analysis and parsers? After taking (quite) a few compilers classes, I've used both The Dragon Book and C&T. I think C&T does a far better job of making compiler construction digestible. Not to take anything away from The Dragon Book, but I think C&T is a far more practical book. Also, if you like writing in Java, I recommend using JFlex and BYACC/J for your lexing and parsing needs. Yet another textbook to consider is Programming Language Pragmatics. I prefer it over the Dragon book, but YMMV. If you're using Perl, yet another tool to consider is Parse::RecDescent. If you just need to do this translation once and don't know anything about compiler technology, I would suggest that you get as far as you can with some fairly simplistic translations and then fix it up by hand. Yes, it is a lot of work. But it is less work than learning a complex subject and coding up the right solution for one job. That said, you should still learn the subject, but don't let not knowing it be a roadblock to finishing your current project. How do recursive ascent parsers work? I have written a recursive descent parser myself but I don't understand LR parsers all that well. What I found on Wikipedia has only added to my confusion. Another question is why recursive ascent parsers aren't used more than their table-based counterparts. It seems that recursive ascent parsers have greater performance overall. The clasical dragon book explains very well how LR parsers work. There is also Parsing Techniques. A Practical Guide. where you can read about them, if I remember well. The article in wikipedia (at least the introduction) is not right. They were created by Donald Knuth, and he explains them in his The Art of Computer Programming Volume 5. If you understand spanish, there is a complete list of books here posted by me. Not all that books are in spanish, either. Before to understand how they work, you must understand a few concepts like first, follows and lookahead. Also, I really recommend you to understand the concepts behind LL (descendant) parsers before trying to understand LR (ascendant) parsers. There are a family of parsers LR, specially LR(K), SLR(K) and LALR(K), where K is how many lookahead they need to work. Yacc supports LALR(1) parsers, but you can make tweaks, not theory based, to make it works with more powerful kind of grammars. About performance, it depends on the grammar being analyzed. They execute in linear time, but how many space they need depends on how many states do you build for the final parser. I'm interested in writing an x86 assembler for a hobby project. At first it seemed fairly straight forward to me but the more I read into it, the more unanswered questions I find myself having. I'm not totally inexperienced: I've used MIPs assembly a fair amount and I've written a toy compiler for a subset of C in school. My goal is to write a simple, but functional x86 assembler. I'm not looking to make a commercially viable assembler, but simply a hobby project to strengthen my knowledge in certain areas. So I don't mind if I don't implement every available feature and operation. I have many questions such as: Should I use a one-pass or two-pass method? Should I use ad-hoc parsing or define formal grammars and use a parser-generator for my instructions? At what stage, and how do I resolve the addresses of my symbols? Given my requirements, can anyone suggest some general guidelines for the methods I should be using in my pet-project assembler? You may find the dragon book to be helpful. The actual title is Compilers: Principles, Techniques, and Tools (amazon.com). Check out the Intel Architectures Software Developer's Manuals for the complete documentation of the IA-32 and IA-64 instruction sets. AMD's architecture technical documents are available on its website as well. Linkers and Loaders (amazon.com) is a good introduction to object formats and linking issues. (The unedited original manuscript is also available online.) Suppose I crafted a set of classes to abstract something and now I worry whether my C++ compiler will be able to peel off those wrappings and emit really clean, concise and fast code. How do I find out what the compiler decided to do? The only way I know is to inspect the disassembly. This works well for simple code, but there're two drawbacks - the compiler might do it different when it compiles the same code again and also machine code analysis is not trivial, so it takes effort. How else can I find how the compiler decided to implement what I coded in C++? You want to know if the compiler produced "clean, concise and fast code". "Clean" has little meaning here. Clean code is code which promotes readability and maintainability -- by human beings. Thus, this property relates to what the programmer sees, i.e. the source code. There is no notion of cleanliness for binary code produced by a compiler that will be looked at by the CPU only. If you wrote a nice set of classes to abstract your problem, then your code is as clean as it can get. "Concise code" has two meanings. For source code, this is about saving the scarce programmer eye and brain resources, but, as I pointed out above, this does not apply to compiler output, since there is no human involved at that point. The other meaning is about code which is compact, thus having lower storage cost. This can have an impact on execution speed, because RAM is slow, and thus you really want the innermost loops of your code to fit in the CPU level 1 cache. The size of the functions produced by the compiler can be obtained with some developer tools; on systems which use GNU binutils, you can use the size command to get the total code and data sizes in an object file (a compiled .o), and objdump to get more information. In particular, objdump -x will give the size of each individual function. "Fast" is something to be measured. If you want to know whether your code is fast or not, then benchmark it. If the code turns out to be too slow for your problem at hand (this does not happen often) and you have some compelling theoretical reason to believe that the hardware could do much better (e.g. because you estimated the number of involved operations, delved into the CPU manuals, and mastered all the memory bandwidth and cache issues), then (and only then) is it time to have a look at what the compiler did with your code. Barring these conditions, cleanliness of source code is a much more important issue. All that being said, it can help quite a lot if you have a priori notions of what a compiler can do. This requires some training. I suggest that you have a look at the classic dragon book; but otherwise you will have to spend some time compiling some example code and looking at the assembly output. C++ is not the easiest language for that, you may want to begin with plain C. Ideally, once you know enough to be able to write your own compiler, then you know what a compiler can do, and you can guess what it will do on a given code. I am given two functions for finding the product of two matrices: void MultiplyMatrices_1(int **a, int **b, int **c, int n){ for (int i = 0; i < n; i++) for (int j = 0; j < n; j++) for (int k = 0; k < n; k++) c[i][j] = c[i][j] + a[i][k]*b[k][j]; } void MultiplyMatrices_2(int **a, int **b, int **c, int n){ for (int i = 0; i < n; i++) for (int k = 0; k < n; k++) for (int j = 0; j < n; j++) c[i][j] = c[i][j] + a[i][k]*b[k][j]; } I ran and profiled two executables using gprof, each with identical code except for this function. The second of these is significantly (about 5 times) faster for matrices of size 2048 x 2048. Any ideas as to why? I believe that what you're looking at is the effects of locality of reference in the computer's memory hierarchy. Typically, computer memory is segregated into different types that have different performance characteristics (this is often called the memory hierarchy). The fastest memory is in the processor's registers, which can (usually) be accessed and read in a single clock cycle. However, there are usually only a handful of these registers (usually no more than 1KB). The computer's main memory, on the other hand, is huge (say, 8GB), but is much slower to access. In order to improve performance, the computer is usually physically constructed to have several levels of caches in-between the processor and main memory. These caches are slower than registers but much faster than main memory, so if you do a memory access that looks something up in the cache it tends to be a lot faster than if you have to go to main memory (typically, between 5-25x faster). When accessing memory, the processor first checks the memory cache for that value before going back to main memory to read the value in. If you consistently access values in the cache, you will end up with much better performance than if you're skipping around memory, randomly accessing values. Most programs are written in a way where if a single byte in memory is read into memory, the program later reads multiple different values from around that memory region as well. Consequently, these caches are typically designed so that when you read a single value from memory, a block of memory (usually somewhere between 1KB and 1MB) of values around that single value is also pulled into the cache. That way, if your program reads the nearby values, they're already in the cache and you don't have to go to main memory. Now, one last detail - in C/C++, arrays are stored in row-major order, which means that all of the values in a single row of a matrix are stored next to each other. Thus in memory the array looks like the first row, then the second row, then the third row, etc. Given this, let's look at your code. The first version looks like this: for (int i = 0; i < n; i++) for (int j = 0; j < n; j++) for (int k = 0; k < n; k++) c[i][j] = c[i][j] + a[i][k]*b[k][j]; Now, let's look at that innermost line of code. On each iteration, the value of k is changing increasing. This means that when running the innermost loop, each iteration of the loop is likely to have a cache miss when loading the value of b[k][j]. The reason for this is that because the matrix is stored in row-major order, each time you increment k, you're skipping over an entire row of the matrix and jumping much further into memory, possibly far past the values you've cached. However, you don't have a miss when looking up c[i][j] (since i and j are the same), nor will you probably miss a[i][k], because the values are in row-major order and if the value of a[i][k] is cached from the previous iteration, the value of a[i][k] read on this iteration is from an adjacent memory location. Consequently, on each iteration of the innermost loop, you are likely to have one cache miss. But consider this second version: for (int i = 0; i < n; i++) for (int k = 0; k < n; k++) for (int j = 0; j < n; j++) c[i][j] = c[i][j] + a[i][k]*b[k][j]; Now, since you're increasing j on each iteration, let's think about how many cache misses you'll likely have on the innermost statement. Because the values are in row-major order, the value of c[i][j] is likely to be in-cache, because the value of c[i][j] from the previous iteration is likely cached as well and ready to be read. Similarly, b[k][j] is probably cached, and since i and k aren't changing, chances are a[i][k] is cached as well. This means that on each iteration of the inner loop, you're likely to have no cache misses. Overall, this means that the second version of the code is unlikely to have cache misses on each iteration of the loop, while the first version almost certainly will. Consequently, the second loop is likely to be faster than the first, as you've seen. Interestingly, many compilers are starting to have prototype support for detecting that the second version of the code is faster than the first. Some will try to automatically rewrite the code to maximize parallelism. If you have a copy of the Purple Dragon Book, Chapter 11 discusses how these compilers work. Additionally, you can optimize the performance of this loop even further using more complex loops. A technique called blocking, for example, can be used to notably increase performance by splitting the array into subregions that can be held in cache longer, then using multiple operations on these blocks to compute the overall result. Hope this helps! Recently, I was going around looking for ideas on what I can build using C this summer and I came across this post: Interesting project to learn C? Implement a programming language. This doesn't have to be terribly hard - I did the language that must not be named - but it will force you to learn a lot of the important parts of C. If you don't want to write a lexer and/or parser yourself, you can use lex/flex and yacc/bison, but if you plan on that you might want to start with a somewhat smaller project. I was kinda intrigued about the implementing a programming language answer and I'm wondering how do I go about starting this? I've gone through the whole K&R book and I've done some of the exercises as well. I also have a bit of experience in C++ and Java if that matters. Any tips? Thanks! Well, I think something like that is really hard to do but also it would be a great pet project. You should have notions of parsers, lexers, flow control, paradigms (imperative, functional, OO) and many other things. Many people says the Dragon Book is one of the best books for this. Maybe you can take a look at it :) Good Luck! This is just a question out of curiosity since I have been needing to get more and more into parsing and using regex lately.. it seems, for questions I come across in my searches regarding parsing of some sort, someone always ends up saying, when asked something relating to regex, "regex isn't good for that, use such and such parser instead"... as I have come to better understand regex, I think most stuff is possible, just its rather complex and time consuming since you have to account for many different possiblities, and of course, it has to be combined with conditional statements and loops to build any sort of parser.. so I'm wondering if regex is what is used to build most parsers or is there some other method being used.. I am just wondering since I may have the need to build some fairly complex custom parsers coming up where there isn't necessarily an existing one to use. thanks for any info as I can't seem to find a direct answer to this. Well, building a parser is pretty complex and you can use regex but that's not the only things you use. I suggest to read the Dragon Book These days, in my opinion, you should use a parser generator because you can do it from scratch but it's not simple nor quick to do. You have to consider, generally speaking, regex and finite state automata for the lexical analysis; context-free grammars, LL parsers, bottom-up parsers, and LR parsers for Syntax analysis etc...etc... I'm preparing for an exam concerning languages, grammars, parsing and compilers. It's not really my cup of tea and most resources I find use the language of mathematics to define the different terms of the trade and explain the different concepts I need to know rather than stick with English or French, which I would very much prefer. Therefore, I'm having some trouble both with finding the motivation to continue studying and with simply understanding the theory. So here is my question: Do any of you know where I could find a "fun" way of learning all this? Or at the very least, maybe a more "concrete" and less "mathematical" way of handling this subject. I need to cover the following so anything on these subjects is welcome! Here are some resources which could be considered "fun" (with an emphasis on the quotation marks) ways to learn about a technical subject, just to get a sense of what I'm looking for. How long do you have to prepare? The "best" way to learn compilers is to dig into them and the best way to do that is to use the best book on compilers EVER WRITTEN: The Dragon Book It's old, but awesome. It's not cheap but it is, quite possibly, the most concrete and least mathematical way to learn about the magical compiler. It doesn't have any flashing lights and it won't be in an awesome font like the Ruby guide, but it's in the top 10 Books Every Programmer Should Read I am trying to modify the value of local variable through another called function, but I am not able to figure out what all values pushed onto the stack are. #include <stdio.h> #include <string.h> void fun() { int i; int *p=&i; int j; for(j=0;*(p+j)!=10;j++); printf("%d",j); /* Stack Frame size is j int pointers. */ *(p+j)=20; } main() { int i=10; fun(); printf("\n %d \n",i); } How exactly does j in fun() equal to 12? I am trying to understand what values are pushed onto stack. More specifically, can we change the value of i which is in main() without using a for loop in fun() and is it possible to predict the value of j inside fun()? When you have to access local variables from other function calls, I think you had better redesign your code. Theoretically, you can direct to modify the i of main() in fun() if you can completely understand how the compilers deal with the activation records of function calls on the run-time stack. You can read "Compilers: Principles, Techniques, and Tools" for details( ) The value of j depends on the run-time stack address between the int i; in the fun() and int i = 10; in main() . In this case, when fun() is called, their relative distance on stack is just 12. That is why the j is 12. So the *(p + j) = 20; actually changed the i of the main(). If you change your code by adding int a = 14; as following, you will find the value of j changed for the activation record on the run-time stack has been changed. #include <stdio.h> void fun() { int i; int *p=&i; int j; for(j=0;*(p+j)!=10;j++); printf("%d",j); /* Stack Frame size is j int pointers. */ *(p+j)=20; } main() { int i=10; int a=14; /* add this for example to change the relative address i in the main and i in the fun*/ fun(); printf("\n %d \n",i); } I would like to write an SQL Parser. I was considering a high level language (probably Python). Is there a good starting point for some theoretical concepts, a tutorial or something more generic on writing parsers ? ANTLR can be a good choice. Or GOLD Parsing System with SQL grammar. Dragon book has good theoretical background if you want to build your own parser. Is there any way to interpret Reverse Polish Notation into "normal" mathematical notation when using either C++ or C#? I work for an engineering firm, so they use RPN occasionally and we need a way to convert it. Any suggestions? One approach is to take the example from the second chapter of the dragon book which explains how to write a parser to convert from infix to postfix notation, and reverse it. Possible Duplicates: Methodologies for designing a simple programming language Learning to write a compiler I would like to write a programming language with a syntax similar to QBasic but even simpler. I want it to be for beginning programmers. Its simplicity will encourage aspiring programmers not to give up and get them interested in programming. For example: Instead of QBasic's PRINT "Hello World!" I would use Write "Hello World!" or a little more like VB Write ("Hello World") How would I go about adapting the basic syntax to make my language? This is not a simple task. Language parsing and compiler theory are pretty hefty subjects. Lots o' math. You also have to decide what platform you want to target, which will also determine whether your language is fully compiled (eg. C/C++, Pascal), compiled into bytecode (e.g. Python, Java), or interpreted at runtime (eg. VBScript, JavaScript). For specifying the language itself, brush up on the Backus-Naur format. To help you along, there are several robust parser generators out there, including: And many more. A comparison can be found here, while yet another list can be found here If you're really interested in the full theory, you want to check out The Dragon Book. But I must reiterate: This is a big subject. There are many, many tools to help you along the way, but the rabbit hole goes pretty deep. Possible Duplicate: Learning to write a compiler I know this is a broad question to ask, but where could I start learning how compilers actually work, how programming languages are made, I mean not how you use Java or Ruby but how people actually are making them. I will not try to replicate these languages in any ways but I want to understand the concepts and theory behind it. So what I need is either some directions on what I should search for, or even better and more appriciated are book recommendations. Regards, Jonathan Nash. You could take a look at the Dragon Book: Remember, this is using python. Well, I was fiddling around with an app I made called Pyline, today. It is a command line-like interface, with some cool features. However, I had an idea while making it: Since its like a "OS", wont it have its own language? Well, I have seen some articles online on how to make a interpreter, and parser, and compiler, but it wasn't really readable for me. All I saw was a crapload of code. I am one of those guys who need comments or a readme or SOME form or communication towards the user without the code itself, so I think that Stack Overflow would be great for a teenager like me. Can I get some help? You need some grounding first in order to actually create a programming language. I strongly suggest picking up a copy of Programming Language Pragmatics, which is quite readable (much more so than the Dragon book) and suitable for self study. Once you are ready to start messing with parsers, ANTLR is the "gold" standard for parser generators in terms of usability (though flex+bison/yacc are quite capable). The project: I want to build a LaTeX-to-MathML translator in PHP. Why? Because I'm a mathematician, and I want to publish math on my Drupal site. It doesn't have to translate all of LaTeX, since the basic document-level stuff is ably handled by the CMS and wouldn't be written in LaTeX to begin with; it just has to translate math written in LaTeX into math written in MathML. Although I feel as though I've done my due diligence, this doesn't seem to exist already. Maybe I'm wrong---if you know of something that would serve this purpose, by all means let me know, and thank you in advance. But assuming it doesn't exist, I guess I have to go write it myself. Here's the thing, though: I've never done anything this ambitious. I don't really know where to begin. I've used PHP for years, but just to do the standard "build a CMS with PHP and MySQL"-type of stuff. I've never attempted anything as seemingly sophisticated as translation from one language to another. I'm just dumb enough to consider doing it with regex---after all, LaTeX is a much more formal language, and it doesn't allow for nearly the kinds of pathological edge-cases, as say, HTML. But on the other hand, I'm just smart enough to realize this is probably a terrible idea: now I have two problems, and I sure don't want to end up like this guy. So if that's not the way to go (right?), what is? How should I start thinking about this problem? Am I essentially writing a LaTeX compiler in PHP, and if so, what do I need to know to do that (like, should I just go read the Purple Dragon book first?)? I'm both really excited and pretty intimidated by the prospect of this project, but hey, this is how we all learn to be programmers, right? If something we need doesn't exist, we go and build it, necessity is the mother of... you get the point. Tremendous thanks to everyone in advance for any and all guidance you can offer. Possible Duplicate: Learning to write a compiler I need to come up with a dummy SQL like language which has very limited features. I have never done any compiler or parsing stuff before. Can anyone let me know a good point to start may be a link or a example of the same. I am so clueless. I will be using this dummy language with C/C++ as my primary language. Thanks I did a compiler construction course last year and we used the book Compiler Construction by Kenneth C. Louden It is very detailed with a good theoretical background. At the same time the author gives enough examples and uses very informative figures, so that you're never lost while learning. Eventually a compiler in C for a toy language is listed in the later chapters. I really liked it! The Dragon Book is often considered a good starting point. However, I will also recommend the ANTLR book Does anyone know where to find good online resources with examples of how to make grammars and parse trees? Preferably introductory materials. Info that is n00b friendly, haven't found anything good with Google myself. Edit: I'm thinking about theory, not a specific parser software. Not online, but maybe you should take a look at Compilers: Principles, Techniques, and Tools (2nd Edition) by Aho et al. This is a standard text that has been evolving for 30 years (if you count the 1st Dragon Book, published in 1977 This project is for educational use and I am very well aware that excellent compilers already exist. I am currently fighting my way through the famous Dragon Book and just started to implement my own Lexer. It works suprisingly well except for literals. I do not understand how to handle literals using symbol (lookup) tables and the book doesn't seem to cover that very well: In the following code 60 is a numeric literal: int myIdentifier = 60; The Dragon Book says: Technically speaking, for the lexeme 60 we should make up a token like (number,4), where 4 points to the symbol table for the internal representation of integer 60 [...] Understood - I created the following Token: <enum TokenType, int lookupIndex> //TokenType could be 'number' and lookupIndex could be any int And stored the literal in a dictionary like this: Dictionary<int literal, int index> //literal could be '60' and index could be anything Since the literal itself is the key in the Dictionary, that allows me to quickly check if future literals have already been added to the symbol table (or not). The Parser then recieves the Tokens from the Lexer and should be able to identify the literals in the symbol table. Questions: Dictionary<int literal, int index> Dictionary<double literal, int index> Dictionary<char literal, int index>etc. Why should my Token contain a lookup-index instead of containing the literal itself? Wouldn't that be quicker? Sure, it would probably be quicker. But then every literal would be a different value. Now, most programmers have the expectation that if they use, for example, "this longish string" twice in the same program, the compiler will be clever enough to only emit a single copy of that string in the final executable. And it would also be, shall we say, surprising if when you decompiled the code, you found 273 different storage locations for the constant 1, because every time the compiler saw a += 1, it created a new constant. The easiest way to ensure that constant literals are only emitted once is to keep them in an associative container indexed by the value of the literal. As @sepp2k points out in a commment, most hardware allows the use of small integer constants as direct operands, and sometimes even not-so-small constants. So the statement about the constant 1 above is a bit of an exagerration. You might well be able to handle integers differently, but it might not be worth the trouble. How should the Parser be able to quickly find the literal values inside the symbol-table when the lookup-index is the value of the dictionary? That depends a lot on the precise datastructure you use for literal tables (which I don't like to call symbol tables, but admittedly the concepts are related.) In many languages, you will find that your standard library containers are not a perfect match for the problem, so you will either need to adapt them to the purpose or write replacements. Still, it's not terribly complicated. One possibility is to use the combination of a map<literalType, int> and a vector<literalType>. Here the map associates literal values with indices into the vector. When you find a new literal value, you enter it into the map associated with the current size of the vector, and then push the value onto the vector (which will make its index correspond to the index you just inserted into the map.) That's not entirely ideal for large constants like strings because between the key in the map and the value in the vector, the constant is stored twice. When you're starting, I'd recommend just suppressing your annoyance about this duplication; later, if it proves to be a problem, you can find a solution. If you were using C++, you could use an (unordered) set instead of a map, and use a reference (pointer) to the newly-added element instead of an index. But I don't think that feature is available in many languages, and also pointers are sometimes awkward in comparison to indices. In some languages you could put all the values into the vector and then keep a set whose keys were indices into the vector. This requires that a lookup of the set can be done with something other than the key type; for some reason, this feature is available in very few datastructure libraries. And, yes, a doubly-indexed datastructure could be used, if you have one of those handy. (In effect, the map+vector solution is a doubly-indexed datastructure.) Must I create a symbol-table for every type of literal then? Maybe. How many kinds of literals do you have? You'll probably end up using type-tagged enumerations ("discriminated unions"), both for variables and for constants. (Again, not all languages have discriminated unions in their standard library, which is truly sad; if your implementation language lacks this basic feature, you'll need to implement it.) It should certainly be possible for a discriminated union instance to be used as a key in an associative data structure, so there is nothing stopping you, in principle, from keeping all your literals in a single data structure. If you have appropriate types, that's definitely what I'd recommend, at least when starting. Note that when you are ultimately emitting the literals as object code, you're really more interested in their bit representation and alignment than their semantics. If two constants of completely different types happen to have the same bit representation, then you could use the same storage location for both of them. If you have multiple widths of integer datatypes, then you'd probably want to keep all of them in a single literal table, precisely to take advantage of this optimization. No need to store a 1 of every width :). Occasionally you will find other cases where two literals of different types have the same representation, but it's probably not common enough to go out of your way to deal with it. (However, on IEEE hardware, floating point and integer zeros have the same representation, and that is usually the same representation as a NULL pointer, so you might want to special case zeros.) All in all, it's a judgement call. How complicated is it to use a discriminated union as a key? How much storage could you save by having associative containers with specific key types, and does it matter? Will you want to iterate over all literal constants of the same type (answer: probably) and how easy is that to do with your datastructures? If you use a well-designed internal API, you will be able to change your mind about the internal representation of your literal tables. So I'd use this experiment as an opportunity to try good API design. Anything else? Good luck with your project. Learn and enjoy! I want to write an interpreter for a scripting language in javascript. Something that could run this script: set myVariable to "Hello World" repeat 5 times with x begin set myVariable to myVariable plus " " plus x end popup "myVariable is: " plus myVariable The equivalent javascript of the above would be: var myVariable = "Hello World"; for (var x=1; x<=5; x++) { myVariable += " " + x; } alert("myVariable is: " + myVariable); I don't want to translate from one to the other, I want to write a javascript program to interprete and execute the script directly. How can this be done? Update: I'm looking for a tutorial (preferably in javascript but C would do) that will walk me through this. I guess I'm looking for one that does not use any external tools as the tools seem to be my issue. I don't want to use something that calls libraries and a bunch of pre-built code. I want to see the whole thing, done from scratch. OK, I'll actually try to tackle this question somewhat... although there is no way I could possibly distill everything you need to know into a few sentences or even paragraphs. First, you should gain an understanding / familiarity with what's involved in building a compiler. You say you want to "interpret" the code - but, I think what you really want is to compile the code to Javascript (and in Javascript as well). Wikipedia has a great page on the topic: The gist of the thing is: 1.) Convert the text (source code) into some sort of in-memory data structure (abstract syntax tree - AST) that actually lets you reason about the structure of the program you've been given. 2.) Given that structure, produce your output (Javascript, in this case). To break down step 1 a bit further - Define your grammar e.g.; what is valid syntax in this new language of yours, and what is not? Typically, it's best to reason about this sort of thing with BNF on paper (or whatever syntax the tools you use prefer - although (E)BNF is the standard). The challenging part about this step is not only doing the grunt work of parsing the source code - but also making sure you've come up with a grammar that is unambiguous and readily parsable. Those two requirements are actually somewhat more difficult to nail down than you might think. I've built an LALR parser generator in C# - and, I can tell you, unless you've built one before, it's not a trivial task. Beyond that, there are so many good ones, that, unless you are really wanting to know how it works for the fun of it or because you're into that kind of thing, it makes a whole lot more sense to use a parser-generator someone else wrote. The great thing about a parser generator is that it will take that syntax definition you've come up with convert it into a program that will spit out an AST the other end. That's a HUGE amount of work that was just done for you. And, in fact, there are a few for Javascript: PEG.js – Parser Generator for JavaScript JS/CC Parser Generator Project Homepage On to step 2. This step can be very basic for something like infix expressions - or it can get very complex. But, the idea is, given the AST, "convert" it into your output format (Javascript). Typically you need to check for things that aren't checked for by the "simple" syntax checking that occurs in the parser. For example, even in your sample code there is a whole number of things that could possibly go wrong. In the part where you say plus x what would happen if the developer never defined x? Should this be an error? Should x default to some value? This is where your language really comes to life. And, to back-track for a minute - your time needs to be spent on this step - not on the parser. Use a tool for that - seriously. You're talking about starting a large and challenging project - don't make it even harder for yourself. To add to all this - there is often a need to make multiple "passes" through the AST. For example, the first pass may look for and setup "module" definitions, the second pass may look for and setup "namespaces", another pass may setup classes, etc. These further refinements of the structure of the final application are used in later steps to determine if a reference to a particular class/variable/module/etc is valid (it actually exists or can be referenced). There are a few really great books on compilers. The infamous "dragon book" is one. Say I define, instantiate, and use an adder functor like so: class SomeAdder { public: SomeAdder(int init_x): x(init_x) {} void operator()(int num) { cout << x + num <<endl; } private: int x; }; SomeAdder a = SomeAdder (3); a(5); //Prints 8 SomeAdder b(5); b(5); //Prints 10 The constructor and the overloaded () operator are both called using double parenthesis and have the same types of parameters. How would the compiler determine which function to use during the instantiations of SomeAdder and the "function calls", as to implement the correct behavior? The answer seems like it would be obvious on the surface, but I just can't wrap my head around this thought. Thanks for your time! C++ has a grammar and from that the compiler will know(gross simplification) when a type is being instantiated and therefore a constructor should be called from the case where an overloaded operator () is being called on an instance of a class. How the grammar is used to determine this probably requires a course on compilers which the Dragon Book is probably the standard. If you are curious you can also check out the C++ Grandmaster Certification whose goal is to build a C++ compiler. I am working on a project to automatically convert a custom language to Java and have been asked to do some basic optimizations of the code during the conversion process. For example, the custom code may have something like: if someFunction(a, b) > x: do something else: return someFunction(a, b) + y in this instance, someFunction is called multiple times with the same inputs, so additional performance can be obtained by caching the value of someFunction() and only calling it once. Thus, an "optimized" version of the above code may look something like: var1 = someFunction(a, b) if var1 > x: do something else: return var1 + y Currently, this is done by hand during the conversion process. I run a program to convert the code in the custom language to Java and then manually examine the converted code to see what can be optimized. I want to automate the optimization process since these problems creep up again and again. The people who are writing the code in the custom language do not want to worry about such things, so I can't ask them to just make sure that the code they give me is already optimized. What are some tutorials, papers, etc... that details how such things are done in modern compilers? I don't want to have to re-invent the wheel too much. Thanks in advance. Edit 1: It can be assumed that the function is pure. This is known as Common subexpression elimination. Normally, this would require you pretty much implement a full compiler in order to do the data flow analysis. An algorithm is given in Dragon Book, "6.1.2 The Value-Number Method for Constructing DAG's" (for the local CSE at least). I'm looking for a good explanation of the definitions of the FIRST, FOLLOW, and PREDICT sets of a RDP when given a grammar. You can automatically calculate first, follow, and predict sets using Calculate Predict, First, and Follow Sets from BNF (Backus Naur Form) Grammar Specification without having to download anything. It's a good way to verify answers or automate the tedium. If you want to do it manually, the Dragon Book (2nd ed) covers it on pages 221-222. Possible Duplicate: What are the stages of compilation of a C++ program? I find that understanding how a given software language is compiled can be key to understanding best practices and getting the most out of that language. This seems to be doubly true with C++. Is there a good primer or document (for mortals) that describes C++ from the point of view of the compiler? (Obviously every compiler is a little different.) I thought there may be something along those lines in the beginning of Stroustrup's book. I've head Compilers: Principles, Techniques, and Tools is solid. Does anyone have references to documents and research specific on the inner workings of shader compilers/graphics drivers compilers? There's no big difference between writing an ordinary C compiler and writing a shader compiler. The standard book on writing compilers is the so called "Dragon Book": Guys what are the best Online resources for learning Compiler Design ? Would Perl be a viable language to write a Compiler ? Rather than online, as mentioned in the above answer, grab yourself a copy of the Dragon book Compilers: Principles, Techniques, and Tools. A copy of the first edition shouldn't set you back too much. Not sure about Perl as a language of choice for implementing a compiler though. It's been ratling in my brain for a while. I've had some investigation on Compilers/Flex/Byson and stuff but I never found a good reference that talked in detail about the "parsing stack", or how to go about implementing one. Does anyone know of good references where I could catch up? Edit: I do appreciate all the compiler references, and I'm going to get some of the books listed, but my main focus was on the Parsing itself and not what you do with it after. The Dragon book! I used it quite recently to write a compiler (in PHP!) for a processing language for template files written in RTF... My MIPS Assembly class required me to read in an expression of unknown size into a Parse Tree. I've never had to deal with trees, so this is how I went around storing values: Lets say the user entered the expression 1 + 3 - 4 (each operand could only be a digit 1-9) My leftmost child node would be the starting point and contain 2 pieces of data 1. The operand 2. Pointer to the next node (operator) This is how I constructed the tree. I would point from operand to operator to next operand to next operator until there were no more values left to be read in. My next task was to traverse the tree recursively and output the values in infix/prefix/postfix notation. Infix traversal was no problem considering how I constructed my tree. I'm stuck on the prefix. Firstly, I don't fully understand it. When I output our expression (1 + 3 - 4) in prefix should it come out - + 1 3 4? I'm having trouble following the online examples. Also do you think my tree is constructed properly? I mean, I have no way to go to a previous node from the current node which means I always have to begin traversal from the leftmost child node which instinctively doesn't sound right even though my TA said that was the way to go. Thank you for any help. This is an instance of the general problem of compiling, which is a solved problem. If you do a google on compiling techniques, you will find out all kinds of information relating to your problem. Your library should have a copy of Compilers: Principles, Techniques, and Tools by Aho, Sethi, and Ullman. If it doesn't have it, request it for purchase(it's the Standard Work in the field). The first part of it should help you. My teacher told me that if I wanted to get the best grade in our programming class, I should code a Simple Source Code Converter. Python to Ruby (the simplest he said) Now my question to you: how hard is it to code a simple source code converter for python to ruby. (It should convert file controlling, Control Statements, etc.) Do you have any tips for me? Which language should I use to code the converter (C#, Python or Ruby)? There is a name for a program which converts one type of code to another. It's called a compiler (even if the target language is not in fact machine or byte code). Compilers are not the easiest part of computer science, and this is project that, if it were to be anything more than a toy implementation of a converter, would be a massive project. Certainly larger than what one would normally do for a class project in most university courses. (Even many/most compilers courses have fairly modest project assignments. As to what language to use? Well, whichever one you know best is probably the answer. Though if you want to learn something new, Haskell would be a good choice, with its pattern matching features. (Disclaimer: I'm new to haskell.) (Yacc could also be used, if you're really serious about getting into compilers.) You'll also want to consult: The Dragon Compiler Book, which is worth studying even if you don't plan to write compilers. Compilation generally occur in several stages:lexical analysis, syntax analysis, etc. Say, in C language, I wrote a=24; without declaring a as int. Now, at what stage of compilation an error is detected? At syntax analysis stage? If that is the case, then what does lexical analyzer do? Just tokenizing the source code? If talking about a general form of compiler,it is obvious that the error will occur at the syntax analysis phase when the parser will look for the symbol searching in symbol table entries ,and the subsequent phases - only if processed further after recovering from error. The dragon book also clearly tells that. It is mentioned in the page where the types of error are mentioned. The topic to be studied thoroughly to understand this issue is given in 4.1.3 - Syntax Error Handling . a = 24; // without declaring a as an int type variable. Here, the work of lexical phase is simply to access characters and form tokens and subsequently pass them to the further phases,i.e., to the parse in the syntax analysis phase,etc. I'm trying to optimize my simple C interpretter that I made just for fun, I am doing parsing like this - firstly I parse file into tokens inside doubly linked list, then I do syntax and semantic analysis. I want to optimize function with this prototype: bool parsed_keyword(struct token *, char dictionary[][]); Inside the function I basically call strcmp against all keywords and edit token type. This of course lead to 20 strcmp calls for each string that is being parsed (almost). I was thinking Rabin-Karp would be best, but it sounds to me like it isn't best suited for this job (matching one word against small dictionary). What would be the best algoritm to do this work? Thanks for any suggestions. A hash table would probably be my choice for this particular problem. It will provide O(1) lookup for a table of your size. A trie would also be a good choice though. But, the simplest to implement would be to place your words in an array alphabetically, and then use bsearch from the C library. It should be almost as fast as a hash or trie, since you are only dealing with 30 some words. It might actually turn out to be faster than a hash table, since you won't have to compute a hash value. Steve Jessop's idea is a good one, to layout your strings end to end in identically sized char arrays. const char keywords[][MAX_KEYWORD_LEN+1] = { "auto", "break", "case", /* ... */, "while" }; #define NUM_KEYWORDS sizeof(keywords)/sizeof(keywords[0]) int keyword_cmp (const void *a, const void *b) { return strcmp(a, b); } const char *kw = bsearch(word, keywords, NUM_KEYWORDS, sizeof(keywords[0]), keyword_cmp); int kw_index = (kw ? (const char (*)[MAX_KEYWORD_LEN+1])kw - keywords : -1); If you don't already have it, you should consider acquiring a copy of Compilers: Principles, Techniques, and Tools. Because of its cover, it is often referred to as The Dragon Book. I want to parse text with javascript. The syntax i want to parse is a markup language. this language has 2 main kind of markup: $f56 mean the following characters will be of color #F56. Until the following $ with 3 hex char it is using this color. $i Mean until the following $z (closing tag) the text is in italic. They are other one letter So basically this language is composed of 3 character long hexa tags for color and one letter long tags. I can craft something ugly to parse my text, storing char position and current status of tags (formatting and color) but i'd like to learn proper parsing. Could you give me a few tips/principle to make a clean parser for this language ? If you really want to learn about parsing, pick up this book: Compilers: Principles, Techniques, and Tools aka The Dragon book. It is very dense, but offers the most complete take on parsing. I've heard good things about ANTLR (mentioned above) but have not used it. I have used Bison though, which worked pretty well for me to define the grammar. I want to write a program that scans javascript code, and replace variable names with short, meaningless identifiers, without breaking the code. I know YUI compresser and google's closure compiler can do this. I am wondering how can I implement this? Is it necessary to build the abstract syntax tree? If not, how can I find the candidate variables for renaming? Most modern javascript compressors are actually compilers. They parse javascript input into an abstract syntax tree, perform operations on the tree (some safe, some not) and then use the syntax tree to print out code. Both Uglify and Closure-Compiler are true compilers. Implementing your own compiler is a large project and requires a great knowledge of computing theory. The dragon book is a great resource from which to get started. You may be able to leverage existing work. I recommend starting from a non-optimizing compiler for reference. Let's say I have a programming language where I can write: x = f(g(1), h(1)) in this case the directed acyclic graph will show the dependencies of calculation like in a spreadsheet (assuming non recursive expressions): 1 | \ g h \ / f This is a simple example but it turns interesting trying to "compress" more complex expressions within a DAG. The goal here is optimizing the number of recalculations based on the dependencies. What algorithms and papers are available for dealing with this problem? A bit more specific, it's Local Common Subexpression Elimination. An algorithm is given in Dragon Book, "6.1.2 The Value-Number Method for Constructing DAG's" There is a question on stackoverflow about Learning to write a compiler. I have looked at it and I think it's an undertaking I want to tackle. I think (like many others) the knowledge will be invaluable to a programmer. However, my skills are primarily in C++ and I would say I am very comfortable with the syntax of the language and some basic algorithms and design concepts, but I am by no means a seasoned programmer. My only programming experience comes from academic textbooks at the college level and the completion of introductory/intermediate courses (300 level classes). Hence, the rise of my question. With only a general knowledge of the C++ language and no Assembly knowledge, would a book aimed at the theories and workings of a compiler and the implementation of those theories, such as the book Compilers: Principles, Techniques, and Tools (2nd Edition), be difficult for me to understand? I would recommend you start with an interpreter first as you don't need proprietary hardware knowledge to implement it. The key concepts are usually how is the language defined, what makes a statement, building parse trees, etc... The hardware knowledge to me is secondary than actually understanding how to read and evaluate the statements. What I did when learning was write a small interpreter for a Pascal like language and started small with simple statements and variable storage and slowly added different things to it as I got better. I am looking to write an interpreted language in C#, where should I start? I know how I would do it using fun string parsing, but what is the correct way? Checkout the Phoenix compiler from Microsoft. This will provide many of the tools you will need to build a compiler targeting native or managed environments. Among these tools us a optimizing back end. I second Cycnus' suggestion on reading Aho Sethi and Ullman's "Dragon Book" (Wikipedia, Amazon). RGR Assume that we are given input file in the following form: 12 a -5 T 23 -1 34 R K s 3 4 r a a 34 12 -12 y Now, we need to read the entire file and print the following: number of integers number of lowercase char number of uppercase char sum of all integers Questions like this have always been a thorn in my flesh and I want to get this over with once and for all. You need to parse the file: 1) separate raw text into tokens, then 2) "decide" if the token is a string, an integer, or "something else". 3) Count the results (#/integers, #/strings, etc) Here's an example: Parse A Text File In C++ Here's the canonical textbook on the subject: The Dragon Book Can lex and yacc be used for making a programming language? and any recommendation for some books. some references ? So far i have found some like : Build code with lex and yacc, Part 1: Introduction Yes, you can certainly use lex and yacc to build a compiler/translator for a programming or scripting language. There are GNU variants of these tools, called flex and bison. John Levine's lex & yacc was for many years the gold standard for books about these tools. That book may be out of print, but I expect that the successor book, Flex & Bison, is just as good. To dig deeper into building a compiler, start with Aho et al., Compilers: Principles, Techniques, and Tools, 2/e. (Again, my recommendation is based on the first edition of this book.) I am confused between Syntax Directed Translation and parser written using Bison. (The main confusion is whether parser written in Bison already consists of syntax directed translator.)I rephrase the above sentence in parenthesis as (How does Bison implement Syntax Directed Translation, is it by attaching for E.g. $$ = $1 + $3).) And also in chapter 5 (Syntax Directed Analysis) of the book says Grammar + Semantic rules = Syntax Directed Translation PRODUCTION SEMANTIC RULE →1 + {. = 1. ┤| . |′+′} When looking at the following snippet of translation rules for a simple parser from the book Flex and Bison %% E: F default $$ = $1 | E ADD F { $$ = $1 + $3; } | E SUB F { $$ = $1 - $3; } ; %% Is the .code equavelent to $$ I am so confused. Is syntax directed analysis the same as semantic analysis? The more I read more I get confused. Someone please help me sort this out. Your understanding seems correct, but is confused by the fact that your example from the Dragon book and example parser are doing two different things -- the Dragon book is translating the expression into code, while the simple parser is evaluating the expression, not translating (so this is really syntax directed evaluation, not syntax directed translation). In the semantic rules described in the Dragon book, symbols can have multiple attributes, both synthesized and inherited. That's what the .code suffix means -- its an attribute of the symbols it is applied to. Bison on the other hand allows each symbol to have a single synthesized attribute -- no more, and no inherited attributes. If you want multiple attrbutes, you can gather them together into a struct and use that as your attribute (requires some careful management). If you want inherited attributes you can use $0 and even more careful management, or you can use globals to get the same effect. The bison snippet that would correspond to your Dragon book example snippet would be something like: E : E ADD F { $$ = AppendCode($1, $3, PLUS); } using the single bison attribute for the .code attribute and doing the append operation for the code being generated as a function. I am new to Visual C++ and I am using Microsoft Visual C++ 6.0 to build an application. The application for now has to generate a .cpp file from a proprietory .cfg file. Can anyone please guide how this can be achieved. Any help or guidance is much appreciated. Thanks, Viren Your question is a little vague, however it sounds like you need to develop some kind of parser to read in the cfg files and translate it into some form of intermediate language or object graph, optimize it, and then output it to c++. Sounds to me like a job for a home-grown compiler. If you aren't familar with the different phases of a compiler I would highly recommend you check out the infamous dragon book Then again, if this is for an important project with a deadline you probably don't have a lot of time to spend in the world of compiler theory. Instead you might want to check out antlr. It is really useful for creating a lexar and parser for you based on grammar rules that you define from the syntax of the cfg files. You can use the antlr parser to translate the cfg files into an AST or some other form of object graph. At that point you are going to be responsible for manipulating, optimizing and outputting the c++ syntax to a new file. I haven't read it yet but this is supposed to be an excellent book for novice and experienced antlr users plus there a plenty of antlr tutorials and examples online that I've used to help learn it. Hope that helps put you in the right direction. I am trying to create a lexical analyzer program using java.Program must have the concept of tokenization .I have beginner level knowledge in compiler programming.I know there are lot of Lexical generators exist on internet.I can use them to test my own lexical analyzer out put .But i need to do my own lexical analyzer .Can any one please give some best references or articles or ideas to start my cording ? I would try taking a look at the source code for some of the better ones out there. I have used Sablecc in the past. If you go to this page describing how to to set you your environment, there is a link to the source code for it. Antlr is also a really commonly used one. Here is the source code for it. Also, The Dragon Book is really good. As Suggested by SK-logic I am adding Modern Compiler Implementation as another option. If i have a String as follow ( (a || b) && c) || (d && e) How can i split them into diffrent string based on the brackets and form a tree like that? ( (a || b) && c) || (d && e) ---> Root / \ / \ ( (a|| b) || c) (d && e) / \ / \ / \ / \ (a || b) c d e The problem you are suggesting probably falls into computer science's branch of parsers and formal languages. A parser program based on an arbitrary grammar for an arbitrary string can be generated with tools like lex & yacc. Lex is a lexical analyzer program generation tool, which takes as input a text file that defines the lexical rules of your grammar as regexp, and outputs a program capable of recognize tokens from an arbitrary input string as you defined them in the rules. Yacc is a syntax parser program generation tool, which takes as input a lexer, a text file that represents the grammar of your language (in your case, that would be an expression-like grammar), and outputs a program called parser which will be able to transform your expression string into a tree as you mention (i.e. parse the string into a parse-tree). Yacc and lex can be easily used together to generate a parser program that creates a parse-tree based on so-called semantic-actions with which you instruct the parser to build the tree in the way you want. I suggest you the following as an introductory reading: If you are interested in the matter, a more challenging reading would be: Yacc and Lex are made only for the C language, equivalent tools exist for the Java. My favorite parser-generator tool in java would be: I have a list of segments (15000+ segments), I want to find out the occurence of segments in a given string. The segment can be single word or multiword, I can not assume space as a delimeter in string. e.g. String "How can I download codec from internet for facebook, Professional programmer support" [the string above may not make any sense but I am using it for illustration purpose] Bascially i am trying to do a query reduction. I want to achieve it less than O(list length + string length) time. As my list is more than 15000 segments, it will be time consuming to search entire list in string. The segments are prepared manully and placed in a txt file. Regards ~Paul What your basically asking how to do is write a custom lexer/parser. Some good background on the subject would be the Dragon Book or something on lex and yacc (flex and bison). Take a look at this question: Now of course, alot of people are going to say "just use regular expressions". Perhaps. The deal with using regex in this situation is that your execution time will grow linearly as a function of the number of tokens you are matching against. So, if you end up needing to "segment" more phrases, your execution time will get longer and longer. What you need to do is have a single pass, popping words on to a stack and checking if they are valid tokens after adding each one. If they aren't, then you need to continue (disregard the token like a compiler disregards comments). Hope this helps. I'm reading Compilers: Principles, Techniques, and Tools (2nd Edition) and I'm trying to compute the FOLLOW() sets of the following grammar: S → iEtSS' | a S' → eS | ε E → b where S, S', E are non-terminal symbols, S is the start symbol, i, t, a, e, b are terminal symbols, and ε is the empty string. What I've done so far FOLLOW(S) = {$} ∪ FOLLOW(S') FOLLOW(S') = FOLLOW(S) FOLLOW(E) = FIRST(tSS') - {ε} = FIRST(t) - {ε} = {t} - {ε} = {t} where $ is the input right endmaker. Explanation $ ∈ FOLLOW(S), since S is the start symbol. We also know that S' → eS, so everything in FOLLOW(S') is in FOLLOW(S). Therefore, FOLLOW(S) = {$} ∪ FOLLOW(S'). We also know that S → iEtSS', so everything in FOLLOW(S) is in FOLLOW(S'). Therefore, FOLLOW(S') = FOLLOW(S). The problem is that I can't compute FOLLOW(S), since I don't know FOLLOW(S'). Any ideas? The simple algorithm, described in the text, is a least fixed-point computation. You basically cycle through the nonterminals, putting terminals into the follow sets, until you get through an entire cycle without changes. Since nothing is ever removed from any follow set, and the number of terminals is finite, the algorithm must terminate. It usually only takes a few cycles. In my project I have a view where I write words in some textfield, when I press a button these string must be stored in a csv file as this example: (example with 5 textfield) firststring#secondstring#thirdstring#fourthstring#fifthstring; this is an example of the result that I want. How can I do? Edited to add: code for the string NSMutableString *csvString = [NSMutableString stringWithString:textfield1.text]; [csvString appendString:@"#"]; [csvString appendString:textfield2.text]; [csvString appendString:@"#"]; [csvString appendString:dtextfield3.text]; [csvString appendString:@"#"]; [csvString appendString:textfield4.text]; [csvString appendString:@"#"]; [csvString appendString:textfield5.text]; [csvString appendString:@"#"]; [csvString appendString:textfield6.text]; [csvString appendString:@"#"]; [csvString appendString:textfield7.text]; [csvString appendString:@"#"]; if (uiswitch.on) { //switch [csvString appendString:@"1"]; } else [csvString appendString:@"0"]; [csvString appendString:@";"]; finally csvString NSLog(@"string = %@", csvString); is exactly my string Just as noted in my answer to another almost identical question of yours from earlier today: Do NOT do that. As soon as a user enters a "#" or ";" into one of the text fields your csv file (or rather: what you call a CSV file, but actually isn't one at all) will get corrupted and crash your code once read in again (or at least result in malformed data). Again: Do NOT do that. Instead: stick with real CSV and a parser/writer written by a professional. Generally speaking: Unless you have very good knowledge of Chomsky's hierarchy of formal languages and experience in writing language/file-format parsers do NOT (as in NEVER!) attempt to write one. Neither for your personal projects, let alone public ones. (Do the latter and I'll hunt you down! ;) ) Languages/Formats such as CSV look trivial at first glance but aren't by any means (as in type-2-language). I searched the internet to find an answer, but I couldn't. Is there anyone who will help me? expr ->term moreterms moreterms -> +term {print(‘+’)} moreterms |‐term {print(‘‐’)} moreterms |ε term ->factor morefactors morefactors ->*factor {print(‘*’)} morefactors |/factor {print(‘/’)} morefactors |ε factor ->(expr) |id {print(id)} |num {print(num)} I will use this code for a very basic calculator compiler and a interpreter. How can I convert this grammer into C++ or Java? There are many tools that take grammars and generate parsers, ranging from Yacc to boost spirit. The art of writing parsers has been widely studied. It isn't trivial. One approach is to determine if you can make your BNF into an LR(1) grammar and write a LR parser for it. An easy way to parse is to split your parsing into tokenizing (where you bundle things into identifiers), and syntax tree generation. Wikipedia has a cursory description of LR parsing. Knuth's Canonical LR(1) parser is also worth looking at. Teaching how to write an LR(1) parser (with whatever restrictions, let alone an LR(k) parser) is a matter of a short college course or a book chapter, not a stack overflow post. But the general idea is you read from left to right. You look ahead k tokens (typically 1) to determine which rule to apply to the next token you encounter. You build the parse tree from the bottom-up. There are lots of technical details, techniques, quirks and problems. Not every BNF grammar can be turned into a LR(1) grammar, let alone the restricted ones that many parse generators can handle. As mentioend by @UnholySheep, The Dragon Book is the book that most people learn these techniques from. It might sound stupid, but i decided to take the challenge to program the Translation Algorithm with help of OOP NetBeans - Java, having only basic knowledge of Java, and the theory only in the Translation Algorithm (Compiler). I am here to ask for your assistance, if somehow any of you did something like Translation from one programming language into another I happy if you could provide me with the links to the information you've used or set me on to the right direction so I could start correctly! Thank you in Advance Best Armani Theory of compilation is a huge field of research, that among others include formal languages, graph theory, low level optimizations and more. A good place to start learning about it is the Dragon Book . If you are using java, a useful tool that helps you do most of the front-end tasks of a compiler is JavaCC I want to experiment with Programming Language Design. The feature set that I imagined would be doable in C++, meaning you could rewrite anything from "MyLang" in C++. I thought it would be great to have a two-way-converter, from MyLang to C++ and the other way around. This way I can avoid writing a Compiler/Optimizer/Linker/VirtualMachine/whatever and just use all the good stuff which is available for C++. In my preliminary search I came across LLVM/Clang and thought that it would be a great ease of work to use its underlying parsing and AST generation to do what I want. But closer looks have shown me that it is a gigantic beast of a project where getting started is not an easy thing to do. My current point of entry in clang is the clang-modernizer, since it looks nice, small enough and pluggable, but I imagine it would break as soon as I break anywhere with C++ syntax. I want to stay on a higher level than LLVM IR, since MyLang would be very similar to C++ on a high level. An example of conversion would be something that takes a my.cpp and a my.hpp file and combines it into a my.lang file, at this time it may be beeing 100% valid C++ in the output file. Later the my.lang file shall be reconverted, splitting the definitions and inline methods into the my.hpp file and the non-inline methods into my.cpp again. Later on I plan to add more deviations from C++ syntax, but this might be a good start. The Questions: The Not-Questions: Thank you for your time! Please be gentle, this is my first Question here. Do You know of a Project/Framework/Toolkit that does exacly supply a two way converter, which is Open Source or maybe completely configurable to be allow what I want? I believe LLVM can do just what you want. However, I can't guarantee the resulting translation would be human readable. I would create a front-end that compiles to LLVM IR. The IR can be easily converted to C++ with the the llvm static compiler, by targeting the C++ backend (llc -march=C++). If you just want your new language to execute, there is no reason to convert it to C++ and then recompile it. You can JIT/Interpret utilizing the LLVM framework. If you want any LLVM IR to be able to convert to your language, you can create a compiler target, that handles the generation. Do You think LLVM/Clang is the best option for creating a MyLanguage to C++ converter? Do You have good Alternatives? I believe the LLVM framework is the way to go. If all you want to do is focus on the compiler front-end, you can do just that. You will get all the back-end optimizations and all the targets included in the framework. This is nice to scope your focus. In terms of developing the front-end for your language, you can take advantage of the ANTLR parser generator. This will help you develop to an AST. In addition, perform any optimizations and validations that you can do to an AST. After you have your AST you can create a visitor that navigates the AST to generate LLVM IR. There already exists a grammar file for C++ to start with here. Any (web-)literature that helps getting my foot in the right spot in the door for a Framework/Clang/YourAlternative? Compilers are awesome and extremely complex. I suggest you at least have the purple dragon book. To get going on LLVM I would go through their tutorial. You go through the development of a language, from the front-end all the way to JITing. I'd like to learn more about the LLVM system, as I use the compiler a lot. I have no background in compiler technology. Is the Dragon Book still a must read in order to understand LLVM or is it outdated? Is there anything better (and shorter) at this moment? The Dragon book is arguably THE book for compiler concepts. The level of familiarity with compiler concepts that you should have before digging into LLVM depends on what exactly do you want to achieve and where do you want to contribute. For example, to build a new LLVM front-end you should probably be first familiar with the concepts of lexical and semantics analysis. Further, to implement optimizations and/or instrumentation you should probably be familiar with the concepts of data-flow analysis to apply them on LLVM IR. I need a simple lexical analyzer that reports for-loop errors in C/C++. For purely lexical analysis, you could use regular expressions, or any of dozens scanner generators (flex/lex, ANTLR). For syntactic analysis on the other hand, you'd probably need a parser generator that could read a context-free grammar. However, from what I understand, most C++ parsers are hand written. I'm not sure if even an LALR parser would do the trick; you might need to bring out the big guns and use something like Bison's GLR support. Also, for a ton more information on lexical/syntactic analysis, I'd recommend 'The Dragon Book'. Good luck! Recently I have been extremely interested in language development, I've got multiple working front ends and have had various systems for executing the code. I've decided I would like to try to develop a virtual machines type system. (Kind of like the JVM but much simpler of course) So I've managed to create a basic working instruction set with a stack and registers but I'm just curious about how some things should be implemented. In Java for example after you've written a program you compile it with the java compiler and it creates a binary (.class) for the JVM to execute. I don't understand how this is done, how does the JVM interpret this binary, what's the transition from human readable instructions to this binary, how could I create something similar? Thanks for any help/suggestions! Alright, I'll bite on this generic question. Implementing an compiler/assembler/vm combo is a tall order, especially if you're doing it by yourself. That being said: If you keep your language specification simple enough, it is quite doable; also by yourself. Basically, to create a binary, the following is done (this is a tad bit simplified*: 1) Input source is read, lexed, and tokenized 2) The program logic is analyzed for semantical correctness. E.g. while the following C++ would parse & tokenize, it would fail semantic analysis float int* double = const (_identifier >><<) operator& * 3) Build an Abstract Syntax Tree to represent the statements 4) Build symbol tables and resolve identifiers 5) Optional: Optimization of code 6) Generate code in an output format of your choice; for example binary opcodes/operands, string tables. Whatever format suits your needs best. Alternatively, you could create bytecode for an existing VM, or for a native CPU. EDIT If you want to devise your own bytecode format, you can write, for example: 1) File Header DWORD filesize DWORD checksum BYTE endianness; DWORD entrypoint <-- Entry point for first instruction in main() or whatever 2) String table DWORD numstrings <strings> DWORD stringlen <string bytes/words> 3) Instructions DWORD numinstructions <instructions> DWORD opcode DWORD numops <--- or deduce from opcode DWORD op1_type <--- stack index, integer literal, index to string table, etc DWORD operand1 DWORD op1_type DWORD operand2 ... END Overall, the steps are managable, but, as always, the devil is in the details. Some good references are: The Dragon Book - This is heavy on theory, so it's a dry read, but worthwhile Game Scripting Mastery - Guides you along while developing all three components in a more practical matter. However, the example code is rife with security issues, memory leaks, and overall lousy coding style (imho). However, you can take a lot of concepts away from this book, and it's worth a read. The Art of Compiler Design - I have not read this one personally, but heard positive things about it. If you decide to go down this road, be sure you know what you're getting yourself into. This is not something some the faint of heart, or someone new to programming. It requires a lot of conceptual thinking and prior planning. It is, however, quite rewarding and fun I have been thinking about building my own compiler for a while and a few days ago I finally started on it. My compiler works like this: Now I am having difficulties with finding the best way to parse my code. I haven't really made this yet but I will put my ideas here. Now I was simply wondering if someone can improve my ideas. Or if someone has a better idea to make some kind of compiler and your own programming language. Get yourself a copy of A. V. Aho, M. S. Lam, R. Sethi, J. D. Ullman: Compilers: Principles, Techniques, and Tools and start studying The book covers the necessary theoretical background, especially: .netalgorithmassemblybinarybisonbnfcc#c++callclangcompilationcompiler-constructioncompiler-errorscompiler-optimizationcompiler-theorycomputer-scienceconceptconcrete-syntax-treeconstructorconvertercsvdictionarydirected-acyclic-graphsfunctiongoogle-closure-compilergprofgrammarinterpretedinterpreterjavajavascriptjvmlanguage-agnosticlanguage-designlatexlexlexerlexical-analysislinkerllvmmarkupmatrixmatrix-multiplicationmipsnetbeans-7notationobfuscationobjective-coptimizationoverloadingparsingphppointersprefixprogramming-languagesprojectpythonrecursionrecursive-descentregexrpnrubyscriptingsemantic-analysisshadersqlstringtokenizertext-filestheorytokenizetraversaltreeuitextfieldvisual-c++vm-implementationwindows-7x86yacc
http://www.dev-books.com/book/book?isbn=0321486811&name=Compilers
CC-MAIN-2019-09
refinedweb
14,582
61.16
Go language has a strong type system unlike C, and some times this will be head ache when we want to interact with C data types with Cgo or just to convert a Go type to lets say byte slice. I recently faced the same problem and after poking around things, I learned the Go provides unsafe package which can be used to work around the Go's type system. Problem Recently I started using Cgo to use some C library I had to write some tools for development and testing. The reason I chose Go for this was prototyping and writing some quick tools is much easier in Go than done in C. The C library had some function which takes pointer to array types and fills it with some values. The problem I was facing here was how to create a C array in Go. Go does have array but its largely used as internal representation for much efficient type called slice and I can't directly cast a byte slice into an C array. Second problem I had was I had to store arbitrary Go types like float (float32,float64) and int (int32, int64) etc. into C array. So in brief, the problems that needed to solve are - Find a way to convert byte slice from Go into C array and vice versa. - Find a way to convert and store Go types into a C array. Solution Basically C array are sequence of memory location which can be statically or dynamically allocated. In Cgo its possible to access C standard library functions for memory allocation, so why not use it. The memory allocation function then returns the pointer to starting of allocated memory, we can use this pointer to write Go's bytes into the memory location and bytes from memory location into Go's slice. The pointer returned by C allocation functions are not directly usable for memory dereferencing in Go, here is where unsafe package kicks in. We will cast the return of C allocation function as unsafe.Pointer type and from the documentation of unsafe package, - A pointer value of any type can be converted to a Pointer. - A Pointer can be converted to a pointer value of any type. - A uintptr can be converted to a Pointer. - A Pointer can be converted to a uintptr. So we can then cast unsafe.Pointer to uintptr which is the Go type which is large enough to hold any memory address in Go and can be used for pointer arithmetic just like we do in C (of course with some more castings). Below I'm pasting a simplified code in C which I wrote for this post. #ifndef __BYTETEST_H__ #define __BYTETEST_H__ typedef unsigned char UBYTE; extern void ArrayReadFunc(UBYTE *arrayout); extern void ArrayWriteFunc(UBYTE *arrayin); #endif #include "bytetest.h" #include <stdio.h> #include <string.h> void ArrayReadFunc(UBYTE *arrayout) { UBYTE array[20] = {1, 2, 3, 4,5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20}; memcpy(arrayout, array, 20); } void ArrayWriteFunc(UBYTE *arrayin) { UBYTE array[20]; memcpy(array, arrayin, 20); printf("Byte slice array received from Go:\n"); for(int i = 0; i < 20; i ++){ printf("%d ", array[i]); } printf("\n"); } Functions are written just for this post and they don't really do anything. As you can see ArrayReadFunc takes a pointer to array and fills it with content of another array using memcpy. Function ArrayWriteFunc on other hand takes pointer to array and copies its content to internal array. I've added print logic to ArrayWriteFunc just to show that values passed from Go are making it here. Below is the Go code which uses the above C files passes byte slice to get value out of C code and array made of byte slice to C function to send values in. package main /* #cgo CFLAGS: -std=c99 #include "bytetest.h" #include <stdlib.h> */ import "C" import ( "fmt" "unsafe" ) func ReadArray() unsafe.Pointer { var outArray = unsafe.Pointer (C.calloc(20,1)) C.ArrayReadFunc((*C.UBYTE)(outArray)) return outArray } func WriteArray(inArray unsafe.Pointer) { C.ArrayWriteFunc((*C.UBYTE)(inArray)) } func CArrayToByteSlice(array unsafe.Pointer, size int) []byte { var arrayptr = uintptr(array) var byteSlice = make([]byte, size) for i := 0; i < len(byteSlice); i ++ { byteSlice[i] = byte(*(*C.UBYTE)(unsafe.Pointer(arrayptr))) arrayptr ++ } return byteSlice } func ByteSliceToCArray (byteSlice []byte) unsafe.Pointer { var array = unsafe.Pointer(C.calloc(C.size_t(len(byteSlice)), 1)) var arrayptr = uintptr(array) for i := 0; i < len(byteSlice); i ++ { *(*C.UBYTE)(unsafe.Pointer(arrayptr)) = C.UBYTE(byteSlice[i]) arrayptr ++ } return array } func main(){ carray := ReadArray() defer C.free(carray) carraybytes := CArrayToByteSlice(carray, 20) fmt.Println("C array converted to byte slice:") for i := 0; i < len(carraybytes); i ++ { fmt.Printf("%d ", carraybytes[i]) } fmt.Println() gobytes := []byte{21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40} gobytesarray := ByteSliceToCArray(gobytes) defer C.free(gobytesarray) WriteArray(gobytesarray) } Functions ReadArray and WriteArray are just wrapper to the calls to C counter parts ArrayReadFunc and ArrayWriteFunc. ReadArray returns unsafe.Pointer which is allocated C array and should be freed by caller. WriteArray takes unsafe.Pointer which is pointing to memory location containing C array. Now the functions of interest are CArrayToByteSlice and ByteSliceToCArray. It should be pretty clear from the above code to understand what is happening in these functions. Still I will just put explain them briefly. ByteSliceToCArray allocates a C array using calloc from C standard library. It then creates a uintptr, a pointer type in Go which is used to dereference the each memory location and store bytes from the input byte slice in them. CArrayToByteSlice on other hands creates a uintptr type by casting input unsafe.Pointer type and then uses this pointer type to dereference values from memoy and store it in byte slice with suitable casting. So lets build the code and run it and see the output: C array converted to byte slice: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 Byte slice array received from Go: 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 So yes it actually works and values are moving across C and Go. This solves first problem in hand next is converting arbitrary Go types into byte slices. There are many cases where we would like to convert an arbitrary Go types like (int, float) into bytes. One such use case I found was when writing a TCP client for communicating with a Server written using C speaking custom protocol. Here I'm just going to show how to convert types like float, int to byte slice, I've not tried converting structures but it is certainly possible. Below is the function which can convert int32,float32 into byte slice it can also be extended for other types. } This function is generic which can take various types of value. First we will use Go's type assertion to determine the type and creates a uintptr pointer for the value and allocates byte slice depending on the size of the value as calculated using unsafe.Sizeof. Later it uses the pointer to dereference value from memory location and copies each byte into byte slice. The idea used here is every type is represented as certain number of bytes in the memory. Below is the entire program. package main import ( "fmt" "unsafe" "os" ) } func main() { a := float32(-10.3) floatbytes := CopyValueToByte(a) fmt.Println("Float value as byte slice:") for i := 0; i < len(floatbytes); i++ { fmt.Printf("%x ", floatbytes[i]) } fmt.Println() b := new(float32) bptr := uintptr(unsafe.Pointer(b)) for i := 0; i < len(floatbytes); i++ { *(*byte)(unsafe.Pointer(bptr)) = floatbytes[i] bptr++ } fmt.Printf("Byte value copied to float var: %f\n", *b) } The above conversion can also be achieved using encoding/binary package provided by Go. But its been told to me that it makes things pretty slow. Conclusion So goes unsafe.Pointer is really powerful thing which allows us to work around the Go's type system but as package documentation says it should be used with care PS: I'm not really sure if its recommended to use allocation functions from C standard library, I will wait for expert gophers to comment on that.
https://copyninja.info/blog/workaround-gotypesystems.html
CC-MAIN-2018-13
refinedweb
1,406
63.7
We just released update to Azure Mobile Services in which new tables created in the services have a different layout than what we have right until now. The main change is that they now have ids of type string (instead of integers, which is what we’ve had so far), which has been a common feature request. Tables have also by default three new system columns, which track the date each item in the table was created or updated, and its version. With the table version the service also supports conditional GET and PATCH requests, which can be used to implement optimistic concurrency. Let’s look at each of the three changes separately. String ids The type of the ‘id’ column of newly created tables is now string (more precisely, nvarchar(255) in the SQL database). Not only that, now the client can specify the id in the insert (POST) operation, so that developers can define the ids for the data in their applications. This is useful on scenarios where the mobile application wants to use arbitrary data as the table identifier (for example, an e-mail), make the id globally unique (not only for one mobile service but for all applications), or is offline for certain periods of time but still wants to cache data locally, and when it goes online it can perform the inserts while maintaining the row identifier. For example, this code used to be invalid up to yesterday, but it’s perfectly valid today (if you update to the latest SDKs): - private async void Button_Click(object sender, RoutedEventArgs e) - { - var person = new Person { Name = "John Doe", Age = 33, EMail = "john@doe.com" }; - var table = MobileService.GetTable<Person>(); - await table.InsertAsync(person); - AddToDebug("Inserted: {0}", person.Id); - } - - public class Person - { - [JsonProperty("id")] - public string EMail { get; set; } - [JsonProperty("name")] - public string Name { get; set; } - [JsonProperty("age")] - public int Age { get; set; } - } If an id is not specified during an insert operation, the server will create a unique one by default, so code which doesn’t really care about the row id (only that it’s unique) can still be used. And as expected, if a client tries to insert an item with an id which already exists in the table, the request will fail. Additional table columns (system properties) In addition to the change in the type of the table id column, each new table created in a mobile service will have three new columns: - __createdAt (date) – set when the item is inserted into the table - __updatedAt (date) – set anytime there is an update in the item - __version (timestamp) – a unique value which is updated any time there is a change to the item The first two columns just make it easier to track some properties of the item, and many people used custom server-side scripts to achieve it. Now it’s done by default. The third one is actually used to implement optimistic concurrency support (conditional GET and PATCH) for the table, and I’ll talk about it in the next section. Since those columns provide additional information which may not be necessary in many scenarios, the Mobile Service runtime will not return them to the client, unless it explicitly asks for it. So the only change in the client code necessary to use the new style of tables is really to use string as the type of the id property. Here’s an example. If I insert an item in my table using a “normal” request to insert an item in a table: POST HTTP/1.1 User-Agent: Fiddler Content-Type: application/json Host: myservice.azure-mobile.net Content-Length: 37 x-zumo-application: my-app-key {"text":"Buy bread","complete":false} This is the response we’ll get (some headers omitted for brevity): HTTP/1.1 201 Created Cache-Control: no-cache Content-Length: 81 Content-Type: application/json Location: Server: Microsoft-IIS/8.0 Date: Fri, 22 Nov 2013 22:39:16 GMT Connection: close {"text":"Buy bread","complete":false,"id":"51FF4269-9599-431D-B0C4-9232E0B6C4A2”} No mention of the system properties. But if we go to the portal we’ll be able to see that the data was correctly added. If you want to retrieve the properties, you’ll need to request those explicitly, by using the ‘__systemProperties’ query string argument. You can ask for specific properties or use ‘__systemProperties=*’ for retrieving all system properties in the response. Again, if we use the same request but with the additional query string parameter: POST HTTP/1.1 User-Agent: Fiddler Content-Type: application/json Host: myservice.azure-mobile.net Content-Length: 37 x-zumo-application: my-app-key {"text":"Buy bread","complete":false} Then the response will now contain that property: HTTP/1.1 201 Created Cache-Control: no-cache Content-Length: 122 Content-Type: application/json Location: Server: Microsoft-IIS/8.0 Date: Fri, 22 Nov 2013 22:47:50 GMT {"text":"Buy bread","complete":false,"id":"36BF3CC5-E4E9-4C31-8E64-EE87E9BFF4CA","__createdAt":"2013-11-22T22:47:51.819Z"} You can also request the system properties in the server scripts itself, by passing a ‘systemProperties’ parameter to the ‘execute’ method of the request object. In the code below, all insert operations will now return the ‘__createdAt’ column in their responses, regardless of whether the client requested it. - function insert(item, user, request) { - request.execute({ - systemProperties: ['__createdAt'] - }); - } Another aspect of the system columns is that they cannot be sent by the client. For new tables (i.e., those with string ids), if an insert of update request contains a property which starts with ‘__’ (two underscore characters), the request will be rejected. The ‘__createdAt’ property can, however, be set in the server script (although if you really don’t want that column to represent the creation time of the object, you may want to use another column for that) – the code below shows one way where this (rather bizarre) scenario can be accomplished. If you try to update the ‘__updatedAt’ property, it won’t fail, but by default that column is updated by a SQL trigger, so any updates you make to it will be overridden anyway. The ‘__version’ column uses a read-only type in the SQL database (timestamp), so it cannot be set directly. - function insert(item, user, request) { - request.execute({ - systemProperties: ['__createdAt'], - success: function () { - var created = item.__createdAt; - // Set the created date to one day in the future - created.setDate(created.getDate() + 1); - item.__createdAt = created; - tables.current.update(item, { - // the properties can also be specified without the '__' prefix - systemProperties: ['createdAt'], - success: function () { - request.respond(); - } - }); - } - }); - } Finally, although those columns are added by default and have some behavior associated with them, they can be removed from any table which you don’t want. As you can see in the screenshot of the portal below, the delete button is still enabled for those columns (the only one which cannot be deleted is the ‘id’). Conditional retrieval / updates (optimistic concurrency) Another feature we added in the new style tables is the ability to perform conditional retrieval or updates. That is very useful in the case where multiple clients are accessing the same data, and we want to make sure that write conflicts are handled properly. The MSDN tutorial Handling Database Write Conflicts gives a very detailed, step-by-step description on how to enable this (currently only the managed client has full support for optimistic concurrency and system properties; support for the other platforms is coming soon) scenario. I’ll talk here about the behind-the-scenes of how this is implemented by the runtime. The concept of conditional retrieval is this: if you have the same version of the item which is stored in the server, you can save a few bytes of network traffic (and time) by having the server reply with “you already have the latest version, I don’t need to send it again to you”. Likewise, conditional updates work by the client sending an update (PATCH) request to the server with a precondition that the server should only update the item if the client version matches the version of the item in the server. The implementation of conditional retrieval / updates is based on the version of the item, from the system column ‘__version’. That version is mapped in the HTTP layer to the ETag header responses, so that when the client receives a response for which it asked for that system property, the value will be lifted to the HTTP response Content-Length: 0 x-zumo-application: my-app-key The response body will contain the ‘__version’ property, and that value will be reflected in the HTTP header as well: HTTP/1.1 200 OK Cache-Control: no-cache Content-Length: 108 Content-Type: application/json ETag: "AAAAAAAACBE=" Server: Microsoft-IIS/8.0 Date: Fri, 22 Nov 2013 23:44:48 GMT {"id":"2F6025E7-0538-47B2-BD9F-186923F96E0F","__version":"AAAAAAAACBE=","text":"Buy bread","complete":false} Now, if we want to update that record, we can make a conditional GET request to the server, by using the If-None-Match HTTP If-None-Match: "AAAAAAAACBE=" Content-Length: 0 x-zumo-application: my-app-key And, if the record had not been modified in the server, this is what the client would get: HTTP/1.1 304 Not Modified Cache-Control: no-cache Content-Type: application/json Server: Microsoft-IIS/8.0 Date: Fri, 22 Nov 2013 23:48:24 GMT If however, if the record had been updated, the response will contain the updated record, and the new version (ETag) for the item. HTTP/1.1 200 OK Cache-Control: no-cache Content-Length: 107 Content-Type: application/json ETag: "AAAAAAAACBM=" Server: Microsoft-IIS/8.0 Date: Fri, 22 Nov 2013 23:52:01 GMT {"id":"2F6025E7-0538-47B2-BD9F-186923F96E0F","__version":"AAAAAAAACBM=","text":"Buy bread","complete":true} Conditional updates are similar. Let’s say the user wanted to update the record shown above but only if nobody else had updated it. So they’ll use the If-Match header to specify the precondition for the update to succeed: PATCH /tables/todoitem/2F6025E7-0538-47B2-BD9F-186923F96E0F?__systemProperties=version HTTP/1.1 User-Agent: Fiddler Content-Type: application/json Host: myservice.azure-mobile.net If-Match: "AAAAAAAACBM=" Content-Length: 71 x-zumo-application: my-app-key {"id":"2F6025E7-0538-47B2-BD9F-186923F96E0F","text":"buy French bread"} And assuming that it was indeed the correct version, the update would succeed, and change the item version: HTTP/1.1 200 OK Cache-Control: no-cache Content-Length: 98 Content-Type: application/json ETag: "AAAAAAAACBU=" Server: Microsoft-IIS/8.0 Date: Fri, 22 Nov 2013 23:57:47 GMT {"id":"2F6025E7-0538-47B2-BD9F-186923F96E0F","text":"buy French bread","__version":"AAAAAAAACBU="} If another client which had the old version tried to update the item: PATCH /tables/todoitem/2F6025E7-0538-47B2-BD9F-186923F96E0F?__systemProperties=version HTTP/1.1 User-Agent: Fiddler Content-Type: application/json Host: ogfiostestapp.azure-mobile.net If-Match: "AAAAAAAACBM=" Content-Length: 72 x-zumo-application: wSdTNpzgPedSWmZeuBxXMslqNHYVZk52 {"id":"2F6025E7-0538-47B2-BD9F-186923F96E0F","text":"buy two baguettes"} The server would reject the request (and return to the client the actual version of the item in the server) HTTP/1.1 412 Precondition Failed Cache-Control: no-cache Content-Length: 114 Content-Type: application/json ETag: "AAAAAAAACBU=" Server: Microsoft-IIS/8.0 Date: Sat, 23 Nov 2013 00:19:30 GMT {"id":"2F6025E7-0538-47B2-BD9F-186923F96E0F","__version":"AAAAAAAACBU=","text":"buy French bread","complete":true} That’s how conditional retrieval and updates are implemented in the runtime. In most cases you don’t really need to worry about those details – as can be seen in the tutorial on MSDN, the code doesn’t need to deal with any of the HTTP primitives, and the translation is done by the SDK. Creating “old-style” tables Ok, those are great features, but you really don’t want to change anything in your code. You still want to use integer ids, and you need to create a new table with that. It cannot be done via the Windows Azure portal, but you can still do that via the Cross-platform Command Line Interface, with the “--integerId” modifier in the “azure mobile table create” command: azure mobile table create --integerId [servicename] [tablename] And that will create an “old-style” table, with the integer id and none of the system properties. Next up: clients support for the new features In this post I talked about the changes in the Mobile Services runtime (and in its HTTP interface) with the new table style. In the next post I’ll talk about the client SDK support for them – both system properties and optimistic concurrency. And as usual, please don’t hesitate in sending feedback via comments or our forums for those features. Great post, as usual! Your blog is my reference for Mobile Services! I have quoted this post in my article marcominerva.wordpress.com/…/using-the-new-system-properties-of-azure-mobile-services-tables-from-a-client-application. Hi Carlos, I created a table and noticed the new columns. Thinking what was going on I found your blog. The GUID is a great because before I always deleted the Id from the response, just because I don't want the give the world insight in the size of my tables. In some more complex cases you want to use SQL statements for querying the DB, but then the columns do become part of the response. Most notable is the __version column, it is represented as a json object !? "id": "F8981A35-2FAB-4553-8980-**********", "__createdAt": "2013-11-26T16:08:29.851Z", "__updatedAt": "2013-11-26T16:08:29.851Z", "__version": { "0": 0, "1": 0, "2": 0, "3": 0, "4": 0, "5": 0, "6": 7, "7": 236, "length": 8 } To delete the new columns from the response (not the DB) one can just do something like this: var sql = "SELECT c.* FROM Companies c INNER JOIN ********'"; request.service.mssql.query(sql, { success: function(companies) { for (var i=0; i<companies.length; i++) { delete companies[i].__createdAt; delete companies[i].__updatedAt; delete companies[i].__version; } request.respond(200, companies); } }); Great job Microsoft! Hi Carlos, Do you have any pointers on the best way to alter an existing table to replace the auto-increment integer id with the string id column? Keep up the great work! Eric @Marco, thanks! @Freddy, the __version column is read by the node-sql driver as a Buffer object in JavaScript (node.js). What you see is the default serialization of that object, but you can change it by passing a function to JSON.stringify if you want. @Eric, I tried a few things without success – I don't think it will work. The main problem is that every table in SQL Azure requires one, and only one column with a clustered index; for tables with integer ids, that column is the "id", and we can't drop the index to change the column type (since it needs one of those), nor add a second index to then drop the index from the "id" column (since it needs exactly one). I'll ask around and post if I find someone who knows how this is done, but my instict tells me it's not possible… So if you want to really change one table from int to string id, you may consider copying the data to a second table, deleting the original, recreating it (this time with string ids), and copying the data back. This will mean that your service will be down while this happens, so whether it works depends on your scenario… Hi Carlos, Can you provide a link to the latest iOS SDK that supports GUIDs for record id? The official link:…/downloads contains still the old SDK dated June 2013. I'm trying to get the update() function to work with the new tables that have switched type of record id from integer to guid string. Hi Jerry, the SDK from that page is actually the latest one (version 1.1.1) – the "last updated" date is incorrect. I'll try to get it fixed soon. Thanks for reporting that. This sounds great. Why not use uniqueIdentifier though? Carlos, Great info as always…I noticed when trying to use model-first, the new columns are not auto-generated, but when I uses code-first the entities do create the _* fields. It looks like the CF table inherit from EntityData which is why this is happening but should we steer clear from model-first until this is more consistent across both methodologies? We are using the .Net backend. Carlos, it looks like I can add Id as bigint PK via the Manage SQL db as well vs the command line. With the type of the id column being nvarchar(255), you cannot create a pure join table between two of those tables, because the primary key of that table will be a clustered index of size 1020 bytes (255 * 2 + 255 * 2). SQL Server has a limit of 900 bytes. You will get an error like this when you try to create the table: "Warning! The maximum key length is 900 bytes. The index 'xyz' has maximum length of 1020 bytes. For some combination of large values, the insert/update operation will fail." So I am unable to use a combination of Azure Mobile Services tables along with EF6 generated pure join tables. Other than reverting back to integer ids, what other options do we have? Carlos, when doing an update using the mobile services client to a .net backend, the __updatedat field is not being updated, however the __version field is being updated correctly. Your article mentions that __updatedat is updated via a trigger. There don't appear to be any triggers for this table (code first created). Is that the issue, or does that only apply to the js backend? @Bob, if you define your model type deriving from the EntityData base class, then the trigger should be generated for you. If you don't, then the framework will not create that trigger for you and you'll need to do it instead. If you're using EntityData, and you still don't get the trigger, can you post a question in the MSDN forum at social.msdn.microsoft.com/…/home Thanks! What if I have a database with data that I want to use in azure with mobile services, do I have to add the id, _created fields to all my existing tables? Yes. Also you can map your entities to a DTO and use the DTOs, in this way you avoid much changes to your database.
https://blogs.msdn.microsoft.com/carlosfigueira/2013/11/22/new-tables-in-azure-mobile-services-string-id-system-properties-and-optimistic-concurrency/
CC-MAIN-2018-39
refinedweb
3,121
59.84
Try the new System.Text.Json APIs Immo. We also have a video: Getting the new JSON libraryGetting the new JSON library - If you’re targeting .NET Core. Install the latest version of the .NET Core 3.0 preview. This gives you the new JSON library and the ASP.NET Core integration. -. The future of JSON in .NET Core 3.0The future of JSON in .NET Core 3.0: - Provide high-performance JSON APIs. We needed a new set of JSON APIs that are highly tuned for performance by using Span<T>and can process UTF-8 directly without having to transcode to UTF-16 stringinstances. Both aspects are critical for ASP.NET Core, where throughput is a key requirement. We considered contributing changes to Json.NET, but this was deemed close to impossible without either breaking existing Json.NET customers or compromising on the performance we could achieve. With System.Text.Json, we were able to gain 1.3x – 5x speed up, depending on the scenario (see below for more details). And we believe we can still squeeze out more. - Remove Json.NET dependency from ASP.NET Core. Today, ASP.NET Core has a dependency on Json.NET. While this provides a tight integration between ASP.NET Core and Json.NET, it also means the version of Json.NET is dictated by the underlying platform. However, Json.NET is frequently updated and application developers often want to — or even have to — use a specific version. Thus, we want to remove the Json.NET dependency from ASP.NET Core 3.0, so that customers can choose which version to use, without fearing they might accidentally break the underlying platform. - Provide an ASP.NET Core integration package for Json.NET. Json.NET has basically become the Swiss Army knife of JSON processing in .NET. It provides many options and facilities that allow customers to handle their JSON needs with ease. We don’t want to compromise on the Json.NET support customers are getting today. For example, the ability to configure the JSON serialization in ASP.NET Core via the AddJsonOptionsextension method. Thus, we want to provide the Json.NET integration for ASP.NET Core as a NuGet package that developers can optionally install, so they get all the bells and whistles they get from Json.NET today. The other part of this work item is to ensure we have the right extension points so that other parties can provide similar integration packages for their JSON library of choice. For more details on the motivation and how it relates to Json.NET, take a look at the announcement we made back in October. Using System.Text.Json directlyUsing System.Text.Json directly For all the samples, make sure you import the following two namespaces: Using the serializerUsing the serializer The System.Text.Json serializer can read and write JSON asynchronously and is optimized for UTF-8 text, making it ideal for REST API and back-end applications. By default, we produce minified JSON. If you want to produce something that is human readable, you can pass in an instance of JsonSerializerOptions to the serializer. This is also the way you configure other settings, such as handling of comments, trailing commas, and naming policies. Deserialization works similarly: We also support asynchronous serialization and deserialization: You can also use custom attributes to control serialization behavior, for example, ignoring properties and specifying the name of the property in the JSON: We currently don’t have support for F# specific behaviors (such as discriminated unions and record types), but we plan on adding this in the future. Using the DOMUsing the DOM Sometimes you don’t want to deserialize a JSON payload, but you still want structured access to its contents. For example, let’s say we have a collection of temperatures and want to average out the temperatures on Mondays: The JsonDocument class allows you to access the individual properties and values quite easily. Using the writerUsing the writer The writer is straight forward to use: The reader requires you to switch on the token type: Integration with ASP.NET CoreIntegration with ASP.NET Core Most use of JSON inside of ASP.NET Core is provided via the automatic serialization when accepting or returning object payloads, which in turn means that most of your application’s code is agnostic to which JSON library ASP.NET Core is using. That makes it easy to switch from one to another. You can see the details on how you can enable the new JSON library in MVC and SignalR later on in this post. Integration with ASP.NET Core MVCIntegration with ASP.NET Core MVC In Preview 5, ASP.NET Core MVC added support for reading and writing JSON using System.Text.Json. Starting with Preview 6, the new JSON library is used by default for serializing and deserializing JSON payloads. Options for the serializer can be configured using MvcOptions: If you’d like to switch back to the previous default of using Newtonsoft.Json, do the following: - Install the Microsoft.AspNetCore.Mvc.NewtonsoftJson NuGet package. - In ConfigureServices()add a call to AddNewtonsoftJson() Known issues - Support for OpenAPI / Swagger when using System.Text.Jsonis ongoing and unlikely to be available as part of the 3.0 release. Integration with SignalRIntegration with SignalR System.Text.Json is now the default Hub Protocol used by SignalR clients and servers starting in ASP.NET Core 3.0 Preview 5. If you’d: - On the server add .AddNewtonsoftJsonProtocol()to the AddSignalR()call: PerformancePerformance Since this feature is heavily motivated by performance, we’d like to share some high-level performance characteristics of the new APIs. Please keep in mind that these are based on preview builds and the final numbers will most likely differ. We’re also still tweaking default behaviors which will affect performance (for example, case sensitivity). Please note that these are micro benchmarks. Your mileage will most certainly differ, so if performance is critical for you, make sure to make your own measurements for scenarios that best represent your workload. If you encounter scenarios you’d like us to optimize further, please file a bug. Raw System.Text.JsonRaw System.Text.Json Just doing micro benchmarks to compare System.Text.Json with Json.NET yields the following output: System.Text.Json in ASP.NET Core MVCSystem.Text.Json in ASP.NET Core MVC We’ve written an ASP.NET Core app that generates data on the fly that is then serialized and deserialized from MVC controllers. We then varied the payload sizes and measured the results: JSON deserialization (input) JSON serialization (output) For the most common payload sizes, System.Text.Json offers about 20% throughput increase in MVC during input and output formatting with a smaller memory footprint. SummarySummary In .NET Core 3.0, we’ll ship the new System.Text.Json APIs, which provide built-in support for JSON, including reader/writer, read-only DOM, and serializer/deserializer. The primary goal was performance and we see typical speedups of up to 2x over Json.NET, but it depends on your scenario and your payload, so make sure you measure what’s important to you. ASP.NET Core 3.0 includes support for System.Text.Json, which is enabled by default. Give System.Text.Json a try and send us feedback! {"happy": "coding!"} Sometimes it’s good to have native support for json. Like php, for exemple. Seems nice, a couple questions: * Will data contract attributes work like in Json.NET? * Does it support deserialization using the constructor (for immutable objects)? * Why GetString/GetInt32? Is there a generic Get overload that also supports custom/complex types? * Can we write to DOM? I love LINQ to XML – it’s my favorite way to build an XML document via code, so the ability to create JSON the same way would make me very happy Thanks for sharing. The less nuget packages the better. For those who don’t want it, they can stick with newtonsoft json and ignore this namespace entirely, no harm done. For me I will be switching to it and reducing my nuget package count by 1 🙂 Is it possible to allow single quote around property name or value? { ‘prop’: ‘value’ }I get the following error. System.Text.Json.JsonException: ”’ is an invalid start of a property name. Expected a ‘”‘. Path: $ | LineNumber: 0 | I’m curious about how easy is would be to make code that can be serialized by either? Like I don’t care what serializer is used, but I want to control the name of the json properties? Is there merge method? So I can use the object pool. Some very good speed improvements! Congratulations.This will make the Azure Functions even snappier. Every time you show System.Text.Json you take the most simple example possible. No need to have different name in JSON compared to C#, no need to serialize enums as strings. How does System.Text.Json handle this? Also, is camelCase finally the default? The examples you listed are supported (for example: JsonNamingPolicy, JsonStringEnumConverter). Exact match is the default. You can opt-in to camel case. I understand that the System.Text.Json is still in development. However, I would like to point out that the new deserializer produces very different results than the previous one. Details – JsonDemo jsonObj = new JsonDemo() { Age = 12, Name = “方法” };string jsonStr = System.Text.Json.Serialization.JsonSerializer.ToString(jsonObj); //the jsonStr = {“Name”:”\u65b9\u6cd5″,”Age”:12} You can address this by specifying an Encoder in the JsonSerializerOptions (or provide your own). See and
https://devblogs.microsoft.com/dotnet/try-the-new-system-text-json-apis/comment-page-2/
CC-MAIN-2020-40
refinedweb
1,598
51.95
DigitalB20 to measure the Temperature. So here we are building a Thermometer with the following specification using PIC16F877A microcontroller unit from microchip. - It will show Full range of temperature from -55 degree to +125 degree. - It will only display the temperature if temperature changes + / – .2 degree. Components Required:- - Pic16F877A – PDIP40 package - Bread Board - Pickit-3 - 5V adapter - LCD JHD162A - DS18b20 temperature sensor - Wires to connect peripherals. - 4.7k Resistors – 2pcs - 10k pot - 20mHz Crystal - 2 pcs 33pF ceramic capacitors DS18B20 Temperature Sensor: DS18B20 is an excellent sensor to accurately sense the temperature. This sensor provide 9bit to 12bit resolution on temperature sensing. This sensor communicates with only one wire and does not need any ADC to acquire analog temperatures and converting them in digitally. The specification of the sensor is:- - Measures Temperatures from -55°C to +125°C (-67°F to +257°F) - ±0.5°C Accuracy from -10°C to +85°C - Programmable Resolution from 9 Bits to 12 Bits - No External Components Required - The sensor use 1-Wire® Interface If we look at above pinout image from datasheet, we can see that the sensor looks exactly same like BC547 or BC557 package, TO-92. The first pin is Ground, Second pin is DQ or the data and the Third pin is VCC. Below is the electrical specification from Datasheet which will be needed for our design. The rated supply voltage for the sensor is +3.0V to +5.5V. It’s also need pull up supply voltage which is same as the supply voltage stated above. Also, there is an accuracy margin which is +-0.5 degree Celsius for the range of -10 Degree C to +85 Degree Celsius, and the accuracy changes for the full range margin, which is +-2 Degree for -55 Degree to +125 Degree range. If we again look at the datasheet, we will see the connection specification of the sensor. We can connect the sensor in parasitic power mode where two wires are needed, DATA and GND, or we can connect the sensor using external power supply, where three separate wires are needed. We will use the second configuration. As we are now familiar with the power ratings of the sensor and connection related areas, we can now concentrate on making the schematic. Circuit diagram:- If we see the circuit diagram we will see that:- 16×2 character LCD is connected across PIC16F877A microcontroller, in which RB0, RB1, RB2 are connected to LCD pin RS, R/W , and E. And RB4, RB5, RB6 and RB7 are connected across LCD’s 4 pin D4, D5, D6, D7. The LCD is connected in 4bit mode or nibble mode. A crystal Oscillator of 20MHz with two ceramic capacitor of 33pF is connected across OSC1 and OSC2 pin. It will provide constant 20Mhz clock frequency to the microcontroller. DS18B20 is also connected as per the pin configuration and with a 4.7k pull up resistor as discussed before. I have connected all this in the breadboard. If you are new to PIC Microcontroller than follow our PIC Microcontroller Tutorials stating with Getting started with PIC Microcontroller. Steps or code flow:- - Set the configurations of the microcontroller which include Oscillator configuration. - Set the Desired port for LCD including TRIS register. - Every cycle with ds18b20 sensor start with reset, so we will reset the ds18b20 and wait for the presence pulse. - Write the scratchpad and set the resolution of the sensor 12bit. - Skip the ROM read followed by a reset pulse. - Submit convert temperature command. - Read the temperature from the scratchpad. - Check the temperature value whether negative or positive. - Print the temperature on 16×2 LCD. - Wait for the temperature changes for +/-.20 degree Celsius. Code Explanation: Full code for this Digital Thermometer is given at the end of this tutorial with a Demonstration Video. You will be needing some header files to run this program which can be downloaded from here. First, we need to set the configuration bits in the pic microcontroller and then start with void main function. Then below four lines are used for including library header file, lcd.h and ds18b20.h. And xc.h is for microcontroller header file. #include <xc.h> #include <string.h> #include "supporting c files/ds18b20.h" #include "supporting c files/lcd.h" These definitions are used for sending command to the temperature sensor. The commands are listed in the sensor’s datasheet. #define skip_rom 0xCC #define convert_temp 0x44 #define write_scratchpad 0x4E #define resolution_12bit 0x7F #define read_scratchpad 0xBE This Table 3 from the sensor’s datasheet is showing all commands where macros are used to sending respective commands. The temperature will only display in the screen if the temperature changes +/- .20 degree. We can change this temperature gap from this temp_gap macro. By changing the value at this macro, the specification will be changed. Other two float variables used for storing the displayed temperature data and differentiate them with the temperature gap #define temp_gap 20 float pre_val=0, aft_val=0; . In void main() function, the lcd_init(); is a function to initialize LCD. This lcd_init() function is called from the lcd.hlibrary. TRIS registers are used to select I/O pins as input or output. Two unsigned short variable TempL and TempH are used for storing the 12bit resolution data from temperature sensor. void main(void) { TRISD = 0xFF; TRISA = 0x00; TRISB = 0x00; //TRISDbits_t.TRISD6 = 1; unsigned short TempL, TempH; unsigned int t, t2; float difference1=0, difference2=0; lcd_init(); Let’s see the while loop, here we are breaking the while(1) loop into small chunks. Those lines are used to sense the temperature sensor is connected or not. while(ow_reset()){ lcd_com(0x80); lcd_puts ("Please Connect "); lcd_com (0xC0); lcd_puts("Temp-Sense Probe"); } By using this segment of code we initialize the sensor and send command to convert the temperature. lcd_puts (" "); ow_reset(); write_byte(write_scratchpad); write_byte(0); write_byte(0); write_byte(resolution_12bit); // 12bit resolution ow_reset(); write_byte(skip_rom); write_byte(convert_temp); This code is for storing the 12bit temperature data in two unsigned short variables. while (read_byte()==0xff); __delay_ms(500); ow_reset(); write_byte(skip_rom); write_byte(read_scratchpad); TempL = read_byte(); TempH = read_byte(); Then if you check the complete code below, we have create if-else condition to find out the temperature sign whether it is positive or negative. By using the If statement code, we manipulate the data and see whether the temperature is negative or not and determine the temperature changes is in +/- .20 degree range or not. And in else part we checked whether the temperature is positive or not and temperature changes detection. code Getting Data from DS18B20 Temperature Sensor: Let’s see the time gap of 1-Wire® Interface. We are using 20Mhz Crystal. If we look inside the ds18b20.c file, we will see #define _XTAL_FREQ 20000000 This definition is used for XC8 compiler delay routine. 20Mhz is set as the crystal frequency. We made five functions - ow_reset - read_bit - read_byte - write_bit - write_byte 1-Wire® protocol needs strict timing related slots to communicate. Inside the datasheet, we will get perfect time-slot related information. Inside the below function we created the exact time slot. It is important to create the exact delay for hold and release and control the TRIS bit of the respective sensor’s port. unsigned char ow_reset(void) { DQ_TRIS = 0; // Tris = 0 (output) DQ = 0; // set pin# to low (0) __delay_us(480); // 1 wire require time delay DQ_TRIS = 1; // Tris = 1 (input) __delay_us(60); // 1 wire require time delay if (DQ == 0) // if there is a presence pluse { __delay_us(480); return 0; // return 0 ( 1-wire is presence) } else { __delay_us(480); return 1; // return 1 ( 1-wire is NOT presence) } } // 0=presence, 1 = no part Now as per the below time slot description used in Read and Write, we created the read and write function respectively. unsigned char read_bit(void) { unsigned char i; DQ_TRIS = 1; DQ = 0; // pull DQ low to start timeslot DQ_TRIS = 1; DQ = 1; // then return high for (i=0; i<3; i++); // delay 15us from start of timeslot return(DQ); // return value of DQ line } void write_bit(char bitval) { DQ_TRIS = 0; DQ = 0; // pull DQ low to start timeslot if(bitval==1) DQ =1; // return DQ high if write 1 __delay_us(5); // hold value for remainder of timeslot DQ_TRIS = 1; DQ = 1; }// Delay provides 16us per loop, plus 24us. Therefore delay(5) = 104us Further check all the related header and .c files here. So this is how we can use DS18B20 sensor to get the temperature with PIC Microcontroller. Read More Detail:Digital Thermometer using a PIC Microcontroller and DS18B20 JLCPCB – Prototype 10 PCBs for $2 (For Any Color) China’s Largest PCB Prototype Enterprise, 600,000+ Customers & 10,000+ Online Orders Daily See Why JLCPCB Is So Popular:
https://pic-microcontroller.com/digital-thermometer-using-a-pic-microcontroller-and-ds18b20/
CC-MAIN-2019-18
refinedweb
1,452
63.59
. Story API In the last part, the “Ink for Web” functionality was used as part of Inky to make make three JavaScript files: ink.js, main.js, and the story.js. In reviewing the inkjs NPM module, the connection was made between the module and ink.js file created by “Ink for Web”. They were the same. In reviewing the main.js, there were were usages of a story object to check if the story could continue, what the next chunk of text was, and what any of the choices were. Runtime API There isn’t direct documentation for the JavaScript API. However, the C# documentation shows the API for using the Story object. Creating a JSON Story file The Story API is created from reading a JSON file with a compiled story. In order to get that, an Ink file must be run through either inklecate, a command-line tool, or via the Inky editor using the File -> “Export to JSON..” option. Browser and Node.js Differences The difference between the story.js and the JSON file is actually only that the story.js has its JSON contents set as the value for variable called storyContent. It is often easier to simply load this file into the global namespace via a SCRIPT tag and then parse the object when working in a browser. For node.js, the JSON file can be loaded via a require() as shown in the inkjs README. Using the Story API index.js var Story = require('inkjs').Story; var json = require('./ExampleStory.json'); var inkStory = new Story(json); console.log(inkStory.Continue()); Calling the Continue() function on the above example story will produce the following: It was our first date and we hadn’t made dinner plans. Calling Continue() again would produce the following: He turned to me. “What should we eat?” Finally, checking currentChoices would produce and array of four entries with properties text and index. Revised index.js var Story = require('inkjs').Story; var json = require('./ExampleStory.json'); var inkStory = new Story(json); inkStory.Continue(); /* It was our first date and we hadn't made dinner plans. */ inkStory.Continue(); /* He turned to me. "What should we eat?" */ for(var choice of inkStory.currentChoices) { console.log("Text: " + choice.text + " : Index: " + choice.index); } /* Text: Pizza? : Index: 0 Text: Sushi? : Index: 1 Text: Salad? : Index: 2 Text: Nothing? : Index: 3 */
https://videlais.com/2019/05/27/javascript-ink-part-2-story-api/
CC-MAIN-2021-04
refinedweb
394
70.09
by James Webster, Test Manager, .NET Micro Framework and Jerry Kindall, Programmer/Writer Many programs running on the Microsoft® .NET Micro Framework get user input through General Purpose Input/Output (GPIO)-type buttons on their respective hardware devices. The .NET Micro Framework hardware emulator provides five built-in GPIO buttons – four directional buttons and a Select button. Of course, some devices have additional buttons. When you press or release a button, the GPIO interface generates an interrupt. The .NET Micro Framework translates these interrupts into events called ButtonUp and ButtonDown. You can register your own handler to receive either or both of these events. Microsoft Visual Studio® can provide the code you need to get started handling button events when you create a new .NET Micro Framework project. To start a new project in Visual Studio, perform the following steps. 1 On the File menu, point to New and then click Project. 2 In the New Project window, expand the Visual C# node in the Project Types pane if it is not already expanded. 3 Under Visual C#, click Micro Framework. You’ll see the standard .NET Micro Framework project templates displayed in the Templates pane. 4 In the Templates pane, double-click Window Application. This article shows you how to write code that handles button events using the .NET Micro Framework. Keep in mind that you won’t always have to do all this work yourself. Many .NET Micro Framework user interface elements can handle buttons for you. For example, the ListBox class supports using the Up and Down buttons to move the selection bar through a list; you do not need to handle this task yourself. Even in this case, though, you will need to handle the Select button, so it is still important to be familiar with techniques for handling button input. A new MFWindowApplication project like the one you’ve just created contains an automatically generated class called GPIOButtonInputProvider. This class establishes a mapping scheme between the device’s GPIO pins and the buttons they represent. The GPIOButtonInputProvider class works with the .NET Micro Framework emulator, which uses GPIO pins 0 through 4 for buttons. If you are running your applications on the emulator (or on a hardware device with the same button mappings), no changes are needed. If you are using a hardware device with other button mappings, you can modify the class to map the correct pins to the buttons. A recent article on this site, titled “Getting Started with Freescale i.MXS,” provides an example of adapting the GPIOButtonInputProvider class for the Freescale i.MXS platform. The same concept applies to other hardware, although the details might differ. The code in this article uses the default mappings. The Program.cs file in a fresh MFWindowApplication project includes a stub event handler method, OnButtonUp, which is called when any button is released. The stub prints the name of the pressed button in the Output window in Visual Studio. This method is where you will add your own code, probably using a switch statement to provide an action for each button, as in the following sample method: private void OnButtonUp(object sender, ButtonEventArgs e) { switch (e.Button) // e is the event record { case Button.Left: ((Text)mainWindow.Child).TextContent = "Left"; break; case Button.Right: ((Text)mainWindow.Child).TextContent = "Right"; case Button.Up: ((Text)mainWindow.Child).TextContent = "Up"; case Button.Down: ((Text)mainWindow.Child).TextContent = "Down"; case Button.Select: ((Text)mainWindow.Child).TextContent = "Select"; default: ((Text)mainWindow.Child).TextContent = "Button " + e.Button; } } Our example handler places the name of any pressed button in the middle of the screen (where the text “Hello World!” initially appears). Try this now by replacing the stub OnButtonUp method with the one shown here, then pressing F5 to run the project in the .NET Micro Framework emulator. The stub handler in the Window Application template responds to ButtonUp events, which means that your code does not receive an event until the user has already released the button. This is the conventional way to handle simple button events, but in some cases, it is too late to be of any use. For example, you might want a button to repeatedly perform some action as long as it is held down. This requires code to run when the button is first pressed. The .NET Micro Framework provides a ButtonDown event that you can use in conjunction with a timer to implement this functionality. The Timer object in the System.Threading namespace periodically calls a method that you specify when you instantiate the timer. This callback method actually does most of the work; the ButtonDown and ButtonUp handlers primarily serve to instantiate and dispose of the timer. In this section, you will learn how to program the Up and Down buttons to continuously increment or decrement a counter. Additionally, the longer the buttons are held down, the faster the counter changes. You can apply similar techniques to graphical elements or controls, such as sliders. If you have not used timers before, you may not know that a method called by a timer runs in its own thread. For this reason, you should use the lock statement to synchronize the timer thread and the main thread whenever you are entering a critical section of code—that is, any code that touches the counter or the timer object while more than one thread might be running. You can only lock on object references, so in this example, the timer will be used for locking. To get started, create a new .NET Micro Framework Window Application as before, then open the Program.cs file in the new project. Near the top of the file, with the other using directives, add the following code to facilitate easier use of the Timer class. using System.Threading; Next, find the following lines in the CreateWindow class. // Connect the button handler to all of the buttons. mainWindow.AddHandler(Buttons.ButtonUpEvent, new ButtonEventHandler(OnButtonUp), false); After these lines, insert the following line to add a ButtonDown event handler. By convention, this method is named OnButtonDown. You will add the code for this method later. mainWindow.AddHandler(Buttons.ButtonDownEvent, new ButtonEventHandler(OnButtonDown), false); Now, somewhere in the Program class, add the following declarations. (They can be anywhere in the class as long as they are not in a method, but you might want to put them right before the stub OnButtonUp method so they will be close to the methods that use them.) The variable counterValue is the number we will be adjusting with the buttons. The rest of these variables have to do with the timer that will provide the repeat function. // the value to be adjusted using Up/Down buttons private int counterValue = 50; // initial, minimum, and current settings for timer in milliseconds private const int initialInterval = 500; private const int minimumInterval = 50; private int currentInterval; // the active timer private Timer repeatTimer = null; With the preceding code in place, insert the following lines to add the OnButtonDown method. // handle a button press, starting a timer for repeats private void OnButtonDown(object sender, ButtonEventArgs e) HandleButtons(e); lock (repeatTimer) // dispose of existing timer (in case more than one button is down) if (repeatTimer != null) repeatTimer.Dispose(); // start the timer firing every initialInterval milliseconds currentInterval = initialInterval; if (e.Button == Button.Up || e.Button == Button.Down) repeatTimer = new Timer(OnTimer, e, currentInterval, currentInterval); The OnButtonDown method receives control whenever any button is pressed. First, it calls the HandleButtons method (passing the event record, e, so that HandleButtons knows which button was pressed and can adjust counterValue appropriately). If the Up button or Down button was pressed, this method instantiates a Timer object, specifying the OnTimer method (which we will discuss later in this article) as the callback function. Timers can pass a state object to the callback; in this example we specify the event record, e, because OnTimer needs the event record to pass to HandleButtons. It is possible for a user to press two or more buttons simultaneously on an actual hardware device. Although this is not possible with the emulator, you should address this possibility in your code so that the application does not behave in a way the user does not expect. If the timer already exists (as indicated by a non-null value in repeatTimer), meaning that another button is already pressed, the existing timer is disposed of before a new one is instantiated. This results in “last button wins” behavior that users will find natural. The ButtonUp event handler, OnButtonUp, merely disposes of the timer to stop the repeating behavior. You should replace the stub OnButtonUp handler with the following: // stop the repeat timer when the user releases a button // if a timer exists, dispose of it repeatTimer = null; The OnButtonUp method is called when any button is released, not just the Up button or the Down button, although the latter two buttons are the only ones that start the timer. Therefore, we dispose of the timer only if it actually exists (that is, if the repeatTimer reference is non-null). Otherwise, releasing the Select button, for example, would throw an exception trying to dispose of a timer that does not exist. (If a user has pressed more than one button on the device at the same time, this means that the first button that is released will stop the timer. Releasing any other buttons afterward will have no effect.) After disposing of the timer, we set repeatTimer to null to record that the timer no longer exists. The OnTimer method, which follows, is called periodically by the timer, initially every 500 milliseconds (every half second). // called by the timer to repeat a button action private void OnTimer(Object e) // decrease timer interval for the next repeat if (currentInterval > minimumInterval) { currentInterval -= 50; if (currentInterval < minimumInterval) currentInterval = minimumInterval; repeatTimer.Change(currentInterval, currentInterval); } The first thing OnTimer does is call HandleButtons to make sure that the counter gets updated appropriately for the button being held. Then the timer interval is reduced so that the repeated increment or decrement action goes faster (up to a prescribed limit) the longer the button is held down. The HandleButtons method is the last method needed for our press-and-hold functionality. It contains a switch statement much like the one you saw in the “Basic Button Fun” section earlier in this article. Because OnButtonDown and OnTimer both need this functionality to process button actions, the necessary code has been “broken out” into HandleButtons, as follows: // button handler called from OnButtonDown and by timer private void HandleButtons(Object e) switch (((ButtonEventArgs)e).Button) case Button.Up: if (counterValue < 99) counterValue++; break; case Button.Down: if (counterValue > 0) counterValue--; case Button.Select: counterValue = 50; // display the new value of the counter on the screen ((Text)mainWindow.Child).TextContent = counterValue.ToString(); mainWindow.Child.Invalidate(); The only slightly tricky aspect of the HandleButtons method is the cast of the event record, e, to type ButtonEventArgs, which is necessary because the method is used as a callback function for a timer, which can pass any object reference. Note also that we need to invalidate the text object to force it to be redrawn immediately. The rest of the code should be familiar to you by now. After making the preceding changes to Program.cs, save it and run it in the .NET Micro Framework emulator by pressing F5. Hold the emulator’s Up button and observe how the displayed value changes slowly at first, but then changes faster and faster as you continue to hold the button down. Also notice how you can easily make fine adjustments by pressing and releasing the buttons quickly. This behavior comes for “free”—as long as you release the button before the initial timer interval, it acts much like it would have if you had programmed only a ButtonUp handler. Behind the scenes, of course, a timer is being instantiated by OnButtonDown, then disposed of by OnButtonUp before it has time to activate. The emulator screen initially displays “Hello World!” until you press the Up button, the Down button, or the Select button. This is a leftover behavior from the Window Application template. You can eliminate it by opening the Resources.resx file and changing the value of String1 to 50, the initial value of the counter, as in the following illustration. As an exercise, you might add functionality for the Left and Right buttons. Perhaps the Left button could set the counter to 0 and the Right button could set it to 99, providing easier access to the top and bottom of the counter’s range. This requires adding just a few lines of code to one method. A more ambitious addition would be to give the Left and Right buttons repeat functionality as well, perhaps having them add or subtract 5 from the counter. Copyright © 2007 Microsoft Corporation. All rights reserved. This article was originally published on MSDN Blogs. The? This.
http://blogs.msdn.com/aldenl/
crawl-002
refinedweb
2,159
54.52
At 11:37 13/04/2005, Jozsef Kovacs wrote: Many thanks Jozsef, This is an excellent suggestion and we accept your motivation and commitment. Public discussion is extremely important and very valuable. We will respond positively, hopefully rapidly, and enthusiastically to questions raised here. We expect that there will be contributions from a range of people. The process of creating CML rests on an unwritten process (rather like the British Constitution, which is never written). There are certain fundamentals, rather similar to the IETF's "general consensus and running code". The essentials include: - CML must be conforming XML. (I have seen things calling themselves "CML" which did not even parse in generic XML tools). - where possible CML uses emerging W3C technology rather than inventing its own. Thus we use DOM, SAX, RDF. RSS, XSLT, XSD, etc. - CML interoperates with other XML languages through XML namespaces - the definition of CML is taken from the publications in peer-reviewed literature. This means that the latest formal specification is JCICS 2003. We all intend to conform to that - CML is an Open process to the extent that it is published, we receive contributions and acknowledge them, we promote and applaud interoperability. Where possible everything, including discussions like this, is openly visible and much should be made available through Open redistribution license such as Creative Commons and Budapest/Berlin/Bethesda declarations of Open Access. However it is not like Open source as you are not allowed to modify the definition of the specification. CML welcomes non-Open conformant implementations - and does not regard them as morally inferior. However CML cannot use closed source for its conformance testing. In the design of CML there are certain principles. - explicit semantics are preferable to implicit semantics, even at the cost of some verbosity - a feature should have been exposed to the community before being incorporated in the publication - new features are resisted until it is impossible to refuse them. We avoid bloat - therefore there is usually an experimental specification before the next formalisation. - features are not removed - this would be difficult for existing applications - but they may be obsoleted. - elements and attributes should - as far as possible - be context-independent. This means subsets of the schema can be used. For example a theochem application might only use molecule and atoms (no bonds, or anything else). - in all CML applications the default semantics of elements and attributes must be identical. Thus formalCharge represents an integral number of electrons removed from or added to an atom. However the convention attribute allows additional semantics to be added. For example there is little communal systematisation of bond orders and types. CML uses "1"/"2"/"3" (or "S/D/T" for "normal single/double/triple" bonds. Other values are allowed but should have a convention. Thus "4" could mean aromatic for convention="MDL" and quadruple for others. - CML applications may ignore foreign namespaces. For example a cml:molecule could contain an SVG element, or an SVG document could contain a cml:molecule - prefixes (e.g. cml:) are NOT hardcoded. They must be accompanied by a namespace declaration. - additional elements and attributes in the CML namespace are NOT allowed. It would be easy for files to collide if this were allowed. In the development of CML software there are also certain principles. - CML itself is not software. The equivalent of a bug is an inconsistency, and of a feature is an unhappy piece of design which is ugly or difficult to use. - software should strive to be conformant. We intend to produce conformance tools in the near future. - it should be easy to develop simple applications of CML. A CML processor may ignore elements and attributes if the interpretation does not depend on them. For example some current CML software does not interpret reaction or spectrum. - in principle all XML input should be passed to the output if required. However this requires significant DOM programming and the W3C DOM is not user-friendly. Therefore some CML processing may lose information. Ideally a roundtrip of readCML->writeCML->readCML should be lossless, but this is difficult to achieve. - All CML software should interpret information in the same way (unless it ignores it). It should not invent local semantics. Thus if 100 molecules are concatenate in a CML document the semantics are just that - 100 concatenated molecules. They are not necessarily snapshots on a dynamics trajectory, different experimental observations, etc. We are developing RDF as the method to annotate complex compound documents. - No conformant CML file should cause CML-aware software to crash, and error messages should be as informative as possible - e.g. "FOOBAR does not support the CML reaction element | the CML array syntax | the CML map/link vocabulary, etc. and this document will not be processed", "PLINGE has detected multiple CML molecule elements and displayed each in a separate panel. CML spectrum elements are ignored". Then users know slightly better - CML documents range over a very wide variety - molecules, comp chem output, instruction manuals, synthetic recipes, journal articles, etc. Multiple namespaces will be common. There is no default "best" way to display or process these and there is unlikely to be a "CML browser" that does everything. However there are likely to be generic tools which manage compound documents and which can accept CML plugins to display chemistry in foreign contexts. In general the CML in JCICS2003 has stood the test and there are very few immediate needs to change the vocabulary in major ways. About 2-3 (unintentionally) implicit semantics have been formalised by a new attribute. The creation of "CMLReact" has involved 2-3 additional elements and these will be submitted for publication shortly. CMLComp is being informed by many marked up outputs and the main need is for the semantics of basisSet to be enlarged. The solid state will be explored over the next year in a funded project. Most of the work involves consolidating and firming up the semantics on current vocabulary. This is hard, because chemistry is very sloppy over its information, but we are making progress. The primary mechanism is the JUMBO toolkit which is element/attribute centred. The semantics of every element is explored and most have been done. For each we create a range of unit tests - currently over 400. This will be amplified by conformance tests. We also now need to create communal dictionaries for the common uses of dictRef. Some of these have been collected for reactions, but they are also required for common CML concepts. Here again anyone cane create their own namespaced dictionaries; if there is communal agreement, terms in these may be raised to the communal CML area. Some areas of CML are more explored than others. Thus we have intensively explored reactions and are reasonably confident that the specifications is robust. We are exploring spectra but have some way to go. We have much experience with comp chem calculations on geometry optimisation and properties, but little on dynamics and ensembles. Recently we have made major advances in crystallography. CML is a meritocracy and participants are honoured by their contribution - see Eric Raymond's "Homesteading the Noosphere". Our methods are Open and we aim for interoperability. Very recently we have decided to define Web-service and related APIs to build large networks applications - see for a summary of some of this. We intend to summarise these, probably on the QSAR list. We cannot include non-Open software under "Blue Obelisk" but we can - if resources allow - highlight non-Open software that interoperates via CML. Thank you very much for catalysing this discussion. All members of this list are equally welcome and all contributions are taken in a positive spirit. Henry and I have moderated 30,000 emails in XML-DEV without a flamewar or spam. You may wish to ask questions, make suggestions, recount your experiences, etc. You may make product announcements *to the extent that they inform the list community about CML and steer clear of hype and vapourware*. For example it would be very useful to know that FOOBAR's parser could read 10,000 CML files per minute, or that they had a CML-compliant format for publishing logP, or that they had an Openly accessible dictionary of properties, but not that they could calculate 10,000 logP per minute or a secret algorithm for clustering molecules. Please also note that JUMBO is Open source and interoperates with other Open (Blue Obelisk) groups (e.g. CDK/JOELib/QSAR/JChemPaint/Jmol/QSAR/Octet/Openbabel). Messages are sometimes crossposted there, but should generally be consistent with the core philosophy of those lists. There are many things I have not written and this may be a good time to start introducing CML from scratch to some list members. P. PS. My part of this mail is re-usable under Creative Commons. Other parts might be re-usable under "fair use" (please note I am no longer at Nottingham, but at Cambridge). Peter Murray-Rust Unilever Centre for Molecular Informatics Chemistry Department, Cambridge University Lensfield Road, CAMBRIDGE, CB2 1EW, UK Tel: +44-1223-763069 Fax: +44 1223 763076 -- Jozsef Kovacs Software developer ChemAxon Ltd. Maramaros koz 3/A, Budapest, 1037 Hungary mailto:jozsef.kovacs@... We spent an hour yesterday discussing how to manage different types of inputs and the modularisation of the code to support this. Whatever we come up with has to support GUIs, commandline, workflows, WebServices and programmatic calls. It also has to support files, URLs, inputStreams, etc. I suggest we do this through InputSource and alternatively through File (if we wish to preserve the filename). I would value comments on this, including from those who aren't directly involved in programming JUMBO/Java as it may affect the XML result (the inclusion of the CMLName element). (AbstractBase is the seriously unfortunate name chosen for a CML Object - I would like to change this to CMLBase (we can't call it CMLObject or CMLElement as they are part of the language). Note that we are coming close to the need for a definitive dictionary of CML dictRef names such as cml:filename. My proposed API is: public final static String CML_FILENAME = "cml:filename"; /** reads files in a directory and transforms to AbstractBase. * each file must be XML and the documentElement taken from CML Schema. * files must conform to a regex * the filename (as java.io.File.getCanonicalPath()) * can be saved as a CMLName child of the documentElement with * dictRef=CML_FILENAME and the content=filename * files that cannot be read are skipped without comment * * @param dir directory containing the files * @param regex (as in java.lang.String.matches(regex)) filters the files * @param saveFilename add filename as CMLName child of documentElement * @return an array of the top level AbstractBase elements * @throws IOException cannot find directory */ public static AbstractBase[] readCMLObjectsFromDirectory (File dir, String regex, boolean saveFilename) throws IOException; /** reads file and transform to AbstractBase. * the file must be XML and the documentElement taken from CML Schema. * the filename (as java.io.File.getCanonicalPath()) * can be saved as a CMLName child of the documentElement with * dictRef=CML_FILENAME and the content=filename * * @param file the file * @param saveFilename add filename as CMLName child of documentElement * @return an AbstractBase element or null if read fails * @throws IOException cannot find file * @throws CMLException cannot interpret file as CML */ public static AbstractBase readCMLObject (File file, boolean saveFilename) throws IOException; /** reads InputSource and transform to AbstractBase. * each file must be XML and the documentElement taken from CML Schema. * * @param source the InputSource * @return an AbstractBase element or null if read fails * @throws CMLException cannot interpret source as CML */ public static AbstractBase readCMLObject(InputSource inputSource) throws IOException, CMLException; Peter Murray-Rust Unilever Centre for Molecular Informatics Chemistry Department, Cambridge University Lensfield Road, CAMBRIDGE, CB2 1EW, UK Tel: +44-1223-763069 Fax: +44 1223 763076 [Crossposted to 3 lists, please be considerate] [John Irwin] >>... Can you recommend software for >>preparing and manipulating CML files? If OE offered CML, we could and might >>offer CML tomorrow. There are many good tools for converting files to CML. First, some words about strategy. CML is powerful enough to hold compound documents such as compound data cards, computational chemistry output and (when combined with XHTML) complete scientific documents. So "converting to CML" can involve components such as molecules, reactions, their properties, spectra, eigenproperties, etc. In general CML can hold any information composed of simple datatypes (numbers, strings, array, matrixes, etc.) and predefined schema elements (reactions, spectra...). We are devising a mechanism for building complex datatypes (e.g. critical point, phase diagrams). Most people currently want to manage molecular data and I'll stick with that. (JohnI and I have already corresponded usefully so I believe that a Zinc entry consists of at least: * a molecule * its provenance * published names * published properties * calculated properties * intellectual property rights CML can manage all of this except the IPR. To summarise John's mail, Zinc consists of molecular information supplied by compound suppliers under contract, for which properties are calculated using software made available under contract and then collected in a database which itself has restrictions on use (e.g. only limited subsets can be distributed, and for restricted use]). CML is not capable of managing the complexity of this IPR so the converter would have to add this, preferable in RDF. [Note that this problem does not occur for Open data since we can simply add a BOAI or Creative Commons license.] The provenance (without rights) is managed by the DublinCore dc:creator and dc:publisher in CML: <metadataList> <metadata name="dc:creator" content="Foobarchem"/> <metadata name="dc:publisher" content="ji@..."/> </metadataList> CML can, in principle, hold everything else without loss. Since I don't know the range of properties I don't know which are complex, but assuming that most are scalar, then the simple approach is to render them as: <property dictRef="zinc:mpt"> <scalar units="units:celsius" min="121" max="123.5" errorBasis="range"/> </property> === OK, most people weren't expecting that! BUT provenance and redistribution is increasingly important. That is why the default action of OpenBabel when outputting CML is to add metadata. We would hope that if users add metadata to the input (only possible in CML) it would be transported through === I suspect the question could be rephrased as "how do I convert a file containing small-molecule information and produce a CML file which contains the atoms, bonds and their properties without loss? Each molecule is separately identifiable and there is no contextual linkage between them (e.g. they aren't poses, supramolecules, etc. The file(s) may contain many independent molecules and batch conversion is required" I currently know of the current tools, and would approach them in this order: * Openbabel. This has the widest range of file types and can deal with lists of molecules. Billy Tyrrell, Chris Morley, Geoff Hutchison and I have variously developed this and Henry Rzepa has carried out roundtripping. We intend to maintain this a flagship for CML conversion - i.e. if there is a problem we will try to respond. * JUMBO. We have concentrated on complex formats and currently offer * MDL Molfile, SDF (and RXN). This attempts to follow the published spec for V2000 files. However since some of the spec appears to be specific to MDL programs it is necessarily a subset, albeit a fairly comprehensive one. * MOL2 format taken from the Tripos spec. This again is a subset and does not address recognition of atom type and fragments. Not validated. * CDX and CDXML. Most of the spec relating to molecules and reaction, but not graphic layout, has been implemented. Since CD is a very graphically oriented format it is extremely easy to create objects which do not formally represent the semantics of the molecule. Conversion of any CDX file is likely to be lossy and fuzzy. * CIF. This is a complete interpretation of DDL1 with manual coding of some of the core dictionary. Although CIF can contain chemical structure information this is virtually never used. Hence we have to use heuristics to calculate the chemistry and this is almost lossless for GOOD CIFs (as published by Acta Cryst.) * SMILES. I think this is fairly complete and should include stereochemistry. * CDK. This has a range of file readers and a CML writer. We haven't been directly involved in the coding but correspond daily with the group. If there are any problems then I am sure the CDK group would be keen to address them and we'll help in the discussions. * JOELib. This has a wide range of functionality, including the calculation of properties. Again we are in frequent touch, and although I haven't used it for CML I am sure the authors are responsive. * BlueObelisk, WebServices and Taverna. () This is a recent movement among a number of OpenSource and Open Data groups to ensure interoperability. "File conversions" will increasingly be packaged as WebServices () or workflows (such as). Scientists can then select the services they require and compose their own application. This will include conversion, validation, checks for uniqueness, submission to repository, etc. I suspect that Zinc actually requires a Taverrna-like workflow for its maintenance. Taverna can be used to warp closed source programs, but of course these cannot be distributed. We offer WebServices for OpenBabel and JUMBO as above so anyone can link their conversion requirements. Also our WS are Open so anyone can clone them to avoid connection problems. We do not currently offer WebServices that use close source programs because there are usually license restrictions by the suppliers and WS cannot yet deal with complex IPR negotiations. There is no reason why we might not create some in the future - if so the WebService wrapper would probably be OpenSource. There are some other Open Source programs (with whose authors we have had discussions) which read and/or emit CML including: * BKChem * Ghemical I don't know whether these can be used in batch but as they are open source then anyone can add this. I am also sure they'd be keen to help. I don't know the degree of conformance. There are an increasing number of computational chemistry programs which emit (and often read) CML but this is out of scope in this thread. We welcome implementations and use of CML by for-profit organisations. CML itself is an openly published, read-only specification and does not require implementations to be OpenSource. It does, however, require best efforts to conform and we shall write more of this later. Although, in principle, it is possible to write conformant software by reading the spec, in practice no spec is completely watertight and we encourage discussion. Obviously any posting to this list advertises the origin of the poster, so companies may wish to mail privately and will get a private reply. However we have limited resources and cannot generally give extended free private advice. There are some closed source tools which read/emit CML. Some of their authors have not approached us at all. Others have approached us but expected us to provide complete CML implementations at our own expense. Since, at present, this not an attractive business proposition, we haven't been able to accept these offers. We note that some of them (unidentified) have since added "CML". We do not know the degree of conformance or comprehensiveness. Note that some of them are only available through purchase and we may not have access to them. We do know that some of them do not conform to the published CML specification and shall be advising them that this is inconsistent with the use of the term and mark "CML". Other list readers might like to comment, but please make sure that statements are factually correct and avoid political discussions. * ACDLabs. No public information on conformance. * CambridgeSoft. No public information on conformance. * Chemaxon (Marvin). We have had no contact from them. This company lists the CML elements they supports and adds many others in the same namespace which are not CML. The "CML" is therefore not conformant to the published Schema. There are also semantics which are incompatible with CML (e.g. the order of atoms may be important). This is "semantic pollution". We shall write to them soon, advising them that this is unacceptable. There are technical fixes to some of this such as the use of proprietary namespaces for attributes, elements and datatypes. * Foo. private communication. * Bar. private communication * Xyzzy. private communication. I shall write separately on compchem and semantics. P. Peter Murray-Rust Unilever Centre for Molecular Informatics Chemistry Department, Cambridge University Lensfield Road, CAMBRIDGE, CB2 1EW, UK Tel: +44-1223-763069 Fax: +44 1223 763076 Dear Prof. Irwin, Your message was forwarded to the CML-discuss mailing list. While=20 OpenEye itself doesn't offer CML support, Open Babel most certainly=20 does. (Open Babel grew out of the old OELib, though there are many=20 differences between OB and OELib/OEChem these days.) Babel supports a=20 wide variety of chemical file formats, with more formats added every=20 release. And of course, if you find something lacking in OpenBabel or discover a=20= bug -- unlike OpenEye, Babel has an open bug tracking system: Cheers, -Geoff -- -Dr. Geoffrey Hutchison <grh25@...> Cornell University, Department of Chemistry and Chemical Biology Abru=F1a Group On Apr 6, 2005, at 10:28 AM, Eugen Leitl wrote: > ----- Forwarded message from John Irwin <jji@...> ----- > > From: "John Irwin" <jji@...> > Date: Wed, 6 Apr 2005 07:22:48 -0700 > To: <e.willighagen@...>, <zinc-fans@...> > Cc: > Subject: RE: [Zinc-fans] database formats? > X-Mailer: Microsoft Office Outlook, Build 11.0.6353 > > Hi Egon > > ZINC depends extensively on current tools in the field for preparing=20= > and > manipulating chemical structure files. OpenEye and Cactvs, for=20 > example, do > not support CML as far as I know. I'm also not aware of any docking=20 > program > that reads CML files. (Please let me know if you know of one). Since=20= > ZINC > was primarily designed to serve the virtual screening community, it=20 > should > offer files in format that are used in that field. > > All that said, I recognize that ZINC can be useful beyond the=20 > computational > ligand discovery community, and so I take your suggestion of CML=20 > seriously. > It may well be the next format we offer. Can you recommend software = for > preparing and manipulating CML files? If OE offered CML, we could and=20= > might > offer CML tomorrow. > >=20 > interested > in new methods of macromolecular crystallography please see the Erice > Crystallography page. Coming soon: > 2006: Structure and Function of Large Molecular Assemblies > 2007: Engineering of Crystalline Materials Properties > 2008: =46rom Molecules to Medicine - Integrating Crystallography (Drug=20= > Design) > > >> -----Original Message----- >> From: zinc-fans-bounces@... >> [mailto:zinc-fans-bounces@...] On Behalf Of Egon Willighagen >> Sent: Wednesday, April 06, 2005 12:46 AM >> To: zinc-fans@... >> Subject: Re: [Zinc-fans] database formats? >> >> On Wednesday 06 April 2005 12:52 am, John Irwin wrote: >>> Correct. In ZINC SDF is currently generated from authoritative mol2 >>> and not the other way around. Current plans include augmenting SDF >>> files with numerous tagged data - not sure when that will appear. >> >> Have you thought about Chemical Markup Language (CML)? >> >> Egon >> _______________________________________________ >> Zinc-fans mailing list >> Zinc-fans@... >> >> > > _______________________________________________ > > ----- Forwarded message from John Irwin <jji@...> ----- From: "John Irwin" <jji@...> Date: Wed, 6 Apr 2005 07:22:48 -0700 To: <e.willighagen@...>, <zinc-fans@...> Cc:=20 Subject: RE: [Zinc-fans] database formats? X-Mailer: Microsoft Office Outlook, Build 11.0.6353 Hi Egon ZINC depends extensively on current tools in the field for preparing and manipulating chemical structure files. OpenEye and Cactvs, for example, do not support CML as far as I know. I'm also not aware of any docking program that reads CML files. (Please let me know if you know of one). Since ZINC was primarily designed to serve the virtual screening community, it should offer files in format that are used in that field. All that said, I recognize that ZINC can be useful beyond the computational ligand discovery community, and so I take your suggestion of CML seriously. It may well be the next format we offer. Can you recommend software for preparing and manipulating CML files? If OE offered CML, we could and might offer CML tomorrow.=20 interested in new methods of macromolecular crystallography please see the Erice Crystallography page. Coming soon: 2006: Structure and Function of Large Molecular Assemblies 2007: Engineering of Crystalline Materials Properties 2008: From Molecules to Medicine - Integrating Crystallography (Drug Design) =20 > -----Original Message----- > From: zinc-fans-bounces@...=20 > [mailto:zinc-fans-bounces@...] On Behalf Of Egon Willighagen > Sent: Wednesday, April 06, 2005 12:46 AM > To: zinc-fans@... > Subject: Re: [Zinc-fans] database formats? >=20 > On Wednesday 06 April 2005 12:52 am, John Irwin wrote: > > Correct. In ZINC SDF is currently generated from authoritative mol2=20 > > and not the other way around. Current plans include augmenting SDF=20 > > files with numerous tagged data - not sure when that will appear. >=20 > Have you thought about Chemical Markup Language (CML)? >=20 > Egon > _______________________________________________ > Zinc-fans mailing list > Zinc-fans@... > >=20 _______________________________________________ At 15:32 01/04/2005, Egon Willighagen wrote: >Hi all, > >another question: how are radical representated in CML? In light have radical >reactions? I've seen the <electron> element, but this does not quite seem to >cover it... The simplest is to use the spinMultiplicity attribute on atom or molecule. This should be able to distinguish between a molecular ion (e.g. C6H6+., C6H6-., and a phenyl radical C6H5.). If there are many electrons with different symmetry properties (e.g. in transition metal ions) then <electron> will have to be used, but there aren't any semantics yet. Chris Morley of Openbabel fame uses radicals a lot and I think these are sufficient. P. >Egon > > >------------------------------------------------------- >This SF.net email is sponsored by Demarc: >A global provider of Threat Management Solutions. >Download our HomeAdmin security software for free today! > >_______________________________________________ >cml-discuss mailing list >cml-discuss@... > Peter Murray-Rust Unilever Centre for Molecular Informatics Chemistry Department, Cambridge University Lensfield Road, CAMBRIDGE, CB2 1EW, UK Tel: +44-1223-763069 Fax: +44 1223 763076 Billy has done a great job in investigating XOM as a an alternative DOM and has fitted this to CIFDOM. This is a sufficiently major change that it shouldn't be in Jumbo4.6 which is primarily a maintenance branch. (My fault - I should have flagged this up earlier. Our intention is that continuing extensions should be in odd-numbered branches, the next one being the (unpopulated) Jumbo4.7. Gemma and I are working on CMLReact and will shortly be committing to Jumbo4.7. Can we add the CIFDOM to that as well? And presumably we can unwind the CVS to before XOM/CIFDOM. The longer term strategy is to move to Java 1.5 and this will be reflected in JUMBO5. At present the only work on JUMBO5 is me and until it does something useful it will probably stay within UCC. The main thrusts are: * support for Java5 constructs, especially templating and collection management * reducing the weight of CMLDOM (few methods, and maybe based on XOM) * unit tests (I have ca 450 units tests so far, almost all for Tools). The tests can hopefully be retrofitted to Jumbo4.* (probably 4.7) If anyone has useful thoughts on moving from W3CDOM, please let us know. P. Peter Murray-Rust Unilever Centre for Molecular Informatics Chemistry Department, Cambridge University Lensfield Road, CAMBRIDGE, CB2 1EW, UK Tel: +44-1223-763069 Fax: +44 1223 763076 Hi all, another question: how are radical representated in CML? In light have radical reactions? I've seen the <electron> element, but this does not quite seem to cover it... Egon
https://sourceforge.net/p/cml/mailman/cml-discuss/?viewmonth=200504
CC-MAIN-2017-51
refinedweb
4,622
56.05
- Code: Select all if obj.image_rect.collidepoint(pygame.mouse.get_pos()): i know of checking if a mask is overlapping another mask: - Code: Select all if obj.mask.overlap(second_mask, (offset_x, offset_y)): but how do you check if the mouse is within the mask? To be honest i would of thought this is long drawn out, but would of at least worked, but yet nope. - Code: Select all import pygame pygame.init() screensize = (400,400) screen = pygame.display.set_mode(screensize) image = pygame.Surface([50,50]) image.fill((255,255,255)) image_rect = image.get_rect() mask = pygame.mask.from_surface(image) mouse = pygame.Surface([1,1]) mouse_rect = mouse.get_rect() mouse_mask = pygame.mask.from_surface(mouse) run = True while run: screen.fill((0,0,0)) for event in pygame.event.get(): if event.type == pygame.QUIT: run = False #if image_rect.collidepoint(pygame.mouse.get_pos()): # print('mouse over box') mouse_rect = pygame.mouse.get_pos() x = mouse_rect[0] - image_rect.x y = mouse_rect[1] - image_rect.y if mask.overlap(mouse_mask, (x,y)): print('mouse over box') screen.blit(image, image_rect) pygame.display.flip()
http://python-forum.org/viewtopic.php?p=5978
CC-MAIN-2015-27
refinedweb
173
57.74
It’s a tradition in programming books to start with a canonical ‘Hello World’ example and whilst I’ve never felt the usual presentation is particularly enlightening, I know we can spice things up a little to provide useful insights into how we write Go programs. Let’s begin with the simplest Go program that will output text to the console (Listing 1). The first thing to note is that every Go source file belongs to a package, with the main package defining an executable program whilst all other packages represent libraries. 1 package main For the main package to be executable it needs to include a main() function, which will be called following program initialisation. 2 func main() { Notice that unlike C/C++, the main() function neither takes parameters nor has a return value. Whenever a program should interact with command-line parameters or return a value on termination, these tasks are handled using functions in the standard package library. We’ll examine command-line parameters when developing Echo. Finally let’s look at our payload. 3 println("hello world") The println() function is one of a small set of built-in generic functions defined in the language specification and which in this case is usually used to assist debugging, whilst "hello world" is a value comprising an immutable string of characters in utf-8 format. We can now run our program from the command-line (Terminal on MacOS X or Command Prompt on Windows) with the command $ go run 01.go hello world Packages Now we’re going to apply a technique which I plan to use throughout my book by taking this simple task and developing increasingly complex ways of expressing it in Go. This runs counter to how experienced programmers usually develop code but I feel this makes for a very effective way to introduce features of Go in rapid succession and have used it with some success during presentations and workshops. There are a number of ways we can artificially complicate our hello world example and by the time we’ve finished I hope to have demonstrated all the features you can expect to see in the global scope of a Go package. Our first change is to remove the built-in println() function and replace it with something intended for production code (see Listing 2). The structure of our program remains essentially the same, but we’ve introduced two new features. 2 import "fmt" The import statement is a reference to the fmt package, one of many packages defined in Go’s standard runtime library. A package is a library which provides a group of related functions and data types we can use in our programs. In this case, fmt provides functions and types associated with formatting text for printing and displaying it on a console or in the command shell. 4 fmt.Println("hello world") One of the functions provided by fmt is Println(), which takes one or more parameters and prints them to the console with a carriage return appended. Go assumes that any identifier starting with a capital letter is part of the public interface of a package whilst identifiers starting with any other letter or symbol are private to the package. In production code we might choose to simplify matters a little by importing the fmt namespace into the namespace of the current source file, which requires we change our import statement. 2 import . "fmt" And this consequently allows the explicit package reference to be removed from the Println() function call. 4 Println("hello world") In this case we notice little gain; however, in later examples we’ll use this feature extensively to keep our code legible (Listing 3). One aspect of imports that we’ve not yet looked at is Go’s built-in support for code hosted on a variety of popular social code-sharing sites such as GitHub and Google Code. Don’t worry, we’ll get to this in later chapters of my book. Constants A significant proportion of Go codebases feature identifiers whose values will not change during the runtime execution of a program and our ‘Hello World’ example is no different (Listing 4), so we’re going to factor these out. Here we’ve introduced two constants: Hello and world. Each identifier is assigned its value during compilation, and that value cannot be changed at runtime. As the identifier Hello starts with a capital letter the associated constant is visible to other packages – though this isn’t relevant in the context of a main package – whilst the identifier world starts with a lowercase letter and is only accessible within the main package. We don’t need to specify the type of these constants as the Go compiler identifies them both as strings. Another neat trick in Go’s armoury is multiple assignment so let’s see how this looks (see Listing 5). This is compact, but I personally find it too cluttered and prefer the more general form (Listing 6). Because the Println() function is variadic (i.e. can take a varible number of parameters) we can pass it both constants and it will print them on the same line, separate by whitespace. fmt also provides the Printf() function which gives precise control over how its parameters are displayed using a format specifier which will be familiar to seasoned C/C++ programmers. 8 Printf("%v %v\n", Hello, world) fmt defines a number of % replacement terms which can be used to determine how a particular parameter will be displayed. Of these %v is the most generally used as it allows the formatting to be specified by the type of the parameter. We’ll discuss this in depth when we look at user-defined types, but in this case it will simply replace a %v with the corresponding string. When parsing strings the Go compiler recognises a number of escape sequences which are available to mark tabs, new lines and specific unicode characters. In this case we use \n to mark a new line (Listing 7). Variables Constants are useful for referring to values which shouldn’t change at runtime; however, most of the time when we’re referencing values in an imperative language like Go we need the freedom to change these values. We associate values which will change with variables. What follows is a simple variation of our Hello World program which allows the value of world to be changed at runtime by creating a new value and assigning it to the world variable (Listing 8). There are two important changes here. Firstly we’ve introduced syntax for declaring a variable and assigning a value to it. Once more Go’s ability to infer type allows us assign a string value to the variable world without explicitly specifying the type. 4 var world = "world" However if we wish to be more explicit we can be. 4 var world string = "world" Having defined world as a variable in the global scope we can modify its value in main(), and in this case we choose to append an exclamation mark. Strings in Go are immutable values so following the assignment world will reference a new value. 6 world += "!" To add some extra interest, I’ve chosen to use an augmented assignment operator. These are a syntactic convenience popular in many languages which allow the value contained in a variable to be modified and the resulting value then assigned to the same variable. I don’t intend to expend much effort discussing scope in Go. The point of my book is to experiment and learn by playing with code, referring to the comprehensive language specification available from Google when you need to know the technicalities of a given point. However, to illustrate the difference between global and local scope we’ll modify this program further (see Listing 9). Here we’ve introduced a new local variable world within main() which takes its value from an operation concatenating the value of the global world variable with an exclamation mark. Within main(), any subsequent reference to world will always access the local version of the variable without affecting the global world variable. This is known as shadowing. The := operator marks an assignment declaration in which the type of the expression is inferred from the type of the value being assigned. If we chose to declare the local variable separately from the assignment we’d have to give it a different name to avoid a compilation error (Listing 10). Another thing to note in this example is that when w is declared it’s also initialised to the zero value, which in the case of string happens to be "". This is a string containing no characters. In fact, all variables in Go are initialised to the zero value for their type when they’re declared and this eliminates an entire category of initialisation bugs which could otherwise be difficult to identify. Functions Having looked at how to reference values in Go and how to use the Println() function to display them, it’s only natural to wonder how we can implement our own functions. Obviously we’ve already implemented main() which hints at what’s involved, but main() is something of a special case as it exist to allow a Go program to execute and it neither requires any parameters nor produces any values to be used elsewhere in the program. (See Listing 11.) In this example we’ve introduced world(), a function which to the outside world has the same operational purpose as the variable of the same name that we used in the previous section. The empty brackets () indicate that there are no parameters passed into the function when it’s called, whilst string tells us that a single value is returned and it’s of type string. Anywhere that a valid Go program would expect a string value we can instead place a call to world() and the value returned will satisfy the compiler. The use of return is required by the language specification whenever a function specifies return values, and in this case it tells the compiler that the value of world() is the string "world". Go is unusual in that its syntax allows a function to return more than one value and as such each function takes two sets of (), the first for parameters and the second for results. We could therefore write our function in long form as 7 func world() (string) { 8 return "world" 9 } In this next example we use a somewhat richer function signature, passing the parameter name which is a string value into the function message(), and assigning the function’s return value to message which is a variable declared and available throughout the function. (See Listing 12.) As with world(), the message() function can be used anywhere that the Go compiler expects to find a string value. However, where world() simply returned a predetermined value, message() performs a calculation using the Sprintf() function and returns its result. Sprintf() is similar to Printf() which we met when discussing constants, only rather than create a string according to a format and displaying it in the terminal it instead returns this string as a value which we can assign to a variable or use as a parameter in another function call such as Println(). Because we’ve explicitly named the return value, we don’t need to reference it in the return statement as each of the named return values is implied. (See Listing 13.) If we compare the main() and message() functions (Listing 14), we notice that main() doesn’t have a return statement. Likewise if we define our own functions without return values we can omit the return statement, though later we’ll meet examples where we’d still use a return statement to prematurely exit a function. In Listing 15, we’ll see what a function which uses multiple return values looks like. Because message() returns two values we can use it in any context where at least two parameters can be consumed. Println() happens to be a variadic function, which we’ll explain in a moment, and takes zero or more parameters so it happily consumes both of the values message() returns. For our final example (Listing 16) we’re going to implement our own variadic function. We have three interesting things going on here which need explaining. Firstly I’ve introduced a new type, interface{}, which acts as a proxy for any other type in a Go program. We’ll discuss the details of this shortly but for now it’s enough to know that anywhere an interface{} is accepted we can provide a string. In the function signature we use v …interface{} to declare a parameter v which takes any number of values. These are received by print() as a sequence of values and the subsequent call to Println(v…) uses this same sequence as this is the sequence expected by Println(). So why did we use …interface{} in defining our parameters instead of the more obvious …string? The Println() function is itself defined as Println(…interface{}) so to provide a sequence of values en masse we likewise need to use …interface{} in the type signature of our function. Otherwise we’d have to create a []interface{} (a slice of interface{} values, a concept we’ll cover in detail in a later chapter of my book) and copy each individual element into it before passing it into Println(). Encapsulation In this tutorial, we’ll for the most part be using Go’s primitive types and types defined in various standard packages without any comment on their structure; however, a key aspect of modern programming languages is the encapsulation of related data into structured types and Go supports this via the struct type. A struct describes an area of allocated memory which is subdivided into slots for holding named values, where each named value has its own type. A typical example of a struct in action would be Listing 17, which gives: $ go run 17.go Hello world Here we’ve defined a struct Message which contains two values: X and y. Go uses a very simple rule for deciding if an identifier is visible outside of the package in which it’s defined which applies to both package-level constants and variables, and type names, methods and fields. If the identifier starts with a capital letter it’s visible outside the package, otherwise it’s private to the package. The Go language spec guarantees that all variables will be initialised to the zero value for their type. For a struct type this means that every field will be initialised to an appropriate zero value. Therefore when we declare a value of type Message the Go runtime will initialise all of its elements to their zero value (in this case a zero-length string and a nil pointer respectively), and likewise if we create a Message value using a literal 19 m := &Message{} Having declared a struct type we can declare any number of method functions which will operate on this type. In this case we’ve introduced Print() which is called on a Message value to display it in the terminal, and Store() which is called on a pointer to a Message value to change its contents. The reason Store() applies to a pointer is that we want to be able to change the contents of the Message and have these changes persist. If we define the method to work directly on the value these changes won’t be propagated outside the method’s scope. To test this for yourself, make the following change to the program: 14 func (v Message) Store(x, y string) { If you’re familiar with functional programming then the ability to use values immutably this way will doubtless spark all kinds of interesting ideas. There’s another struct trick I want to show off before we move on and that’s type embedding using an anonymous field. Go’s design has upset quite a few people with an inheritance-based view of object orientation because it lacks inheritance; however, thanks to type embedding we’re able to compose types which act as proxies to the methods provided by anonymous fields. As with most things, an example (Listing 18) will make this much clearer. $ go run 18.go Hello world Hello world Hello world Here we’re declaring a type HelloWorld which in this case is just an empty struct, but which in reality could be any declared type. HelloWorld defines a String() method which can be called on any HelloWorld value. We then declare a type Message which embeds the HelloWorld type by defining an anonymous field of the HelloWorld type. Wherever we encounter a value of type Message and wish to call String() on its embedded HelloWorld value we can do so by calling String() directly on the value, calling String() on the Message value, or in this case by allowing fmt.Println() to match it with the fmt.Stringer interface. Any declared type can be embedded, so in Listing 19, we’re going to base HelloWorld on the primitive bool boolean type to prove the point. In our final example (Listing 20) we’ve declared the Hello type and embedded it in Message, then we’ve implemented a new String() method which allows a Message value more control over how it’s printed. $ go run 20.go Hello Hello world In all these examples we’ve made liberal use of the * and & operators. An explanation is in order. Go is a systems programming language, and this means that a Go program has direct access to the memory of the platform it’s running on. This requires that Go has a means of referring to specific addresses in memory and of accessing their contents indirectly. The & operator is prepended to the name of a variable or to a value literal when we wish to discover its address in memory, which we refer to as a pointer. To do anything with the pointer returned by the & operator we need to be able to declare a pointer variable which we do by prepending a type name with the * operator. An example (Listing 21) will probably make this description somewhat clearer, and we get: $ go run 21.go name = Ellie stored at 0x208178170 pointer_to_name references Ellie Go allows user-defined types to declare methods on either a value type or a pointer to a value type. When methods operate on a value type the value manipulated remains immutable to the rest of the program (essentially the method operates on a copy of the value) whilst with a pointer to a value type any changes to the value are apparent throughout the program. This has far-reaching implications which we’ll explore in later chapters. Generalisation Encapsulation is of huge benefit when writing complex programs and it also enables one of the more powerful features of Go’s type system, the interface. An interface is similar to a struct in that it combines one or more elements but rather than defining a type in terms of the data items it contains, an interface defines it in terms of a set of method signatures which it must implement. As none of the primitive types ( int, string, etc.) have methods they match the empty interface (interface{}) as do all other types, a property used frequently in Go programs to create generic containers. Once declared, an interface can be used just like any other declared type, allowing functions and variables to operate with unknown types based solely on their required behaviour. Go’s type inference system will then recognise compliant values as instances of the interface, allowing us to write generalised code with little fuss. In Listing 22, we’re going to introduce a simple interface (by far the most common kind) which matches any type with a func String() string method signature. $ go run 22.go Hello Hello world Hello Hello world Hello Hello world This interface is copied directly from fmt.Stringer, so we can simplify our code a little by using that interface instead: 11 type Message struct { 12 X fmt.Stringer 13 Y fmt.Stringer 14 } As Go is strongly typed interface values contain both a pointer to the value contained in the interface, and the concrete type of the stored value. This allows us to perform type assertions to confirm that the value inside an interface matches a particular concrete type (see Listing 23). go run 23.go false true true true true true Here we’ve replaced Message’s String() method with IsGreeting(), a predicate which uses a pair of type assertions to tell us whether or not one of Message’s data fields contains a value of concrete type Hello. So far in these examples we’ve been using pointers to Hello and World so the interface variables are storing pointers to pointers to these values (i.e. **Hello and **World) rather than pointers to the values themselves (i.e. *Hello and *World). In the case of World we have to do this to comply with the fmt.Stringer interface because String() is defined for *World and if we modify main to assign a World value to either field (see Listing 24) we’ll get a compile-time error: $ go run 24.go # command-line-arguments ./24.go:36: cannot use World literal (type World) as type fmt.Stringer in assignment: World does not implement fmt.Stringer (String method has pointer receiver) The final thing to mention about interfaces is that they support embedding of other interfaces. This allows us to compose a new, more restrictive interface based on one or more existing interfaces. Rather than demonstrate this with an example, we’re going to look at code lifted directly from the standard io package which does this (Listing 25). Here io is declaring three interfaces, the Reader and Writer, which are independent of each other, and the ReadWriter which combines both. Any time we declare a variable, field or function parameter in terms of a ReaderWriter, we know we can use both the Read() and Write() methods to manipulate it. Startup One of the less-discussed aspects of computer programs is the need to initialise many of them to a pre-determined state before they begin executing. Whilst this is probably the worst place to start discussing what to many people may appear to be advanced topics, one of my goals in this chapter is to cover all of the structural elements that we’ll meet when we examine more complex programs. Every Go package may contain one or more init() functions specifying actions that should be taken during program initialisation. This is the one case I’m aware of where multiple declarations of the same identifier can occur without either resulting in a compilation error or the shadowing of a variable. In the following example we use the init() function to assign a value to our world variable (Listing 26). However, the init() function can contain any valid Go code, allowing us to place the whole of our program in init() and leaving main() as a stub to convince the compiler that this is indeed a valid Go program (Listing 27). When there are multiple init() functions, the order in which they’re executed is indeterminate so in general it’s best not to do this unless you can be certain the init() functions don’t interact in any way. Listing 28 happens to work as expected on my development computer but an implementation of Go could just as easily arrange it to run in reverse order or even leave deciding the order of execution until runtime. HTTP So far our treatment of Hello World has followed the traditional route of printing a preset message to the console. Anyone would think we were living in the fuddy-duddy mainframe era of the 1970s instead of the shiny 21st Century, when web and mobile applications rule the world. Turning Hello World into a web application is surprisingly simple, as Listing 29 demonstrates. Our web server is now listening on localhost port 1024 (usually the first non-privileged port on most Unix-like operating systems) and if we visit the url with a web browser our server will return Hello World in the response body. The first thing to note is that the net/http package provides a fully-functional web server which requires very little configuration. All we have to do to get our content to the browser is define a handler, which in this case is a function to call whenever an http.Request is received, and then launch a server to listen on the desired address with http.ListenAndServe(). http.ListenAndServe returns an error if it’s unable to launch the server for some reason, which in this case we print to the console. We’re going to import the net/http package into the current namespace and assume our code won’t encounter any runtime errors to make the simplicity even more apparent (Listing 30). If you run into any problems whilst trying the examples which follow, reinserting the if statement will allow you to figure out what’s going on. HandleFunc() registers a URL in the web server as the trigger for a function, so when a web request targets the URL the associated function will be executed to generate the result. The specified handler function is passed both a ResponseWriter to send output to the web client and the Request which is being replied to. The ResponseWriter is a file handle so we can use the fmt.Fprint() family of file-writing functions to create the response body. Finally we launch the server using ListenAndServe(), which will block for as long as the server is active, returning an error if there is one to report. In this example (Listing 30) I’ve declared a function Hello and by referring to this in the call to HandleFunc() this becomes the function which is registered. However, Go also allows us to define functions anonymously where we wish to use a function value, as demonstrated in the following variation on our theme. Functions are first-class values in Go and in Listing 31 HandleFunc() is passed an anonymous function value which is created at runtime. This value is a closure so it can also access variables in the lexical scope in which it’s defined. We’ll treat closures in greater depth later in my book, but for now Listing 32 is an example which demonstrates their basic premise by defining a variable messages in main() and then accessing it from within the anonymous function. This is only a very brief taster of what’s possible using net/http so we’ll conclude by serving our hello world web application over an SSL connection (see Listing 33). Before we run this program we first need to generate a certificate and a public key, which we can do using crypto/tls/generate_cert.go in the standard package library. $ go run $GOROOT/src/pkg/crypto/tls/ generate_cert.go -ca=true -host="localhost" 2014/05/16 20:41:53 written cert.pem 2014/05/16 20:41:53 written key.pem $ go run 33.go This is a self-signed certificate, and not all modern web browsers like these. Firefox will refuse to connect on the grounds the certificate is inadequate and not being a Firefox user I’ve not devoted much effort to solving this. Meanwhile both Chrome and Safari will prompt the user to confirm the certificate is trusted. I have no idea how Internet Explorer behaves. For production applications you’ll need a certificate from a recognised Certificate Authority. Traditionally this would be purchased from a company such as Thawte for a fixed period but with the increasing emphasis on securing the web a number of major networking companies have banded together to launch Let’s Encrypt. It’s a free CA issuing short-duration certificates for SSL/TLS with support for automated renewal. If you’re anything like me (and you have my sympathy if you are) then the next thought to idle through your mind will be a fairly obvious question: given that we can serve our content over both HTTP and HTTPS connections, how do we do both from the same program? To answer this we have to know a little – but not a lot – about how to model concurrency in a Go program. The go keyword marks a goroutine which is a lightweight thread scheduled by the Go runtime. How this is implemented under the hood doesn’t matter, all we need to know is that when a goroutine is launched it takes a function call and creates a separate thread of execution for it. In Listing 34, we’re going to launch a goroutine to run the HTTP server then run the HTTPS server in the main flow of execution. When I first wrote this code it actually used two goroutines, one for each server. Unfortunately no matter how busy any particular goroutine is, when the main() function returns our program will exit and our web servers will terminate. So I tried the primitive approach we all know and love from C (see Listing 35). Here we’re using an infinite for loop to prevent program termination: it’s inelegant, but this is a small program and dirty hacks have their appeal. Whilst semantically correct this unfortunately doesn’t work either because of the way goroutines are scheduled: the infinite loop can potentially starve the thread scheduler and prevent the other goroutines from running. $ go version go version go1.3 darwin/amd64 In any event an infinite loop is a nasty, unnecessary hack as Go allows concurrent elements of a program to communicate with each other via channels, allowing us to rewrite our code as in Listing 36. For the next pair of examples we’re going to use two separate goroutines to run our HTTP and HTTPS servers, yet again coordinating program termination with a shared channel. In Listing 37, we’ll launch both of the goroutines from the main() function, which is a fairly typical code pattern. For our second deviation (Listing 38), we’re going to launch a goroutine from main() which will run our HTTPS server and this will launch the second goroutine which manages our HTTP server. There’s a certain amount of fragile repetition in this code as we have to remember to explicitly create a channel, and then to send and receive on it multiple times to coordinate execution. As Go provides first-order functions (i.e. allows us to refer to functions the same way we refer to data, assigning instances of them to variables and passing them around as parameters to other functions), we can refactor the server launch code as in Listing 39. However, this doesn’t work as expected, so let’s see if we can get any further insight $ go vet 39.go 39.go:23: range variable s captured by func literal exit status 1 Running go with the vet command runs a set of heuristics against our source code to check for common errors which wouldn’t be caught during compilation. In this case we’re being warned about this code 21 for _, s := range f { 22 go func() { 23 s() 24 done <- true 25 }() 26 } Here we’re using a closure so it refers to the variable s in the for... range statement, and as the value of s changes on each successive iteration, so this is reflected in the call s(). To demonstrate this, we’ll try a variant where we introduce a delay on each loop iteration much greater than the time taken to launch the goroutine (see Listing 40). When we run this we get the behaviour we expect with both HTTP and HTTPS servers running on their respective ports and responding to browser traffic. However, this is hardly an elegant or practical solution and there’s a much better way of achieving the same effect (Listing 41). By accepting the parameter server to the goroutine’s closure we can pass in the value of s and capture it so that on successive iterations of the range our goroutines use the correct value. Spawn() is an example of how powerful Go’s support for first-class functions can be, allowing us to run any arbitrary piece of code and wait for it to signal completion. It’s also a variadic function, taking as many or as few functions as desired and setting each of them up correctly. If we now reach for the standard library we discover that another alternative is to use a sync.WaitGroup to keep track of how many active goroutines we have in our program and only terminate the program when they’ve all completed their work. Yet again this allows us to run both servers in separate goroutines and manage termination correctly. (See Listing 42.) As there’s a certain amount of redundancy in this, let’s refactor a little by packaging server initiation into a new Launch() function. Launch() takes a parameter-less function and wraps this in a closure which will be launched as a goroutine in a separate thread of execution. Our sync.WaitGroup variable servers has been turned into a global variable to simplify the function signature of Launch(). When we call Launch() we’re freed from the need to manually increment servers prior to goroutine startup, and we use a defer statement to automatically call servers.Done() when the goroutine terminates even in the event that the goroutine crashes. See Listing 43.
https://accu.org/index.php/journals/2321
CC-MAIN-2020-29
refinedweb
5,640
56.39
fmod, fmodf, fmodl − floating-point remainder function #include <math.h> double fmod(double x, double y); float fmodf(float x, float y); long double fmodl(long double x, long double y); Link with −lm. Feature Test Macro Requirements for glibc (see feature_test_macros(7)): fmodf(), fmodl(): _BSD_SOURCE || _SVID_SOURCE || _XOPEN_SOURCE >= 600 || _ISOC99_SOURCE || _POSIX_C_SOURCE >= 200112L; or cc -std=c99 The fmod() function computes the floating-point remainder of dividing x by y. The return value is x − n * y, where n is the quotient of x / y, rounded toward zero to an integer. On success, these functions return the value x − (−0), and y is not zero, +0 (−0) 3.53 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at−pages/.
http://man.linuxtool.net/centos7/u2/man/3_fmodf.html
CC-MAIN-2021-25
refinedweb
129
62.78
The difference between Compile-time and run-time can be explained based on the difference in the time consume to carry out different processes.The stack and heap are both storage in RAM but their functionalities are different.Static, dynamic and automatic on the other hand are use for compile-time and run time processes.The detail explanation of each of the term is given below. Compile-time The time taken to compile by the compiler is known as compile-time.The compilation of C++ source code(.cpp) to produce .exe file also known as executable file takes place in four steps: i) In the first step ,the preprocessor includes header’s file contents , generate macro code , replace the alias name(if any is found) with the value defined by #define in the source code to produce .i file type known as intermediate file. ii) The file(intermediate file type ) generated above is converted to assembly language file(.s type ) by the assembler. iii) The assembly language file generated is converted to object file(.o type ). iv) The object file generated above is linked by the linker with object file of library functions to produce an .exe(executable) file. The time taken to perform the above task is known run-time has begun and ends when the program terminates or the black screen closes. Stack Stack and heap are both storage in RAM . The name stack refers to the way in which the data are stored in RAM.It follows a simple rule which ever variables is declared a space is allocated,this space is stored in stack or if initialized the value is stored in stack.This simply means the storage will follow last in first out rule. ii)Arguments of the function are also stored in stack. iii)Address of the function are also stored in stack. iv)Address of the line next to the function called : When a function is called the program direct it’s attention toward the code present under the function.But after the execution of the function code the program return to the next line of the function call.To return to the next line an address of the next line is necessary and so the address is push onto the stack. { str=str + “happy me.” ; return str ; } int main( ) { string st=”Happy you ” ; st=func( st ) ; cout<< st << endl ; cin.get( ) ; return 0; } In the code above the variable st , address of the func( ), the argument type of memory space while running your program then you have to allocate space in heap.Allocating space in heap is done by using the keyword new .Since the compiler does not know the life time of space allocated in heap it is the programmer’s responsibility to delete it to prevent any memory leakage.Deleting the space in heap is done by using the keyword delete . { int i = 1 ; ///Space allocated in stack int *ii = new int( 23 ) ; ///Space allocated in heap cout<< i*(*ii) ;delete ii ; ///Heap storage deleted cin.get( ) ; return 0; } The program above is a simple example of allocating memory in heap for ‘ii‘ pointer and deleting it again before the program terminates.The compiler will delete the space allocated for i because it was allocated in stack. Link : C++11 Dynamically allocating memory in C++ with new and delete operators. Static The term static is used for variables and objects whose lifetime is same as the lifetime of the program itself(i.e it exist until the program exit from the main()).Some of the static objects and variables are global variable,global object,object and variable either global or local and which are declared static ,object with namespace are also of static type. Stack is the name of the memory storage where the local object and variable resides and dynamic objects reside in heap memory pool,unlike the static or dynamic object there is no specific name for the memory pool where the static type reside. { int i ; public: static string st ; ///local static int variable Test_static( ) { } ~Test_string( ) { } } ; int ii=0; ///static Global int variable string test_static::st=”Sad” ; ///initializing class static variable int main( ) { static int i=0; ///local static variable Test_static ts(89) ; cout << ts.st << endl ; cin.get(); return 0; } Dynamic Dynamic term is used for run-time processes like the storage made during run-time using the keyword new is known as dynamic storage or dynamic memory allocation. Automatic By automatic we refer to those memory allocated during the compile-time.Such memory life-time is determine by the compiler and so the compiler knows when to delete them.For such memory the compiler holds the responsibility of freeing the memory. Code example { int i=90; //atomatic storage static s=89; //static storage int *imem=new int(80); //dynamic storage {//local scope char c[]=”New”; //automatic storage } delete imem; return 0; } Link: Local scope,global scope
https://corecplusplustutorial.com/compile-time-run-time-stack-heap-static-dynamic-c/
CC-MAIN-2017-30
refinedweb
821
62.38
Creating ASP.NET Web APIs on Azure Web Sites Last month the new Windows Azure features were released, and one of them which I found quite interesting was the “Web Sites”. Currently I have an account at GoDaddy.com where I host my “pet projects”, and now that Windows Azure has a similar feature at a similar pricing plan (for the preview we can have up to 10 sites at no cost, with some limited resource utilization), I felt compelled to “eat my own dogfood” and try it out. And I really loved it – especially the integration with the Git source control, where uploading new bits to the site feels like a breeze. Before I start to look too much like a homer, I liked the Git integration better than the GoDaddy interface because I prefer working with command-line interfaces. If you’re into GUI, you can use something like a FTP client to talk to GoDaddy, or one of the many Git shells (which I haven’t tried), so which one is better would depend on your opinion of the tools. So I decided to convert one of my old projects which I have hosted in the GoDaddy account to work in an Azure Web Site. The project, a tool which converts a JSON document into a class (or series of classes) and an operation contract that can be used in WCF, was written using some classes from a Codeplex project which isn’t supported anymore (WCF Web APIs). So when migrating to Azure, I decided to also try the integration between that and the replacement for that project, ASP.NET Web APIs. Here’s a step-by-step account of what I did, hopefully you’ll find it useful when trying to do something similar. So here’s a very detailed step-by-step of what I did – and similarly, what you’d do – to create my API and host it in Azure. I hope you like huge blog posts… If you’re only interested in the tool to convert JSON to data contracts: see it at, or you can get the source code in the MSDN Code Gallery. 1. Create the site The first thing to do is to create the new Web Site which will host the API. I tried the name jsontodatacontract, and it was available: After creating the web site, there are a few options for deploying the site. We could use the Publish option in any web project (or web site) in Visual Studio, or we can also use the integrated source control system in the web sites. Since I want to have source control, and I like Git’s simplicity, I’ll choose that option. Now the repository is created (one click!), and it’s ready to be used. The next page shows the next steps we can take to upload data to the site. 2. Create the project So far we don’t have any data to push to the web site, so let’s take a back off the Azure portal for a while, and let’s create a new project which we’ll use. Since all I want (for now) is to build a service, I’ll create an empty project, so I’ll use an Empty ASP.NET Web Application instead of the full MVC4 application This will make our code as small as possible, but we’ll need to deal with things such as routing ourselves – which are not too hard, as we’ll see shortly. Another advantage of this method (empty project) is that we don’t need to install anything, and to get the ASP.NET MVC 4 templates you need to, well, install the ASP.NET MVC 4 templates (which currently are in the RC version, and many people – myself included – don’t like to install pre-release software in their machines). The RC is at a good quality already, but since in this case I don’t need it, I like to keep things simple whenever possible. So let’s open the New Project dialog, and create an empty ASP.NET Web Application. I’ll save it in the c:\git directory, but that’s just my preference (the root of my Git repositories), it could be created in any folder in your machine. One setting which we should use is the “Create directory for solution” checkbox, as it will allow us to use the NuGet Package Restore feature, which I’ll cover later on. Once the project is created, let’s add a reference to the binaries for Web API. Right-clicking the project, select “Manage NuGet Packages…”, and that will bring NuGet’s dialog, where we can search for “webapi”, and select the “Microsoft ASP.NET Web API (RC)” package (while the final version isn’t available). Clicking “Install” will download that package (and all of its dependencies). It will also add them as references to the project, so they can be used right away. One parenthesis here – I started using NuGet about a year ago, and I really like it. It’s a feature which comes installed by default on Visual Studio 2012. For Visual Studio 2010, you can install it, from a link in the main NuGet page:. My only gripe is that it doesn’t work on the express editions of VS 2010 (C# or VB) – it does work on Visual Web Developer, though. I think it will work on the express editions for VS 2012 as well, though. Ok, back to the project. We need to create a controller class which implements our service. In Web APIs, controller classes can be defined anywhere in the project, but to keep with the convention, I’ll add it in a folder called “Controllers”, which I’ll add to the project. And we can now add our controller class in that folder, by right-clicking on the folder and selecting “Add” –> “New Item”, then selecting a new Web API Controller Class. For now, let’s remove the implementation of the class, and add a pair of Post / Get methods to see if the deployment to Azure will work. - public class JsonToDataContractController : ApiController - { - public HttpResponseMessage Post(HttpRequestMessage value) - { - var response = Request.CreateResponse(HttpStatusCode.OK); - response.Content = new StringContent("I'll be running on an Azure Web Site!"); - return response; - } - - public HttpResponseMessage Get() - { - var response = this.Request.CreateResponse(HttpStatusCode.OK); - response.Content = new StringContent("I'll be running on an Azure Web Site!"); - return response; - } - } One more thing with the project: since we used an empty template, we need to add the routing code ourselves. So let’s add a new Global Application Class And add the routing in the code: - using System; - using System.Web.Http; - - namespace JsonToDataContract - { - public class Global : System.Web.HttpApplication - { - protected void Application_Start(object sender, EventArgs e) - { - GlobalConfiguration.Configuration.Routes.MapHttpRoute( - name: "api", - routeTemplate: "api/{controller}", - defaults: new { controller = "JsonToDataContract" }); - } - } - } At this point we can “F5” our application to see if it works. By default the project will browse to the root “/”, but our controller is at the “api/” URL (based on the route template), so we need to browse to that location. And since also added a Post operation, we should be able to call it as well, which I can do with Fiddler: 3. Add source control using Git Ok, we now have a Web API which is functional, so we can start the deployment process. The last page in the portal showed what to do, so let’s go to the root of the application (c:\git\JsonToDataContract) and initialize our git repository. Now we can see which files Git wants to track by using the git status command There are two things which Git wants to track but we don’t really need to deploy. One is the .suo (Solution User Options) file for the solution, which doesn’t need to be shared. The other are the NuGet packages which are stores in the packages/ directory. Git, like other DVCS, don’t work too well with binary files, since updating them can cause the repository to grow a lot over time. Thankfully, NuGet has a feature called NuGet Package Restore, which allows us to bypass checking the packages in, and during the build it will download any missing packages locally. To do that, let’s right-click the solution in the VS Solution Explorer, and choose the “Enable NuGet Package Restore” option. What the feature did was to add a new solution folder (.nuget), and on it add a new targets file and a small NuGet executable which can be used, during build, to download missing packages. Now we can exclude the packages directory (along with the .suo file) from Git, and to do that we’ll need a .gitignore file on the Git root (in my case, c:\git\JsonToDataContract\.gitignore). Now we can add the project, and see what is to be committed. And finally let’s commit our changes: 4. Deploy to Azure So we committed the changes to our local repository, but nothing has been pushed to Azure yet. Let’s do that, again following the instructions on the Azure Web Site page: A lot happened when we pushed to the web site. First, the we pushed our repository to the server. There, when the transfer finished, the site started the automatic deployment, by building the project. Notice that since we excluded the NuGet packages (but enabled package restore), the packages were downloaded during the build as well. And when the build was done, the site was deployed, so we can go to and see the same page that we saw locally: And our Web API is running on Azure! 5. Checkpoint Ok, what do we have so far? We created a web site on Azure, set up Git publishing, created a project using Web API via NuGet, and deployed the project to Azure. As far as “how to create and deploy a Web API in Azure”, that’s it. I’ll continue on, though, to finish the project which I set out to do. 6. Update controller – converting JSON to classes Ok, let’s make the controller do what it’s supposed to do. Since I already had a project which did something similar, I’ll reuse some of the code from that project in this one and let me start saying that this is definitely not the most beautiful code you’ll see, but since it works, I decided not to fiddle too much with it. The task of converting arbitrary JSON to classes which represent it can be broken down in two steps. First, we need to look at the JSON and see if it has a structure which actually can be represented in classes – there are some which don’t, such as an array containing both objects and primitives. By doing that we can load the JSON into a memory structure, similar to a DOM, with information about the types. Second, we need to traverse that DOM and convert it into the classes which we’ll use to consume and produce that JSON. For the first part, I’ll define a class called JsonRoot, which can represent either a primitive type (string, boolean, numbers) or a complex type which contains members. The members (or the root itself) can be part of an array, so we also store the rank (or number of dimensions) of the array in the JsonRoot type (“rank” is an overloaded term which if more often used with rectangular array, but in this scenario it actually means jagged arrays which are supported by the serializers). Notice that this class could (maybe should?) be better engineered, split in two so that each type has its own behavior, but I won’t go there, at least not in this iteration. - public class JsonRoot - { - public bool IsUserDefinedType { get; private set; } - public Type ElementType { get; private set; } - public string UserDefinedTypeName { get; private set; } - public int ArrayRank { get; private set; } - public IDictionary<string, JsonRoot> Members { get; private set; } - - private JsonRoot Parent { get; set; } - - private JsonRoot(Type elementType, int arrayRank) - { - this.Members = new Dictionary<string, JsonRoot>(); - this.IsUserDefinedType = false; - this.ElementType = elementType; - this.ArrayRank = arrayRank; - } - - private JsonRoot(string userDefinedTypeName, int arrayRank, IDictionary<string, JsonRoot> members) - { - this.IsUserDefinedType = true; - this.UserDefinedTypeName = userDefinedTypeName; - this.ArrayRank = arrayRank; - this.Members = members; - } - - public static JsonRoot ParseJsonIntoDataContract(JToken root, string rootTypeName) - { - if (root == null || root.Type == JTokenType.Null) - { - return new JsonRoot(null, 0); - } - else - { - switch (root.Type) - { - case JTokenType.Boolean: - return new JsonRoot(typeof(bool), 0); - case JTokenType.String: - return new JsonRoot(typeof(string), 0); - case JTokenType.Float: - return new JsonRoot(typeof(double), 0); - case JTokenType.Integer: - return new JsonRoot(GetClrIntegerType(root.ToString()), 0); - case JTokenType.Object: - return ParseJObjectIntoDataContract((JObject)root, rootTypeName); - case JTokenType.Array: - return ParseJArrayIntoDataContract((JArray)root, rootTypeName); - default: - throw new ArgumentException("Cannot work with JSON token of type " + root.Type); - } - } - } - } Parsing primitive types is trivial, as shown above. Parsing objects is also not hard – recursively parse the members and create a user-defined type with those members. When we get to arrays is where the problem starts happening. JSON arrays can contain arbitrary objects, so we need some merging logic so that two similar items can be represented by the same data type. Here are some examples to illustrate the issue: - The array [1, 1234, 1234567, 12345678901] at first can be represented as an array of Int32 values, but the last value is beyond the range of that type, so we must use an array of Int64 instead. - Almost all elements in the array [true, false, false, null, true] can be represented by Boolean, except the 4th one. In this case, we can use Nullable<Boolean> instead. - This array [1, 2, “hello”, false] could potentially be represented as an array of Object, but that loses too much information, so I decided that it wouldn’t be implemented in this iteration - Arrays of objects are more complex. In order to merge two elements of the array, we need to merge the types of all members of the objects, including extra / missing ones. This array – [{“name”:”Scooby Doo”, “breed”:”great dane”}, {“name”:”Shaggy”,”age”:"19}] – would need a type with three members (name, breed, age). And so on. So here’s how we can parse an array as a JsonRoot. After we try to merge all the elements of the array into one JsonRoot type, we’ll create a new JsonRoot object incrementing the ArrayRank property. - private static JsonRoot ParseJArrayIntoDataContract(JArray root, string rootTypeName) - { - if (root.Count == 0) - { - return new JsonRoot(null, 1); - } - - JsonRoot first = ParseJsonIntoDataContract(root[0], rootTypeName); - for (int i = 1; i < root.Count; i++) - { - JsonRoot next = ParseJsonIntoDataContract(root[i], rootTypeName); - JsonRoot mergedType; - if (first.CanMerge(next, out mergedType)) - { - first = mergedType; - } - else - { - throw new ArgumentException(string.Format("Cannot merge array elements {0} ({1}) and {2} ({3})", - 0, root[0], i, root[i])); - } - } - - if (first.IsUserDefinedType) - { - return new JsonRoot(first.UserDefinedTypeName, first.ArrayRank + 1, first.Members); - } - else - { - return new JsonRoot(first.ElementType, first.ArrayRank + 1); - } - } The code for merging two types can be found in the sample in the code gallery (link on the bottom of this post). Now, we have a data structure which says which types we need to generate. Let’s move on to the code generation part. In this example (and in the original post), I’m using the types in the System.CodeDom namespace, since it gives me for free the generation in different languages. I’ll add a new class, JsonRootCompiler, which has one method which will write all the types corresponding to the given JsonRoot object (and the root of any members of that object as well) to a text writer. - public class JsonRootCompiler - { - private SerializationModel serializationModel; - private string language; - - /// <summary> - /// Creates a new instance of the JsonRootCompiler class. - /// </summary> - /// <param name="language">The programming language in which the code will be generated.</param> - /// <param name="serializationModel">The serialization model used in the classes.</param> - public JsonRootCompiler(string language, SerializationModel serializationModel) - { - this.language = language; - this.serializationModel = serializationModel; - } - - public void GenerateCode(JsonRoot root, TextWriter writer) - { - CodeCompileUnit result = new CodeCompileUnit(); - result.Namespaces.Add(new CodeNamespace()); - GenerateType(result.Namespaces[0], root, new List<string>()); - CodeDomProvider provider = CodeDomProvider.CreateProvider(this.language); - CodeGeneratorOptions options = new CodeGeneratorOptions(); - options.BracingStyle = "C"; - provider.GenerateCodeFromCompileUnit(result, writer, options); - } - - private string GenerateType(CodeNamespace ns, JsonRoot root, List<string> existingTypes) - { - if (!root.IsUserDefinedType) return null; - - CodeTypeDeclaration rootType = new CodeTypeDeclaration(GetUniqueDataContractName(root.UserDefinedTypeName, existingTypes)); - existingTypes.Add(rootType.Name); - rootType.Attributes = MemberAttributes.Public; - rootType.IsPartial = true; - rootType.IsClass = true; - ns.Types.Add(rootType); - rootType.Comments.Add( - new CodeCommentStatement( - string.Format( - "Type created for JSON at {0}", - string.Join(" --> ", root.GetAncestors())))); - - AddAttributeDeclaration(rootType, rootType.Name, root.UserDefinedTypeName); - AddMembers(ns, rootType, root, existingTypes); - return rootType.Name; - } - } Again, the bulk of the implementation can be found in the MSDN Code Gallery sample. And we have the two pieces that we need, we can update the code for the controller to use them. The Post method has 4 parameters, 3 of type string (which, by default, are expected to come via the request URI, as query string parameters), and of of type JToken, which can be read by the JSON formatter in the Web API framework (it will be read from to the request body). - public HttpResponseMessage Post(JToken value, string rootTypeName, string language, string serializationModel) - { - JsonRoot root = JsonRoot.ParseJsonIntoDataContract(value, rootTypeName); - StringBuilder sb = new StringBuilder(); - using (StringWriter sw = new StringWriter(sb)) - { - JsonRootCompiler compiler = new JsonRootCompiler( - language, - (SerializationModel)Enum.Parse(typeof(SerializationModel), serializationModel, true)); - compiler.GenerateCode(root, sw); - var response = Request.CreateResponse(HttpStatusCode.OK); - response.Content = new StringContent(sb.ToString()); - return response; - } - } Let’s test it now. Since we only have a POST method, we’ll need to either create a custom client, or use something like Fiddler to do that. 7. Creating a simple front-end At this point our API is done, and can be deployed to Azure via Git. But to make the web site more usable, let’s create a simple front-end where users don’t need to use a tool such as Fiddler to generate JSON. My design skills are quite rudimentary at best, so I won’t even try to make anything fancy and will just use a few HTML controls instead. Let’s first add a new HTML page. Based on the configuration of my Azure Web Site, the first file it will look for when browsing to it is called Default.htm (you can see that in the bottom of the “Configure” page in the portal), so let’s create one in our project. And add some code to it: - <body> - <h1>JSON to Data Contract (or JSON.NET) types</h1> - <p>Root class name: <input type="text" id="rootClassName" value="RootClass" size="50" /></p> - <p>Output programming language: - <select id="progLanguage"> - <option value="CS" selected="selected">C#</option> - <option value="VB">VB.NET</option> - </p> - <p>Serialization Model: - <select id="serializationModel"> - <option value="DataContractJsonSerializer" selected="selected">DataContract</option> - <option value="JsonNet">JSON.NET</option> - </p> - <p><b>JSON document:</b><br /> - <textarea id="jsonDoc" rows="10" cols="60"></textarea></p> - <p><input type="button" id="btnGenerate" value="Generate code!" /></p> - <p><b>Classes:</b><br /> - <textarea id="result" rows="10" cols="60"></textarea></p> - </body> Now we need to hook up the button to call our API. We could do that using the native XmlHttpRequest object, but I found that using jQuery is a lot simpler, and we can use NuGet to add a reference to jQuery to our project, so why not? We need to add a reference to the jQuery.js on the default.htm file; the easiest way is to simply drag the file jQuery.1.7.2.js from the scripts folder (where it NuGet package created it), and drop it within the <head> section in the HTML file. - <script type="text/javascript"> - $("#btnGenerate").click(function () { - var url = "./api/JsonToDataContract?language=" + $("#progLanguage").val() + - "&rootTypeName=" + $("#rootClassName").val() + - "&serializationModel=" + $("#serializationModel").val(); - var body = $("#jsonDoc").val(); - $.ajax({ - url: url, - type: "POST", - contentType: "application/json", - data: body, - dataType: "text", - success: function (result) { - $("#result").val(result); - }, - error: function (jqXHR) { - $("#result").val("Error: " + jqXHR.responseText); - } - }); - }); - </script> When we click the button, it will then send a request to the Web API, and we can test it out. 8. Redeploying to Azure Ok, we’re getting close to the end, I promise. Let’s go back to our command prompt and see what Git is tracking now, using git status: We have many items which we need to add to the staging area, so let’s do that. Depending on the configuration of your Git client, you may see some warnings about CR/LF mismatches in the jQuery files, but they can be safely ignored. Now let’s commit the changes locally. And we’re now finally ready to push to Azure: The deployment succeeded again! That means that we should be ready to go. Let’s try browsing to, and try out the service… And it’s working! On the cloud! 9. Wrapping up This was probably the longest post I’ve written, but I don’t think this scenario is complex. I wanted to give a full step-by-step description on how to develop ASP.NET Web APIs and deploy them to Azure Web Sites, and the huge number of images is what made this post longer than most. If you like this format, please let me know and I’ll write some more like this, otherwise I’ll go back to shorter posts since they’re just easier to write :-)
https://docs.microsoft.com/en-us/archive/blogs/carlosfigueira/creating-asp-net-web-apis-on-azure-web-sites
CC-MAIN-2020-29
refinedweb
3,590
55.24
Hello! So, I created my own Arraylist class called myArrayList. I need to read a file that contains strings (which are just short phrases) into myArrayList. I created a class called Phrase that are to be the objects that will be stored into the Arraylist. I also have a class called fileReader that simply reads the file. I'm getting an Exception in thread "main" java.lang.OutOfMemoryError: Java heap space at the grow() method in myArraylist at add() method in myArraylist at main() method Here's my code: public class myArrayList { private Object array[]; private int totalSize; // the capacity of the array private int currentItems; // the number of items currently stored in the array public myArrayList(int n) { array = new Object[n]; totalSize = n; currentItems = 0; } public void add(Object i) { // Add item to the end of the array list try { array[currentItems] = i; currentItems++; } catch (ArrayIndexOutOfBoundsException a) { grow(); /****ERROR here*/ array[currentItems++] = i; } } private void grow() { // Doubles totalSize in array list Object[] newArray = new Object[2 * totalSize]; /****ERROR here*/ for (int i = 0; i < currentItems; i++) { newArray[i] = array[i]; } array = newArray; totalSize = 2 * totalSize; } } public class Phrase{ private String phrase; public String getPhrase() { return phrase; } public void setPhrase(String phrase) { this.phrase = phrase; } } public static void main(String[] args) { myArrayList newArrayList = new myArrayList(20); fileReader myFileReader = new fileReader( "file.xml"); Phrase myPhrase = myFileReader.readFile(); Phrase newPhrase = new Phrase(myPhrase.getPhrase()); while (myPhrase != null) { myArrayList.add(newPhrase); /****ERROR here*/ } } I don't know if what I'm trying to do is even headed in the right direction. Also, I do not know how to test to see what is in myArrayList once I stop getting errors. Later, I need to sort and remove duplicates in myArrayList. Thanks in advance!
https://www.daniweb.com/programming/software-development/threads/222773/reading-file-into-arraylist
CC-MAIN-2017-09
refinedweb
291
60.55
You are referencing a type that has a higher version number than the version number in a referenced assembly. For example, you have two assemblies, A and B. A references a class myClass that was added to assembly B in version 2.0. But the reference to assembly B specifies version 1.0. The compiler has unification rules for binding references, and a reference to version 2.0 cannot be satisfied by version 1.0. This sample consists of four code modules: Two DLLs that are identical except for a version attribute. A DLL that references them. A client. The following is the first of the identical DLLs. // CS1705_a.cs // compile with: /target:library /out:c:\\cs1705.dll /keyfile:mykey.snk [assembly:System.Reflection.AssemblyVersion("1.0")] public class A { public void M1() {} public class N1 {} public void M2() {} public class N2 {} } public class C1 {} public class C2 {} The following is version 2.0 of the assembly, as specified by the AssemblyVersionAttribute attribute. // CS1705_b.cs // compile with: /target:library /out:cs1705.dll /keyfile:mykey.snk using System.Reflection; [assembly:AssemblyVersion("2.0")] public class A { public void M2() {} public class N2 {} public void M1() {} public class N1 {} } public class C2 {} public class C1 {} Save this example in a file named CS1705ref.cs and compile it with the following flags: /t:library /r:A2=.\bin2\CS1705a.dll /r:A1=.\bin1\CS1705a.dll // CS1705_c.cs // compile with: /target:library /r:A2=c:\\CS1705.dll /r:A1=CS1705.dll extern alias A1; extern alias A2; using a1 = A1::A; using a2 = A2::A; using n1 = A1::A.N1; using n2 = A2::A.N2; public class Ref { public static a1 A1() { return new a1(); } public static a2 A2() { return new a2(); } public static A1::C1 M1() { return new A1::C1(); } public static A2::C2 M2() { return new A2::C2(); } public static n1 N1() { return new a1.N1(); } public static n2 N2() { return new a2.N2(); } } The following sample references version 1.0 of the CS1705.dll assembly. But the statement Ref.A2().M2() references the A2 method in the class in CS1705_c.dll, which returns an a2, which is aliased to A2::A, and A2 references version 2.0 via an extern statement, thus causing a version mismatch. The following sample generates CS1705. // CS1705_d.cs // compile with: /reference:c:\\CS1705.dll /reference:CS1705_c.dll // CS1705 expected class Tester { static void Main() { Ref.A1().M1(); Ref.A2().M2(); } }
http://msdn.microsoft.com/en-us/library/416tef0c(VS.80).aspx
crawl-002
refinedweb
403
60.41
Made by @JBYT27NOTE: If you haven't already, try visiting here for more info on Python Made by @JBYT27 NOTE: If you haven't already, try visiting here for more info on Python import print time.sleep() \ import os import time TIP: If you press the button on the top right corner, it will automatically copy it for you! import os, time from time import sleep import os poggers NOTE: You'll always have one folder, unless your animation has a series of animations looped together(we're not talking about that this tutorial) * | - NOTE: So far you should have 2 files and one folder Test NOTE: Remember, these were made in a regular file, not a .py file. * * * * * * * * * * * * * * * * * * * * NOTE: I have not included ALL the files, only some of them. You will have to create the other files. | | | | | | | | | | | | | | | | | | | | Note: Some of these animations you can also do in python. However, I will talk about this in the next subject. main.py printing NOTE: If you haven't already looked at the header '2.211) The Basics/The Easy way or the Hard way/Filing/ASCII ART', you might as well do so. print("/\\/\\/\\/\\/\\/\\") Output: /\/\/\/\/\/\ NOTE:We use the os module to cover this section. os os.system('clear') import os ... def clear():#call this whatever os.system('clear') NOTE: Just to make sure, you can call the function whatever you want. ;) Red = "\033[0;31m" Green = "\033[0;32m" Orange = "\033[0;33m" Blue = "\033[0;34m" Purple = "\033[0;35m" Cyan = "\033[0;36m" White = "\033[0;37m" black = "\033[0;30m" red = "\033[0;91m" green = "\033[0;92m" yellow = "\033[0;93m" blue = "\033[0;94m" magenta = "\033[0;95m" cyan = "\033[0;96" bright_white = "\033[0;97m" cyan_back = "\033[0;46m" purple_back = "\033[0;45m" white_back = "\033[0;47m" blue_back = "\033[0;44m" orange_back = "\033[0;43m" green_back = "\033[0;42m" pink_back = "\033[0;41m" grey_back = "\033[0;40m" grey = '\033[38;4;236m' bold = "\033[1m" underline = "\033[4m" italic = "\033[3m" darken = "\033[2m" invisible='\033[08m' reverse='\033[07m' reset='\033[0m' grey = "\x1b[90m" NOTE: In this topic I will be talking about how to use this function. for i in range(0):#any int clear()#or your function for line in range(len(open("Folder/frame"+ str(i + 1)).readlines())): print(open("Folder/frame"+ str(i + 1)).readlines()[line],end="") time.sleep(0)#any int NOTE: The for loops could be nested, it could be for x in range or for i in range or whatever. ;) for for x in range for i in range import os, time print(""" .__________________________. | .___________________. |==| | | Apple | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | ] | | ,| | !___________________! |(c| !_______________________!__! | ___ -= ___ -= | ,| | ---[_]--- ---[_]--- |(c| !_______________________!__! / \\ / [][][][][][][][][][][][][] \\ / [][][][][][][][][][][][][][] \\ ( [][][][][____________][][][][] ) \ ------------------------------ / \______________________________/ """) time.sleep(2) os.system('clear') .__________________________. | .___________________. |==| | | Apple | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | ] | | ,| | !___________________! |(c| !_______________________!__! | ___ -= ___ -= | ,| | ---[_]--- ---[_]--- |(c| !_______________________!__! / \ / [][][][][][][][][][][][][] \ / [][][][][][][][][][][][][][] \ ( [][][][][____________][][][][] ) \ ------------------------------ / \______________________________/ NOTE: The Reason why I added two backslashes was because regularly it is an escape slash in python. Also, the reason why the second backslash disappears in the output is that it now knows that we're trying to make it a symbol, so it switches it to a regular backslash. NOTE: Remember that my code WILL be different than yours, mine will just reference yoursANOTHER NOTE: My code will only show the main.py, so look through my files for those. import time, os, random #colors - i only used some red = "\033[0;31m" green = "\033[0;32m" yellow = "\033[0;33m" blue = "\033[0;34m" magenta = "\033[0;35m" cyan = "\033[0;36m" white = "\033[0;37" def clear(): os.system('clear') color_list=[red,green,yellow,magenta,cyan,bright_blue,bright_cyan,bright_green,bright_magenta,bright_red,bright_yellow] clear() for x in range(5): for i in range(9): f_color = random.choice(color_list) clear() for line in range(len(open("fireworks/frame" + str(i + 1) ).readlines())): print(f_color+open("fireworks/frame" + str(i + 1) ).readlines()[line],end="") time.sleep(0.1) time.sleep(0.5) clear() LAST NOTE OF TUTORIAL: Please don't ask for editing access, it annoys me, and you can just fork the repl. That would help both me and you, because you wouldn't have to wait so long for my permission. Thanks a lot again! ✌☮ @JBYT27 @JBYT27 🐍🎞️🎬 Python Animation! 🐍🎞️🎬 HAHAHAAH, the emoji md works! >:) Make your own animation - Tutorial! 🐍 A step by step tutorial on making a python animation 🐍 Contents: import's time.sleep() 1) About: This tutorial is all about making your first or your non-first animation. Basically, this tutorial is about animating in python. [pogchamp] This was a sudden idea, so some details might be left out. [thonk] 2) The Basics: The basics of an animation that you should remember are the following: time.sleep()'s every so often. \'s to allow it to actually show (This will be talked about later on into the tutorial) Remember the following, and you'll be just fine (or will you...) 2.1) The Basics/imports The only modules you need for animation making are the following: So far, you will have the following code: Both code has correct syntax. However, the thing with python (and many other langs) is that you can import certain librarys from a module. Like this: In this case, it would import the library sleep, from the module time. So any three of those would work. Let's move on now! 2.2) The Basics/The Easy way or the Hard way There are multiple ways to actually code animations. But in this tutorial, we'll talk about 2 ways: So let me start with the file animation. 2.21) The Basics/The Easy way or the Hard way/Filing So the way filing animation works is that it uses files. Say you wanted to name a file poggers. Then just click the file button: However, to store it evenly and organized, click the folder button: So far, you should have a folder and a file (which it is in the folder) The folder and file should also be named; I would suggest: Decide on what kind of animation you're going to make. I suggest making something easy, as it takes pretty long to code the ASCII ART. (This animation you're making in the file is ASCII ART. Click here) Some common examples you might want to try are: *) |) -) I'll choose an example from there and lets say I choose 'Snow'. So then, I would make a test file. So there are 2 ways you can make the file, since you made the folder. Testor anything. Then drag the file onto the folder. Testor whatever. Either way works! So far, we've kinda done the basics, making the file and folder. Now, we do the ASCII code. I can't really explain much about it because this is about your creativity. However, I'll try to explain my best. 2.211) The Basics/The Easy way or the Hard way/Filing/ASCII ART I can't explain much about this part of the tutorial. This is really up to your thinking and creativity. I did say I would try my best though, so lets get right into it. [poggers] So think about your animation plan for a sec. What kind of ASCII character are you going to use?* What will be the scene? Will there be characters? ... *Keyboard buttons Think about those questions and answer them (in your mind, not to me ;)). If you can, your animation is probably possible! If not, try again to think a new idea. This was, however not really talking about ASCII art. So I'll show you a few examples and go on ot the next subject. Snow: Rain: And that's it! (kinda) Let's move on to the next one! 2.22) The Basics/The Easy way or the Hard way/ print's So. That is a lot. lol. Anyway, if any of you guys are stuck, please notify me in the comments! This time, we're gonna focus on making an animation in main.py(in python). Like in the header above, the whole animation is gonna be printing ON AND ON. Which will be annoying. But who cares?!? [haha.] Let me talk about the basics of printinganimations. You know every one of those files that you usually make in the files? Everytime you print something, it's one file. It sounds ok , and you're laying back relaxing in the sun, drinking beer, right? Well, its not. It's literally tortureOk, sry for the strike-through's, but I had to add some humor. I think. [thonk] I'll talk about the torture for a sec. Do you remember when I said that when you dont do 2 \'s, it creates some kind of blob? Yeah, that's the torture. If you don't do it, the indentation becomes a mess and becomes hard to deal with. Really. I know i'm blabing on and on, but I have a point (kinda). I'm telling you guys to use the Filing animation. The indentation, symbols, everything is easier to deal with. However, as I made this tutorial, I'll tell you guys how to do it. 2.221) The Basics/The Easy way or the Hard way/ print's/ASCII ART The ASCII art for the printwill be relatively similar to the Filing ASCII art. The only print*'s, there are some indentation rules to follow. bigdifference is that in *I know i'm saying this over and over, but what else can I say? \, put 2 of them. Like this for ex. Output: This is the problem. If you don't add the extra \'s, you'll end up with a problem. I'm just reminding you to add the double \. [yep] Just follow that rule and you'll be ok! ^Yep, I said rules before, but its rule now. [yep] 3) Starting the Animation To start the animation, first create your files/print's and finish the animation. By means, create all your files/print's in order to make the animation. When you're done with that, you'll be up to here. If not, keep working until you're done. Just remember, practice makes perfect! Once you finished the part above, go to the main.py(otherwise another python file). If you look at the code, it will have this so far: You might have something similar to this, that's ok. As long as it's following these guidelines, you'll be fine. Now we need to make the real part, the animating. As I said before though, we need screen clearing and time sleeping. To do that, we need small details and functions... [wha?] 3.1) Starting the Animation/Small Details What are these small details? Well, this could mean; This pretty much covers all what we need. As we've listed it, lets get coding! 3.11) Starting the Animation/Small Details/Clearing the Console So one of the most common things I do and most other python coders do is import the osmodule. We(py coders) usually use it for clearing the console. The syntax is: os.system('clear'). However, because writing it may be painful [lazy], we make it a function! We do this: Because that this is a function, it can be called any amount of times. Now that we're done with that, lets move on! 3.12) Starting the Animation/Small Details/Colors There isn't much to include in this section, I will just attach a copy and paste of the colors. You can either paste this into the main.pyor a different .py file. Here it is: The capatilzed colors are darker, while the lowercase are a bit lighter. The back is a thing that will be behind the text. The bright is as it is said, bright colors. Also, the rest that I haven't mentioned are other editing things. For example, underline underlines and reset resets everything. And we're done with this section! 3.13) Starting the Animation/Small Details/ time.sleep() So the time.sleep()is very important in animations. It makes the animation actually rest, instead of going at the speed of light and you not being able to see. You have to put it in between file openings and before printing another 'file'. I will talk more about this in later sections. 4) Middle of animation The middle of the animation is the main code. This is where we will talk about opening up the files and printing your 'files'. So lets go to opening up files! 4.1) Middle of animation/File opening Opening up your file is how the animation 'animates'. It is the following syntax: Do that, replace Folder with the folder name, and you'll be done! Actually, add more of those on different lines, and you'll be done! ;) So you've got the basics for file opening! Nice job! To continue on to more to the animation, keep adding printway! forloops. I will be putting an example in this repl. Let's continue on to the 4.2) Middle of animation/Printing So printing animations are actually simple (without the animation). The following syntax is correct (and an example): The output would be: So say you making someone go on the internet typing something in. You could add seperate print statements printing each and every other one. hmmrmm, that would be a good ideaSo there you have it! Let's move on! [pogchamp] 5) End animation If you've read through the whole thing (hopefully you have without sleeping), you'll have reached here! [pog] If not, try your best and keep trying! There isn't much to do left... However, I give congrats to you! [yayay!] 🎉🥳 Just read through the whole code, just in case for typos! Again, if you followed correctly, you have done it! YAY! Congrats! Wait a sec though... 6) Closing Hey guys, thanks for viewing this tutorial so much! I hope you guys like it, give suggestions, feedback! This is my longest tutorial! 400 lines - 2449 Words - 16049 chars 6.1) Closing/Last Note @JBYT27 lol
https://repl.it/talk/learn/JBYT27-lol/116675/428362
CC-MAIN-2021-10
refinedweb
2,365
75.91
#include <wx/spinctrl.h> wxSpinCtrlDouble combines wxTextCtrl and wxSpinButton in one control and displays a real number. (wxSpinCtrl displays an integer.) This class supports the following styles: The following event handler macros redirect the events to member function handlers 'func' with prototypes like: Event macros for events emitted by this class: Default constructor. Creation function called by the spin control constructor. See wxSpinCtrlDouble() for details. Gets the number of digits in the display. Gets the increment value. Gets maximal allowable value. Gets minimal allowable value. Gets the value of the spin control. Sets the number of digits in the display. Sets the increment value. Sets range of allowable values. Sets the value of the spin control. It is recommended to use the overload taking a doubleRLDOUBLE events.
http://docs.wxwidgets.org/3.0.3/classwx_spin_ctrl_double.html
CC-MAIN-2017-51
refinedweb
127
63.05
A class to hold the line 2d_regression data and actual fitting code. More... #include <vgl_line_2d_regression.h> A class to hold the line 2d_regression data and actual fitting code. In addition to fitting a line to a set of points (orthogonal regression), it is designed to help with incremental fitting. You can inexpensively add and remove points. This class does not store the points; it merely stores enough aggregate information to estimate the line parameters. Definition at line 31 of file vgl_line_2d_regression.h. Constructor. Definition at line 17 of file vgl_line_2d_regression.txx. Definition at line 40 of file vgl_line_2d_regression.h. Clear 2d_regression sums. Clear the regression sums. This will reset the object to the freshly constructed state of having zero points. Definition at line 49 of file vgl_line_2d_regression.txx. Remove a point from the 2d_regression. Remove a point from the current regression sums. This should be a previously added point, although this cannot be verified. Definition at line 36 of file vgl_line_2d_regression.txx. Fit a line to the current point set. Fit a line to the current regression data. Definition at line 62 of file vgl_line_2d_regression.txx. Fit a line to the current point set constrained to pass through (x,y). Definition at line 82 of file vgl_line_2d_regression.txx. Get the fitted line. Definition at line 82 of file vgl_line_2d_regression.h. The number of points added. Definition at line 43 of file vgl_line_2d_regression.h. Get fitting error for a given line. Definition at line 102 of file vgl_line_2d_regression.txx. Get fitting error for current fitted line. Definition at line 113 of file vgl_line_2d_regression.txx. Get estimated fitting error if the point (x, y) were added to the fit. estimate of the fitting error if a new point is added. You must call init_rms_error_est() to initialize the running totals before the first use of this function. If increment is true, the running totals are updated as if the point p was added to the point set. It does not update the point set, however, so the point will not affect subsequent line estimation. Worst case is distance from the point, (x, y) to the current line. Add the error to the accumulating estimation sum. Definition at line 140 of file vgl_line_2d_regression.txx. Add a point to the 2d_regression. Add a point to the current regression sums. Definition at line 24 of file vgl_line_2d_regression.txx. Initialize estimated fitting error. We want to add points to the regression until it is likely that the fitting error has been exceeded. squared_error_ = squared_error_ + d^2 npts_+1 Initialize the recursive estimation of fitting error Definition at line 129 of file vgl_line_2d_regression.txx. the fitted line Definition at line 35 of file vgl_line_2d_regression.h. number of points in the regression data Definition at line 34 of file vgl_line_2d_regression.h. an estimate of the squared error Definition at line 37 of file vgl_line_2d_regression.h. partial sums Definition at line 36 of file vgl_line_2d_regression.h.
http://public.kitware.com/vxl/doc/release/core/vgl/html/classvgl__line__2d__regression.html
crawl-003
refinedweb
486
61.43
This forum is closed. Thank you for your contributions. Can I ask a couple of low tech basics on SCM and SCCM. How does an admin get the SCM policy get into SCCM and how does that then scan your PC's across the domain for compliance / non compliance? Say an admin uses SCM - picks some policies for say Win7SP1 clients, and then wants that in SCCM to use as a "all our machines must adhere to this" type baseline compliance mechanism. What happens? Second part of my question, in SCM, if you pick one of the default baselines, how do you actually pick your choice of configuration. Or put another way, if you dont pick a configuration, which default will be applied when you export this baseline prior to import into SCCM. Each parameter listed has "default", "microsoft" or "custom" recommendations. How do you chose which you want to use, or will it just go with the default unless otherwise configured? Kurt Dillard
https://social.technet.microsoft.com/Forums/en-US/71992e48-d524-474a-80a3-4b4d1bc98cd3/scm-to-sccm-process?forum=compliancemanagement
CC-MAIN-2020-50
refinedweb
164
70.02
A Look at Java Thread Overhead can have on an application. The starting point is a modified version of the application I developed for the last post. A thread is called to perform some number-crunching. After the number-crunching is complete, a method is called to return the last result. As I said in my last post, it's a pretty useless application. However, it does a good job of fully utilizing whatever number of processors I'd like to use. And, since there's no real I/O happening (no disk reads/writes, etc.), it works quite well as a means for analyzing what happens when you perform different experiments with threads. For today's testing, I removed all print statements (once I'd verified that the application was doing what I wanted it to do), so that the processing consists exclusively (or, at least as close as I could get to that) of the computational processing performed by each thread instance, and thread "overhead" (creating, launching, joining). Here's the main class: class ThreadOverheadTest { public static void main(String args[]) { int nThrCalls; NewThread thr1; double result = 0.0; int nWork = 1000000; int jWork0; int jWork1; int jWorkIncr; int iThrCall; if (args.length < 1) { nThrCalls = 1; } else { nThrCalls = Integer.parseInt(args[0]); if (nThrCalls < 1) nThrCalls = 1; if (nThrCalls > nWork) nThrCalls = nWork; } System.out.println("Performing " + nWork + " total units of work using " + nThrCalls + " thread calls."); jWorkIncr = nWork / nThrCalls; jWork0 = 1; jWork1 = jWorkIncr; System.out.println("Each consecutive thread will perform " + jWorkIncr + " units of work."); while (jWork0 <= nWork) { thr1 = new NewThread("thr1"); // start thread thr1.SetWorkRange(jWork0, jWork1); thr1.t.start(); try { // wait for other threads to end thr1.t.join(); } catch (InterruptedException e) { System.out.println("Main thread Interrupted"); } result = thr1.GetLastValue(); jWork0 += jWorkIncr; jWork1 += jWorkIncr; if (jWork1 > nWork) jWork1 = nWork; } System.out.println("Final Result: " + result); } } So, we're going to perform 1 Million units of work ( nWork). The argument defines how many consecutive threads will be launched to perform all the units of work. The default value is to do all the work using a single thread. Here are the results when I run this using a single thread on my CentOS 6.2 Linux machine: $ time java ThreadOverheadTest 1 Performing 1000000 total units of work using 1 thread calls. Each consecutive thread will perform 1000000 units of work. Final Result: 14142.13562373095 real 0m14.130s user 0m14.102s sys 0m0.025s Here the computation thread is called once, and told to do all 1,000,000 units of work. This, then, is the baseline timing, the amount of time required to complete the computations basically in the absence of any thread overhead. In case you're curious, here's the computational thread that performs the work: import static java.lang.Math.pow; // Create multiple threads. j = iVal0; j <= iVal1; j++) { for(int i = 1; i <= 200; i++) { double val0 = i; double val1 = j; double val2 = val0 * val1; double val3 = pow(val2, 0.5); lastVal = val3; } } } catch (Exception e) { System.out.println(name + "error" + e); } //System.out.println(name + " exiting."); } } A "unit of work" is the inner i loop that does the numerical computation. 14.13 seconds were required to perform 1 Million units of work, so each unit of work takes about 0.014 milliseconds to complete on my CentOS system. Now watch what happens as the number of threads is increased: It took a lot of threads running before there's much of a noticeable performance hit. But ultimately, by consecutively creating and running 1 Million threads, each performing a single unit of work, I was able to bog down my application's performance pretty severely. You can't really say that all of the extra time represents thread overhead. For example, something as simple as flipping the i and j loops in the computational thread produces a somewhat different set of results. But, I think we can fairly safely state that creating and invoking 1,000,000 threads consecutively puts a significant burden on my system. So why, you may wonder, was I interested in taking the time to create and perform this experiment? Because it provides a baseline for similar experiments I plan to perform using the Java 7 Fork/Join Framework and other JVM concurrency options, eventually including Project Lambda. Java.net Weblogs Since my last blog post, Harold Carr has posted two new java.net blogs: - Harold Carr, Strata Conference 2012 Santa Clara - my notes; and - Harold Carr, My Strata Tuesday. Poll Our current Java.net poll asks Will you use JavaFX for development once it's fully ported to Mac and Linux platforms?. Voting will be open until this Friday, March 2. Articles Our latest Java.net article is Michael Bar-Sinai's PanelMatic 101. Java News Here are the stories we've recently featured in our Java news section: - Geertjan Wielenga discovers Gel Analysis on NetBeans; - Ludovic Poitou announces OpenDJ 2.4.5 is now available; - Peter Lawry demonstrates File local access; - Bill B describes What’s New in Java 7: WatchService; - Dustin Marx shares Late February 2012 Software Development Links of Interest; - Alexis Moussine-Pouchkine celebrates GlassFish 3.1 is one year old today; - Adam Bien demonstartes How To Run NetBeans 7.1 On JDK 1.7 Mac OS X Port Developer Preview; - Geertjan Wielenga updatesJFugue Music Notepad Status:; - Jonathan Giles shares JavaFX links of the week, February 27; Spotlights Our latest Java.netSpotlight is Zoran Sevarac's Java Community Song: Zoran Sevarac presents "A Java Community Song"! Zoran says, "I wrote the lyrics under the impression of JavaOne 2011 conference, talking about Java Community, open source and free software movement"...235 reads
https://weblogs.java.net/blog/editor/archive/2012/02/29/look-java-thread-overhead
CC-MAIN-2015-18
refinedweb
946
66.03
The QSound class provides access to the platform audio facilities. More... #include <QSound> Inherits QObject. The OS X, we use QuickTime for sound. All QuickTime formats are supported by Qt/Mac. In Qt/Embedded, a built-in mixing sound server is used, which accesses /dev/dsp directly. Only the WAVE format is supported. The availability of sound can be tested with QSound::isAvailable(). Note that QSound does not support resources. This might be fixed in a future Qt version. Constructs a QSound that can quickly play the sound in a file named filename. This may use more memory than the static play function. The parent argument (default 0) is passed on to the QObject constructor. Destroys the sound object. If the sound is not finished playing stop() is called on it. See also stop() and isFinished(). Returns the filename associated with the sound. Returns true if sound facilities exist on the platform; otherwise returns false. An application may choose either to notify the user if sound is crucial to the application or to operate silently without bothering the user. If no sound is available, all QSound operations work silently and. See also setLoops(). Returns the number of times the sound will loop. This value decreases each time the sound loops. Plays the sound in a file called filename.. Sets the sound to repeat n times when it is played. Passing the value -1 will cause the sound to loop indefinitely. See also loops(). Stops the sound playing. On Windows the current loop will finish if a sound is played in a loop. See also play().
http://doc.trolltech.com/4.0/qsound.html
crawl-001
refinedweb
265
70.8
Version 0.9.2 released 20 May 2013 Dominik Picheta We are pleased to announce that version 0.9.2 of the Nimrod compiler has been released. This release has attracted by far the most contributions in comparison to any other release. This release brings with it many new features and bug fixes, a list of which can be seen later. One of the major new features is the effect system together with exception tracking which allows for checked exceptions and more, for further details check out the manual . Another major new feature is the introduction of statement list expressions, more details on these can be found here. The ability to exclude symbols from modules has also been implemented, this feature can be used like so: import module except symbol. Thanks to all contributors! Bugfixes - The old GC never collected cycles correctly. Fixed but it can cause performance regressions. However you can deactivate the cycle collector with GC_disableMarkAndSweepand run it explicitly at an appropriate time or not at all. There is also a new GC you can activate with --gc:markAndSweepwhich does not have this problem but is slower in general and has no realtime guarantees. castfor floating point types now does the bitcast as specified in the manual. This breaks code that erroneously uses castto convert different floating point values. - SCGI module’s performance has been improved greatly, it will no longer block on many concurrent requests. - In total fixed over 70 github issues and merged over 60 pull requests. Library Additions - There is a new experimental mark&sweep GC which can be faster (or much slower) than the default GC. Enable with --gc:markAndSweep. - Added system.onRaiseto support a condition system. - Added system.localsthat provides access to a proc’s locals. - Added macros.quotefor AST quasi-quoting. - Added system.unsafeNewto support hacky variable length objects. system.fieldsand system.fieldPairssupport objecttoo; they used to only support tuples. - Added system.CurrentSourcePathreturning the full file-system path of the current source file. - The macrosmodule now contains lots of useful helpers for building up abstract syntax trees. Changes affecting backwards compatibility sharedis a keyword now. - Deprecated sockets.recvLineand asyncio.recvLine, added readLineinstead. - The way indentation is handled in the parser changed significantly. However, this affects very little (if any) real world code. - The expression/statement unification has been implemented. Again this only affects edge cases and no known real world code. - Changed the async interface of the scgimodule. - WideStrings are now garbage collected like other string types. Compiler Additions - The doc2command does not generate output for the whole project anymore. Use the new --projectswitch to enable this behaviour. - The compiler can now warn about shadowed local variables. However, this needs to be turned on explicitly via --warning[ShadowIdent]:on. - The compiler now supports almost every pragma in a pushpragma. - Generic converters have been implemented. - Added a highly experimental noforwardpragma enabling a special compilation mode that largely eliminates the need for forward declarations. Language Additions case expressionsare now supported. - Table constructors now mimic more closely the syntax of the casestatement. - Nimrod can now infer the return type of a proc from its body. - Added a mixindeclaration to affect symbol binding rules in generics. - Exception tracking has been added and the doc2command annotates possible exceptions for you. - User defined effects (“tags”) tracking has been added and the doc2command annotates possible tags for you. - Types can be annotated with the new syntax not nilto explicitly state that nilis not allowed. However currently the compiler performs no advanced static checking for this; for now it’s merely for documentation purposes. - An exportstatement has been added to the language: It can be used for symbol forwarding so client modules don’t have to import a module’s dependencies explicitly. - Overloading based on ASTs has been implemented. - Generics are now supported for multi methods. - Objects can be initialized via an object constructor expression. - There is a new syntactic construct (;)unifying expressions and statements. - You can now use from module import nilif you want to import the module but want to enforce fully qualified access to every symbol in module. Notes for the future - The scope rules of ifstatements will change in 0.9.4. This affects the =~pegs/re templates. - The socketsmodule will become a low-level wrapper of OS-specific socket functions. All the high-level features of the current socketsmodule will be moved to a networkmodule.
https://nim-lang.org/blog/2013/05/20/version-092-released.html
CC-MAIN-2018-51
refinedweb
730
50.84
Hello Forum, I am trying to use a list shuttle with a covnerter. I have an ArrayList for both my source and target lists. The list items are String. E.g., one entry is: "specificProblem" I wanted to present different string values in the picklists. E.g., I want to show "Specific problem" I thought that I could define a converter to map the strings back and forth. Since my "objects" are also strings I thought this would be fairly straightforward. I can see from println statements that the converter is being called to convert the source and target lists as expected. When the page is rendered, I see calls togetAsString, E.g., I see: "getAs String from specificProblem to Specific problem" When it is submitted I see calls to getAsObject. E.g., I see: "getAsObject from Specific problem to specificProblem" However, when the shuttle is displayed, I see the original values, not the converted values. E.g., in this case I see "specificProblem" in the list, not "Specific problem" as I want. Am I misunderstaning something? The shuttle seems to discard the converted values & display "object" values instead. My Shuttle: <rich:listShuttle <rich:column> <h:outputText</h:outputText> </rich:column> </rich:listShuttle> public Object getAsObject(FacesContext context, UIComponent component, String value) throws ConverterException { System.out.println("getAsObject from " + value + " to " + _nameMap.get( (String)value ) ); return ( StringUtils.isEmpty(value) ) ? null : _nameMap.get( (String)value ); } public String getAsString(FacesContext context, UIComponent component, Object value) throws ConverterException { if(value == null) { return ""; } else if ( value instanceof String ) { System.out.println("getAs String from " + value + " to " + nameMap.get( (String) value ) ); return nameMap.get( (String) value ); } { throw new ConverterException("Property Name not string value." ); } }
https://developer.jboss.org/thread/9756
CC-MAIN-2018-05
refinedweb
279
50.33
Error handling in long Promise chains The longer the chain, the easier it is to break it So you’re writing async code in JavaScript, right? I assume you are using Promises — if not, well… you should be. Chances are as well that you’re using a microservice-oriented approach to your application. These settings can bring some problems to your application regarding error handling, especially if you make use of some complex business logic to access those microservices. In my case I was developing a giant form for a web application using AngularJS. Every section of this form would then, on submit, be translated to an API call onto different microservices and endpoints. As a dedicated developer, I’ve started working on this form, and thus began the adventure. The tale of one giant form Once upon a time, there was a giant form. It called different API endpoints on submit. Simple, efficient and pretty, although naive; this was the code for the submit function of that form: saveApplication() .then(uploadImages) .then(saveService) .then(savePricingInfo) .then(savePaymentInfo) .then(gotoMainPage) .catch(setErrorState); Where all the functions on the Promise chain such as uploadImage, saveService, etc. are functions that return a Promise object from an API call (using Angular’s ngResource). But what we didn’t knew was that those innocent days were about to change. Darkness was silently spreading its roots below our kingdom. We started to notice it only when we’ve decided to change this giant form into a wizard-like multi-step form. This introduced a small problem which could have been easily solved at the beginning had we had the right approach in mind. But we didn’t. We did opt for that wooden sword to fight a medium-sized monster. Which worked, but was not good enough. Battling the medium-sized monster with a wooden sword It turns out every step of the form, that is, every API call on submit could return a different error, which should be mapped to a specific field and step of the multi-step form. We had to have a way of identifying which step corresponded to the error when it happened. Our multi-step form had four steps then, namely: basic, display, pricing and payment. Our first (and childish) approach was, at first, to add a .catch block after every .then which contained an API call. This catch would then have, hardcoded, the corresponding step for redirection. This is what it looked like: saveApplication() .catch(handleError(’basic’)) .then(uploadImages) .catch(handleError(’display’)) .then(saveService) .catch(handleError(’display’)) .then(savePricingInfo) .catch(handleError(’pricing’)) .then(savePaymentInfo) .catch(handleError(’payment’)) .then(gotoMainPage); function handleError(step) { return (err) => { if(err.break) return $q.reject(err); setErrorState(); multiStepManager.go(err.step); err.break = true; return $q.reject(err); }; } Whoa! No need for all that, Joe. At the end, this was like using a cannon to kill a fly. But it worked. But it was a total mess. But it worked… The handleError function returned a function which handled the error for that specific step on the Promise chain. It then would set a property (break) on the error object — which gets passed down along the chain to .catch blocks — , what would prevent other .catch blocks to handle that same error. It turns out that, in the end, the medium-sized monster was only a fly and the wooden sword was actually a huge badly engineered cannon. The Tao of Promises: Promise.resolve(Promise.reject(x)) === Promise.reject(Promise.resolve(x)) === Promise.reject(x). This magic rule above is what saved me some thought power at the end of the day. This meant I could treat those errors as soon as they happened, inside their own functions, staying away from that long chain madness. The answer was always there staring at me, I just couldn’t see it. Now I see, and it’s beautiful. This meant I could simply have the saveApplication function like this, for example: function saveApplication() { return makeApiCall().catch((err) => Promise.reject('basic')); } The .catch block means that we are handling an error on the basic step of the form, because the saveApplication call is related to the step of the form called basic. This led us to the beautiful piece of code down below: saveApplication() .then(uploadImages) .then(saveService) .then(savePricingInfo) .then(savePaymentInfo) .then(gotoMainPage) .catch((step) => { setErrorState(); multiStepManager.go(step); }); We only had to change a single line of the Promise chain, now that all the functions inside the .then blocks returns a Promise which already rejects to the corresponding step. But what if other types of errors happened, which were not handled by the inner catches? Well, that could be easily solved by implementing custom error types, and separating the evaluation of different error types inside our main .catch block. Like this: function saveApplication() { return makeApiCall().catch((err) => Promise.reject(new StepError('basic'))); } // saveApplication() .then(uploadImages) .then(saveService) .then(savePricingInfo) .then(savePaymentInfo) .then(gotoMainPage) .catch((step) => { if(err instanceof StepError) { setErrorState(); multiStepManager.go(step); } else { throw err; } }); In this case, the main .catch block only handles errors of type StepError. Other types of errors are simply thrown, not rejected, so that they can be handled accordingly by the application or the browser. The same principle can and should be extended to handle specific error types, such as different HTTP statuses. The end Working with long Promise chains can easily and quickly become one hell of a mess if you don’t stick to the rules. It’s actually really easy to achieve a good structure using Promise chains. Building the right mindset is the laborious part. All in all, here’s what I’ve learned from this quest: - If you don’t have only one single .catch block in your Promise chain, you’re doing it wrong. Sorry to say it like this, but it’s true; - Custom error types on JavaScript can be cool; - It’s important to really know the rules of the tools you’re using. In this case, Promises. If you liked these rules and this exciting quest, please hit the green heart button below and make me happy! :D
https://medium.com/@arthurxavier/error-handling-in-long-promise-chains-155f610b5bc6
CC-MAIN-2017-17
refinedweb
1,027
57.57
. Once a cell, face, or edge becomes a parent it is no longer active. FlatManif. A "coarse mesh" in deal.II is a triangulation object that consists only of cells that are not refined, i.e., a mesh in which no cell is a child of another cell. This is generally how triangulations are first constructed in deal.II, for example using (most of) the functions in namespace GridGenerator, the functions in class GridIn, or directly using the function Triangulation::create_triangulation(). One can of course do computations on such meshes, but most of the time (see, for example, almost any of the tutorial programs) one first refines the coarse mesh globally (using Triangulation::refine_global()), or adaptively (in that case first computing a refinement criterion, then one of the functions in namespace GridRefinement, and finally calling Triangulation::execute_coarsening_and_refinement()). The mesh is then no longer a "coarse mesh", but a "refined mesh". In some contexts, we also use the phrase "the coarse mesh of a triangulation", and by that mean that set of cells that the triangulation started out with, i.e., from which all the currently active cells of the triangulation have been obtained by mesh refinement. (Some of the coarse mesh cells may of course also be active if they have never been refined.) Triangulation objects store cells in levels: in particular, all cells of a coarse mesh are on level zero. Their children (if we executed Triangulation::refine_global(1) on a coarse mesh) would then be at level one, etc. The coarse mesh of a triangulation (in the sense of the previous paragraph) then consists of exactly the level-zero cells of a triangulation. (Whether they are active (i.e., have no children) or have been refined is not important for this definition.) Most of the triangulation classes in deal.II store the entire coarse mesh along with at least some of the refined cells. (Both the Triangulation and parallel::shared::Triangulation classes actually store all cells of the entire mesh, whereas some other classes such as parallel::distributed::Triangulation only store some of the active cells on each process in a parallel computation.) In those cases, one can query the triangulation for all coarse mesh cells. Other triangulation classes (e.g., parallel::fullydistributed::Triangulation) only store a part of the coarse mesh. See also the concept of coarse cell ids for that case. Most of the triangulation classes in deal.II, notably Triangulation, parallel::shared::Triangulation, and parallel::distributed::Triangulation, store the entire coarse mesh of a triangulation on each process of a parallel computation. On the other hand, this is not the case for other classes, notably for parallel::fullydistributed::Triangulation, which is designed for cases where even the coarse mesh is too large to be stored on each process and needs to be partitioned. In those cases, it is often necessary in algorithms to reference a coarse mesh cell uniquely. Because the triangulation object on the current process does not actually store the entire coarse mesh, one needs to have a globally unique identifier for each coarse mesh cell that is independent of the index within level zero of the triangulation stored locally. This globally unique ID is called the "coarse cell ID". It can be accessed via the function call where triangulation is the triangulation to which the iterator coarse_cell pointing to a cell at level zero belongs. Here, coarse_cell->index() returns the index of that cell within its refinement level (see TriaAccessor::index()). This is a number between zero and the number of coarse mesh cells stored on the current process in a parallel computation; it uniquely identifies a cell on that parallel process, but different parallel processes may use that index for different cells located at different coordinates. For those classes that store all coarse mesh cells on each process, the Triangulation::coarse_cell_index_to_coarse_cell_id() simply returns a permutation of the possible argument values. In the simplest cases, such as for a sequential or a parallel shared triangulation, the function will in fact simply return the value of the argument. For others, such as parallel::distributed::Triangulation, the ordering of coarse cell IDs is not the same as the ordering of coarse cell indices. Finally, for classes such as parallel::fullydistributed::Triangulation, the function returns the globally unique ID, which is from a larger set of possible indices than the indices of the coarse cells actually stored on the current process. Colorization is the process of marking certain parts of a Triangulation with different labels. The use of the word color comes from cartography, where countries on a map are made visually distinct from each other by assigning them different colors. Using the same term coloring is common in mathematics, even though we assign integers and not hues to different regions. deal.II refers to two processes as coloring: colorize. This argument controls whether or not the different parts of the boundary will be assigned different boundary indicators. Some functions also assign different material indicators as well., AffineConstraints, ...). dimand spacedim Many classes and functions in deal.II have two template parameters, dim and spacedim. An example is the basic Triangulation class: In all of these contexts where you see dim and spacedim referenced, these arguments have the following meaning: dim denotes the dimensionality of the mesh. For example, a mesh that consists of line segments is one-dimensional and consequently corresponds to dim==1. A mesh consisting of quadrilaterals then has dim==2 and a mesh of hexahedra has dim==3. spacedimdenotes the dimensionality of the space in which such a mesh lives. Generally, one-dimensional meshes live in a one-dimensional space, and similarly for two-dimensional and three-dimensional meshes that subdivide two- and three-dimensional domains. Consequently, the spacedimtemplate argument has a default equal to dim. But this need not be the case: For example, we may want to solve an equation for sediment transport on the surface of the Earth. In this case, the domain is the two-dimensional surface of the Earth ( dim==2) that lives in a three-dimensional coordinate system ( spacedim==3). More generally, deal.II can be used to solve partial differential equations on manifolds that are embedded in higher dimensional space. In other words, these two template arguments need to satisfy dim <= spacedim, though in many applications one simply has dim == spacedim. Following the convention in geometry, we say that the "codimension" is defined as spacedim-dim. In other words, a triangulation consisting of quadrilaterals whose coordinates are three-dimensional (for which we would then use a Triangulation<2,3> object) has "codimension one". Examples of uses where these two arguments are not the same are shown in step-34, step-38, step-54. The term "degree of freedom" (often abbreviated as "DoF") problem moving or distorting a mesh by a relatively large amount. If the appropriate flag is given upon creation of a triangulation, the function Triangulation::create_triangulation, which is called by the various functions in GridGenerator and GridIn (but can also be called from user code, see step-14 and the example at the end of step-49), function GridTools::fix_up_distorted_child_cells can, in some cases, fix distorted cells on refined meshes by moving around the vertices of a distorted child cell that has an undistorted. "Generalized support points" are, as the name suggests, a generalization of support points. The latter are used to describe that a finite element simply interpolates values at individual points (the "support points"). If we call these points \(\hat{\mathbf{x}}_i\) (where the hat indicates that these points are defined on the reference cell \(\hat{K}\)), then one typically defines shape functions \(\varphi_j(\mathbf{x})\) in such a way that the nodal functionals \(\Psi_i[\cdot]\) simply evaluate the function at the support point, i.e., that \(\Psi_i[\varphi]=\varphi(\hat{\mathbf{x}}_i)\), and the basis is chosen so that \(\Psi_i[\varphi_j]=\delta_{ij}\) where \(\delta_{ij}\) is the Kronecker delta function. This leads to the common Lagrange elements. (In the vector valued case, the only other piece of information besides the support points \(\hat{\mathbf{x}}_i\) that one needs to provide is the vector component \(c(i)\) the \(i\)th node functional corresponds, so that \(\Psi_i[\varphi]=\varphi(\hat{\mathbf{x}}_i)_{c(i)}\).) On the other hand, there are other kinds of elements that are not defined this way. For example, for the lowest order Raviart-Thomas element (see the FE_RaviartThomas class), the node functional evaluates not individual components of a vector-valued finite element function with dim components, but the normal component of this vector: \(\Psi_i[\varphi] = \varphi(\hat{\mathbf{x}}_i) \cdot \mathbf{n}_i \), where the \(\mathbf{n}_i\) are the normal vectors to the face of the cell on which \(\hat{\mathbf{x}}_i\) is located. In other words, the node functional is a linear combination of the components of \(\varphi\) when evaluated at \(\hat{\mathbf{x}}_i\). Similar things happen for the BDM, ABF, and Nedelec elements (see the FE_BDM, FE_ABF, FE_Nedelec classes). In these cases, the element does not have support points because it is not purely interpolatory; however, some kind of interpolation is still involved when defining shape functions as the node functionals still require point evaluations at special points \(\hat{\mathbf{x}}_i\). In these cases, we call the points generalized support points. Finally, there are elements that still do not fit into this scheme. For example, some hierarchical basis functions (see, for example the FE_Q_Hierarchical element) are defined so that the node functionals are moments of finite element functions, \(\Psi_i[\varphi] = \int_{\hat{K}} \varphi(\hat{\mathbf{x}}) {\hat{x}_1}^{p_1(i)} {\hat{x}_2}^{p_2(i)} \) in 2d, and similarly for 3d, where the \(p_d(i)\) are the order of the moment described by shape function \(i\). Some other elements use moments over edges or faces. In all of these cases, node functionals are not defined through interpolation at all, and these elements then have neither support points, nor generalized support points. The "geometry paper" is a paper by L. Heltai, W. Bangerth, M. Kronbichler, and A. Mola, titled "Using exact geometry information in finite element computations", that describes how deal.II describes the geometry of domains. In particular, it discusses the algorithmic foundations on which the Manifold class is based, and what kind of information it needs to provide for mesh refinement, the computation of normal vectors, and the many other places where geometry enters into finite element computations. The paper is currently available on arXiv at . The full reference for this paper is as follows: and the parallel::shared::Triangulation classes. primary value somewhere else – thus, the name "ghost". This is also the case for the parallel::distributed::Vector class. On the other hand, in Trilinos (and consequently in TrilinosWrappers::MPI::Vector), a ghosted vector is simply a view of the parallel vector where the element distributions overlap. The 'ghosted' Trilinos vector in itself has no idea of which entries are ghosted and which are locally owned. In fact, a ghosted vector may not even store all of the elements a non-ghosted vector would store on the current processor. Consequently, for Trilinos vectors, there is no notion of an 'owner' of vector elements in the way we have it in the primary::flat. In practice, the material id of a cell is typically used to identify which cells belong to a particular part of the domain, e.g., when you have different materials (steel, concrete, wood) that are all part of the same domain. One would then usually query the material id associated with a cell during assembly of the bilinear form, and use it to determine (e.g., by table lookup, or a sequence of if-else statements) what the correct material coefficients would be for that cell. This material_id may be set upon construction of a triangulation (through the CellData data structure), or later through use of cell iterators. For a typical use of this functionality, see the step-28 tutorial program. The functions of the GridGenerator namespace typically set the material ID of all cells to zero. When reading a triangulation through the GridIn class, different input file formats have different conventions, but typically either explicitly specify the material id, or if they don't, then GridIn simply sets them to zero. Because the material of a cell is intended to pertain to a particular region of the domain, material ids are inherited by child cells from their element as a triple \((K,P,\Psi)\) where This definition of what a finite element is has several advantages, concerning analysis as well as implementation. For the analysis, it means that conformity with certain spaces (FiniteElementData::Conformity), e.g. continuity, is up to the node functionals. In deal.II, it helps simplifying the implementation of more complex elements like FE_RaviartThomas considerably. Examples for node functionals are values in support points and moments with respect to Legendre polynomials. Examples: The construction of finite elements as outlined above allows writing code that describes a finite element simply by providing a polynomial space (without having to give it any particular basis – whatever is convenient is entirely sufficient) and the nodal functionals. This is used, for example in the FiniteElement::convert_generalized_support_point_values_to_dof_values() function. 100,000,000::get_unit_support_points(), such that the function FiniteElement:. The "Z order" of cells describes an order in which cells are traversed. By default, if you write a loop over all cells in deal.II, the cells will be traversed in an order where coarser cells (i.e., cells that were obtained from coarse mesh cells with fewer refinement steps) come before cells that are finer (i.e., cells that were obtained with more refinement steps). Within each refinement level, cells are traversed in an order that has something to do with the order in which they were created; in essence, however, this order is best of thought of as "unspecified": you will visit each cell on a given refinement level exactly once, in some order, but you should not make any assumptions about this order. Because the order in which cells are created factors into the order of cells, it can happen that the order in which you traverse cells is different for two identical meshes. For example, think of a 1d (coarse) mesh with two cells: If you first refine the first of these cells and then the other, then you will traverse the four cells on refinement level 1 in a different order than if you had first refined the second coarse cell and then the first coarse cell. This order is entirely practical for almost all applications because in most cases, it does not actually matter in which order one traverses cells. Furthermore, it allows using data structures that lead to particularly low cache miss frequencies and are therefore efficient for high performance computing applications. On the other hand, there are cases where one would want to traverse cells in a particular, specified and reproducible order that only depends on the mesh itself, not its creation history or any other seemingly arbitrary design decisions. The "Z order" is one way to achieve this goal. To explain the concept of the Z order, consider the following sequence of meshes (with each cell numbered using the "level.index" notation, where "level" is the number of refinements necessary to get from a coarse mesh cell to a particular cell, and "index" the index of this cell within a particular refinement level): Note how the cells on level 2 are ordered in the order in which they were created. (Which is not always the case: if cells had been removed in between, then newly created cells would have filled in the holes so created.) The "natural" order in which deal.II traverses cells would then be 0.0 -> 1.0 -> 1.1 -> 1.2 -> 1.3 -> 2.0 -> 2.1 -> 2.2 -> 2.3 -> 2.4 -> 2.5 -> 2.6 -> 2.7. (If you want to traverse only over the active cells, then omit all cells from this list that have children.) This can be thought of as the "lexicographic" order on the pairs of numbers "level.index", but because the index within each level is not well defined, this is not a particularly useful notion. Alternatively, one can also think of it as one possible breadth-first traversal of the tree that corresponds to this mesh and that represents the parent-child relationship between cells: On the other hand, the Z order corresponds to a particular depth-first traversal of the tree. Namely: start with a cell, and if it has children then iterate over these cell's children; this rule is recursively applied as long as a child has children. For the given mesh above, this yields the following order: 0.0 -> 1.0 -> 2.4 -> 2.5 -> 2.6 -> 2.7 -> 1.1 -> 1.2 -> 1.3 -> 1.4 -> 2.0 -> 2.1 -> 2.2 -> 2.3. (Again, if you only care about active cells, then remove 0.0, 1.0, and 1.3 from this list.) Because the order of children of a cell is well defined (as opposed to the order of cells within each level), this "hierarchical" traversal makes sense and is, in particular, independent of the history of a triangulation. In practice, it is easily implemented using a recursive function: This function is then called as follows: Finally, as an explanation of the term "Z" order: if you draw a line through all cells in the order in which they appear in this hierarchical fashion, then it will look like a left-right inverted Z on each refined cell. Indeed, the curve so defined can be thought of a space-filling curve and is also sometimes called "Morton ordering", see .
https://dealii.org/developer/doxygen/deal.II/DEALGlossary.html
CC-MAIN-2020-45
refinedweb
2,965
50.46
How to connect to a JavaBean using Flash Remoting and JRun 4/J2EE This TechNote provides information on how to connect a simple "Hello World" JavaBean to Flash Remoting using JRun 4/J2EE. This TechNote is designed for users who are new to developing with Macromedia Flash MX and/or JRun 4/J2EE. This tutorial will walk through the steps to connect your Flash movie to a JavaBean using Flash Remoting. The instructions will cover the following tasks: - How to create the JavaBean - How to build the Macromedia Flash Movie - How to write the ActionScript - How to play the Flash movie and use the NetConnection Debugger Download and install the Hello World JavaBean sample files to follow along with this tutorial: - Download the JavaBean-HelloWorld.zip file. Unzip the files using WinZip. - Save the HelloWorld.jar file into your SERVER-INF/classes folder. - Save the JavaBean-HelloWorld.fla file into your application. Then, open the FLA file using Macromedia Flash MX. - The gateway URL is set to 8101. Edit the port number to match the port your server is currently running on. How to create the JavaBean, compile it and place it in the correct location - First, create a JavaBean that you plan to connect to Flash Remoting. For this example we will use the following HelloWorld JavaBean: public class HelloWorld { private String message; public HelloWorld() { message = "Hello World From JavaBean"; } public void setMessage(String message) { this.message = "Hi " + message; } public String getMessage() { return message; } } - Compile the JavaBean. - Next, we will need to deploy this JavaBean in a location that will be accessible by the Flash Remoting gateway. Since the gateway is not local to your application, you will need to put the class file in one of the following locations: Note: If you were to place the JavaBean in your WEB-INF/classes folder it would not be accessible by the gatewayunless you unzipped the flashgateway.ear file and deployed the gateway in your application. - Make sure the folder you selected is in the classpath, by doing the following: - Open JRun JMC. - Click the "+" button to expand the server you are using. - Select JVM Settings. - In the Classpaths for VM, look for the folder that contains the HelloWorld.class file. For this example, we put the HelloWorld.class file in the SERVER-INF/classes folder. Note: If you don't see the folder in the classpath, then it must be added before continuing. After you have added the folder to the classpath, restart the server. To recap this process, the steps above have illustrated how to create the JavaBean, deploy it to a directory accessible by the Flash Remoting gateway, and verify that the JavaBean is in the classpath. Next, we'll discuss how to build the front-end. How to build the Macromedia Flash Movie - Open Macromedia Flash MX and draw a box, using the Rectangle tool from the toolbox. - Using the Text tool, draw a text box on the stage and type the following: Data from JavaBean - Draw another text box on the stage. This time, do not type any text. This text box will be used to display the data from the JavaBean, as follows: - Choose Window > Properties to launch the Properties inspector (if it is not already visible). - Select "Dynamic Text" from the pop-up menu. - In the field titled "Instance Name", enter: messageOutput The Instance Name "messageOutput" is used when we reference the instance in the ActionScript to display the data returned by the JavaBean. The illustration below provides a visual representation of how the movie might look at this point: How to write the ActionScript - Click the Actions window, or select Window > Actions. When you first launch the Actions window, the default setting is Normal Mode. To insert code while in Normal Mode, you must select the "+" sign and choose the desired functions from the list of Actions on the left side of the window. We will use Expert Mode insteadbecause this allows you to type the ActionScript directly into the Actions window. Select Expert Mode from the pop-up menu, as shown below: - Select Frame 1 from the pop-up menu. First, make sure to include the NetService.as class file into the first frame:Note: The NetServices.as class is responsible for making the connection to the gateway. This step is mandatory. #include "NetServices.as" - Next, we'll create the connection to the Flash Remoting gateway:This ActionScript sets the default gateway URL, connects to the gateway and creates a Service Objectin this example, the Service Object is flashtestService. // connect to the gateway server if (inited == null) { inited = true; NetServices.setDefaultGatewayUrl (""); gatewayConnnection = NetServices.createGatewayConnection(); flashtestService = gatewayConnnection.getService ("HelloWorld",this); } The methods in the JavaBean are exposed to Flash as Service Functions. The two methods in the HelloWorld JavaBean are setMessage and getMessage. Therefore, the two Service Functions available in the Service Object we created are: getMessage() and setMessage(). - Return the data from the JavaBean using the Service Function in Flash, as follows:If you run the movie now (by selecting CTRL+ENTER) you will not see the data in Flash yet... but you will receive a message in the Output window, stating: // call the service function getMessage() flashtestService.getMessage(); NetServices info 1: getMessage_Result was received from server: Hello World From JavaBean This message indicates that the connection was successful. It also indicates that although you didn't call the getMessage_Result Service Function, it was returned to the Flash movie. - Next, we need to add ActionScript to return the data and display it in the text box we created in the Flash movie. This code will accomplish that goal:When you use // use _Result to have Flash call this function function getMessage_Result(result) { messageOutput.text = result; } // use _Status to handle any errors function getMessage_Status(result){ messageOutput.text = "status: " + result.details; } _Resultafter a Service Function, Flash will automatically call that function. In this case we are calling the getMessageService Function. We can now reference the Instance Name messageOutputthat we created in the Flash movie, which will display the data in the text box. When you use _Statusafter a Service Function, Flash will call this function if there is an error. In this example, the error message would be displayed in the text box we created. How to play the Flash movie and use the NetConnection Debugger - To run the movie select File > Publish Preview, (or CTRL + ENTER). You should see the "Hello World From JavaBean" data displayed in the Text box we created, as shown in the illustration below: - Run the NetConnection Debugger to see how the debugger works. Take some time to review the data provided by the NetConnection Debugger. The first thing you need to do is include the NetDebug.as class. Add the NetDebug.as class just below the NetServices.as include, using this code: #include "NetDebug.as" - Save the Flash movie. - Open the NetConnection Debugger, (by selecting Window > NetConnection Debugger). - To keep the NetConnection Debugger in focus, select CTRL+ENTER.
http://www.adobe.com/support/flash_remoting/ts/documents/javabean-helloworld.htm
CC-MAIN-2015-11
refinedweb
1,161
56.05
>>> styleA("Adam","Oreo",2.99) "Adam, the product 'Oreo' costs $3.37 after tax. Thanks for shopping with us, Adam! Enjoy your Oreos" Obviously this is a contrived example. It should be contrived as we're trying to pinpoint specific functionality and don't want to get bogged down with additional superfluous code that doesn't add to the tutorial. So, let's begin with styleA. Style A.) Concatenation def styleA(yourName, productName, value): tax = 1.127 #for simplification I include the 1 return yourName+", the product '"+productName+"' costs $"+str(round(value*tax,2))+" after tax. Thanks for shopping with us, "+yourName+"! Enjoy your "+ productName +"s"This style is probably the most common style among beginners - it's also the worst. It offers the fewest options in terms of formatting and is ugly to look at. To put it bluntly, you simply cast everything to a string and then it's a simple matter of string concatenation. From a performance standpoint, this is likely going to be your slowest option as string concatenation can be a bit ugly. Further, if you want to apply any sort of formatting to your strings, you'll either have to apply additional function calls to the input values or apply some functions or slicing to the value after it's been cast to a string. In the above example, you see that we wanted to round the price to two decimal places. To do that, I felt that the best method would be to round the float value to two decimal places and then cast it to the string. For the untrained eye, this may look fine but to a more experienced programmer, this certainly begs to be improved upon. On the positive side, it is very easy to tell what variables go where. Style B.) Substitution def styleB(yourName, productName, value): tax = 1.127 #for simplification I include the 1 return "%s, the product '%s' costs $%.2f after tax. Thanks for shopping with us, %s! Enjoy your %ss"%(yourName, productName, value*tax, yourName, productName)This style has been around for quite a while. It exists in other languages such as C and Java and is very well known to the more seasoned developers. In this style, we use '%s' in a spot where we have a string we want to substitute in, we use '%f' for a float. You'll notice that the float is %.2f. This means that we want it to round to two decimal places. There are other format operators such as %i for integer. You'll notice at the end of the string we have another percent symbol and a tuple containing the values we want to substitute in. If there's only 1 value, you don't need to put it into a tuple, but it really doesn't hurt to put it into 1 most of the time, so you can probably consider it a best practice to go ahead and throw whatever you're subbing into a tuple regardless. One annoyance you'll immediately notice is that in order to use this style, we must know what types we're passing in. Thus we need to know if it's an integer, string, float, etc... or we'll get an error. This is a serious shortcoming when one considers that Python is a language that boasts duck typing! This style is generally considered good except for one additional problem – it's deprecated! This means that at some point, Python will theoretically drop support for it in favor of a newer, better style. You'll likely see many tutorials that still use this method for quite a while. If you're like me and want to be on the cutting edge and show that you know Python well, there's only 1 way to do it: Style C.) String Format Function (ie. The newer and better style) def styleC(yourName, productName, value): tax = 1.127 #for simplification I include the 1 return "{0}, the product '{1}' costs ${2:.2f} after tax. Thanks for shopping with us, {0}! Enjoy your {1}s".format(yourName, productName, value*tax)This style is the least well known because it's on the newer side only having been introduced in Python 2.6. In this style, we use braces {} to denote where a substitution will be made. You'll notice that in my function, I include an integer in the braces. These integers correspond to the position of the value passed into the format function. It's not strictly necessary to pass integers in, but I usually use it because it makes it easier to see which value goes in which position in longer strings. By using the integer in the braces, we're also able to reuse variables passed into the function by reusing their integer index. In this example, you notice that we use the “yourName” and “productName” variables twice, yet only need to pass them into the function once. This specific example was chosen because it shows how much cleaner and simpler the code is with the format method. With substitution, we have no choice but to pass the variables in multiple times, similar to the concatenation method. You'll also notice that the code looks very similar to the substitution method. You may even be upset to notice that I had to incorporate the 'f' for float in there when I specifically said that needing to know the type was a weakness of the substitution method! Well, we don't NEED to know the types with the format method, but if we do want to do decimal place rounding with it, we need to know it's a float – but that should be obvious! If we have to round decimal places, it's going to be a float regardless (as far as string formatting is concerned, anyways). The format function has lots of other cool features if we use the colon. Of course {:.xf} would say that we want to round the value to x decimal places but you probably realized that. By saying {:+f} we say that we want the number to include a +/- sign, but saying {:-f} we say that we want the sign to be used only for negative numbers (default behavior), and {: }(there's a space there) we want a blank space left in front of positive numbers and the negative sign to be present in front of negative ones. Another example is {:,} which says that a number should have commas placed into it, for example >>> '{:,}'.format(1234567890) '1,234,567,890' Another simple example is adding padding to strings to make sure that all strings are the same width: >>> toPrint = ["hi","I","got","it"] >>>>> for word in toPrint: print(x.format(word)) hi other stuff I other stuff got other stuff it other stuff >>>>> for word in toPrint: print(x.format(word)) hi other stuff I other stuff got other stuff it other stuff >>> This sort of thing is extremely useful when printing out tables. There's a lot more that can be done with the format method and I strongly suggest you read the docs and learn more about it. You can check the official docs. Or plenty of other great resources such as this one. This post has been edited by atraub: 03 May 2015 - 04:27 PM
http://www.dreamincode.net/forums/topic/375385-python-string-formatting/
CC-MAIN-2017-22
refinedweb
1,227
71.14
(on the other side) operating-system maintainers who want, as much as possible, to deal with one standardized, language-agnostic but platform-specific tool to distribute software and updates. And ordinarily those goals can get along fairly well as long as a few compromises are made, but in this specific case the elevated tension seems to be caused by the way the Ruby gem system works, specifically by tightly coupling application code to the use of the gem utility. Pretty much everything else on the Debian packagers’ list of problems seems like it could be resolved if this issue went away. But I’m not here to try to solve that problem; I’m simply mentioning it because it’s an interesting parallel (and hence a good lead-in) to some long-standing complaints I’ve had about the way packaging is often done in Python, and which recently came to my attention once again when someone filed a bug report against django-registration, mentioning that a custom management command in that application doesn’t work if you install via easy_install with the default options. If you’d like to just get the executive summary, here it is: Please, for the love of Guido, stop using setuptools and easy_install, and use distutils and pip instead. If you’d like to know why, read on. Also, please note that the following are simply my opinions; I have some experience to back them up, from both personal projects and my duties as Django’s release manager (and, hence, the person who makes the packages for Django), but my opinions are simply mine, and not those of any particular project or institution (for the record: Django doesn’t use setuptools anymore, but I wasn’t part of the decision to move away from it). Why non-standard packaging tools exist for Python Most of my problems with setuptools boil down to the same problem that seems to be at the heart of the Debian-vs.-Ruby fight: setuptools has an unfortunate habit of infecting bits of code which shouldn’t need to have any awareness of how the code, or its dependencies, are being packaged and distributed. As a starting point, consider how Python’s standard distribution system — distutils — works: - You write your code. - You write a script named setup.pywhich imports the setupfunction from distutilsand specifies the packaging options you want. - You run setup.py sdistto generate a standard source package, or use other commands to build different package formats or upload the package to the Python package index. Installing something that’s been packaged with distutils is easy; if it’s in a format specific to your operating system ( distutils can generate a variety of OS-specific formats, including for example RPM packages for Red Hat systems or self-extracting installers for Windows) then you can simply install normally. If you’ve got a source package, however, it’ll simply be a standard compressed archive ( .tar.gz format) you can unpack to get the code and the setup.py script, and setup.py install will install it. And of course the installation process is configurable in a variety of ways, as covered in the distutils documentation. So far, so good. But there are two major shortcomings to distutils: - It provides no way to specify dependencies between packages. - It provides no way to emulate the experience on, for example, many Linux distributions where you type a command, feed it a package name, and the appropriate package (and dependencies) will be downloaded from a repository and installed for you. To provide this functionality, many people turn to setuptools. And if it had done nothing except deal with these two issues, it would have been great; an easy dependency-management mechanism and a network-enabled installation system make lots of people’s lives easier. But setuptools didn’t stop there, and that’s where the real trouble begins.. The first of these is certainly bad, but the second is the one which really bothers me, and which closely parallels the problems with Ruby’s gem system. To see an example, consider a Python feature that’s occasionally useful: if you have a zip file whose contents are files of Python code, you can place the zip file on your Python import path and import will just work for the code inside it; Python knows how to look inside a zip file and find the code, and you don’t need to do anything else special aside from making sure the file’s on the import path. But setuptools has latched onto this feature to create an entire zipped package format which includes not only Python code but also things like data files. Now, normally if you package an application which includes some data files, you can specify that they’re to be installed alongside the code and use standard Python techniques to figure out where your package is and where your data files are, and work with them from your code. With setuptools, however, you can’t do this, because setuptools puts your data files into the same zipped package, and from that point on you have to use functions in setuptools to access them. Oh, and did I mention that this is how setuptools does things by default? Anything you install via easy_install will get this treatment unless: - You’ve explicitly told easy_installnot to do this on a per-package basis, or - You’ve explicitly configured setuptools to disable this “feature” globally, or - The person who created the package set it up to force setuptools not to zip it. Requiring one of the first two options to make Python applications work normally is bad enough, but the third is simply perverse: in order to create a package that setuptools won’t try to zip, you have to use setuptools to create the package. Which, in turn, means that only people who have setuptools installed can install your package. “You can opt out of our system by opting in to our system” is not an acceptable way to do things, in my opinion. This is, incidentally, how setuptools managed to break django-registration. The fact that setuptools defaults to installing that zipped version of the package means Django’s standard mechanism for locating management commands stops working; since Django doesn’t use setuptools’ APIs to peer into zipped packages, it can’t see the custom management command bundled in django-registration. And, of course, most people who use easy_install don’t actually know that it behaves this way, since they just wanted, well, an easy way to install Python packages. So the bug reports end up coming to me, which makes me sad and angry. And that’s really just the tip of the iceberg; setuptools and its associated frameworks, because of the features they try to support, end up slipping out of packaging concerns and into your application code in all sorts of oddball ways. There’s even an analog of the require_gem feature which gives Debian packagers headaches: setuptools lets you specify dependencies directly in application code to ensure that you’re importing precisely the version of a library that you want to import, and this only works when setuptools is also installed (and, from what I can tell, may only work if the package you want to import from was itself installed by setuptools). The end result is that, once you start using setuptools, you’re gradually nudged further and further away from using standard Python APIs and techniques, and more and more into using things that only exist as part of setuptools. And just when you thought it couldn’t get worse, setuptools also encourages package creators to set up drive-by installations of setuptools, so that unsuspecting users end up with it installed whether they wanted it or not. Me, I’m a fan of Python being Python, not some bizarre parasitic thing that tries to force itself on you and make you use its own APIs instead of Python’s. So I generally stay as far away from setuptools as I possibly can. The alternative Of course, this brings up a question: if setuptools and easy_install are bad, what can we use instead? For packaging, I still use (and in fact have always used) just plain old distutils. It’s simple (at least, it’s about as simple as a packaging system can be while still being useful), it’s standard, it works, I use it for all my personal applications. For actually installing and managing packages, I use pip. It’s by Ian Bicking, who’s smarter than any ten people have a right to be, and it gets an awful lot of things right. One of those things, and the one which, by itself, would make pip worthwhile, is actually noted as a shortcoming in its documentation: It cannot install from eggs. It only installs from source. Eggs, of course, are setuptools’ zipped package format. I’m really really OK with not installing from eggs, Ian. If I type: easy_install django-registration then even though it’s a standard source-code package built with distutils I still end up with a broken zipped package that can’t find its own management command. But if I type: pip install django-registration then I get something that actually works the way Python is supposed to work. I’m really OK with that. Another thing pip gets right is that it doesn’t try to graft a dependency-specification system onto the setup.py script, and so doesn’t create a dependency from your setup.py script to pip. Instead, it lets you write a short file listing your requirements and point pip at that file; it’ll handle the rest. And as if that wasn’t enough, pip — thanks to its requirement-file mechanism and a couple other features — enables the holy grail of deployment: the repeatable, scriptable install. Seriously, pip is good stuff. So my recommendation is that you run, not walk, over to pip, then forget about setuptools and easy_install. While you’re at it, check out virtualenv (also by Ian), which makes all sorts of previously-huge deployment and management headaches go away, and about which I plan to write much more in the future.
http://www.b-list.org/weblog/2008/dec/14/packaging/
CC-MAIN-2017-51
refinedweb
1,720
54.76
Haskell/YAHT/Io From Wikibooks, the open-content textbooks collection As we mentioned earlier, it is difficult to think of a good, clean way to integrate operations like input/output into a pure functional language., there should be no problem with replacing it with a function f _ = (), due to referential transparency. But clearly this does not have the desired effect. [edit] The RealWorld Solution unwieldy. In this style (assuming an initial RealWorld state were an argument to main), our "Name.hs" program from the section on Interactivity. Suffice it to say that doing IO operations in a pure lazy functional language is not trivial. [edit] Actions The breakthrough for solving this problem came when Phil. In fact, we have already seen one way to do this using the do notation (how to really do this will be revealed in the chapter Monads). Let's consider the original name program: main = do hSetBuffering stdin LineBuffering four actions: setting buffering, a putStrLn, a getLine and another putStrLn. The putStrLn action has type String ->." Normal Haskell constructions like if/then/else and case/of can be used within the do notation, but you need to be somewhat careful. For instance, in our "guess the number" program, we have: do ... IO Int) and makes it into an action that returns the given value (for instance, the value of type unless guess == num if (read guess < num) then do print "Too low!"; doGuessing num else do print "Too high!"; doGuessing num will not behave as you expect.. [edit] The IO Library The IO Library (available by importing the IO module) contains many definitions, the most common of which are listed below: () bracket :: IO a -> (a -> IO b) -> (a -> IO c) -> IO c Most of these functions are self-explanatory. The openFile and hClose functions open and close a file, respectively, using the IOMode argument as an entire file without having to open it first. The bracket function is used to perform actions safely. is the action to perform at the beginning. The second is the action to perform at the end, regardless of whether there's an error or not. The third is the action to perform in the middle, which might result in an error. For instance, our character-writing function might look like: writeChar :: FilePath -> Char -> IO () writeChar fp c = bracket (openFile fp ReadMode). [edit] A File Reading Program We can write a simple program that allows a user to read and write files. The interface is admittedly poor, and it does not catch all errors (try reading a non-existant file). Nevertheless, it should give a fairly complete example of how to use IO. Enter the following code into "FileRead.hs," and compile/run: module Main where import IO main = do hSetBuffering stdin LineBuffering doLoop doLoop = do putStrLn "Enter a command rFN wFN or q to quit:" command <- getLine case command of 'q':_ -> return () 'r':filename -> do putStrLn ("Reading " ++ filename) doRead filename doLoop 'w':filename -> do putStrLn ("Writing " ++ filename) doWrite filename doLoop _ -> doLoop doRead filename = bracket (openFile filename ReadMode) hClose (\h -> do contents <- hGetContents h putStrLn "The first 100 chars:" putStrLn (take 100 contents)) doWrite filename = do putStrLn "Enter text to go into the file:" contents <- getLine bracket (openFile filename WriteMode) hClose (\h -> hPutStrLn h contents) What does this program do? First, it issues a short string of instructions and reads a command. It then performs a case switch on the command and checks first to see if the first character is a `q.' If it is, it returns a value of unit type. n and a list and returns the first n elements of the list). The doWrite function asks for some text, reads it from the keyboard, and then writes it to the file specified. The only major problem with this program is that it will die if you try to read a file that doesn't already exists or if you specify some bad filename like *\^\#_@. the section on Exceptions.
http://en.wikibooks.org/wiki/Haskell/YAHT/Io
crawl-002
refinedweb
665
69.62
So after scanning a file, I am to format it and output it following these guidelines: -Lines are trimmed to remove any leading or trailing white space. Note that String has a method trim() that removes leading/trailing white space. A trimmed line is displayed with X leading spaces The initial value for X is 0 If a ‘{‘ appears on a line then X is increased by 4 and this affects the display of subsequent lines If a ‘}’ appears on a line then X is decremented by 4 and this affects the display beginning with the current line. -The effect of this processing is similar to that done by the Auto-layout option in BlueJ. My code so far is as follows: public class PrettyPrint { public static void main (String[] args) throws IOException { String spaces = ""; Scanner kb = new Scanner(System.in); System.out.println("Please enter the name of the file you wish to format"); String fileName = kb.nextLine(); File file = new File(fileName); Scanner inputFile = new Scanner(file); while (inputFile.hasNextLine()) { String line = inputFile.nextLine(); line.trim(); if (line.contains("{")) { spaces += " "; line = spaces + line; } System.out.println(line); } } } The program is far from finished as you can see because I've run into a problem already. I was able to successfully add 4 spaces onto the line that contained the "{", however, it only affected that line alone. The rest of the lines after it had no added spaces. After looking back on the program I can see why. The issue is that I have no idea how to fix this. Any ideas?
https://www.daniweb.com/programming/software-development/threads/486467/how-do-i-add-or-subtract-leading-white-spaces-while-scanning-a-txt-file
CC-MAIN-2017-34
refinedweb
263
71.85
RENAME(2) System Calls Manual RENAME(2) rename, renameat - change the name of a file #include <stdio.h> int rename(const char *from, const char *to); #include <fcntl.h> #include <stdio.h> int renameat(int fromfd, const char *from, int tofd, const char *to);() function is equivalent to rename() except that where from or to specifies a relative path, the directory entry names used are resolved. Upon successful completion, the value 0 is returned; otherwise the value -1 is returned and the global variable errno is set to indicate the error. rename() and renameat() will fail and neither of the argument files will be affected if: [ENAMETOOLONG] A component of a pathname exceeded NAME_MAX characters, or an entire pathname (including the terminating NUL) exceeded PATH_MAX bytes. [ENOENT] A component of the from path does not exist, or a path prefix of to does not exist. [EACCES] A component of either path prefix denies search permission. [EACCES] The requested change requires writing in a directory that denies write permission. [EACCES] The from argument is a directory] from or to points outside the process's allocated address, references. mv(1), open(2), symlink(7) The rename() and renameat() functions conform to IEEE Std 1003.1-2008 (``POSIX.1''). The renameat() function appeared in OpenBSD 5.0.. OpenBSD 5.9 September 10, 2015 OpenBSD 5.9
http://resin.csoft.net/cgi-bin/man.cgi?section=2&topic=rename
CC-MAIN-2016-44
refinedweb
223
55.44
This is the next in a series of posts on using ImageSharp to resize images in an ASP.NET Core application. I showed how you could define an MVC action that takes a path to a file stored in the wwwroot folder, resizes it, and serves the resized file. The biggest problem with this is that resizing an image is relatively expensive, taking multiple seconds to process large images. In the previous post I showed how you could use the IDistributedCache interface to cache the resized image, and use that for subsequent requests. This works pretty well, and avoids the need to process the image multiple times, but in the implementation I showed, there were a couple of drawbacks. The main issue was the lack of caching headers and features at the HTTP level - whenever the image is requested, the MVC action will return the whole data to the browser, even though nothing has changed. In the following image, you can see that every request returns a 200 response and the full image data. The subsequent requests are all much faster than the original because we're using data cached in the IDistributedCache, but the browser is not caching our resized image. In this post I show a different approach to caching the data - instead of storing the file in an IDistributedCache, we instead write the file to disk in the wwwroot folder. We then use StaticFileMiddleware to serve the file directly, without ever hitting the MVC middleware after the initial request. This lets us take advantage of the built in caching headers and etag behaviour that comes with the StaticFileMiddleware. Note: James Jackson-South has been working hard on some extensible ImageSharp middleware to provide the functionality in these blog posts. He's even written a post blog introducing it, so check it out! The system design The approach I'm using in this post is shown in the following figure: With this design a request for resizing an image, e.g. to /resized/200/120/original.jpg, would go through a number of steps: - A request arrives for /resized/200/120/original.jpg - The StaticFileMiddlewarelooks for the original.jpgfile in the folder wwwroot/resized/200/120/, but it doesn't exist, so the request passes on to the MvcMiddleware - The MvcMiddlewareinvokes the ResizeImagemiddleware, and saves the resized file in the folder wwwroot/resized/200/120/. - On the next request, the StaticFileMiddlewarefinds the resized image in the wwwrootfolder, and serves it as usual, short-circuiting the middleware pipeline before the MvcMiddlewarecan run. - All subsequent requests for the resized file are served by the StaticFileMiddleware. Writing a resized file to the wwwroot folder After we first resize an image using the MvcMiddleware, we need to store the resized image in the wwwroot folder. In ASP.NET Core there is an abstraction called IFileProvider which can be used to obtain information about files. The IHostingEnvironment includes two such IFileProvders: ContentRootFileProvider- an `IFileProvider for the Content Root, where your application files are stored, usually the project root or publish folder. WebRootFileProvider- an IFileProviderfor the wwwrootfolder We can use the WebRootFileProvider to open a stream to our destination file, which we will write the resized image to. The outline of the method is as follows, with preconditions, and the DOS protection code removed for brevity:` public class HomeController : Controller { private readonly IFileProvider _fileProvider; public HomeController(IHostingEnvironment env) { _fileProvider = env.WebRootFileProvider; } [Route("/resized/{width}/{height}/{*url}")] public IActionResult ResizeImage(string url, int width, int height) { // Preconditions and sanitsation // Check the original image exists var originalPath = PathString.FromUriComponent("/" + url); var fileInfo = _fileProvider.GetFileInfo(originalPath); if (!fileInfo.Exists) { return NotFound(); } // Replace the extension on the file (we only resize to jpg currently) var resizedPath = ReplaceExtension($"/resized/{width}/{height}/{url}"); // Use the IFileProvider to get an IFileInfo var resizedInfo = _fileProvider.GetFileInfo(resizedPath); // Create the destination folder tree if it doesn't already exist Directory.CreateDirectory(Path.GetDirectoryName(resizedInfo.PhysicalPath)); // resize the image and save it to the output stream using (var outputStream = new FileStream(resizedInfo.PhysicalPath, FileMode.CreateNew)) using (var inputStream = fileInfo.CreateReadStream()) using (var image = Image.Load(inputStream)) { image .Resize(width, height) .SaveAsJpeg(outputStream); } return PhysicalFile(resizedInfo.PhysicalPath, "image/jpg"); } private static string ReplaceExtension(string wwwRelativePath) { return Path.Combine( Path.GetDirectoryName(wwwRelativePath), Path.GetFileNameWithoutExtension(wwwRelativePath)) + ".jpg"; } } The overall design of this method is pretty simple. - Check the original file exists. - Create the destination file path. We're replacing the file extension with jpgat the moment because we are always resizing to a jpeg. - Obtain an IFileInfofor the destination file. This is relative to the wwwrootfolder as we are using the WebRootFileProvideron IHostingEnvironment. - Open a file stream for the destination file. - Open the original image, resize it, and save it to the output file stream. With this method, we have everything we need to cache files in the wwwroot folder. Even better, nothing else needs to change in our Startup file, or anywhere else in our program. Trying it out Time to take it for a spin! If we make a number of requests for the same page again, and compare it to the first image in this post, you can see that we still have the fast response times for requests after the first, as we only resize the image once. However, you can also see the some of the requests now return a 304 response, and just 208 bytes of data. The browser uses its standard HTTP caching mechanisms on the client side, rather than caching only on the server. This is made possible by the etag and Last-Modified headers sent automatically by the StaticFileMiddleware. Note, we are not actually sending any caching headers by default - I wrote a post on how to do this here, which gives you control over how much caching browsers should do. It might seem a little odd that there are three 200 requests before we start getting 304s. This is because: - The first request is handled by the ResizeImageMVC method, but we are not adding any cache-related headers like ETagetc - we are just serving the file using the PhysicalFileResult. - The second request is handled by the StaticFileMiddleware. It returns the file from disk, including an ETagand a Last-Modifiedheader. - The third request is made with additional headers - If-Modified-Sinceand If-None-Matchheaders. This returns the image data with a new ETag. - Subsequent requests send the new ETagin the If-None-Matchheader, and the server responds with 304s. I'm not entirely sure why we need three requests for the whole data here - it seems like two would suffice, given that the third request is made with the If-Modified-Since and If-None-Match headers. Why would the ETag need to change between requests two and three? I presume this is just standard behaviour though, and something I need to look at in more detail when I have time! Summary This post takes an alternative approach to caching compared to my last post on ImageSharp. Instead of caching the resized images in an IDistributedCache, we save them directly to the wwwroot folder. That way we can use all of the built in file response capabilities of the StaticFileMiddleware, without having to write it ourselves. Having said that, James Jackson-South has written some middleware to take a similar approach, which handles all the caching headers for you. If this series has been of interest, I encourage you to check it out!
https://andrewlock.net/using-imagesharp-to-resize-images-in-asp-net-core-part-4-saving-to-disk/
CC-MAIN-2020-50
refinedweb
1,235
53.41
create a link to an existing file #include <unistd.h> int link( const char *existing, const char *new ); The link() function creates a new directory entry named by new to refer to (that is, to be a link to) an existing file named by existing. The function atomically creates a new link for the existing file, and increments the link count of the file by one.. /* * The following program performs a rename * operation of argv[1] to argv[2]. * Please note that this example, unlike the * library function rename(), ONLY works if * argv[2] doesn't already exist. */ #include <stdio.h> #include <unistd.h> #include <stdlib.h> void main( int argc, char **argv ) { /* create a link of argv[1] to argv[2]. */ if( link( argv[1], argv[2] ) == -1 ) { perror( "link" ); exit( EXIT_FAILURE ); } if( unlink( argv[1] ) == -1 ) { perror( argv[1] ); exit( EXIT_FAILURE ); } exit( EXIT_SUCCESS ); } POSIX 1003.1 errno, rename(), symlink(), unlink()
https://users.pja.edu.pl/~jms/qnx/help/watcom/clibref/qnx/link.html
CC-MAIN-2022-33
refinedweb
152
66.03
From: David Abrahams (dave_at_[hidden]) Date: 2004-02-20 17:33:19 Brian McNamara <lorgon_at_[hidden]> writes: > On Fri, Feb 20, 2004 at 04:03:57PM -0500, David Abrahams wrote: >> Simple: specializations that follow the point of instantiation aren't >> considered. This program exits with an error: >> >> template <class T> >> int f(T) { return 1; } >> >> int main() { return ::f(0); } >> >> template <> int f(int) { return 0; } > > Aha; this is part of what I was missing. > This clears most of it up for me. > > One last question, and then I think I'm done. In my example: > > namespace lib { > template <class T> void f(T) { /* print "bar" */ } > template <class T> void g(T x) { lib::f(x); } // (1) > } > > namespace user { > struct MyClass {}; > } > namespace lib { > template <> void f( user::MyClass ) { /* print "foo" */ } > } > > int main() { > user::MyClass m; > lib::g(m); // (2) > } > > What is printed? foo > (I think this question comes down to whether or not (1) or (2) is the > "point of instantiation" of f(), yes?) Yeah; if you change g so it callse lib::f(0) it prints bar. -- Dave Abrahams Boost Consulting Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2004/02/61496.php
CC-MAIN-2021-25
refinedweb
204
72.26
Yes Rest LookUp is possible in PI Recently we had a requirement where we need to perform a look up for a rest service to obtain the token and then include the same in the future requests. I did not find any document for REST look up in PI , so thought to create a new one so that it can help others. For this document, I have used the below rest api This service takes zip code as json input and returns status and result in json format. So like other look up first thing we need to do is to create a new communication channel with type REST in ID REST URL Tab: Notice we have used a variable parameter {req_zipcode} in the URL.The value of this parameter will be fetched from the input xml message which we will be passing during the look up. In the xpath expression we have used the value as //zipcode. So in the input xml there has to be a field with the name zipcode. Below is our input xml Rest Operation tab: Data Format tab: Our input is xml but the service expects json so we have to choose the option ‘Convert XML Payload To JSON’. Similarly the service will return the output as JSON. So we have to choose the option ‘Convert to XML’. Also we need to select the ‘Add Wrapper Element’. The final output in xml format will look like below Communication channel is ready now. Next we need to create source and target structure in ESR source: target: Java Mapping Code: package com.test; import java.io.ByteArrayInputStream; import java.io.InputStream; import java.io.OutputStream;; import org.w3c.dom.NodeList; import com.sap.aii.mapping.api.AbstractTransformation; import com.sap.aii.mapping.api.StreamTransformationException; import com.sap.aii.mapping.api.TransformationInput; import com.sap.aii.mapping.api.TransformationOutput; import com.sap.aii.mapping.lookup.Channel; import com.sap.aii.mapping.lookup.LookupService; import com.sap.aii.mapping.lookup.Payload; import com.sap.aii.mapping.lookup.SystemAccessor; public class RestLookInPI extends AbstractTransformation { public void transform(TransformationInput arg0, TransformationOutput arg1) throws StreamTransformationException { this.execute(arg0.getInputPayload().getInputStream(), arg1 .getOutputPayload().getOutputStream()); }// end of transform public void execute(InputStream in, OutputStream out) throws StreamTransformationException { try { String status = ""; // generate the input xml for rest look up String loginxml = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" + "<zipcode>10001</zipcode>"; //perform the rest look up Channel channel = LookupService.getChannel("BC_468470_Receiver","CC_Rest_Rcv"); SystemAccessor accessor = null; accessor = LookupService.getSystemAccessor(channel); InputStream inputStream = new ByteArrayInputStream(loginxml.getBytes()); Payload); NodeList stats = document.getElementsByTagName("status"); Node node = stats.item(0); if (node != null) { node = node.getFirstChild(); if (node != null) { status = node.getNodeValue(); } } Document targetDoc = builder.newDocument(); Element targetRoot = (Element) targetDoc.createElement("ns0:MT_Output"); targetRoot.setAttribute("xmlns:ns0",""); Element stat = (Element) targetDoc.createElement("status"); stat.setTextContent(status); targetRoot.appendChild(stat); targetDoc.appendChild(targetRoot); DOMSource domSource = new DOMSource(targetDoc); StreamResult result = new StreamResult(out); TransformerFactory tf = TransformerFactory.newInstance(); Transformer transformer = tf.newTransformer(); transformer.transform(domSource, result); } catch (Exception e) { e.printStackTrace(); } } // end of execute } Test Result: Hi Indrajit, Thank you for sharing. I think we can also use the SystemAccessor for other lookups e.g CSV or SOAP 🙂 Regards, Mark Thank You Mark.. Hi Indrajit, Can I do rest lookup inside UDF similar to soap lookup? For some reason the lookup in Java mapping is not working for me. Thanks, Hi Amol! Sure, you can do it the same way. Take a look at this blog: Regards, Evgeniy. Thanks Evgeniy - Yes, It worked. Thanks, Amol Hi Kolmakov, I use ASMA instead of xpath substitution,I set ASMA in UDF before call lookup channel,but when using it in channel,the channel didn't get the ASMA values,but it did showup in the payload of dynamic configuration,how come? Thanks Hi Yunze! I'm not sure that SystemAccessor class used in such type of lookup pays any attention to attributes placed in the main message's DC header. Regards, Evgeniy. Great Indrajit!! Thank you for sharing!, Rest communications are every day more usual, and this was one of the last lookup frontiers!. Regards. Thank you Iñaki.. Hi Indro, Thanks for sharing knowledge and this is really worthy blog. Regards, Sami. Thanks Sami.. Thank you Indrajit, for sharing this information. It was helpful. Good one. Thanks for sharing this info.. Br, Praveen Gujjeti is it possible to read the header information in the rest lookup response? i need to get the "etag" value while performing rest lookup. This etag value is not coming the body. it is coming under response header. Muni, Were you able to read the tag from the header? Is this possible via dynamic config? Br, Manoj Hi Manoj, I was not able to read the header paraters in the rest look up. So I created two interfaces, one is for lookup and another one is for main interface, instead of handling in one interface. i called first interface(proxy to rest sync) and read the header details in response mapping using dynamic configuration and send it to ECC. After lookup service is successful, main interface will be triggered with first interface header values. I went with this approach as in house consultant who is going to support is not good at java coding. another approach will be calling rest service from java code without using any adapter. this you can put it in udf, java map or adapter module. Regards, Muni My approach was to have lookup to get the X-csrf-token and use it in subsequent calls. but unfortunately, the session used in lookup and the subsequent calls are different hence that token becomes invalid. Hi Former Member, they have a more complete scenario ?, I am trying to replicate with a restlookup but I can not get a correct result, my doubt is in the mapping of a synchronous scenario, thanks. Martin, Did you made it ? I also copy the same here and it´s not working. Former Member ? Hi, My lookup code is not triggering the REST communication channel at all . Any clues please ! I have used an UDF to do the REST look up and also configured REST Receiver channel with a dummy ICO. Does anyone provide information on how to get CSRF token set it in HTTP header for Odata API? so, i can copy and paste your code into de udf option in graphical mapping and modifying the CC or BS, and then it works?. or i have to do it eclipse and import as a jar file? please let me know how to do it
https://blogs.sap.com/2015/09/11/yes-rest-lookup-is-possible-in-pi/
CC-MAIN-2021-49
refinedweb
1,095
60.01
Last issue I gave you a 'hit-the-ground-running' introduction to custom Web control development and showed you how to build a renered control and an inherited control. In this issue you'll complete your inherited control by adding styling, sizing capabilities, as well as instruct it how to raise events. Afterward you will jump into building the last control of the series, the EmailContact control, bringing together the previous two controls with some business functionality into a powrful composite Web control.. Control Sizing When you drop any Web control on a form, you’re probably used to sizing it by dragging one of its sizing points in whatever direction you want. The problem here is that the Web control you’re building consists of three HTML elements, and you want to be able to size each one individually to give the FormField control maximum usefulness. The built-in Width and Height property that the control has comes from the Control class that you are ultimately inheriting from, and it corresponds to the control as a whole. If you try to resize the control using these properties as it currently stands, nothing happens. This is because you have not added sizing attributes to the contained elements, so they stay exactly as-is. I’m going to do something later with these existing properties, but for now I want you to add two properties called CaptionWidth and ButtonWidth. For the purposes of this article, I’m only going to show you how to deal with widths, but the downloadable code contains code to handle heights as well. I left out a width property for the textbox for a reason, as I’ll explain later. It’s a good idea to use a Case statement (switch in C#) as opposed to an If statement. This sets you up for any future enhancement to your control. A property that handles height or width for an element is of type Unit. Here’s the code for the CaptionWidth property. In VB.NET: Public Property CaptionWidth() As Unit Get If CType( _ ViewState("CaptionWidth"), Object) _ Is Nothing Then Return Unit.Pixel(130) End If Return CType( _ ViewState("CaptionWidth"), Unit) End Get Set(ByVal Value As Unit) ViewState("CaptionWidth") = Value End Set End Property In C#: public Unit CaptionWidth { get { if (((object) ViewState["CaptionWidth"]) == null) return Unit.Pixel(130); return ((Unit) ViewState["CaptionWidth"]); } set { ViewState["CaptionWidth"] = value; } } As you can see, the same ViewState-oriented property technique is used here as described earlier in the article. The Unit object is serializable so it can be fully persisted in the ViewState variable. Now that you’ve added the new properties, you need to do something with them. Remember that earlier I taught you that you can use the AddAttribute method of the HtmlTextWriter object to add tag attributes to the upcoming RenderBeginTag call. That’s exactly how you’re going to set the Width attribute to the ‘span’ tag and ‘input’ tag for the button, with one minor difference I’ll explain in a minute. output.AddStyleAttribute( HtmlTextWriterStyle.Width, this.CaptionWidth.ToString()); And of course, you would have a similar line for the ButtonWidth property. Now you can set these properties individually to adjust the width of the caption and the button. As you realized, there’s one element of the Web control whose width I have not handled this way. In fact, there’s a reason I haven’t yet showed you how to handle this element. To make the control more programmer-friendly, I’m letting the textbox take the remaining width of the entire control; that is the total width of the Web control minus the width of the caption and the button (and don’t forget those two spaces you inserted between elements-calculated to be 10 pixels). The value that you’re going to use to set the width of the textbox will consist of the total width of the control (the Width property) minus the value of CaptionWidth, minus the value of ButtonWidth, and minus 10. The subtraction of the ButtonWidth value will depend on the setting of the ButtonVisible property and the number 10 accounts for the extra spaces rendered between the elements. I derived the number 10 by trial and error to see what looked best in the designer. You’ll use the calculated value to set the Width attribute of the textbox’s “input” tag. In VB.NET: Dim i_Width As Integer = _ CType(Me.Width.Value, Integer) - (CType(Me.CaptionWidth.Value, Integer) - 10) If Me.ButtonVisible Then i_Width -= CType( _ Me.ButtonWidth.Value, Integer) End If If i_Width < 20 Then i_Width = 20 output.AddStyleAttribute( _ HtmlTextWriterStyle.Width, i_Width.ToString()) output.RenderBeginTag(HtmlTextWriterTag.Input) In C#: int i_Width = ((int)(this.Width.Value)) - ((int)(this.CaptionWidth.Value) - 10); if(this.ButtonVisible) i_Width -= ((int)(this.ButtonWidth.Value)); if (i_Width < 20) i_Width = 20; output.AddStyleAttribute( HtmlTextWriterStyle.Width, i_Width.ToString()); output.RenderBeginTag(HtmlTextWriterTag.Input); Notice that you’re also ensuring a minimum total width of 20 for the control. In the downloadable code for the finished FormField control (Figure 1), I also handle the Width property to account for percentages as well as pixel entry as its value. This will become necessary when I get to the composite control later, but for now I’m not going to worry about it. Another thing to note is that you’re using a different method from the AddAttribute you used before. The AddStyleAttribute takes care of adding the property, not as an attribute to the tag, but as an attribute within the HTML style attribute. Later when I address styling, I’ll touch on this some more. If you try sizing this control on a Web Form now, you will see that the textbox stretches to the size of the entire control while leaving the caption and button the same width. Those elements will only be sized by setting their properties individually. Now that you can size the control properly, you’re going to add some actual functionality for handling postbacks and handling events. Events What good is having a button on a form if it doesn’t do anything? Since I’ve taken time to build a custom Web control with a button as part of its elements, I now want to give that button some functionality. In a rendered control, you accomplish this by implementing a couple of interfaces. The button does not need to do any data checking, instead, it simply needs to trigger a postback and raise an event in the Web Form’s code-behind class. In order to do this, I start by extending the control’s class to implement the IPostBackEventHandler interface. This interface defines only one method called RaisePostBackEvent which receives a string argument. This method will get fired when a postback is triggered by one of the elements in the control. In order to trigger a postback from the Web control, you need to do a couple of things. First, you must tell the button to trigger a page postback when a user clicks it. The HTML tag that you used to render the button is an “input” tag with a “type” attribute of “button.” Inherently, this HTML tag can only raise a client event in its “onclick” attribute; no problem, that’s exactly what you’re going to do. Once again you’re going to add another attribute to one of the HTML tags. This time it will be the “input” tag that gets rendered for the button element. The attribute you need to add to this tag is the “onclick” attribute whose value should contain Jscript code to execute when the user clicks the button. When the ASP.NET parser processes an ASPX Web Form to render to a browser, it also builds a Jscript function that handles the postback to the server. This function is normally called by any control that needs to trigger a postback. You don’t really need to know the name of this Jscript function because .NET provides a method call that will generate it (though if you view the source of any rendered ASPX page, you will see this function which is called “__doPostBack”). This is good in case the function name changes in future versions of .NET. The method that generates the Jscript call is called GetPostBackEventReference and it sits off the Page object, which incidentally is accessible from the control’s class. The two arguments you need to send to this method are the calling class (the control’s class) and an identifier that identifies the button element. This identifier is what gets sent into the RaisePostBackEvent method that was defined by the IPostBackEventHandler interface. As before, you add the new attribute to the button’s “input” tag before the “input” tag is rendered. In VB.NET: output.AddAttribute( _ HtmlTextWriterAttribute. _ Onclick, Page.GetPostBackEventReference( _ Me, "button")) output.RenderBeginTag(HtmlTextWriterTag.Input) In C#: output.AddAttribute( HtmlTextWriterAttribute.Onclick, Page.GetPostBackEventReference( this, "button")) ; output.RenderBeginTag(HtmlTextWriterTag.Input); As you can see, the word “button” is chosen for the identifier of the button element. This will get passed into the RaisePostBackEvent method when the page is postbacked. Your control is now ready to handle postbacks. Clicking the button will now call the RaisePostBackEvent method, sending the word “button” into its argument. The only problem is that you haven’t told this method to do anything yet, so let’s wire in an event to raise to the page. You can create custom composite controls that contain other custom composite controls, thus creating a control tree. Remember however, that the deeper you get the more performance-heavy your control will get. When you click on a regular button control on a Web Form, you trigger a Click event on the page’s code-behind class. You’re going to create a ButtonClick event that will get raised on the page’s code-behind class when the button on the FormField control gets pressed. Let’s start by declaring a ButtonClick event using the standard EventHandler delegate. In VB.NET: Public Event ButtonClick As EventHandler In C#: public event EventHandler ButtonClick; This is the event that will be raised in the RaisePostBackEvent method. To make sure that this event gets raised only when the button is pressed, you’ll need a condition-check against the value that was used when the button element was rendered. In VB.NET: Public Sub RaisePostBackEvent( _ ByVal eventArgument As String) Implements _ IPostBackEventHandler.RaisePostBackEvent Select Case eventArgument.ToLower() Case "button" RaiseEvent ButtonClick( _ Me, New EventArgs) End Select End Sub In C#: public void RaisePostBackEvent( string eventArgument) { switch (eventArgument.ToLower()) { case "button" : if(this.ButtonClick != null) this.ButtonClick( this, new EventArgs()); break; } } It’s a good idea to use a Case statement (switch in C#) as opposed to an If statement. This sets you up for any future enhancement to your control. Now that you have an event wired up to the button element, let’s make it the default event for the control. This will allow programmers that use your control to double-click on it while in design mode on a Web Form, and have the code-behind come up with the ButtonClick all coded up and ready to go. To do this you need to decorate the class declaration with the DefaultEvent attribute and send into its constructor, the string “ButtonClick” (code not shown). You’re not done with events yet. You need to create a TextChanged event to capture changes in the textbox, much like the one that comes with the regular Textbox Web control. This event is a bit different because upon the page postback, you’re going to need to check if the value in the textbox has changed before you raise it. You should know that you have to declare the event so go ahead and do that at the top of the class. In VB.NET: Public Event TextChanged As EventHandler In C#: public event EventHandler TextChanged; As you can see, this code will use the default EventHandler delegate as well. For neither of these two events do you need to create a new delegate and event argument object, so you’re fine with using the default one. In the case where you needed to send information to the event, you would use a custom event argument object and delegate as you would in any case where you’re using events. Now that the event is declared, you need to raise it somewhere. You need to implement an interface that will allow you to check posted values for elements in your control; it’s called IPostBackDataHandler and it implements two methods: LoadPostData and RaisePostDataChangedEvent. The LoadPostData method gets called during a postback and receives data from the elements on the Web Form. This data can be checked against properties in your control to check for changes or anything else that may be required. This is the essence behind the ability to check for text changes in the textbox. The code in the LoadPostData event will look at the data that was posted from the textbox and compare it against the value of the Text property, which may be different from that of the actual textbox on the form. In VB.NET: Public Function LoadPostData( _ ByVal postDataKey As String, _ ByVal postCollection As NameValueCollection) As Boolean _ Implements IPostBackDataHandler.LoadPostData Dim s_CurrentValue As String = _ Me.Text Dim s_PostedValue As String = _ postCollection(Me.UniqueID & _ ":Field") Dim s_Button As String = _ postCollection(Me.UniqueID & _ ":Button") Dim b_ButtonClicked As Boolean = _ (Not s_Button Is Nothing AndAlso _ s_Button.Length <> 0) If b_ButtonClicked Then Page.RegisterRequiresRaiseEvent(Me) End If If (Not s_CurrentValue.Equals( _ s_PostedValue)) Then Me.Text = s_PostedValue Return True End If Return False End Function In C#: public bool LoadPostData( string postDataKey, NameValueCollection postCollection) { string s_CurrentValue = this.Text; string s_PostedValue = postCollection[this.UniqueID]; string s_Button = postCollection[this.UniqueID + _ ":Button"]; bool b_ButtonClicked = (s_Button != null) && (s_Button.Length != 0); if(b_ButtonClicked) Page.RegisterRequiresRaiseEvent(this); if(!s_CurrentValue.Equals(s_PostedValue)) { this.Text = s_PostedValue; return true; } return false; } In the interest of space, I’ve included the finished version of the LoadPostData method above, but I’ll start by concentrating on the code relevant to the TextChanged event first. The postCollection argument that gets passed into this method contains all the posted information from the Web Form. In the second line of code above, I’m setting a string variable to the item in the collection identified by a certain identifier. Do you recognize that identifier? It should seem familiar to you. Back when you wrote the Render method for the Web control, you set the Name and ID attributes on the tags that you rendered. If you recall, when you created the “input” tag that would render the textbox, the ID attribute was set to the control’s UniqueID property. The s_PostedValue variable captures the text that the textbox contains at the moment of the postback. This is obtained from the NameValueCollection that the method received. Later below, that value is compared to the value of the Text property. If the two values are not the same, then the text in the textbox has changed so you return a value of true from the method. Returning a true value from this method will fire another method called RaisePostDataChangedEvent. It is from this method that you’ll raise the TextChangedEvent. The rest of the code in this method checks to see if you got here by an action caused by the textbox or the button. Since this method will get called if the button is pressed, you have to account for that and not just automatically call the “text changed” functionality. If you conclude that the button was the cause of the postback, you register that the control needs to call the RaisePostBackEvent and then continue through to the text change check. Below is the code for the RaisePostDataChangedEvent. In VB.NET: Public Sub RaisePostDataChangedEvent() Implements IPostBackDataHandler.RaisePostDataChangedEvent RaiseEvent TextChanged(Me, New EventArgs) End Sub In C#: public void RaisePostDataChangedEvent() { if(this.TextChanged != null) this.TextChanged( this,new EventArgs()); } As you can see, all you’re doing here is raising the TextChanged event. In order to properly integrate your control within the ASP.NET page lifecycle, you should raise the “data-change-oriented” events from this method instead of raising them directly from the LoadPostData method. The code above takes care of checking for changes in posted data and setting up the task of raising events. What you still have left to do is somehow get this code to be called when the text is changed in the textbox. This is normally done through the ontextchanged JavaScript event of a textbox, so you need to add another attribute to the “input” tag for the textbox. Since you want to give the choice of having the textbox cause a postback or not, much like the standard ASP.NET textbox, you need to add a Boolean property called AutoPostBackEnabled. Once you’ve done that, you can use that property to set a condition around the attribute in addition to the textbox in the Render method. In VB.NET: If Me.AutoPostBackEnabled Then output.AddAttribute( _ HtmlTextWriterAttribute.Onchange, _ Page.GetPostBackEventReference( _ Me, "field")) End If output.RenderBeginTag(HtmlTextWriterTag.Input) In C#: if(this.AutoPostbackEnabled) { output.AddAttribute( HtmlTextWriterAttribute.Onchange, Page.GetPostBackEventReference( this, "field")) ; } output.RenderBeginTag(HtmlTextWriterTag.Input); You also need to inform the page that uses your control that you need to perform postbacks for data checking. To do this, the page uses its RegisterRequiresPostBack method where it sends the class for the control. Fortunately, since you have access to the Page object from the Web control, you can perform the call there, thus encapsulating everything you need into the Web control. This eliminates the need to remember to make any calls about the control from every page that uses it. The call is made from your control’s OnPreRender event, which you must override. In VB.NET: Protected Overrides Sub OnPreRender(ByVal e As EventArgs) MyBase.OnPreRender(e) If Not Page Is Nothing Then Page.RegisterRequiresPostBack(Me) End If End Sub In C#: protected override void OnPreRender(EventArgs e) { base.OnPreRender(e); if(Page != null) { Page.RegisterRequiresPostBack(this); } } Now you’re ready to go. Your control can handle a server event upon clicking the button as well as a change of text in the textbox. The AutoPostBackEnabled property allows the textbox to trigger the postback, though this is not the requirement for the TextChanged event to be fired. Just like the standard ASP.NET textbox control, the TextChanged event will be hit upon the next postback, even if it was not the textbox itself that triggered it. With a little creativity and possibly an extra property, you can code to customize this functionality to your liking. The last thing you have to do with the Web control to give it maximum versatility is styling. Styling Styling in rendered Web controls can be a bit tricky but if handled correctively, is key to their versatility and reusability. The more styling you can provide for a Web control, the more places you can reuse it and make it look different. The first thing you need to do is set up the styling properties that you’ll expose in your Web control. The FormField control is going to have three styling properties: CaptionStyle, FieldStyle, and ButtonStyle; but for the purposes of this article, I’ll only walk through CaptionStyle in detail. The CaptionStyle property, like the other style properties, will be of type Style. .NET uses other styling types but they all ultimately derive from Style. It is the Style type that contains all of those great properties that you’ve worked with in the past: BackColor, ForeColor, Font, and many others, including CssClass for attaching a style sheet. You can also create your own custom style types, but that’s beyond the scope of this article. Unlike the other properties, persisting in the ViewState will be done a bit different. You need to set up member variables to act as the internal store for your properties, much like how you set up properties of a business object. In VB.NET: Private _CaptionStyle As Style = New Style … other code … Public ReadOnly Property CaptionStyle() As Style Get Return Me._CaptionStyle End Get End Property In C#: private Style _CaptionStyle = new Style(); … other code … public Style CaptionStyle { get { return this._CaptionStyle; } } The first thing you’ll notice here is a read-only property. This is often the case with properties that are of an object type. To explain this better, let’s skip ahead and pretend that you have a finished control and are using it on a Web Form. Your control, called fldName, has a property called CaptionStyle. If you wanted to set the font weight to Bold on this property from the code-behind class, you would do something like this: fldName.CaptionStyle.Font.Bold = true; This way of accessing the CaptionStyle property requires only the property get accessor. The set accessor would only be hit if you set the property like this: Style myStyle = new Style(); myStyle.Font.Bold = true; fldName.CaptionStyle = myStyle; This can certainly be done but rarely is, nor can you rely on users of your controls to set styling properties in this manner; so for your purposes here you will go with a read-only property. Notice that you don’t have ViewState code in the property, so you’re probably wondering how to persist the value of _CaptionStyle. Because more often than not, you will be hitting this property using its get accessor only. Placing ViewState code in a set accessor, even if one existed, would not be hit. For that reason, you must persist the _CaptionStyle variable in another way, though ultimately still in ViewState. There are three methods you override to handle specific state management situations like this one. Before, you were reading from and setting a value in the ViewState variable. Now you run into a situation where you would be able to read from it but not set it. By overriding these methods, you can persist your member variable, _CaptionStyle, into ViewState along with any other values as well. The methods you need to override are called SaveViewState, LoadViewState, and TrackViewState, and are part of the IStateManager interface, which the control class automatically implements by way of its inheritance. Complete explanations as to the exact implementation of this interface is beyond the scope of this article, but you do need to know that this interface is used by any control or object that needs to persist some kind of state using ASP.NET’s ViewState mechanism. The Style object which defines the style properties also implements IStateManager. Let’s take a look at what the SaveViewState method will look like. In VB.NET: Protected Overrides Function SaveViewState() As Object Dim state() As Object = New Object(4) {} state(0) = MyBase.SaveViewState() state(1) = CType(Me._CaptionStyle, _ IStateManager).SaveViewState() state(2) = CType(Me._FieldStyle, _ IStateManager).SaveViewState() state(3) = CType(Me._ButtonStyle, _ IStateManager).SaveViewState() Return state End Function In C#: protected override object SaveViewState() { object[] state = new object[4]; state[0] = base.SaveViewState(); state[1] = ((IStateManager) this._CaptionStyle).SaveViewState(); state[2] = ((IStateManager) this._FieldStyle).SaveViewState(); state[3] = ((IStateManager) this._ButtonStyle).SaveViewState(); return state; } If you follow this code, you see that it is building an array and storing in it the value returned from the SaveViewState method of the style member variables. Note that because the IStateManager interface methods are implemented in a protected way, you have to cast the variable to the type of the interface before accessing any member. The first subscript of the array calls the base method. This is extremely important for persisting all base object data all the way up your inheritance tree. Essentially you end up with an array of information where the first subscript is the entire array of its base class. The final array is returned as an object type by the method. This and the other methods I am about to describe are called by whatever page uses your controls during its page lifecycle (see sidebar, Page Implements IStateManager). This takes care of saving your state when at the appropriate time; now how do you load it back after a postback? In VB.NET: Protected Overrides Sub LoadViewState( _ ByVal savedState As Object) Dim state() As Object = Nothing If Not savedState Is Nothing Then state = CType(savedState, Object()) MyBase.LoadViewState(state(0)) CType(Me._CaptionStyle, _ IStateManager).LoadViewState(state(1)) CType(Me._FieldStyle, _ IStateManager).LoadViewState(state(2)) CType(Me._ButtonStyle, _ IStateManager).LoadViewState(state(3)) End If End Sub In C#: protected override void LoadViewState( object savedState) { object[] state = null; if (savedState != null) { state = (object[])savedState; base.LoadViewState(state[0]); ((IStateManager) this._CaptionStyle). LoadViewState(state[1]); ((IStateManager) this._FieldStyle). LoadViewState(state[2]); ((IStateManager) this._ButtonStyle). LoadViewState(state[3]); } } The LoadViewState method performs the reverse of what the last method showed. Here you’re receiving an object which you’re then casting into an object array. Then you just extract each member and fill in the member variables. Note once again that the first subscript is reserved for the call to the base class. You can use these methods to persist any variable in your class. You would replace calls to the SaveViewState and LoadViewState within the methods with simple variables. For example: state[1] = myVar; // in SaveViewState myVar = state[2]; // in LoadViewState If you wanted to, you could have set up all of your properties like you normally do in business objects, which means you simply expose member variables in every case. Then you would have to persist all the member variables using these method overrides. There is one more method I want to briefly mention but without code examples. The TrackViewState method override calls the TrackViewState on any variables to be persisted that implement IStateManager, such as the style variables in this case. This ensures that all objects that should track state for themselves are doing so. There is one more piece of code you need to put into your style property statements before you return the internal member variable. In VB.NET: If Me.IsTrackingViewState Then CType(Me._CaptionStyle, _ IStateManager).TrackViewState() End If Return Me._CaptionStyle In C#: if (this.IsTrackingViewState) ((IStateManager)this._CaptionStyle). TrackViewState(); Return this._CaptionStyle This code ensures that the style objects track their state every time they are accessed, provided your control is tracking state as well. The default condition for your control is true, but just in case you turn state off, you want that to carry over into your state-tracked objects. That covers it for the style properties. Using this method, you can add as many style properties as you want. Now the trick is to use the values of the Style object to decorate the HTML you are rendering. The downloadable code shows you the complete code for this, but here I’m going to demonstrate using just three values of the Style object:, Font.Name, Font.Bold, and CssClass; and I’ll also only deal with the CaptionStyle property. Unlike the composite control I’ll show you how to develop later (yes, can you believe there’s more?), you need to turn the properties of the Style object into HTML properties that will get rendered into the tag. Since I stated that I’ll only demonstrate the CaptionStyle property, the tag that you need to add attributes for is the “span” tag that gets rendered as a control’s caption. In VB.NET: If Me._CaptionStyle.Font.Name <> "" Then output.AddStyleAttribute( _ HtmlTextWriterStyle.FontFamily, _ Me._CaptionStyle.Font.Name) End If output.AddAttribute( _ HtmlTextWriterAttribute.Class, _ Me._CaptionStyle.CssClass) If Me._CaptionStyle.Font.Bold Then output.AddStyleAttribute( _ HtmlTextWriterStyle.FontWeight, "bold") End If output.RenderBeginTag(HtmlTextWriterTag.Span) In C#: if(this._CaptionStyle.Font.Name != "") output.AddStyleAttribute( HtmlTextWriterStyle.FontFamily, this._CaptionStyle.Font.Name); output.AddAttribute( HtmlTextWriterAttribute.Class, this._CaptionStyle.CssClass); if(this._CaptionStyle.Font.Bold) output.AddStyleAttribute( HtmlTextWriterStyle.FontWeight, "bold"); output.RenderBeginTag(HtmlTextWriterTag.Span); As in the previous code examples where you have added to the rendering of your control, I’ve shown you the RenderBeginTag statement so you can see where the code is placed within the Render method. Note how the control uses the AddStyleAttribute method. As in the case of the “width” style, elements need to be rendered as part of the HTML “style” attribute, not as attributes of the “span” tag. The exception to this is the Style object’s CssClass property which maps to the Class attribute in HTML. By the way, have you noticed how cluttered some of these code snippets look? Unfortunately, as you’ve probably noticed, the methods and enums you’re dealing with are quite long. I have done my best to format it within the space allowed, but in the end, the best way to view it is in the downloadable code. I want to touch on something I mentioned when you began to develop this control. If you recall, you made the FormField class inherit from the WebControl class because I said it adds more styling ability than the Control class. The styling I’m referring to is accessible directly from the FormField class. I explained that the Style object contains properties such as Font, BackColor, CssClass, etc., and that these are accessible to any of the three styles your control contains. If you drop a FormField control on a page and examine its properties in the Visual Studio Property Browser, you’ll notice these “style” properties in the “Appearance” category of the Property Browser. These are the result of inheriting from the WebControl class and affect the way the FormField control looks in the context of a container of other elements. The CaptionStyle property affects the caption and the FieldStyle property affects the textbox, but both of these are still contained within the Web control’s class, the actual custom Web control, and this “container” can have styling as well. Just like you can set the border-style of the caption or the button through the CaptionStyle property, you can use the BorderStyle property built into the FormField control itself to alter the border of the control as a whole. This can give you even more visual versatility and more reusability for the control. Unfortunately, rendering all the properties of the Style object to each of three tags can be a bit of code. But remember, the point of writing a rendered control is to get the most performance during the rendering time, and sometimes performance gains require programming “lower-level,” fine-grain programming. (Are there any old assembler people out there?). In the finished control, I’ve refactored this into a method called during the rendering of each HTML element. I want to add one more thing to this control. By decorating the control’s class with the ValidationProperty attribute, you can assign an existing property to serve as that which gets checked by any Form validators the Web Form developers want to use. The constructor of the attribute takes the name of the property, which for these purposes will be the Text property. [ValidationProperty("Text")] Now the FormField control is fully compatible with the validation controls that ship with Visual Studio. This brings me back to something I mentioned in Part 1 of this article. The textbox element in this control received a value in its ID and Name attributes that was the same as that of the actual Web control. When you use a validator control on a Web Form, the JavaScript creates links to the validation code with an HTML element. This link is made using the ID of the Web control being validated. If all the internal elements of a Web control have a hierarchical naming scheme, the validation code will not be tied to anything on the rendered HTML page. It is for that reason that the textbox element retains the same name as the Web control that contains it. The <input> tag that is rendered later to represent the textbox gets validated by the JavaScript code generated by the validator used. Well, you’ve finished the FormField control. Play around with this and I think you’ll find this control very useful in your Web forms. But wait, there’s more! The downloadable code contains a finished version of this control with much more functionality than what I’ve had time to show you how to create here. Here’s a list of just some of the features in the final FormField control: - Variable button location where the button can be placed on the left or right of the textbox. If the caption is turned off (CaptionVisible property), the button can be placed on the left of the textbox and serve as a clickable caption. - The caption can be placed either to the left of the control or above it, giving you maximum flexibility for creating data-entry forms. - The TextMode can be set so that control can be used as a single-line textbox, a multi-line, or a password field. This actually affects whether the textbox gets rendered as an ‘input’ tag or a ‘textarea’ tag. - Our button has the same ‘confirmation message’ feature as in the inherited control we developed earlier has. - Validation capability, including field requirement and regular expression validation. - The control can be set to automatically convert the text in the textbox to upper or lower case when the focus leaves. - Vertical and Horizontal alignment for each of the elements within its own space. This allows the control to display a large, multi-line textbox with the caption still appearing vertically aligned with the top. This is accomplished by rendering table elements around the tags you created here. One other very cool feature that the finished product has is an extra style property called ReadOnlyFieldStyle. This style gets applied to the internal textbox based on the value of another property called ReadOnly. The beauty here is that you can set two styles to the text field and toggle between the two simply by changing the value of the ReadOnly property, which incidentally also locks the textbox so its value cannot be edited. Check out the pictures of the FormField control in action shown in Figure 5. Keep in mind that all of these are of the same control type though they look drastically different. During the creation of the FormField control, you have essentially duplicated the functionality that is provided by ASP.NET’s Label, Textbox, and Button Web Controls (and in the final version, the LinkButton as well). Because of the nature of rendered controls and their ultimate goal of rendering speed, using instances of the existing ASP.NET controls is not possible. The exact opposite is the case with composite controls as you will soon see. It’s not safe to come back into the water yet. You have one more control to develop. The EmailContact composite control will leverage both the ConfirmationButton and FormField button. This control will be developed differently but will repeat many of the same techniques you’ve learned thus far. When I talk about creating properties or styling or state management, I will not be including too much detail as they would be handled the same as I explained them before. The EmailContact Control This last custom Web control will bring together the two previous controls and form the third type of Web control, the composite control. A composite Web control is comprised of one or more other Web controls within it. You’re going to find that the code in this control is a bit clearer than in the rendered control, though I have heard argument to the contrary. A composite control can be comprised of any combination of Web controls of any type, but because it has to instantiate each of those internal controls, it takes a small performance hit that the rendered control does not take. For this reason, I typically choose to develop a Web control as a composite control when it is something I will only use just a few times at most on a Web Form. ‘lite’ In C#: protected override void Render( HtmlTextWriter output) { this.EnsureChildControls(); base.Render(output); } Notice that the second line in the method is just calling the method’s base. The reason you are overriding this method is to insert the call to EnsureChildControls before you call the base rendering method. This method checks to make sure you have set up all the child controls (this is the term I will use for the internal Web controls that reside in the composite control) appropriately before you actually render the control. Before I show you how to set up child controls, let’s determine what child controls are needed and declare them at the top of the class. Child Controls The visual elements that are going to be needed for this control consist of a sender’s name, sender’s e-mail, a recipient’s e-mail, an e-mail subject, e-mail body, and of course a send-button. You’re also going to add a heading to the top that may come in handy somewhere. The five fields I’ve just described are going to require a caption and a textbox for each. Now where do you suppose you can get some of those? That’s right-your EmailContact control will contain five instances of the FormField control. It will also contain a Label for the heading and a ConfirmationButton for the send-button. You might as well leverage the button created earlier and gain some of its functionality as well. For this reason, don’t forget to include the assemblies of the previous two controls in your references. You declare child controls at the top of the class like this: In VB.NET: In C#:(); Now that you’ve established the child controls needed and declared them, you need to massage their properties appropriately and make them visually part of the control. This is done by overriding the CreateChildControls method. This method gets called within the page lifecycle and is where you are expected to build your control tree. Our EmailContact class inherits a property called Controls which is of type ControlsCollection. This property is what contains all the child controls as well as any literal HTML text you want to render as well. Later, when you hit the Render method, the Controls collection will render all its contents in the order they were placed in it. I called this a control tree for a reason. It’s worth mentioning that you can create custom composite controls that contain other custom composite controls, thus creating a control tree. Remember however, that the deeper you get the more performance-heavy your control will get. Let’s take a look at a simple example of the CreateChildControls method using the lblHeading, ctlSubject, and btnSend child controls only, in the interest of space. Listing 1 shows the code in Visual Basic. Listing 2 shows the same code in C#. Let’s go through the code in this method because it is the meat of this control and where most of the work will be done. The first part of the code simply sets up the properties of your child controls. Notice that there is a lot of property setting going on here. For example, the CaptionWidth property if the ctlSubject control is set to the CaptionWidth property of your EmailContact control; this is called mapping. Because this custom control contains other controls, the properties of those contained controls are not automatically exposed to the outside, nor do you want them to be. To solve this you’ll add a CaptionWidth property to your control, just like you did to the FormField control before. The property of the child control(s) is then set to the one of its container, which is your composite control. The effect here is that when the programmer uses this on a Web Form and sets the CaptionWidth property in the property browser or in the code-behind class, the child controls that need to will also receive the appropriate setting. This same technique is repeated with other properties. The second part of the CreateChildControls method builds the Controls collection using the child controls and another control called LiteralControl, which as you can guess is used to place literal text into the Controls collection. This text almost always represents HTML that you want to code around your child controls. You’re once again building HTML, just in a different manner. Notice that some of the building is dependent on the settings of some properties. The point is that when you use the control on a Web Form, turning a property like ShowSubject or ShowSendButton off can toggle the visibility of that portion of the control, making the control much more versatile and reusable. For example, you can choose to display the “To Email” field and allow the user to enter a destination e-mail, thus using this control to send e-mail to anyone. In another case, such as a “Contact Us” page, you can turn the “To Email” field off, yet set a property that the control will also have called ToEmail. This property, which would have to otherwise be set by the “To Email” field, would be used to send the e-mail out without giving the user a choice. In this case, the ToEmail property could be set to your tech-support e-mail address. (See Figure 3 as an example.) There’s one other thing I want to point out in this code. The Width property of our Label and our FormField control is set to 100%; as is the table cells that contain them, as well as the surrounding table. The point here is that you size your child controls to the full width of the container. Therefore, when you drop the control on a Web Form and resize the control, the contents get resized right along with it. There are of course exceptions depending on what child controls you’re talking about-notice the Send button does not receive the same treatment. Also notice that I set the ID property of each of the child controls. This is especially important for the purpose of state maintenance. Your child controls know how to maintain state for themselves without any problem, but the ViewState variable uses the ID of each control to save and retrieve its state. Remember in the FormField control you set the ID and Name attributes of the three tags according to the UniqueID of the custom control. By setting the ID of the child controls, you are effectively setting their UniqueID, thus giving each control the information it needs to name its internal tags and thus maintain state properly. a “_” for ID and “:”. Composite Control Properties There are a couple of characteristics of composite control properties that need to be discussed. In the FormField rendered control you used a property coding technique that incorporated the using the ViewState variable to persist the internal value of the properties. In composite controls you have a couple of choices. You can choose to handle the properties the same way. This is fine for properties that define new characteristics of your control, but not really recommended for those that map directly to properties of child controls. The reason is that the child controls are already set up to handle state on their own. If you map their properties to properties in their container (the composite control) which also use ViewState to persist themselves, you run the risk of overbloatting the ViewState. Remember that everything stored in ViewState gets rendered on the page in a hidden textbox called __ViewState. The larger this gets, the more information has to travel through the pipe between the server and the browser. ViewState is one of the more powerful features of ASP.NET and one that automatically takes care of a lot of stuff that you would otherwise have to handle manually, but like every other tool in your arsenal, you need to think ahead and use it wisely. Let’s take a look at how you should code the CaptionWidth property of the EmailContact control, which as I’ve already said, will map to the CaptionWidth properties of some of the child controls. In VB.NET: Public Property CaptionWidth() As Unit Get Me.EnsureChildControls() Return Me.ctlFromName.CaptionWidth End Get Set(ByVal Value As Unit) Me.EnsureChildControls() ctlFromName.CaptionWidth = Value 'handle other child controls here too End Set End Property In C#: public Unit CaptionWidth { get { this.EnsureChildControls(); return this.ctlFromName.CaptionWidth; } set { this.EnsureChildControls(); this.ctlFromName.CaptionWidth = value; //handle other child controls here too } } The area that is commented, ‘handle other child controls…’, would contain the code that sets the other FormField controls’ CaptionWidth properties, as well as that of the lblHeading child control. This technique allows the child controls to maintain their state without the composite control repeating the task. Instead, the composite control simply maps its property to that of the child controls. Notice that the ‘get’ accessor returns the value of one control, while the ‘set’ accessor sets that of several (though only one is shown above). The one coded in the ‘get’ accessor can be any one of the child FormField controls. Since you’re setting all of them, they will all be the same so you can return any one you want. Many properties will map to only one child control, as would be the case for the ButtonWidth property. This property (whose code will not be shown) would map to the Width property of the btnSend child control in the same way shown above. The call to EnsureChildControls is necessary to make sure that the child controls have been initialized before you try to access them. If they have not, then the CreateChildControls method would be automatically called. For properties that do not map directly to child controls, you would still use the ViewState technique you learned earlier. The ShowSubject property, which I haven’t talked about yet, does not map to any child control; it instead handles a specific behavior to the EmailContact control. This property is coded exactly as the ones you learned about earlier in the design of the FormField control with one addition-The ‘set’ accessor needs to set the ChildControlsCreated property, which is inherited into your composite control, to false. set { ViewState["ShowSubject"] = value; this.ChildControlsCreated = false; } Setting this flag ensures an automatic call to the CreateChildControls method so child controls are reset. Don’t worry, that call does not occur upon the setting of this flag, so the flag can be set by the setting of many properties, and only calls to the method will take place upon the next page rendering. You can, in fact, use this style of property handling for all your properties but I would advise you to set the EnableViewState property of the child controls to False since you would no longer rely on their state maintenance ability. There would also be exceptions to turning state off in some child controls, as in the case of contained DropdownList controls that rely on their own state to persist the contents of their list. By providing plenty of appearance-oriented properties, that is properties that can hide or reposition child controls, you give your composite control maximum flexibility. The downloadable code extensively demonstrates this. Another thing handled in the CreateChildControls is the application of styles. One again this is handled simpler in composite controls than in rendered controls. Styling Style properties are defined exactly the same as in the FormField control with the addition of the flag setting described earlier. this.ChildControlsCreated = false; The difference with styling in composite controls is how they are applied to the child controls defined within. If you look back at the earlier code defining the CreateChildControls method you’ll notice two methods used, one on the ctlSubject control and a different one on the btnSend control. Here’s a C# recap. ctlSubject.CaptionStyle.CopyFrom( this.CaptionStyle); ctlSubject.FieldStyle.CopyFrom( this.FieldStyle); ctlSubject.ReadonlyFieldStyle.CopyFrom( this.ReadonlyFieldStyle); btnSend.ApplyStyle(this.ButtonStyle); When you designed the FormField control, which is the type of the ctlSubject child control, you gave it several styling properties. The first thing you have to do with your EmailContact control in regards to styling is to define its styling properties as well, which as you can probably guess will somehow be mapped to the child controls. First of all, for simplicity, you would define the style properties exactly as you did in the FormField control, but once again with the addition of the ChildControlsCreated setting. The way you would then “map” the style properties is with the CopyFrom method of the Style object. That’s all there is to it for those controls. The ApplyStyle method used in the case of the btnSend and lblHeading child control, applies the style property to the main style of the child control as opposed to a style property. The main style constitutes the styling properties inherited from the WebControl class. Remember these properties will apply styling to the control’s container where added style properties add styling to elements within the control. This is the case in both the FormField and EmailContact controls. Remember, the btnSend control is of type ConfirmationButton, which you’ll recall inherits from the Button class. You did not add any custom styling to this control, and the Button class does not expose any style properties. All the properties the Style object exposes are exposed by the WebControl class, which is the base of the Button class. As I said, ApplyStyle corresponds to these properties. The FormField control has these properties as well and is certainly compatible with ApplyStyle. Though you have not done so in your EmailContact, you can certainly add a styling property corresponding to the surrounding container of each contained FormField control. This property can then be applied to the child controls. However, you are mapping styling properties to the custom styling properties you gave the FormField control and that is done with the CopyFrom method. The styling that the final version of the EmailContact control provides allows the control to be integrated into any Web site. Figures 6 and 7 show pictures of the same control in two different sites. There’s one child control left for which you need to provide some special handling; this is the btnSend control. The intent of your control is to be able to send e-mails for you when a user clicks this button so you need to handle an event here. Event Handling Events are one area where composite controls shine in terms of simplicity and ease to follow. Implementation of the IPostBackEventHandler and IPostBackDataHandler interfaces is not necessary. This is not to say that they cannot be used in special circumstances, but such are beyond the scope of this article and are usually the case in very complex Web controls. You’ve noticed so far that the design of a composite control is very straightforward and very similar to how you would render controls dynamically on a Web Form using its code-behind class. Events are no exception. You need to wire an event handler from the btnSend control’s Click event to a method within your EmailContact control, much like you would wire such an event from a control to a Web Form’s method. You’re going to do this wiring in the constructor of the control. Since you’re initializing your child controls at the class level, you don’t want to wire events in the CreateChildControls method, otherwise multiple event handlers can end up wired every time the method gets called. Some In C#: public EmailContact() { SetDefaults(); btnSend.Click += new EventHandler(btnSend_Click); } This code should not be strange to you, as it simply wires the Click event of the btnSend control to the btnSend_Click method. Incidentally, the SetDefaults method you see here simply initializes some of the child control properties. The btnSend_Click event is a place where you can give your control a lot of versatility. I’ll show you an abridged version of it and then I’ll describe what’s going on. In VB.NET: In C#:()); } } The btnSend_Click method gets called within the EmailContact control when the Send button is clicked. Remember that the btnSend control raises its event to its container; in this case that is your EmailContact control, not a Web Form. It is in this method that you are going to handle e-mail-sending functionality, but you’re going to do it in a way that will make this control usable in many other situations. As you see in the code, I’m checking a property called AutoHandle to determine if you’re going to handle sending an e-mail or simply raise an event (SendButtonClick). You may run into the situation where a developer wants to use this control strictly for visual purposes. If you go through all the properties of the finished control in the downloadable code, you’ll see that I expose the value of every text field in the control whether or not you chose to show it (remember the ShowSubject property).. Other Associated Technologies Designers By not applying a designer class to your controls you inherently use the default designer. A designer controls the way the control renders in Visual Studio during design of a Web Form. Some controls, such as custom grids for example, may need to display themselves in a certain way during design time and a different way at run time (to display sample data perhaps). A custom designer comes into play here. Complex Properties These are properties that are object types which contain other properties. They display in the property browser as an expandable line and serve the purpose of grouping properties together and can be reused in many controls. The Style object is an example of a complex property. Type next release of ASP.NET adds a few extra gifts to the world of Web controls. For starters, there are claims that the ViewState has been decrease in size by as much as 47% for more than simple visual objects, but rather. Conclusion.
https://www.codemag.com/article/0511051
CC-MAIN-2019-13
refinedweb
9,090
62.38
- NAME - SYNOPSIS - DESCRIPTION - API - DISCUSSION - DEBUG - BUGS AND LIMITATIONS - REPOSITORY - SEE ALSO - VERSION - THANKS - AUTHOR - LICENSE NAME Finance::Math::IRR - Calculate the internal rate of return of a cash flow SYNOPSIS use Finance::Math::IRR; # we provide a cash flow my %cashflow = ( '2001-01-01' => 100, '2001-03-15' => 250.45, '2001-03-20' => -50, '2001-06-23' => -763.12, # the last transaction should always be <= 0 ); # and get the internal rate of return for this cashflow # we want a precision of 0.1% my $irr = xirr(%cashflow, precision => 0.001); # or simply: my $irr = xirr(%cashflow); if (!defined $irr) { die "ERROR: xirr() failed to calculate the IRR of this cashflow\n"; } DESCRIPTION The internal rate of return (IRR) is a powerfull tool when evaluating the behaviour of a cashflow. It is typically used to assess whether an investment will yield profit. But since you are reading those lines, I assume you already know what an IRR is about. In this module, the internal rate of return is calculated in a similar way as in the function XIRR present in both Excell and Gnumeric. This means that cash flows where transactions come at irregular intervals are well supported, and the rate is a yearly rate. An IRR is obtained by finding the root of a polynomial where each coefficient is the amount of one transaction in the cash flow, and the power of the corresponding coefficient is the number of days between that transaction and the first transaction divided by 365 (one year). Note that it isn't a polynomial in the traditional meaning since its powers may have decimals or be less than 1. There is no universal way to solve this equation analytically. Instead, we have to find the polynomial's root with various root finding algorithms. That's where the fun starts... The approach of Finance::Math::IRR is to try to approximate one of the polynomial's roots with the secant method. If it fails, Brent's method is tried. However, Brent's method requires to know of an interval such that the polynomial is positive on one end of the interval and negative on the other. Finance::Math::IRR searches for such an interval by trying systematically a sequence of points. But it may fail to find such an interval and therefore fail to approximate the cashflow's IRR: API - xirr(%cashflow, precision => $float) Calculates an approximation of the internal rate of return (IRR) of the provided cashflow. The returned IRR will be within $float of the exact IRR. The cashflow is a hash with the following structure: my %cashflow = ( # date => transaction_amount '2006-01-01' => 15, '2006-01-15' => -5, '2006-03-15' => -8, ); To get the IRR in percent, multiply xirr's result by 100. If precision is omitted, it defaults to 0.001, yielding 0.1% precision on the resulting IRR. xirr may fail to find the IRR, in which case it returns undef. xirr will croak if you feed it with junk. xirr removes all transactions with amount 0 from the cashflow. If the resulting cashflow is empty, an irr of 0% is returned. If the resulting cashflow contains only one non 0 transaction, undef is returned. DISCUSSION Finding the right strategy to solve the IRR equation is tricky. Finance::Math::IRR uses a slightly different technique than the corresponding XIRR function in Gnumeric. Gnumeric uses first Newton's method to approximate the IRR. If it fails, it evaluates the polynomial on a sequence of points ( '-1 + 10/(i+9)' and 'i' with i from 1 to 1024), hoping to find 2 points where the polynomial is respectively positive and negative. If it finds 2 such points, gnumeric's XIRR then uses the bisection method on their interval. Finance::Math::IRR has a slightly different strategy. It uses the secant method instead of Newton's, and Brent's method instead of the bisection. Both methods are believed to be superior to their Gnumeric counterparts. Finance::Math::IRR performs additional controls to guaranty the validity of the result, such as controlling that the root candidate returned by Secant and Brent really are roots. DEBUG To display debug information, set in your code: local $Finance::Math::IRR::DEBUG = 1; BUGS AND LIMITATIONS This module has been used in recquiring production environments and thoroughly tested. It is therefore believed to be robust. Yet, the method used in xirr may fail to find the IRR even on cashflows that do have an IRR. If you happen to find such an example, please email it to the author at <erwan@cpan.org>. REPOSITORY The source of Finance::Math::IRR is hosted at sourceforge as part of the xirr4perl project. You can access it at. SEE ALSO See Math::Polynom, Math::Function::Roots. VERSION $Id: IRR.pm,v 1.5 2007/07/12 12:35:46 erwan_lemonnier Exp $ THANKS Kind thanks to Gautam Satpathy ( gautam@satpathy.in) who provided me with his port of Gnumeric's XIRR to Java. Its source can be found at. Thanks to the team of Gnumeric for releasing their implementation of XIRR in open source. For the curious, the code for XIRR is available in the sources of Gnumeric in the file 'plugins/fn-financial/functions.c' (as of Gnumeric 1.6.3). More thanks to Nicholas Caratzas for his efficient help and sharp financial and mathematical insight! AUTHOR Erwan Lemonnier <erwan@cpan.org>, as part of the Pluto developer group at the Swedish Premium Pension Authority. LICENSE This code was developed.
https://metacpan.org/pod/release/ERWAN/Finance-Math-IRR-0.10/lib/Finance/Math/IRR.pm
CC-MAIN-2019-26
refinedweb
919
56.55
On Sep 28, 4:47 pm, Terry Reedy <tjre... at udel.edu> wrote: > Aaron "Castironpi" Brady wrote: > > On Sep 28, 2:52 am, Steven D'Aprano <st... at REMOVE-THIS- > >> As for why the complicated version works, it may be clearer if you expand > >> it from a one-liner: > > >> # expand: f[ n ]= (lambda n: ( lambda: n ) )( n ) > > >> inner = lambda: n > >> outer = lambda n: inner > >> f[n] = outer(n) > > >> outer(0) => inner with a local scope of n=0 > >> outer(1) => inner with a local scope of n=1 etc. > > For this to work, the 'expansion' has to be mental and not actual. > Which is to say, inner must be a text macro to be substituted back into > outer. > > >> Then, later, when you call inner() it grabs the local scope and returns > >> the number you expected. > > > I must have misunderstood. Here's my run of your code: > > I cannot speak to what Steven meant, but > > >>>> inner = lambda: n > > when inner is actually compiled outside of outer, it is no longer a > closure over outer's 'n' and 'n' will be looked for in globals instead. > > >>>> outer = lambda n: inner > >>>> outer(0) > > <function <lambda> at 0x00A01170> > >>>> a=outer(0) > >>>> b=outer(1) > >>>> a() > > Traceback (most recent call last): > > File "<stdin>", line 1, in <module> > > File "<stdin>", line 1, in <lambda> > > NameError: global name 'n' is not defined > > > Why doesn't 'inner' know it's been used in two different scopes, and > > look up 'n' based on the one it's in? > > That would be dynamic rather than lexical scoping. I couldn't find how those apply on the wikipedia website. It says: "dynamic scoping can be dangerous and almost no modern languages use it", but it sounded like that was what closures use. Or maybe it was what 'inner' in Steven's example would use. I'm confused. Actually, I'll pick this apart a little bit. See above when I suggested 'late' and 'early' functions which control (or simulate) different bindings. I get the idea that 'late' bound functions would use a dangerous "dynamic scope", but I could be wrong; that's just my impression. > >> inner = lambda: n > >> outer = lambda n: inner > >> f[n] = outer(n) > > >> outer(0) => inner with a local scope of n=0 > >> outer(1) => inner with a local scope of n=1 etc. If you defined these as: inner= late( lambda: n ) outer= lambda n: inner You could get the right results. It's not even clear you need quotes. Perhaps 'late' could carry the definition of 'n' with it when it's returned from 'outer'. In my proposal, it makes a copy of the "localest" namespace, at least all the variables used below it, then returns its argument in an original closure.
https://mail.python.org/pipermail/python-list/2008-September/515747.html
CC-MAIN-2014-15
refinedweb
456
68.1
Version 1.23.0 For an overview of this library, along with tutorials and examples, see CodeQL for C# . A variable. Either a stack variable (StackVariable) or a field (Field). StackVariable Field import semmle.code.cil.Variable Gets a read access to this variable, if any. Gets a write access to this variable, if any. Gets an access to this variable, if any. Gets the type of this variable. Gets a textual representation of this variable including type information. Holds if this element was compiled from source code that is also present in the database. That is, this element corresponds to another element from source. an attribute (for example [Obsolete]) of this declaration, if any. [Obsolete] Gets the C# declaration corresponding to this CIL declaration, if any. Note that this is only for source/unconstructed declarations. this declaration is a source declaration. Holds if other has the same metadata handle in the same assembly. other
https://help.semmle.com/qldoc/csharp/semmle/code/cil/Variable.qll/type.Variable$Variable.html
CC-MAIN-2020-24
refinedweb
155
61.43
There are many situations where you'll want an event in your code to continue for an amount of time. Often, this is accomplished using time.sleep() as in the following code:) Here, the first NeoPixel turns on for 0.5 seconds, and then turns off for 0.5 seconds before repeating indefinitely. The usage of time.sleep(0.5) in this code basically says: turn the LED on and wait in that state for half a second, then turn it off and wait in that state for half a second. In many situations, this usage of time works great. However, during time.sleep(), the code is essentially paused. Therefore, the board cannot accept any other inputs or perform any other functions for that period of time. This type of code is referred to as being blocking. In the case of the code above, this is sufficient as the code is not attempting to do anything else during that time. Waiting Without Blocking However, for this project, we want to continue processing inputs, so instead of sleeping for 0.5 seconds, we'll process other inputs for 0.5 seconds and change the led when that time expires. To accomplish this, we're going to use time.monotonic(). Where time.sleep() expects an amount of time be provided, time.monotonic() tells us what time it is now, so we can see whether our 0.5 seconds has passed yet. So, we no longer supply an amount of time. Instead, we assign time.monotonic() to two different variables at two different points in the code, and then compare the results. At any given point in time, time.monotonic() is equal to the number seconds since your board was last power-cycled. (The soft-reboot that occurs with the auto-reload when you save changes to your CircuitPython code, or enter and exit the REPL, does not start it over.) When it is called, it returns a number with a decimal, which is called a float. If, for example, you assign time.monotonic() to a variable, and then call it again to assign into a different variable, each variable is equal to the number of seconds that time.monotonic() was equal to at the time the variables were assigned. You can then subtract the first variable from the second to obtain the amount of time that passed. time.monotonic() example Let's take a look at an example. You can type the following into the REPL to follow along. First we import the time module, then we time.monotonic(). This is to give you an idea of what is going on in the background. The next two lines assign x = time.monotonic() and y = time.monotonic() so we have two variables, and points in time, to compare. Then we print(y - x). This gives us the amount of time, in seconds, that passed between assigning time.monotonic() to x and y. We print time.monotonic() again to give you a general idea of the difference. Remember, the two numbers resulting from printing the current time are not exactly the same difference from each other as the two variables due to the amount of time it took to assign the variables and print the results. Non-Blocking Blink But, how does this allow us to blink our NeoPixel? The result of the comparison is a period of time. So, if we use that period of time to determine when the state of the LED should change, we can successfully blink the LED in the same way we did in the first program. Let's find out what that looks) This does exactly the same thing as before! It's exactly what we wanted. Now, let's break it down. Before the loop begins, we create a blink_speed variable and set it to 0.5. This allows for easier configuration of the blink speed later if you wanted to alter it. Next, we set the initial state of the LED to be (0, 0, 0), or off. Then, we call time.monotonic() for the first time by setting initial_time = time.monotonic(). This applies once when the program begins, before it enters the loop. Once the code enters the loop, we set current_time = time.monotonic(). We call it a second time to compare to the first, to see if enough time has passed. Then we say if current_time minus initial_time is greater than blink_speed, do two things: set initial_time to now be equal to current_time and cycle the NeoPixel to the next state. Setting initial_time = current_time means it starts the time period over again. Essentially, every time the difference reaches 0.5 seconds, it cycles the state and starts again, repeating indefinitely. Why would we do it this way? It seems way more complicated! We do it this way because this allows us to do other things while the NeoPixel is blinking. Instead of pausing the code to leave the LED in a red or off state, the code continues to run. The code for the Spoka lamp allows you to change speed and brightness without halting the rainbow animation, and this is how we accomplish that!
https://learn.adafruit.com/hacking-ikea-lamps-with-circuit-playground-express/passing-time
CC-MAIN-2019-18
refinedweb
862
75.91
The "currently-open" mailbox. More... #include <stdbool.h> #include <sys/types.h> Go to the source code of this file. The "currently-open" context.h. Definition at line 65 of file context.h. Definition at line 49 of file context.c. Watch for changes affecting the Context - Implements observer_t. Definition at line 295 of file context.c. Definition at line 72 of file context.c. Update the Context's message counts. this routine is called to update the counts in the context structure Definition at line 106 of file context.c. Update a Context structure's internal tables. Definition at line 199 of file context.c. Is a message in the index tagged (and within limit) If a limit is in effect, the message must be visible within it. Definition at line 352 of file context.c. Is a message in the index within limit. If no limit is in effect, all the messages are visible. Definition at line 336 of file context.c. This safely gets the result of the following: mailbox->emails[mailbox->v2r[vnum]] Definition at line 414 of file context.c. Get a list of the tagged Emails. Definition at line 366 of file context.c.
https://neomutt.org/code/context_8h.html
CC-MAIN-2020-05
refinedweb
200
62.85
Can nslookup hostname but ping can't find host? Related Research:Networking·Networking Research Guide 70 Replies Oct 25, 2012 at 4:56 UTC The one thing I am noticing is that on the machines with the issues is that 1. they are all laptops (desktops are not affected) 2. they are not identifying the network correctly. instead of saying mydomain@whatever.com like they should they say internet access like I am connected to a random wifi hotspot 3. When said laptops are hard wired to LAN they work normally. Oct 25, 2012 at 5:18 UTC no, I just realized said laptops are not working at all. LAN or WLAN connected. (would have been nice to know 5 hours ago) All of the affected laptops are DirectAccess enabled. I am currently suspecting that they are failing the check that tells them they are in our local office and are not remote. this would explain why I didn't see any DNS traffic coming from the laptop I was sniffing earlier on our firewall \ NAT device. if it thinks it is outside the office it wouldn't be sending DNS traffic for local resources in the clear but it would be trying to send them through an encypted tunnel (which would also fail since they are inside the network) I am looking into this at the moment Oct 25, 2012 at 5:27 UTC I thought you said that one of the laptops that wasn't working did not have DirectAccess setup? You also mentioned that when you plugged one of the laptops into a LAN port it started working normally. If DA is the common denominator, it's definitely a good focal point and explains why things might have changed suddenly with a windows update. Oct 25, 2012 at 5:48 UTC Yes, I did say that. I have one laptop I was testing with that is not AD enabled (old one I pulled off the shelf when the tickets starting coming in) and it was not working on wireless either. but worked via the LAN. I am going to assume that laptop is just f'd (its like 6 years old) and I hadn't actually tried any of the other laptops on a LAN connection as no one was complaining about that. so I didn't realize they were not working via LAN until one supervisor complained her laptop would work at all on the network. Oct 25, 2012 at 5:56 UTC Check this out regarding DA issues: http:/ Oct 25, 2012 at 6:52 UTC It does appear to be DA related. it looks like it is failing the location check to see if the laptop is on our internal network. (laptops out of the building are connecting fine) This causes it to incorrectly assume it is external to the network If I remove DA from the laptop it works correctly on the network again. (which is a huge pain when it can't talk to the servers) Oct 25, 2012 at 7:05 UTC So the question still remains, what have changed? Oct 25, 2012 at 7:07 UTC Hmmm...here is a report of an October patch killing DirectAccess functionality. Not the same issue exactly, but possibly related? http:/ Oct 25, 2012 at 7:16 UTC just found this handy command that shows the state of the DA connection and where the laptop thinks it is netsh dns show state Mine are showing that they are outside the corporate network even though they are not Oct 25, 2012 at 7:38 UT. - netsh namespace show policy - netsh namespace show effective Oct 25, 2012 at 10:05 UTC I found the issue and everything is working again. Its kinda a big got-cha with DA and SSL certs. After finally realizing this issue was not a wireless issue as it seemed to be that it only affected laptops configured with Direct Access I was quickly able to determine that the location check was failing for all laptops in the corporate network. After looking into that for a bit I was able to determine that the IIS site DA uses for this check had an expires SSL cert (expired yesterday) which was causing the check to fail and the clients to think they were on an external network. The got-cha is that this IIS site is only used by DA for the purpose of this check and is not accessed ever by a human. there is no check to notify about the cert expiring and DA itself was not reporting this as an error anywhere. it was still listing itself as healthy. Once I found this updated and updated the expired SSL cert everything works as it should again! Hurray! to bad it took me 9 hours to find one expired SSL cert! Oct 25, 2012 at 10:13 UTC invisible dependencies are such a PITA, I would have to say that this was one of a more interesting problems. Oct 25, 2012 at 10:26 UTC as a side not I created a feature request for SW to see if we can get SSL cert monitoring added if anyone is interested http:/ Oct 26, 2012 at 1:18 UTC Hey! I suggested you turn off DA and start from there to narrow it down back at 10am yesterday ;) Definitely an interesting problem, glad to see it was resolved! Oct 26, 2012 at 1:57 UTC yes, yes you did. and if I had tried your suggestion sooner I could have saved myself a lot of headache :-) Oct 26, 2012 at 2:57 UTC Hindsight's always 20/20, and its a whole different world when you're waist-deep in the troubleshooting process. Thanks for letting us know the resolution, I definitely learned something new! :) Oct 26, 2012 at 3:02 UTC Thanks for the help! Apr 25, 2013 at 9:44 UTC Great thread. Just had the same symptoms and found our issue was a missing internal DNS record for the direct access server. Due to a web application that is used both internally and externally and can only point to one URL, our internal DNS is authoritative for both our default ".local" domain and for one of our external domains. It may be because of our internal DNS setup, but, as nearly as I have been able to glean from the documentation, for us (using a single NIC on direct access server box which holds all the direct access roles) it is necessary that you have both an external DNS pointer to, for example, ras1.domain.com (pointing to your external internet routable IP) AND pointing to the DA server internal private IP on your internal DNS. Once I created the A record on the internal DNS that pointed to the internal LAN IP of the DA server the pings (and everything else - like network browsing) worked immediately. Jul 17, 2013 at 1:58 UTC I had the same problem, for me working writing the server name in router at Domain name. Windows 2003 server. Jul 19, 2013 at 8:47 UTC I'm experiencing something similar - dos not appear to be a cert issue and seems (so far) to be affecting only one site. That site has no DC onsite and gets DNS etc over a WAN link. If any experts can help me trouble-shoot this, I'd appreciate it (in new thread) Mar 5, 2014 at 7:17 UTC Is your machine fighting over an already taken IP address? just because it doesn't return a ping doesn't mean that something else isn't trying to use that same IP. Change your IP to something else and then see if it works.
http://community.spiceworks.com/topic/270104-can-nslookup-hostname-but-ping-can-t-find-host?page=3
CC-MAIN-2014-15
refinedweb
1,296
66.57
RECV(2) BSD Programmer's Manual RECV(2) recv, recvfrom, recvmsg - receive a message from a socket #include <sys/types.h> #include <sys/socket); recvfrom() and recvmsg() are used to receive messages from a socket, s, and may be used to receive data on a socket whether or not it is connection-oriented. If from is non-null and the socket is not connection-oriented, supported in future releases. On successful completion, all three routines return the number of message bytes read. If a message is too long to fit in the supplied buffer, ex- cess) system calls may be used to determine when more data arrive. The flags argument to a recv call is formed by ORing one or more of the values: MSG_OOB process out-of-band data MSG_PEEK peek at incoming message MSG_WAITALL wait for full request or error MSG_DONTWAIT don't block MSG_DONTWAIT flag requests the call to return when it would block otherwise. If no data is available, errno is set to EAGAIN. This flag is not available in strict ANSI or C99 compilation mode. */ unsigned. msg_iov and msg_iovlen describe scatter gather loca- tions,[]; */ };, 1999.
http://www.mirbsd.org/htman/i386/man2/recvmsg.htm
CC-MAIN-2014-41
refinedweb
190
59.84
Command Palette - tallpauley last edited by gferreira Hi all, I was thinking it'd be fun to make a "command-palette" extension like in Sublime Text, Atom and VSCode. For those of you who aren't using these editors, it can enable you to do a lot via the keyboard without remembering keyboard shortcuts (in fact it can even show you the keyboard shortcut for future reference). Through the API, how can I access all the actions available to map to hot keys (glyph view and space center) and short keys? I tried getMenuShortCuts which returned an empty Dictionary. Then I looked at programmatically iterating through the menu bar to get all those "actions", but that feels hacky. Any tips where to look? I'm guessing it'd be whatever API RoboFont uses itself to populate "Short Keys" and "Hot Keys" lists in Preferences. In fact, the interface for "Short Keys" already has the autocomplete I'd need as well. Chris hello @tallpauley, the default shortcuts are indeed not returned by getMenuShortCuts, and are currently not accessible from the public API. (maybe this could be added?) it’s possible to get them using the internal libmodule (see example below). you can use stuff from the lib in your scripts, but ⚠️ the API may change over time, without a nice deprecation warning like mojo. from lib.tools.shortCutTools import getShortCuts from lib.UI.fileBrowser import shortKeyToString shortcuts = getShortCuts() for key, item in shortcuts.items(): if item.keyEquivalent(): shortkey = item.keyEquivalentModifierMask(), item.keyEquivalent() print(key, item, shortkey) see also a sketch for a simple search UI here (please fork and change as you wish) I haven’t been able to figure out how to run the commands… @frederik help :) this could become a very useful extension! thanks edit: updated the gist, now it also runs the command (thanks @frederik) - tallpauley last edited by Thanks @gferreira, this is very helpful! I'll play around with the code you gave me - tallpauley last edited by @tallpauley Thanks for the update to run the command! This gives me something to play around with. To take it a step further, is there a lib API I can use to also pull in space center and glyph view shortcuts, like whatever they use to populate their "Hot Keys" tabs? (Btw, I won't filter out commands that don't have shortcut keys, this is part of the allure of having a command palette, you can access commands that don't have shortcuts keys defined yet). I saw mojo.UI.setGlyphViewDisplaySettings, but this isn't all the available commands for glyph view. For space center I didn't see any API in mojo.UI I would dig around myself, but I think you guys compile to .pycand I don't want to violate the EULA by reverse engineering (hence me actually clicking on the EULA lol). Once I have all the commands available, I think I should have what I need.
https://forum.robofont.com/topic/872/command-palette
CC-MAIN-2020-40
refinedweb
493
71.55
A collection of tools to work with SMS messages. Project description A collection of tools used to send SMS messages. Tools Message Profiling Accepts a raw SMS message string and determines its most efficient encoding, then determines how many segments would be used to send it. Largely based on this tool (code found here). Example: from sms_toolkit.messages.profiling import profile_message import json profile = profile_message("Sup chonus") print(json.dumps(profile, indent=4)) { "num_segments": 1, "segments": [ { "message": "Sup chonus", "total_segment_length": 10, "unicode_character_list": [ "S", "u", "p", " ", "c", "h", "o", "n", "u", "s" ], "byte_groups": [ [83], [117], [112], [32], [99], [104], [111], [110], [117], [115] ] } ], "message_length": 10 } Testing From the root repository directory run the following: pytest -s tests Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/sms-toolkit/
CC-MAIN-2019-51
refinedweb
143
54.32
From: David Abrahams (david.abrahams_at_[hidden]) Date: 2002-04-04 01:59:06 ----- Original Message ----- From: "Vladimir Prus" <ghost_at_[hidden]> To: <jamboost_at_[hidden]> Sent: Wednesday, April 03, 2002 10:13 AM Subject: Re: [jamboost] Boost.Build V2, load behaviour part 2. > David Abrahams wrote: > > > I hope you get well soon! Please rest and feel better, > > Thanks you. I'm back to coding now, waiting for another virus (or whatever > that was) to come my way! > > > Aww, heck, it's not in the CVS! > > I have something on my disk. Where did it come from? > > Eeeh... I don't recognize that code, either. Probably it was you who wrote > it? :-) > Anyway, my version is at > Cool! We have 3 versions! >. > 2. All jamfiles in one module or all jamfiles in separate modules. This is a > big question -- I always thought that module-per-jamfile is better, largely > because of similarity of this approach to object-oriented programming. When in doubt, I prefer separation of namespaces. >. > These two things will make the problem less important. If stealing variable > names from project root is really needed, we can make a rules which returns > the list of variable, as David suggested. > > So, what we decide? I find myself in favor of isolated namespaces. > BTW (not sure how this relates to the discussion). If we have > > module A { > > x = 1 ; > > module B { > echo $(x) ; > } > } > > Will "1" be printed, IOW, will x be visible from B? No, every module has a completely distinct namespace "stack"; entering B swaps out A's definitions (including locals) and swaps in B's. -Dave Boost-Build list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/boost-build/2002/04/0690.php
CC-MAIN-2021-10
refinedweb
292
78.75
On Sun, Jun 17, 2012 at 2:20 AM, Heinrich Apfelmus <apfelmus at quantentunnel.de> wrote: > Johan Tibell wrote: >> >> I've modified the proposal (in an earlier email) to be to add >> getExecutablePath. We'll implement it using the methods Simon linked >> to, which I believe are the same as used in executable-path. > > > Ah, ok. That works for me. > > Reading Simon Hengel's email, I think that distinguishing between different > invocation methods (program, script, interactive) via a data type > > > data ExecutablePath = Binary FilePath > | Script FilePath > | Interactive > > is an excellent idea! This allows us to use the getExecutablePath both in > a compiled program and for testing in GHCi. I'm a bit undecided whether this distinction is useful. If the user is really looking for the executable path there's not much to do except call error if the return value is Script or Interactive. In addition, I don't know how to implement this function correctly. For example, if you alias ghc to another name the heuristic in the executable-path package fails: -- | An experimental hack which tries to figure out if the program -- was run with @runghc@ or @runhaskell@ or @ghci@, and then tries to find -- out the directory of the /source/ (or object file). -- -- GHC only. getScriptPath :: IO ScriptPath getScriptPath = do fargs <- getFullArgs exec <- getExecutablePath let (pt,fn) = splitFileName exec case fargs of [] -> return (Executable exec) _ -> case map toLower fn of #ifdef mingw32_HOST_OS "ghc.exe" -> do #else "ghc" -> do #endif case find f1 fargs of Just s -> do path <- canonicalizePath $ init (drop n1 s) return $ RunGHC path Nothing -> case findIndex f2 fargs of Just i -> return Interactive Nothing -> return (Executable exec) _ -> return (Executable exec) where f1 xs = take n1 xs == s1 s1 = ":set prog \"" n1 = length s1 f2 xs = xs == "--interactive"
http://www.haskell.org/pipermail/libraries/2012-June/018017.html
CC-MAIN-2014-41
refinedweb
296
55.88
>> the webapp Framework [ID:720] (3/5) in series: A Gentle Introduction to the Google App Engine Python SDK video tutorial by Kyran Dale, added 05. While one could build a CGI based web-site by hand, the usual way to build applications is by using a web framework such as Django. The Google App Engine (GAE) supports any CGI framework written in pure Python or compliant with the the Python WSGI library. The SDK features its own webapp framework, a simple but practical way to start developing web-sites and a good introduction to more mature, featureful Python frameworks. import wsgiref.handlers - builds - features - django - good - knowledge - aims - html - app - management - walkthrough - forms - server - library - starting - frameworks - while - jobs - power - WSGI - pylons - practical - CGI - webapp - handlers - SDK - google-app-engine - hello-world - user-management Got any questions? Get answers in the ShowMeDo Learners Google Group. Video statistics: - Video's rank shown in the most popular listing - Video plays: 202 screencast. I wish the audio quality was little more crisp.. Thank you for the careful deconstruction of the new helloworld.py. I would have missed the function call MainPage in main() : def main(): application = webapp.WSGIApplication( [('/', MainPage)], debug=True) I forget that functions can be referenced like objects in Python. I like the way you copied and pasted the code, I think it gave you more time to explain the exact functioning of the code which is great for a novice like myself. Thanks webapp framework seems something to be used by web development beginners. Like me! Video published, thanks for contributing to ShowMeDo Video published, thanks for contributing to ShowMeDo Your video has been edited. This is an automatic post by ShowMeDo.
http://showmedo.com/videotutorials/video?name=2690020
CC-MAIN-2015-48
refinedweb
283
61.67