Extracting definition of customized messages from bag files
TL;DR: How can I write a node that subscribes to topics with customized message types published from a bag, when I don't have access to the definition of the customized messages (the .msg files)?
I have a bag file containing topics in customized messages, for example dbw_mkz_msgs/SteeringReport. While I don't have access to dbw_mkz_msgs or the definition of dbw_mkz_msgs/SteeringReport, I can use tools such as rqt_bag or rostopic echo -b to view them (their fields and values, to be more specific). I can even use the rosbag Python API to edit them and write them to another bag. However, what I cannot do is write a node that subscribes to one of these topics of type dbw_mkz_msgs/SteeringReport, since I cannot do from dbw_mkz_msgs.msg import SteeringReport. Is there a way to extract the definition of a customized message from a bag file?
In Python, have a look at the get_type_and_topic_info() function on the bag, ... and ...
Thanks for your response, @ahendrix. However, it seems like get_type_and_topic_info() will give me the type of the message (a string, dbw_mkz_msgs/SteeringReport) and its md5 hash. It still doesn't give me the SteeringReport() class that I would get if I had access to the dbw_mkz_msgs package.
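For what it's worth, the bag-level metadata can at least tell you which topics carry which message types, even without the .msg files. Below is a sketch; the topics_of_type helper and the bag name are mine, and with a real bag the topics mapping would come from rosbag.Bag('drive.bag').get_type_and_topic_info().topics, whose values carry the message type string as their first field:

```python
def topics_of_type(topics, msg_type):
    """Return the topics whose recorded message type matches msg_type.

    `topics` is a {topic_name: info} mapping shaped like the .topics
    attribute returned by rosbag's get_type_and_topic_info(), where
    info[0] is the message type string.
    """
    return sorted(t for t, info in topics.items() if info[0] == msg_type)

# Stand-in data; a real script would read this mapping from the bag instead.
topics = {
    '/vehicle/steering_report': ('dbw_mkz_msgs/SteeringReport', 1000, 1, 50.0),
    '/vehicle/brake_report': ('dbw_mkz_msgs/BrakeReport', 1000, 1, 50.0),
}
print(topics_of_type(topics, 'dbw_mkz_msgs/SteeringReport'))
# prints ['/vehicle/steering_report']
```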
#include <OP_GalleryManager.h>
Definition at line 32 of file OP_GalleryManager.h.
Adds an extra category even if no entry subscribes to it.
Gets the sort order of categories (when the parent name is not a string). Optionally the parent category may be provided in the form of a string encoding subcategories separated by slashes. When the parent category name is given, returns the order of sub-categories if the parent is found; otherwise the returned array will be empty.
Gets a list of extra categories added for the given optable.
Returns the array of all keywords used by entries in all galleries that match the requirement of the optable. | http://www.sidefx.com/docs/hdk/class_o_p___gallery_manager.html | CC-MAIN-2018-51 | refinedweb | 108 | 56.25 |
Why does the "bloat" have to continue?
Way back in the '80s, I remember being irritated, as a user/gamer, that Sierra et al. went through a long interactive install routine, configured the game according to your/its responses/findings, and then went and stuck every driver for every known machine on your machine anyway. Now, as a developer, I have tools that sometimes take a full 1Gb of space. Lots of us complain about bloatware, and generally rightfully so, but will there ever be a way for a program to never "download or install anything until it's needed"? Mozilla's Firefox and Thunderbird may be a small step in the right direction, but could "web services" or somesuch really be used as, say, a namespace for only using s/w when needed, and, say, after XX days, returning the s/w via, say, "remote garbage collection" to get stuff off one's h/d that is rarely, if ever, used? Ideas?
ps. "Back then", a 33Mb drive was paradise, too. Maybe this question is just OBE with the huge and inexpensive drives out these days, but with music and video making their moves, maybe this will become an issue again.
Lao Xuesheng
Friday, June 18, 2004
Microsoft was originally going to have that feature on XP, that it would first compress software not used recently, then automatically uninstall it if it wasn't used in a long time. But test groups hated it, so they'd probably be against a web scheme like that for the same reasons.
Rick
Friday, June 18, 2004
will there ever be a way for a program to never "download or install anything until it's needed"?
Office works this way - if you customize the install, you can pick the options you want installed. The installed footprint ranges between something like 150MB and 450MB depending on the options you pick.
You can do the same thing with VS.Net.
FWIW, I always do a blanket install of both products, because quite simply I don't need to be hunting for the damn Office CD at 2am when I have a presentation due the next day. [grin]
Philo
Friday, June 18, 2004
The Office way is pretty good when it works. Sometimes it bugs out and, like, refuses to open a document because the spellchecker for US English is not installed. I suspect this is less of a problem if you only work with one language though. (That was Office 2k btw)
Personally though, I just look for leaner apps to begin with. There is such an abundance of software out there so you can often find an app that fits pretty closely to your ideal.
So, for me, bloat only become a problem when an employer/client requires me to use a specific app.
Eric Debois
Friday, June 18, 2004
Bloat is only a real problem for people who are overly anal.
I've got a giant harddrive and a really fast processor.
I don't even blink at 2 gigabyte installs anymore.
So stop whining, boy!
Mr Fancypants
Friday, June 18, 2004
I was involved with the group that integrated the Office MSI technology into Windows (Windows Installer), and a major part of the original rationale for MSI was that it would allow clean componentization of the application and reduce the storage footprint.
The key scenario that the Office team saw (this was in 1998) was addressing customer complaints that the install size of Office was too large, and so this was a major feature of the new installer - "Install on Demand." The idea being, customers could mix and match components and only install the features they used. If they tried to use a feature that wasn't installed, it would just magically install at that time. Life would be good, customers would be happy, etc.
We even went sort of crazy, including a feature in our Active Directory-integrated deployment technology (this is what I worked on, among other things) that allowed install-on-demand by CLSID and PROGID activation - i.e., you run something that tries to instantiate a COM object you don't have installed already, and it just magically gets pulled down to your system. No bloat necessary, you only use what you need... a blissful nirvana of componentized software and lean-and-mean installations.
What happened? Most people hated "install-on-demand" - much to my surprise, I turned out to be one of them. The first time you browsed to some web page and suddenly saw a Windows Installer dialog saying "Please wait while Office 2000 is configured", you wanted to throw the computer out a window.
The reality: hard drive space is cheap, customers did not exactly respond glowingly to efforts like MSI's attempt to make features necessary for "light" installs only install when you needed them, and most people couldn't care less how much hard drive space an application uses.
Also, "bloat" is an overused term - the "bloat" usually includes massive amounts of new functionality (most often developed directly based on customer feedback) that adds value, not some unnecessary waste. If you really are pining for your old copy of WordPerfect 5.1 that fit on a 320K floppy, by all means go install it and use it. It probably works with all the app compat shims in XP.
Mike Treit
Friday, June 18, 2004
Yes, especially when "install on demand" means you had/have to go find the darn install disks, which you may not have even received on many machines, or inconveniently just can't locate in that pile of junk in the corner. (Especially after you have already paid for it in the first place.) I have to agree with those who think that space (or speed) is not *now* a problem. Yet, for argument's sake, why should I pay for the many tools in, for example, Visual Studio .NET when I may only want VB.NET or C#? I don't really mean to pick on Microsoft; it has no monopoly on "filler material". (Even our genes have it, they say, though they are beginning to wonder about that, too.) Though long in the tooth, I am new enough to CS to have an idealist attitude, I guess.
Still, for the sake of argument, take it down to the bone: in many small devices, the size of the OS and the apps is critical. One size does not fit all. I assume the Mars rovers didn't take many bytes of software that weren't expected to be frequently needed. Couldn't we extend the "only what is necessary" idea to make the machines smaller, faster, more efficient? (Our new slogan -- "What would the Mars Rover Programmers do?" :))
Unless the s/w is freeware, upgrading is rarely an option of mine, mostly due to cost. Maybe if the software only installed what I really needed as I needed it (perhaps on a leased basis) and removed all traces of the pieces I no longer need, with my permission, the price would be lower and I would get to upgrade the pieces I do use more often. I realize that commercial software reality (read: bundling) is not on my side, but that argument sounds a bit like saying you shouldn't mind having to drive an Abrams tank with radar, IFF, satellite connectivity, etc., when all you want is a motocross bike. Maybe costs would come down, too. Maybe not. Have a good weekend all.
Lao Xuesheng
Friday, June 18, 2004
Yeah, the install on demand feature of Office 2000 was really a bad idea. It always asked for a CD when you didn't have it near you. Uninstall, install a service pack, and you needed the damn CD! They sort of improved that with Office 2003 and it's good. Anyway, I always do a full install: HD space is soooo cheap nowadays, why bother to gain 0.01% more space? That makes no sense.
Nobody wants to put the CD in the drive to work with an application. Even Encarta is fully installed on my system: 2-3 GB, who cares? You can buy a 200Gb HD for about 100$. I really don't care about bloatware anymore. Same for internet downloads: I have broadband cable and tonight I downloaded Opera at the rate of 500Kbytes/s (yes bytes, not bits). So Firefox could be 100Mb, I wouldn't care if it had all the features I want.
My point is this : bloatware is only relative to the resources you have.
EnsH
Friday, June 18, 2004
>>"Bloat is only a real problem for people who are overly anal.
I've got a giant harddrive and a really fast processor."
Exactly.
Having started out in the days when 20 meg was considered a big hard drive, it took me a long time to quit being so anal about "bloat". But now that I've stopped worrying about it and just went ahead and loaded everything onto my pair of 120 gig drives, my life is much more enjoyable.
Nigritude Ultramarine Hater
Saturday, June 19, 2004
And, sometimes, ms does fix things, and reduce things by a huge amount.
The access 2000 runtime could be as much as 150 megs in size (it forced IE4 as part of the install).
Today, the brand new access 2003 royalty-free runtime system (which uses the msi installer) can package up ms-access in only 30 megs.
While 7 or 8 years ago 30 megs seemed like a huge runtime install, today that install is VERY light in terms of disk space. Heck, 30 megs is even downloadable by anyone with high speed net.
And, burning a ms-access runtime install to a cd occurs in a flash with any cd burner. I can’t even copy files to a floppy disk that fast.
I mean, just the full Visio 2003 can be 500 megs.
So, I was bracing myself for what the size of the new ms-access runtime was going to be when I use the package and deployment kit.
It turns out it is the smallest it has been in years! A very nice surprise.
So, with some reduction in the size, and the low cost of disk drives…I can’t remember the last time when I felt that the cost and size of installing my software seemed so low….
Albert D. Kallal
Edmonton, Alberta Canada
kallal@msn.com
Albert D. Kallal
Saturday, June 19, 2004
These days I don't mind the install _size_ of software, including Microsoft's. The install _time_, however, is actually getting worse than it was in the 20 floppy disk days. The Visual Studio .NET installer is so slow, it feels like I'd be faster typing in hex dumps!
Chris Nahr
Saturday, June 19, 2004
That, too, is the fault of another MSI feature: self-repairing installations.
It's always pleasant to install something that predates the MSI craze, like SQL Server 2000.
Brad Wilson (dotnetguy.techieswithcats.com)
Saturday, June 19, 2004
The clutter bothers me more than the size. I would prefer if applications went back to installing everything in their own directory and nothing but their own directory.
It would use up extra space because some DLLs and other files would be duplicated, but there would be less clutter, since all you'd have to do to remove an application is delete the directory.
T. Norman
Saturday, June 19, 2004
Chris,
I'll second your comment on the .Net installer. Arg.
Only one guy in my group is actively using it now. I was using it for a while, but after losing that machine, I'm not willing to give up a day to put it back on....
KC
Monday, June 21, 2004
24 November 2011 10:48 [Source: ICIS news]
SHANGHAI (ICIS)--French chemical producer Arkema plans to start up its new emulsion production facility in Changshu, China.
The plant, with nameplate capacity of more than 50,000 tonnes/year, will supply products to northeast and southeast Asian regions, said Marc Schuller, Arkema's executive vice president.
Arkema will also open its research and development (R&D) centre at the same site in Jiangsu province in the middle of 2013, Schuller said.
The R&D centre includes polymer science and latex synthesis, he added.
“The Changshu site in
Arkema did not disclose specific financial details for the emulsion facility or the R&D centre.
The company is expected to achieve earnings before interest, tax, depreciation and amortisation (EBITDA) of more than €1bn for 2011.
.NET/C#: fun with enums and aliases part 2
Posted by jpluimers on 2014/01/29
In the .NET/C#: fun with enums and aliases part 1 you saw that an enumerated type can specify an underlying type.
The underlying type is limited to a strict set of built-in C# integral types (byte, sbyte, short, ushort, int, uint, long and ulong), so you cannot use a CTS type such as System.Int32 for it.
So you might think that you can only define enumeration values by integer constants, like this:
namespace BeSharp
{
    enum TwoState
    {
        False = 0,
        True = 1,
    }

    enum ThreeState
    {
        False = 0,
        True = 1,
        Unknown = -1,
    }
}
Well, you can do it like this too, since operations between different enum types are allowed inside another enum declaration:
namespace BeSharp
{
    enum TwoState
    {
        False = 0,
        True = 1,
    }

    enum ThreeState
    {
        False = TwoState.False,
        True = TwoState.True,
        Unknown = -1,
    }
}
You cannot use this outside enum declarations however, so it is impossible to write something like this:
namespace BeSharp
{
    class Program
    {
        public static void Main()
        {
            ThreeState value = ThreeState.False;
            if (value == TwoState.False)
                Console.WriteLine("False");
        }
    }
}
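The comparison does become legal once both operands are brought to the same enum type with an explicit cast. Here is a sketch of one way to do it (note that an enum-to-enum cast is an explicit conversion that reinterprets the value; it is not range-checked):

```csharp
using System;

namespace BeSharp
{
    enum TwoState { False = 0, True = 1 }
    enum ThreeState { False = 0, True = 1, Unknown = -1 }

    class Program
    {
        public static void Main()
        {
            ThreeState value = ThreeState.False;
            // Casting TwoState.False to ThreeState makes the operand types match.
            if (value == (ThreeState)TwoState.False)
                Console.WriteLine("False");   // prints False
        }
    }
}
```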
The enum fun goes on even further: you can use any operator compatible with enums for declaring your values, and even mix/match types. Like
enum Animal
{
    Giraffe = Fruit.Apple * Shape.Square << DayOfWeek.Thursday,
}
–jeroen | https://wiert.me/2014/01/29/netc-fun-with-enums-and-aliases-part-2/ | CC-MAIN-2021-25 | refinedweb | 202 | 58.82 |
This is by no means an authoritative discussion about SIP. Rather, it is a chronicle of the adventures and misadventures of a bumbling newbie trying to learn to use a great tool with little documentation. Some references that are essential in conjunction to this include:
For the convenience of the reader, I've included a concise summary of the sip and python API items used here.
The string example and fraction example may be downloaded.
sip has changed from version to version. I'm using Python 2.2 and sip 3.2.4. Depending on the version you're using, sip will behave a little differently. The most notable addition in the new version of sip is support for classes that are wrapped inside namespaces.
I'm currently working on C++ software, and want to create an interface to my libraries in a more user-friendly programming language. I make use of Qt, so the fact that Sip is committed to supporting Qt is a big plus from my point of view.
Here we present a simple example of sip: an implementation of the C++ string class. There are two main aspects to implementing the bindings: writing the .sip file that the sip preprocessor uses to generate bindings, and automating the build process. First, let's create a sip file. Creating this file is pretty simple. The sip file itself is here. First, we need to declare the name of our python module:

%Module

Then we include the declarations given in the header file:

%HeaderCode
#include <string>
%End

Then we declare the interface. This is largely uneventful.
namespace std {
    class string {
    public:
        string();
        string(const char*);
        string(const std::string&);
        bool empty();
        int length();
        int size();
        /* blah blah blah */
The only comments worth making here are:
One could leave it at that and just declare those methods. However, there are some other things one may want a string class to do ...
First, let's implement __str__. We start with some code to declare the method:
void __str__() /NonLazy/;
%MemberCode

First, we declare some variables. ptr is a pointer to the underlying C++ object, and s is a temp to hold the return value of the c_str() method that we use to extract a C-style string.

const char* s;
std::string* ptr;
Note that we don't have to declare the parameters, because sip declares them for us when it processes the file. Also note that it doesn't matter what return type we declare -- the python function simply returns whatever we return in the MemberCode. The parameters are given in sipArgs. The variable sipArgsParsed is also declared in the generated code. We need to unpack the function arguments first; sip offers the sipParseArgs() function for this. The signature is
int sipParseArgs(int* sipArgsParsed, PyObject* sipArgs, const char* format, ...);

where sipArgsParsed returns the number of arguments parsed, sipArgs is the argument tuple, format is a scanf-like string that tells the function what types of arguments to look for, and the remaining arguments depend on format. In this example, we use "J1" for the format. This tells sipParseArgs to expect an object of a sip class type, and it expects two further arguments: an argument of type PyObject* which should be the class object expected (in Python, each class is an object, so for each sip wrapped object there is a corresponding type object named sipClass_classname), and a pointer-to-pointer to the corresponding C++ type. The call returns a true value if it was successful. So we add this code:
if (sipParseArgs(&sipArgsParsed, sipArgs, "J1", sipClass_std_string, &ptr))
{
    // do stuff
}
Having obtained ptr, we obtain the char* pointer from that using c_str(), and convert it to a Python string using the PyString_FromString function in the Python API. See section 7.3.1 of the Python API reference, which comes with the Python distribution, for more information about this function.

if (sipParseArgs(&sipArgsParsed, sipArgs, "J1", sipClass_std_string, &ptr))
{
    s = ptr->c_str();
    /* Python API reference, P40 */
    return PyString_FromString(s);
}
Next, we'd like to implement [] notation. This is done by implementing the __getitem__ method. Start by declaring the method:
void __getitem__ /NonLazy/;
%MemberCode
Now for the C++ member code. First, we declare and extract the this pointer as per the previous example. Then we need to check that the requested index is not out of bounds. It is important to do this, because uncaught C++ exceptions will cause Python to abort! So it's necessary to either prevent C++ exceptions from occurring, or to trap them and propagate Python exceptions.
if (a1 >= ptr->length())
{
    /* Python API Reference, Ch 4 */
    PyErr_SetString(PyExc_IndexError, "string index out of range");
    return NULL;
}

The PyErr_SetString function is part of the Python API and is explained in the API reference. What these lines of code do is raise an exception if the argument is out of bounds. Otherwise, we may return a value:
return Py_BuildValue("c", ptr->at(a1));
Py_BuildValue is documented in Extending and Embedding Python, sections 1.3 and 1.7, pp. 8-11.
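To make the intended behavior concrete, here is a pure-Python model of what the two handwritten methods are supposed to do. This is not the generated binding code, just the contract it implements:

```python
class WrappedString:
    """Pure-Python stand-in for the sip-wrapped std::string."""

    def __init__(self, s=""):
        self._s = str(s)

    def __str__(self):
        # mirrors the C++ code: c_str() followed by PyString_FromString
        return self._s

    def __getitem__(self, i):
        # mirrors the bounds check performed before indexing the string
        if i >= len(self._s):
            raise IndexError("string index out of range")
        return self._s[i]

w = WrappedString("hello")
print(str(w))    # prints hello
print(w[1])      # prints e
```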
So the problem we now have is to compile our code. Here's what works for me: I have a top-level directory containing the sip file, with a sipcode subdirectory for the generated code.
The top level Makefile is pretty simple, it just runs sip and then builds in the sub-directory.
SIP=/usr/bin/sip

sip: $(OBJS)
	$(SIP) -s ".cc" -c sipcode string.sip
	cd sipcode && make
sip will generate the following files in this example: Stringcmodule.cc, sipStringstdstring.cc, String.py, and the associated headers.
Note the way the namespace is munged into the names.
module=String
class=stdstring
objs=$(module)cmodule.o sip$(module)$(class).o

PYTHON_INCLUDES=-I/usr/include/python2.2
SIP_INCLUDES=-I/usr/local/include/sip
PYTHONLIBS=-L/usr/lib/python2.2/site-packages

%.o: %.cc
	$(CXX) $(CXXFLAGS) -c -I.. -I. $(PYTHON_INCLUDES) $(SIP_INCLUDES) $<

all: libs

libs: $(objs)
	$(CXX) -shared $(PYTHONLIBS) -lsip -o lib$(module)cmodule.so *.o

clean:
	rm -f *.o *.so *.cc *.h sip_helper *.py *.pyc
And that's it! Now we've got a String module that compiles. Installing it into the proper directories just involves copying the .so file and the .py file into the site-packages directory in your python installation.
An example that gives us a chance to play with more operators, and other finer points of sip, is that of implementing a fraction data type. Consider the fraction class with the declaration frac.h and member function definitions frac.cc. This class was written for completeness, and economy of the coder's time ;-) so a lot of operators are implemented in terms of others. To implement a python version of this, we redeclare the member functions, excluding operators:
Some comments about the class:
Declaring the basic interface is straightforward:
class Fraction {
public:
    Fraction(int, int = 1);
    Fraction(const Fraction&);
    int numerator() const;
    int denominator() const;
    /* we'll put more code here later */
};

int gcd(int, int);

The interesting part is declaring the operators. Implementing the arithmetic binary operators +, -, *, / involves essentially the same code, and most of the code just does error conversion and type checking. It would be nice to break this boilerplate code off into its own function. The strategy we use is to have a function that takes a function pointer as an argument. The function pointer is to a function that invokes one of the operators +, -, *, /.
%HeaderCode
#include <frac.h>

typedef Fraction* (*binary_fraction_op_t)(Fraction*, Fraction*);

PyObject* BinaryOp(PyObject* sipArgs, binary_fraction_op_t op);

Fraction* plus(Fraction* x, Fraction* y);
Fraction* minus(Fraction* x, Fraction* y);
Fraction* mult(Fraction* x, Fraction* y);
Fraction* div(Fraction* x, Fraction* y);
%End
Then we need to implement the body of these functions. The operator functions are very simple, they just perform the operation. Note that we use pointers all the time, because all our Fraction objects are heap allocated. Also note the use of sipNewCppToSelf(). This function is used to wrap a dynamically allocated (with new) C++ object in python. The arguments are the object to be wrapped, the PyObject* corresponding to the python class to be used, and flags (which are in practice always SIP_SIMPLE|SIP_PY_OWNED)
%C++Code
Fraction* plus(Fraction* x, Fraction* y)
{
    return new Fraction(*x + *y);
}

Fraction* minus(Fraction* x, Fraction* y)
{
    return new Fraction(*x - *y);
}

Fraction* mult(Fraction* x, Fraction* y)
{
    return new Fraction(*x * *y);
}

Fraction* div(Fraction* x, Fraction* y)
{
    return new Fraction(*x / *y);
}

PyObject* BinaryOp(PyObject* sipArgs, binary_fraction_op_t op)
{
    Fraction *ptr1, *ptr2;
    int sipArgsParsed = 0;

    /* Extract operands and make sure that they are "really" fractions */
    if (sipParseArgs(&sipArgsParsed, sipArgs, "J1J1",
                     sipClass_Fraction, &ptr1,
                     sipClass_Fraction, &ptr2))
    {
        /* ptr1 and ptr2 point to fractions */
        return sipNewCppToSelf(op(ptr1, ptr2),
                               sipClass_Fraction,
                               SIP_SIMPLE | SIP_PY_OWNED);
    }

    return NULL;
}
%End
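The factoring trick is perhaps easier to see in plain Python. Here binary_op mirrors BinaryOp, with Python's own fractions.Fraction standing in for the wrapped class and operator.add playing the role of the function pointer (the names here are mine, not part of the sip example):

```python
from fractions import Fraction
import operator

def binary_op(args, op):
    """Mirror of BinaryOp: verify both operands, then delegate to `op`."""
    x, y = args
    if not (isinstance(x, Fraction) and isinstance(y, Fraction)):
        return None               # BinaryOp returns NULL in this case
    return op(x, y)

print(binary_op((Fraction(1, 2), Fraction(1, 3)), operator.add))   # prints 5/6
```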
This makes implementing the arithmetic operators simpler. Note that all the binary arithmetic operators return type PyObject*, and take two arguments of type PyObject*. The functions that implement these operators are documented in the API reference, section 6.2, the number protocol. For example, the __add__ method corresponds with the function PyNumber_Add. While the function signatures are obvious for a lot of the data types, some of them, like __coerce__, require one to read the documentation to understand how to implement them.
So here's the code to implement our operations:
void __add__() /NonLazy/;
%MemberCode
return BinaryOp(sipArgs, plus);
%End

void __sub__() /NonLazy/;
%MemberCode
return BinaryOp(sipArgs, minus);
%End

void __mul__() /NonLazy/;
%MemberCode
return BinaryOp(sipArgs, mult);
%End

void __div__() /NonLazy/;
%MemberCode
return BinaryOp(sipArgs, div);
%End

We'd also like to permit explicit conversion to floating point numbers. We do this by implementing __float__. Note the Python API function Py_BuildValue.
void __float__() /NonLazy/;
%MemberCode
Fraction* ptr;

if (sipParseArgs(&sipArgsParsed, sipArgs, "J1", sipClass_Fraction, &ptr))
{
    double x = (double)ptr->numerator() / ptr->denominator();
    return Py_BuildValue("d", x);
}
%End
We're almost done. A desirable feature to make our fraction data type interoperate better with Python's numerical types would be an implementation of the "type promotion" feature, the __coerce__ method. Implementing this is a little tricky; the Python API documentation on the Number Protocol, 6.2 P29, describes the coercion contract in detail.
So, it's a good thing we read the documents. While one might reasonably guess the meaning of the return code, and deduce that the arguments are supposed to be overwritten with the return values, the reference counts are a potential trap. If we didn't read the document, our code would have segfaulted (like mine did when I was learning this!)
The basic aim of the coerce method then is to end up with two objects of the same type, and we know that the first is a fraction. We start by declaring some variables:
void __coerce__() /NonLazy/;
%MemberCode
Fraction* ptr;
long i;
bool success = false;
PyObject *a0, *a1;
a0,a1 store the two arguments. We need to extract these from the arguments and check their type (alternatively, we could use sipParseArgs, but I'd like to illustrate a different approach)
a0 = PyTuple_GetItem(sipArgs, 0);
a1 = PyTuple_GetItem(sipArgs, 1);

It's important to be aware of what Python API functions do with reference counts. The Python documentation asserts that PyTuple_GetItem returns a borrowed reference, which means that the reference count of that tuple element is not increased. Next, we make sure that the first object is of the right type (it should be!)
if (!sipIsSubClassInstance(a0, sipClass_Fraction))
    return NULL; // this should not happen

Then we check to see if both objects are of the same type. We already know that *a0 is of type Fraction; we need to perform a check for *a1. To check types, we use a special function, sipIsSubClassInstance().
int sipIsSubClassInstance(PyObject *inst, PyObject *baseclass);

This function returns a true value if inst is an object whose class is some derived type of baseclass.
if (sipIsSubClassInstance(a1, sipClass_Fraction))
{
    return Py_BuildValue("(OO)", a0, a1);
}
If the arguments are not both fractions, we need to check for integral types, and convert. We do this using the checking functions PyLong_Check() and PyInt_Check() and the conversions PyLong_AsLong() and PyInt_AsLong() (again, described in 6.1, API ref). Note that if these conversions are successful, we increment the reference count of *a0. This has the same effect as returning a copy of *a0 with a reference count of 1.
if (PyLong_Check(a1))
{
    success = true;
    i = PyLong_AsLong(a1);
}
else if (PyInt_Check(a1))
{
    success = true;
    i = PyInt_AsLong(a1);
}

if (success)
{
    return Py_BuildValue("(Oi)", a0, i);
}
else
{
    return NULL;
}
And that's it. We now have a complete sip file. An interesting exercise would be to implement other operators, and/or implement __coerce__ for floating point data types, but this is sufficient to get the reader started.
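The coercion contract itself can be illustrated in plain Python. The sketch below calls __coerce__ directly (the implicit coercion machinery only existed in Python 2), and follows the aim stated above: end up with two objects of the same type, or signal that conversion is impossible. The class and its names are mine, a stand-in for the wrapped Fraction:

```python
class Frac:
    """Minimal stand-in for the wrapped Fraction class."""

    def __init__(self, num, den=1):
        self.num, self.den = num, den

    def __coerce__(self, other):
        if isinstance(other, Frac):      # already the same type
            return (self, other)
        if isinstance(other, int):       # promote integral types
            return (self, Frac(other, 1))
        return None                      # conversion impossible

f = Frac(1, 2)
pair = f.__coerce__(3)
print(pair[1].num, pair[1].den)          # prints 3 1
```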
const void *sipGetCppPtr(sipThisType*, PyObject*);
PyObject *sipMapCppToSelf(const void*, PyObject*);
PyObject *sipNewCppToSelf(const void*, PyObject*, int);
int sipIsSubClassInstance(PyObject*, PyObject*);
PyObject *sipNewCppToSelf (const void * object,PyObject * class, int flags);
Convert a C++ object to a Python object. This function is used to return values from functions. For the flags, you will nearly always want to use SIP_SIMPLE | SIP_PY_OWNED. This means that Python is responsible for managing the object. I am still not clear what SIP_SIMPLE means, but it has something to do with the internal representation of the object.
PyObject *sipMapCppToSelf (const void * object,PyObject * class);
Convert a C++ object to a Python object. This function is used to convert C++ objects to Python objects. C++ bears responsibility for deallocating the memory (so use sipNewCppToSelf to construct return values)
const void* sipGetCppPtr(sipThisType* object, PyObject* class);
Returns C++ class pointer. This function is unsafe, in that the way it's usually used involves a cast from a generic PyObject pointer to a sipThisType pointer. So it should only be used if the argument is known to be a sip object.
int sipIsSubClassInstance (PyObject * object,PyObject * class);
Check to see if object belongs to class or some subclass of class. This is useful for type checking, if you don't know anything about an objects type.
These functions are all documented in the API reference, but are listed here for convenience. This is not meant to be comprehensive (for that, there's the Python API reference).
int PyInt_Check(PyObject* o);
int PyLong_Check(PyObject* o);
int PyFloat_Check(PyObject* o);

Return a true value if o is, respectively, a Python int, long or float object.
long PyInt_AsLong(PyObject* o);
long PyLong_AsLong(PyObject* o);
long long PyLong_AsLongLong(PyObject* o);
double PyLong_AsDouble(PyObject* o);
double PyFloat_AsDouble(PyObject* o);

Convert Python objects to C numeric types. Several conversions are offered for Python's long data type, because it is an arbitrary-precision type. Conversions involving long raise OverflowError if unsuccessful.
Overloading numeric operators amounts to partially implementing Python's number protocol, described in the API reference. In particular, one implements functions that provide the functionality documented there. The Python functions delegate to the method table built by sip, so sip__add__classname does the work when __add__ (or PyNumber_Add, which has the same effect) is called. There are a lot of arithmetic operators, and they're all documented in the API reference. Here, we present the ones used in the examples.
PyObject* PyNumber_Add(PyObject* left, PyObject* right);
PyObject* PyNumber_Subtract(PyObject* left, PyObject* right);
PyObject* PyNumber_Multiply(PyObject* left, PyObject* right);
PyObject* PyNumber_Divide(PyObject* left, PyObject* right);
Respectively add, subtract, multiply, and divide two python objects. Same as (respectively) the methods __add__, __sub__, __mul__, __div__ in Python. Methods should return NULL if unsuccessful. Python takes care of type checking and promotion to make sure both operands are of the same type.
This one is a little complex; the API reference describes the full coercion contract: the two argument slots are replaced, in place, with objects of a common type, and their reference counts are incremented.
PyObject* PyNumber_Int(PyObject*); PyObject* PyNumber_Long(PyObject*); PyObject* PyNumber_Float(PyObject*);
implement the int(), long() and float() operators respectively.
PyObject* Py_BuildValue(char* format, ... );
Construct a Python object from C variables. Return NULL on failure. The specific rules for the format string are quite long and are described in the Extending and Embedding guide.
void PyErr_SetString(PyObject* exception, char* message);
Exceptions behave in a way that may seem strange to a C++ programmer. One throws an exception by "setting" an exception flag. The exception objects are defined in Python (see the API reference for a complete list). One usually throws a standard exception type.
One should very rarely have to use any of these when writing sip bindings. They are needed occasionally to implement a Python method, e.g. __coerce__.
void Py_XINCREF(PyObject* o); void Py_XDECREF(PyObject* o);
Respectively increment and decrement the reference count of o. If o is a null pointer, this has no effect.
Deriving from QWidget proves a tricky problem, because sip needs to be informed that the QWidget class has been defined elsewhere in python. To do this, one needs to use the %Import directive in sip. The other aspect of it that is quite tricky is invoking sip correctly, and finding the sipQtFeatures.h file. I've included a simple (perhaps even trivial), but working, example. | http://www.panix.com/~elflord/unix/siptute/ | CC-MAIN-2016-26 | refinedweb | 2,782 | 54.22 |
Tutorial 1c: Making some activity¶
In the previous part of the tutorial we found that each neuron was producing only one spike. In this part, we alter the model so that more spikes are generated: we raise the resting potential El above the spike threshold, which ensures that the neurons keep firing. The first few lines remain the same:
from brian import *

tau = 20 * msecond  # membrane time constant
Vt = -50 * mvolt    # spike threshold
Vr = -60 * mvolt    # reset value
But we change the resting potential to -49 mV, just above the spike threshold:
El = -49 * mvolt # resting potential (same as the reset)
And then continue as before:
G = NeuronGroup(N=40, model='dV/dt = -(V-El)/tau : volt',
                threshold=Vt, reset=Vr)
M = SpikeMonitor(G)

run(1 * second)

print M.nspikes
Running this program gives the output
840. That’s because
every neuron starts at the same initial value and proceeds
deterministically, so that each neuron fires at exactly the
same time, in total 21 times during the 1s of the run.
In the next part, we’ll introduce a random element into the behaviour of the network.
Exercises¶
- Try varying the parameters and seeing how the number of spikes generated varies.
- Solve the differential equation by hand and compute a formula for the number of spikes generated. Compare this with the program output and thereby partially verify it. (Hint: each neuron starts at above the threshold and so fires a spike immediately.)
Solution¶
Solving the differential equation gives:
V = El + (Vr-El) exp (-t/tau)
Setting V=Vt at time t gives:
t = tau log( (Vr-El) / (Vt-El) )
If the simulator runs for time T, and fires a spike immediately at the beginning of the run it will then generate n spikes, where:
n = [T/t] + 1
If you have m neurons all doing the same thing, you get nm spikes. This calculation with the parameters above gives:
t = 48.0 ms
n = 21
nm = 840
As predicted. | https://brian.readthedocs.io/en/1.4.3/tutorial_1c_making_some_activity.html | CC-MAIN-2019-30 | refinedweb | 338 | 57 |
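The arithmetic above is easy to check with a few lines of plain Python (units stripped: times in seconds, voltages in volts):

```python
import math

tau = 0.020                          # membrane time constant (20 ms)
Vt, Vr, El = -0.050, -0.060, -0.049  # threshold, reset, resting potential (V)
T, m = 1.0, 40                       # run duration (s) and number of neurons

t = tau * math.log((Vr - El) / (Vt - El))  # inter-spike interval
n = int(T / t) + 1                         # spikes per neuron
print(round(t * 1000, 1), n, n * m)        # 48.0 21 840
```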
The barriers to setting up a persistent data store for your web or mobile app seem lower than ever. One product that lowers these barriers in a delightfully simple yet feature-rich way is Twilio Sync. Twilio Sync's JavaScript SDK provides a straightforward abstraction using websockets to:
- persist data in a handful of data structures ("objects"),
- and alter persisted data from multiple clients.
In this tutorial we're going to leverage the convenience Twilio Sync affords us, along with a fantastic open-source project, to make a fully functional multi-client Kanban board.
Ingredients
- Python 3.6 or newer. If your operating system does not provide a Python interpreter, you can go to python.org to download an installer.
- A Twilio account. You can create one here if need be. You can review the features and limitations of a free Twilio account here.
Mise-en-Place
In a French kitchen "mise-en-place" means "preparing all your ingredients for easy access".
Creating a project directory
Begin by creating the directory where you will store your project files. Open a terminal window, find a suitable parent directory, and then enter the following commands:
$ mkdir twilio-sync-kanban $ cd twilio-sync-kanban
This project is going to have a Python back end and a JavaScript front end. Create separate subdirectories for these two:
$ mkdir back $ mkdir front
Creating a Python virtual environment
Following Python development best practices, let’s create a virtual environment where you will install your Python dependencies.
If you are using a Unix or MacOS system, open a terminal and enter the following commands to do so:
$ cd back $ python -m venv venv $ source venv/bin/activate (venv) $ pip install twilio flask python-dotenv flask-cors faker
If you’re using Windows (PowerShell), enter the following commands in a command prompt window:
$ cd back $ python -m venv venv $ venv\Scripts\activate (venv) $ pip install twilio flask python-dotenv flask-cors faker
The last command uses
pip, the Python package installer, to install the Python packages that you are going to use in this project, which are:
- The Twilio Python Helper library, for generating access tokens for the front end
- The Flask framework, to create the web application
- Python-dotenv, to import the contents of your .env file as environment variables
- Flask-CORS, to provide cross-origin request sharing support to the Flask application
- Faker, to generate random usernames
To preserve the list of dependencies it is a good idea to generate a Python requirements file:
$ pip freeze > requirements.txt
Configuring Twilio Sync
Log in to your Twilio account to access the Console. In this page you can see the “Account SID” assigned to your account. This is important, as it identifies your account and is used for authenticating requests to the Twilio API.
Because you are going to need the Account SID later, click the “Copy to Clipboard” button on the right side. Then create a new file named .env, still in the back subdirectory and write the following contents to it, carefully pasting the SID where indicated:
TWILIO_ACCOUNT_SID=<your-twilio-account-sid>
The Twilio Sync service also requires a Twilio API Key for authentication, so next you need to create one. Go to the API Keys section of the Twilio Console, create a new standard API key (a friendly name such as kanban works well), and take note of the SID and Secret values it shows you (the secret is only displayed once, so copy it somewhere safe).
Open the .env file again in your text editor, and add two more lines to it to record the details of your API key:
TWILIO_ACCOUNT_SID=<your-twilio-account-sid> TWILIO_API_KEY=<your-twilio-api-key-sid> TWILIO_API_SECRET=<your-twilio-api-key-secret>
Once you have your API key safely written to the .env file you can leave the API Keys page. Note that if you ever lose your API Key Secret you will need to generate a new key.
To complete the Twilio account setup, you need to create a Twilio Sync service. Go to the Twilio Sync section of the Twilio Console, click on Services, and then on the red
+ sign to create a new service. Give it the name kanban or something similar. The next page is going to show you some information about the new service, including the “Service SID”. Copy this value to your clipboard, go back to your .env file and add a fourth line for it:
TWILIO_ACCOUNT_SID=<your-twilio-account-sid> TWILIO_API_KEY=<your-twilio-api-key-sid> TWILIO_API_SECRET=<your-twilio-api-key-secret> TWILIO_SYNC_SERVICE_SID=<your-twilio-sync-service-sid-here>
The information contained in your .env file is private. Make sure you don’t share this file with anyone. If you plan on putting your project under source control it would be a good idea to configure this file so that it’s ignored, because you don’t want to ever commit this file by mistake.
Back end
The Python back end of this project is going to be dedicated to generating access tokens that will enable the front end to access the Twilio Sync service from the browser.
Copy the code below into a file named app.py in the back subdirectory of the project:
import os from dotenv import load_dotenv from faker import Faker from flask import Flask, jsonify from flask_cors import CORS from twilio.jwt.access_token import AccessToken from twilio.jwt.access_token.grants import SyncGrant dotenv_path = os.path.join(os.path.dirname(__file__), ".env") load_dotenv(dotenv_path) fake = Faker() app = Flask(__name__) CORS(app) @app.route("/token") def randomToken(): identity = fake.user_name() token = AccessToken( os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_API_KEY"], os.environ["TWILIO_API_SECRET"], ) token.identity = identity sync_grant = SyncGrant(service_sid=os.environ["TWILIO_SYNC_SERVICE_SID"]) token.add_grant(sync_grant) token = token.to_jwt().decode("utf-8") return jsonify(identity=identity, token=token) if __name__ == "__main__": app.run(debug=True, host="0.0.0.0", port=5001)
The application starts by loading the .env file, which will make all the variables included in it part of the environment.
An instance of the
faker package is also initialized.
Next a Flask application instance is created, and the Flask-CORS extension is initialized with default settings, which will allow any origins to make calls into the server.
The /token route is going to be called by the front end to request a Twilio access token. The implementation of this endpoint uses the Twilio Python Helper Library to generate the token using the credentials imported from the .env file. Note that the token is given a
SyncGrant configured with the Sync service created earlier.
In a normal application the client would be submitting login credentials, which would be verified. Since this application is not going to have a user database, the
faker package is used to generate a random username that is added to the token as the user identity.
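As an aside, the access token the endpoint returns is a JWT, so its payload is just base64url-encoded JSON. If you're curious what the back end is handing out, you can peek at a token's payload (without verifying its signature) using only the standard library. Nothing Twilio-specific is assumed here beyond the token being a JWT; the demo token below is hand-built for illustration:

```python
import base64
import json

def jwt_payload(token):
    """Decode the (unverified!) payload segment of a JWT for inspection."""
    payload_b64 = token.split('.')[1]
    payload_b64 += '=' * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Demo with a hand-built token; a real one would come from GET /token.
claims = {"jti": "demo", "grants": {"identity": "jane"}}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip('=')
fake_token = 'header.' + body + '.signature'
print(jwt_payload(fake_token)["grants"]["identity"])  # jane
```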
You can start the back end server with the following command:
$ python app.py
The output should look like the following:
 * Running on http://0.0.0.0:5001/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 116-844-499
An important note, especially if the above didn't work: if you have any Twilio credentials in your computer's environment that use the same names as those in .env, they will be used instead of the .env values. Remove them from your environment if necessary, then open a new terminal, activate the virtual environment and run the above command again.
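That precedence is a consequence of how load_dotenv behaves: by default it does not overwrite variables that already exist in the environment. A simplified, standard-library-only sketch of that behavior (the real python-dotenv also handles quoting, comments and variable interpolation):

```python
import os

def load_env(text):
    """Minimal stand-in for load_dotenv: parse KEY=VALUE lines,
    never overriding variables that are already set."""
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#') or '=' not in line:
            continue
        key, _, value = line.partition('=')
        os.environ.setdefault(key.strip(), value.strip())

os.environ['TWILIO_ACCOUNT_SID'] = 'from-the-shell'
load_env("TWILIO_ACCOUNT_SID=from-dotenv\nOTHER=abc")
print(os.environ['TWILIO_ACCOUNT_SID'])  # from-the-shell (the shell wins)
print(os.environ['OTHER'])               # abc
```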
The back end is done and ready. 🚀
Minimum viable front end
Now let's use our single /token endpoint to sprint towards a nice, fulfilling "hello world", shall we?
Twilio provides a twilio-sync.js library that can be used from the browser. According to the documentation we need to instantiate a Sync client using a token obtained from our back end endpoint.
Open a second terminal window (leave the first terminal running the back end server), and go into the front directory:
$ cd twilio-sync-kanban/front
Create an index.html file in this directory with the following contents:
<!doctype html> <html> <head> <title>Twilio Sync Kanban</title> <meta charset="utf-8"> <script type="text/javascript" src=""></script> </head> <body>
    <script type="text/javascript">
      let syncClient

      const setupTwilioClient = async () => {
        const result = await fetch('http://localhost:5001/token')
        const resultJson = await result.json()
        syncClient = new Twilio.Sync.Client(resultJson.token)
        syncClient.on('connectionStateChanged', state => {
          if (state === 'connected') {
            console.log('Sync is live!')
          }
        })
      }

      window.onload = setupTwilioClient
    </script>
</body> </html>
Here we're using the async/await syntax instead of promises, mostly to keep things nice and readable. What are we doing? We are:
- Loading the Twilio Sync JavaScript SDK from the Twilio CDN.
- Waiting for the page to load (window.onload event).
- Calling our Flask endpoint to get a token.
- Using that token to instantiate a syncClient instance.
Make sure your Flask back end is running, then open the index.html in your browser. For this use the “Open File…” option in the browser, navigate to the front directory and select index.html.
The web page is going to be completely blank, but if you open your web browser’s console you should see this message, indicating the front end is able to connect to Twilio Sync:
Sync is live!
Actually display something
Is this an MVP yet? I'd say we need at least one thing painted on the screen before we pitch investors.
The Twilio Sync service allows you to store data in the cloud. When several clients are connected at the same time and one of them makes a change to this data, the others are notified so that they can refresh. For this application we’ll store a list of tasks.
Let's get ready to add a few things at the end of the
setupTwilioClient() function:

const tasks = await syncClient.list('tasks')
const items = await tasks.getItems()
console.log(items)
These two new methods — syncClient.list() and tasks.getItems() — are documented in the ”Client” and "List" sections of the JavaScript SDK documentation, respectively. The first creates or retrieves a list object with the name
tasks, and the second obtains the list of items stored in it.
Refresh the page in your browser to run the updated code. The output of the above in the browser console should be:
e {prevToken: null, nextToken: null, items: Array(0), source: ƒ}
  items: []
  nextToken: null
  prevToken: null
  source: ƒ (e)
  hasNextPage: (...)
  hasPrevPage: (...)
  __proto__: Object
Let's add a few items to the list using list.push():
const tasks = await syncClient.list('tasks')
await tasks.push({name: 'buy milk'})
await tasks.push({name: 'write blog post'})
const items = await tasks.getItems()
console.log(items)
Refresh the page once again. The list should now show two items. If you expand the
items attribute it should look like this:
items: Array(2)
  0: e
    data:
      dateExpires: null
      dateUpdated: ...
      index: 0
      lastEventId: 0
      revision: "0"
      uri: "<service_id>/Lists/<list_id>/Items/0"
      value: {name: "buy milk"}
  1: e
    data:
      dateExpires: null
      dateUpdated: ...
      index: 1
      lastEventId: 0
      revision: "0"
      uri: "<service_id>/Lists/<list_id>/Items/1"
      value: {name: "write blog post"}
As a next step let’s now render the items from the list on the page. First add a new
<div> element at the start of the
<body>:
<body> <div id="tasks"></div> ... </body>
Then update the bottom of the
setupTwilioClient() function to look as follows:
const tasks = await syncClient.list('tasks')
// await tasks.push({name: 'buy milk'})
// await tasks.push({name: 'write blog post'})
const items = await tasks.getItems()

const tasksDiv = document.getElementById('tasks')
items.items.forEach(item => {
  const itemDiv = document.createElement('div')
  itemDiv.className = "task-item"
  itemDiv.innerText = item.data.name
  tasksDiv.appendChild(itemDiv)
})
// console.log(items)
If all went well, when you refresh the page you should now have a very simple display of the two tasks added earlier:
User Interface
Now that we've gotten the Sync client working and our page rendering the list's contents, let's enable a user to add new items to the list.
If we weren't concerned with sharing our "tasks" list's state across devices, we could just make an <input> field, have it populate and append additional
task-item divs, then pop open a cold brew. However, this is not the magic we're after: We want to
push() an item on to the
tasks list and then have it show up on this and any other device which is connected to the application.
With the code as it stands, what happens to the tasks div if we add an item to the list after the page has been rendered?
Let’s try this by adding a
push() right after the
forEach render loop:
items.items.forEach(item => {
  ...
})

await tasks.push({name: 'figure out how event listeners work'})
Reloading the page after this change yields... absolutely nothing. This is because there's nothing telling the page the task list has changed. Refreshing the page once more will render the new item, but what we're after is real-time rendering of state changes without having to reload the page.
Delete the
push() added above before you continue.
Knowing this flaw in the current code, let's push forward and create an
<input> which does such a
push() call, then work to ensure this update is rendered.
Add a form at the top of the page
<body> element:
<body> <form onsubmit="addTask(event)"> <input id="task-input" type="text" name="task" /> <input type=submit /> </form> ... </body>
Then add the
addTask() function that handles the above form submission below
setupTwilioClient():
const addTask = async event => {
  event.preventDefault()
  newTaskField = event.target.elements.task
  const newTask = newTaskField.value
  console.log(newTask)
  newTaskField.value = ''
  const tasks = await syncClient.list('tasks')
  tasks.push({name: newTask})
}
Here we collect a new task item, clear the input field, then push the item to the Twilio Sync list. As before, we can't see the new item until we manually reload the page.
One of the key features of the Twilio Sync JavaScript SDK is the ability to add event listeners for changes to a given Sync object. Let’s add an event handler for the
itemAdded event, where we can update the list as new items are inserted. Add the following code right after the
forEach loop that renders the task list:
tasks.on('itemAdded', item => {
  const itemDiv = document.createElement('div')
  itemDiv.className = "task-item"
  itemDiv.innerText = item.item.data.name
  tasksDiv.appendChild(itemDiv)
})
Now, when you (or anyone else using the application) add an item to the
tasks list, it will be rendered on the page within a fraction of a second. Go wild!
A Wee CR(no U)D Library for Sync
There are a few helper functions we can anticipate needing in our forthcoming Kanban board. We've got “Create” and “Read” handled, and now we are going to add “Delete”. For our MVP we’ll omit any “Update” functionality.
Let’s begin by adding a “Delete” button next to each list item. Below you can see the complete updated version of the index.html page, with all the changes necessary to add the delete functionality:
<html>
  <head>
    <title>Twilio Sync Kanban</title>
    <meta charset="utf-8">
    <script type="text/javascript" src=""></script>
    <style>
      div { margin: .5em; }
      span { padding: .5em; }
    </style>
  </head>
  <body>
    <form onsubmit="addTask(event)">
      <input id="task-input" type="text" name="task" />
      <input type=submit />
    </form>
    <div id="tasks"></div>

    <script type="text/javascript">
      let syncClient

      const setupTwilioClient = async () => {
        const result = await fetch('http://localhost:5001/token')
        const resultJson = await result.json()
        syncClient = new Twilio.Sync.Client(resultJson.token)

        const tasks = await syncClient.list('tasks')
        const items = await tasks.getItems()

        const tasksDiv = document.getElementById('tasks')

        const renderTask = item => {
          const containerDiv = document.createElement('div')
          containerDiv.className = "task-item"
          containerDiv.dataset.index = item.index

          const itemSpan = document.createElement('span')
          itemSpan.innerText = item.data.name
          containerDiv.appendChild(itemSpan)

          const deleteButton = document.createElement('button')
          deleteButton.innerText = "delete"
          deleteButton.addEventListener('click', () => deleteTask(item.index))
          containerDiv.appendChild(deleteButton)

          tasksDiv.appendChild(containerDiv)
        }

        items.items.forEach(renderTask)

        tasks.on('itemAdded', item => renderTask(item.item))

        tasks.on('itemRemoved', item => {
          const itemDiv = document.querySelector(`.task-item[data-index="${item.index}"]`)
          itemDiv.remove()
        })
      }

      const addTask = async event => {
        event.preventDefault()
        newTaskField = event.target.elements.task
        const newTask = newTaskField.value
        console.log(newTask)
        newTaskField.value = ''
        const tasks = await syncClient.list('tasks')
        tasks.push({name: newTask})
      }

      const deleteTask = async (index) => {
        console.log('delete:' + index);
        const list = await syncClient.list('tasks')
        list.remove(index)
      }

      window.onload = setupTwilioClient
    </script>
  </body>
</html>
The most important change is the addition of the renderTask() function. This function contains what was previously inside the forEach render loop, extended to add the "Delete" button next to each item. We extract this logic into a function because we need it both (a) when we render the initial list and (b) when we create a new task through the form; both the render loop and the itemAdded event handler now call it.
To make the tasks and the delete buttons look nicely spaced we have added a small
<style> block in the
<head> section of the page.
The delete button next to each task has an event handler associated with it which invokes the
deleteTask() function with the item index as the argument.
To make this work we add the index of each item to its container <div> element as a data attribute. We reference this attribute in the event handler for the Sync list's itemRemoved event to determine which element to remove.
Refresh the page after making the above updates. The application should now function like so:
Quick Detour: Multiple Client Demo
Backtracking just a tiny bit, let's quickly prove the magic of Twilio Sync. An interesting experiment that you can try is to open a second browser window on this application and see how changes made in one of the windows update in real time in the other.
jKanban
This is where I tell you about Sir Riccardo Tartaglia, the contemporary Italian noble who has done the public service of creating a fully functional, open-source vanilla JavaScript Kanban board. Please follow links there to buy him a cup of coffee: I bought several. ☕☕
To incorporate jKanban in the project we can follow the documentation and download two files into the front subdirectory: jkanban.min.js and jkanban.min.css, both available from the project's GitHub repository.
After you add the files to your front end, you can reference them from the index.html page, and at the same time update our CSS definitions:
<html> <head> <title>Twilio Sync Kanban</title> <meta charset="utf-8"> <script type="text/javascript" src=""></script> <script src="jkanban.min.js"></script> <link rel="stylesheet" href="jkanban.min.css"> <style> form { margin: 1em; } span { padding: .5em; } [data-class=card]:hover { cursor: grab } .dragging { cursor: grabbing !important } </style> </head> <body> ... </body> </html>
At the top of the
<script> element, instantiate a kanban board instance with three columns labeled “todo”, “doing” and “done”:
<script> const board = new jKanban({ element: "#tasks", gutter: "10px", widthBoard: "350px", dragBoards: false, boards: [ {"id": "todo", "title" : "todo"}, {"id": "doing", "title" : "doing"}, {"id": "done", "title" : "done"}, ], }) ... </script>
Next, change the implementation of the
renderTask() function to create kanban cards using the
addElement() method of the board. Let’s also change the “Delete” button to a nice icon from Hero Icons:
const renderTask = item => {
  board.addElement("todo", {
    id: item.index,
    title: (
      `<span>${item.data.name}</span>` +
      // Heroicons-style "x" icon; any small inline SVG works here.
      `<svg onclick="deleteTask(${item.index})" xmlns="http://www.w3.org/2000/svg" height="15" width="15" fill="none" viewBox="0 0 24 24" stroke="currentColor">` +
      `<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M6 18L18 6M6 6l12 12" />` +
      `</svg>`
    ),
    class: "card",
    drag: el => el.classList.add('dragging'),
    dragend: el => el.classList.remove('dragging'),
  })
}
And finally, update the
itemRemoved event handler to properly remove list items from the kanban board using the
removeElement() method:
tasks.on('itemRemoved', item => { board.removeElement(item.index.toString()) })
Refresh the page to see the first version of the kanban board in action. All the items in the Sync list render in the first column for now, and although you can drag them to other columns, these column changes aren't yet shared with other clients.
Note how we toggle the cursor between "grab" and "grabbing" when a task is being dragged. There are also some nice default animations that come with jKanban for adding/moving/deleting cards.
Last Thing: Persisting the Columns
We're getting close to having a fully functional Kanban board! But you may notice that if you drag cards to other columns, those changes are lost when you refresh the page and all the cards go back to the first column.
To persist the column each card is associated with, let's add a second attribute to each item that we save to the Sync list. The current items have the structure {"name": "task text"}, to which we'll add a list attribute indicating in which of the three columns the task is located.
In the
addTask() function we add the new
list attribute set to
todo, which is the first column, to all new tasks that are created:
const addTask = async event => {
  event.preventDefault()
  newTaskField = event.target.elements.task
  const newTask = newTaskField.value
  console.log(newTask)
  newTaskField.value = ''
  const tasks = await syncClient.list('tasks')
  tasks.push({name: newTask, list: 'todo'})
}
Next we add a “drop” event handler for the kanban board. This is added as an attribute in the initialization options of the board:
const board = new jKanban({
  ...
  dropEl: async (el, target, source) => {
    const sourceId = source.parentElement.dataset.id
    const targetId = target.parentElement.dataset.id
    if (sourceId === targetId) {
      return
    }
    const itemId = el.dataset.eid
    const name = el.innerText
    const tasks = await syncClient.list('tasks')
    tasks.set(itemId, {name, list: targetId})
  }
})
When we add a task to the page in the
renderTask() function, we use the
list attribute of the list item if available, defaulting to the first column if not:
const renderTask = item => {
  board.addElement(item.data.list || "todo", {
    ...
  })
}
Finally, we add an event handler for the
itemUpdated event of the Sync list, so that we can update all connected applications when a task is moved between columns:
tasks.on('itemUpdated', ({ item }) => {
  const id = item.index.toString()
  const element = board.findElement(id)
  board.removeElement(id)
  board.addElement(item.data.list, {
    id,
    title: element.innerHTML,
    class: "card",
    drag: el => el.classList.add('dragging'),
    dragend: el => el.classList.remove('dragging'),
  })
})
Wrapping Up
I don't know about you, but I find it so enjoyable to be building on the web at this moment. It's difficult to estimate how many lines of code Twilio Sync and its JavaScript SDK saved me here, let alone the massive savings of having a great open-source library like jKanban. However, 116 lines of HTML + JS later I have to imagine that it's significant. For such a simple application it's also great to have real-time data synchronization out of the box without deploying any microservices or databases.
Something I want to make note of, too, is how "cheap" it was to incorporate Sync into the flow of data in this application: we've used an event-driven pattern not unlike Svelte's store or React's Redux, and without any additional logic the kanban board is synced across all clients.
Testing Outside Your Wifi Network
If you'd like to try the application out with your friends and family, here's what I'd suggest:
- Install ngrok and run ngrok http 5001 in a separate terminal window to create a temporary public URL that tunnels requests into your locally running Flask server.
- Add the ngrok forwarding URL to the singular fetch() call in index.html.
- Share the ngrok URL with your friends!
Next Steps
If I was going to spend another several hours on this project, I'd work on these tasks:
- Add undo/redo with keyboard shortcuts
- Make the cards editable
- Make it look nicer, probably using Tailwind UI so as to not get lost in the weeds
- Lock a card when it's being dragged or edited to prevent conflicts between clients
- Show visually that a card is being dragged by another user
- Add an "Add card" button at the bottom of each column
- Support and preserve reordering within a single column
- Wrap jKanban in a Svelte component and go to town
What about you?
Zev Averbach does data engineering at Caterpillar Inc., runs (and codes for) a transcription company and does web development on the side. Find me here (averba.ch) or on Twitter. | https://www.twilio.com/blog/real-time-kanban-board-python-javascript-twilio-sync | CC-MAIN-2021-10 | refinedweb | 3,899 | 55.74 |
On 10/08/2017 at 09:41, xxxxxxxx wrote:
I'm confused from the documentation:
How do you get/set the pixel value of a GRAYSCALE image (i.e. one created with a color mode of c4d.COLORBYTES_GRAYf).
bitmap.GetPixel(x,y) returns a tuple of R, G, B, not a float
bitmap.SetPixel(x,y,r,g,b) doesn't allow for providing a float
Are there float versions of Get/SetPixel?
On 10/08/2017 at 10:22, xxxxxxxx wrote:
A grayscale is basicly a vector with the 3 same component.
You can decompose a color where each component can be understand as a value from 0 to 255 (where 0 red is no red and 255 red is full red for exemple) so if you want to convert them in color space of c4d (c4d oftenly use vector from 0 for no color and 1 for the full color) so with those functions you can have everything you need
import c4d

def rgb_to_vector(data):
    rgbv = c4d.Vector(data[0], data[1], data[2])
    return rgbv ^ c4d.Vector(1.0 / 255.0)

def vector_to_rgb(vector):
    return vector ^ c4d.Vector(255.0)

def main():
    vector = rgb_to_vector(bmp.GetPixel(0, 0))
    print vector
    rgb = vector_to_rgb(vector)
    print rgb

if __name__ == '__main__':
    main()
Hope it makes more sense
On 10/08/2017 at 12:27, xxxxxxxx wrote:
So, I guess what you're basically saying is that r, g, b for Set and GetPixel will accept/return floats :).
Problem is: The documentation clearly states that the three color components are INT with values from 0-255 (so basically a BYTE type). Assuming all of these are the same for gray values, you would basically end up with a maximum number of 256 gray values. This is obviously NOT correct...
On 10/08/2017 at 12:48, xxxxxxxx wrote:
Originally posted by xxxxxxxx
So, I guess what you're basically saying is that r, g, b for Set and GetPixel will accept/return floats :).
I never said that, but I agree I should have been clearer.
I provided you functions to convert the vectors returned by 99% of the color values in c4d, which are normalized vectors. But GetPixel/SetPixel don't fit this rule. They return raw RGB data which are, as you said, 3 INTs with a range from 0 to 255, where those 3 INTs correspond, in that order, to the red, green and blue channels.
This method (using a 0-255 range for each channel) is often used to describe color. And yes, you only get 256 gray values, which ends up as an 8-bit picture. If you want to deal with more than 8-bit pictures, I suggest you read up on SetPixelCnt, which is the way to set pixel data at higher bit depths.
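For illustration, the buffer you hand to SetPixelCnt for a float format is just tightly packed 32-bit floats, something like this (the SetPixelCnt call shown in the comment is an assumption; check the BaseBitmap documentation for the exact signature and flags in your C4D version):

```python
import struct

def pack_gray_f(values):
    # Pack gray values (floats, typically 0.0-1.0) into the raw
    # 4-bytes-per-pixel layout a COLORMODE_GRAYf bitmap expects.
    return bytearray(struct.pack('%df' % len(values), *values))

buf = pack_gray_f([0.0, 0.5, 1.0])
print(len(buf))  # 12 bytes: 3 pixels x 4 bytes each

# Inside Cinema 4D you would then write a whole row at once, e.g.
# (assumed signature; verify against your SDK docs):
# bmp.SetPixelCnt(0, y, 3, buf, 4, c4d.COLORMODE_GRAYf, c4d.PIXELCNT_0)
```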
On 10/08/2017 at 12:58, xxxxxxxx wrote:
Ah, thx! Good pointer with the SetPixelCnt. I'll give it a shot.
On 11/08/2017 at 08:08, xxxxxxxx wrote:
Hi,
the C++ docs have a manual about BaseBitmap, maybe the code snippet on Pixel Data might help future readers of this thread. | https://plugincafe.maxon.net/topic/10239/13728_bitmapgetpixel--setpixel | CC-MAIN-2021-39 | refinedweb | 585 | 70.94 |
In-place QuickSort in Python.
With this in mind, I tried to implement the QuickSort algorithm in Python, but shortly afterwards realized that in order to write an elegant piece of code I would have to pass parts of the array to the function itself while recursing. Since Python creates new lists each time I do this, I have tried using Python3 (because it supports the nonlocal keyword). The following is my commented code.
def quicksort2(array):
    # arr is just another name for array (no copy is made),
    # so the sort happens in place on the caller's list.
    arr = array

    def sort(start, end):
        # Base case condition
        if not start < end:
            return

        # Make it known to the inner function that we will work on arr
        # from the outer definition
        nonlocal arr

        i = start + 1
        j = start + 1

        # Choosing the pivot as the first element of the working
        # part of arr
        pivot = arr[start]

        # Start partitioning
        while j <= end:
            if arr[j] < pivot:
                temp = arr[i]
                arr[i] = arr[j]
                arr[j] = temp
                i += 1
            j += 1
        temp = arr[start]
        arr[start] = arr[i - 1]
        arr[i - 1] = temp
        # End partitioning

        # Finally recurse on both partitions
        sort(start + 0, i - 2)
        sort(i, end)

    sort(0, len(array) - 1)
Now, I'm not sure whether I did the job well or am I missing something. I have written a more Pythonic version of QuickSort but that surely doesn't work in-place because it keeps returning parts of the input array and concatenates them.
My question is, is this the way of doing it in Python? I have searched both Google and SO but haven't found a true in-place implementation of QuickSort, so I thought it'd be best to ask.
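For contrast, the "more Pythonic version" mentioned above — the kind that returns and concatenates sublists instead of sorting in place — typically looks something like this (my sketch, not the asker's actual code):

```python
def quicksort_functional(xs):
    """Return a new sorted list; the input is left untouched (not in-place)."""
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort_functional(smaller) + [pivot] + quicksort_functional(larger)

data = [3, 7, 2, 8, 1]
print(quicksort_functional(data))  # [1, 2, 3, 7, 8]
print(data)                        # [3, 7, 2, 8, 1] -- the original is unchanged
```

Every slice and concatenation allocates a new list, which is exactly the behavior the in-place variants below avoid.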
Surely not the best way, and this famous algorithm has dozens of perfect implementations, but this is mine, and it's quite easy to understand:
import random  # needed for random.randint in quicksort below

def sub_partition(array, start, end, idx_pivot):
    'returns the position where the pivot winds up'
    if not (start <= idx_pivot <= end):
        raise ValueError('idx pivot must be between start and end')
    array[start], array[idx_pivot] = array[idx_pivot], array[start]
    pivot = array[start]
    i = start + 1
    j = start + 1
    while j <= end:
        if array[j] <= pivot:
            array[j], array[i] = array[i], array[j]
            i += 1
        j += 1
    array[start], array[i - 1] = array[i - 1], array[start]
    return i - 1

def quicksort(array, start=0, end=None):
    if end is None:
        end = len(array) - 1
    if end - start < 1:
        return
    idx_pivot = random.randint(start, end)
    i = sub_partition(array, start, end, idx_pivot)
    # print array, i, idx_pivot
    quicksort(array, start, i - 1)
    quicksort(array, i + 1, end)
OK, first a separate function for the partition subroutine. It takes the array, the start and end points of interest, and the index of the pivot. This function should be clear.
quicksort then calls the partition subroutine for the first time on the whole array; then it calls itself recursively to sort everything up to the pivot and everything after it.
Ask if you don't understand something.
I have started learning Python lately and the following is my attempt at implementing quicksort. Hope it is helpful. Feedback is welcome :)
#!/usr/bin/python

Array = [3, 7, 2, 8, 1, 6, 8, 9, 6, 9]

def partition(a, left, right):
    pivot = left + (right - left) / 2
    a[left], a[pivot] = a[pivot], a[left]  # swap
    pivot = left
    left += 1
    while right >= left:
        while left <= right and a[left] <= a[pivot]:
            left += 1
        while left <= right and a[right] > a[pivot]:
            right -= 1
        if left <= right:
            a[left], a[right] = a[right], a[left]  # swap
            left += 1
            right -= 1
        else:
            break
    a[pivot], a[right] = a[right], a[pivot]
    return right

def quicksort(array, left, right):
    if left >= right:
        return
    if right - left == 1:
        if array[right] < array[left]:
            array[right], array[left] = array[left], array[right]
        return
    pivot = partition(array, left, right)
    quicksort(array, left, pivot - 1)
    quicksort(array, pivot + 1, right)

def main():
    quicksort(Array, 0, len(Array) - 1)
    print Array

main()
Here is another implementation:
def quicksort(alist):
    if len(alist) <= 1:
        return alist
    return part(alist, 0, len(alist) - 1)

def part(alist, start, end):
    pivot = alist[end]
    border = start
    if start < end:
        for i in range(start, end + 1):
            if alist[i] <= pivot:
                alist[border], alist[i] = alist[i], alist[border]
                if i != end:
                    border += 1
        part(alist, start, border - 1)
        part(alist, border + 1, end)
    return alist
Here's what I came up with. The algorithm is in-place, looks nice and is recursive.
# `a` is the subarray we're working on
# `p` is the start point in the subarray we're working on
# `r` is the index of the last element of the subarray we're working on
def part(a, p, r):
    k = a[r]  # pivot
    j, q = p, p
    if p < r:  # if the length of the subarray is greater than 0
        for i in range(p, r + 1):
            if a[i] <= k:
                t = a[q]
                a[q] = a[j]
                a[j] = t
                if i != r:
                    q += 1
                j += 1
            else:
                j += 1
        part(a, p, q - 1)  # sort the subarray to the left of the pivot
        part(a, q + 1, r)  # sort the subarray to the right of the pivot
    return a

def quicksort(a):
    if len(a) > 1:
        return part(a, 0, len(a) - 1)
    else:
        return a
Explanation: the pivot is always the last element of the given range. In my approach I keep track of the 'border' between the numbers smaller and bigger than the pivot; the border is the index of the first number in the 'bigger' group. At the end of each pass we swap the number at the 'border' with the pivot.
And the code:
def qs(ar, start, end):
    # `end` is exclusive here, matching the qs(ar, 0, n) call below.
    if end - start <= 1:
        # An empty or single-element range is already sorted. (The original
        # compared ar[start] with ar[end] here, which reads past the range.)
        return
    if end - start == 2:
        if ar[start] > ar[end - 1]:
            tmp = ar[start]
            ar[start] = ar[end - 1]
            ar[end - 1] = tmp
        return
    pivot = ar[end - 1]
    border_index = start
    i = start
    while i <= end - 1:
        if ar[i] < pivot:
            if i > border_index:
                tmp = ar[i]
                ar[i] = ar[border_index]
                ar[border_index] = tmp
            border_index += 1
        i += 1
    ar[end - 1] = ar[border_index]
    ar[border_index] = pivot
    qs(ar, start, border_index)
    qs(ar, border_index + 1, end)

qs(ar, 0, n)  # example call: ar is the list to sort, n = len(ar)
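Whichever implementation you choose, it is easy to check the two properties the question asks for at once: the list ends up sorted, and the very same list object is mutated rather than copied. A small randomized harness (my own sketch; the minimal Lomuto-style quicksort here just stands in for any of the versions above):

```python
import random

def quicksort_inplace(a, lo=0, hi=None):
    """Minimal Lomuto-partition quicksort; a stand-in for the versions above."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    quicksort_inplace(a, lo, i - 1)
    quicksort_inplace(a, i + 1, hi)

def check_inplace(sort_fn, trials=100):
    """Check that sort_fn sorts correctly and mutates the same list object."""
    for _ in range(trials):
        data = [random.randint(-50, 50) for _ in range(random.randint(0, 30))]
        expected = sorted(data)
        alias = data            # alias, to prove no copy is returned
        sort_fn(data)
        assert data == expected
        assert alias is data    # still the very same list object
    return True

print(check_inplace(quicksort_inplace))  # True
```

Comparing against the built-in sorted() on random inputs catches off-by-one partition bugs far faster than eyeballing traces.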
Editorial exits on editor.set_selection() with arguments out of range
This may well be a known or fixed issue – Editorial crashes/exits on out-of-range set_selection()
To reproduce:
create an empty file
run a python script workflow action like:
#coding: utf-8
import editor
editor.set_selection(4,4) # out of range - buffer empty
Thanks, I'll look into this.
- MartinPacker
Funnily enough I use editor.set_selection(1,9999999) to select the whole document - and then do a replace. This is in one of my scripts. You'd think THAT would get an "out of range error" but it doesn't. I WOULD like a "select all" function, of course.
The bug only occurs if the start of the range is out of bounds, the length is capped automatically.
- MartinPacker
@omz Thanks. I still feel like I'm exploiting an undocumented loophole. Please consider documenting how to select the whole document. | https://forum.omz-software.com/topic/635/editorial-exits-on-editor-set_selection-with-arguments-out-of-range | CC-MAIN-2017-47 | refinedweb | 151 | 60.01 |
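Until a fix ships, a defensive wrapper that clamps the requested range against the current buffer length avoids the crash. The clamping is a pure function; editor.get_text() and editor.set_selection() are Editorial's documented API, but treat the wrapper itself as an untested sketch:

```python
def clamp_selection(start, end, text_length):
    """Clamp a (start, end) selection into the valid range [0, text_length]."""
    start = max(0, min(start, text_length))
    end = max(start, min(end, text_length))
    return start, end

def safe_set_selection(start, end):
    """Set the selection in Editorial, clamping out-of-range values first."""
    try:
        import editor  # only available inside the Editorial app
    except ImportError:
        return None
    start, end = clamp_selection(start, end, len(editor.get_text()))
    editor.set_selection(start, end)
    return start, end

# On an empty buffer the crashing call set_selection(4, 4) becomes (0, 0),
# and the "select all" idiom set_selection(1, 9999999) is capped to the length.
print(clamp_selection(4, 4, 0))          # (0, 0)
print(clamp_selection(1, 9999999, 120))  # (1, 120)
```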
ons object is undefined using React binding
Hey all. I’m trying to use the ons.platform utility object in my OnsenUI-React app, but it is undefined.
Is there something I need to do to get the utility object to be defined?
Thanks,
Scott
Hi @sherscher - since you’re using react I would suspect you are using some module loading system.
If you are you should be able to do something similar to either of
import {platform} from 'onsenui';

var ons = require('onsenui');
If you’re not using such a system then you can just include it the old fashion way:
<script src="path/to/onsenui.js"></script>
And you will have the ons variable.
Re: volatile Info
- From: crisgoogle <crisgoogle@xxxxxxxxx>
- Date: Fri, 6 Aug 2010 14:24:27 -0700 (PDT)
On Aug 6, 12:22 pm, Keith Thompson <ks...@xxxxxxx> wrote:.
First off, I totally agree that this is all rather academic unless
you actually own a DS9K.
And I can't quite put my finger on what I'm trying to get at here,
so bear with me =)
Let's say that there _is_ some mechanism by which an object's value
may change, independently of what the abstract machine would otherwise
dictate. Modifying your example above:
<code>
/* assume that this is an implementation-defined memory-mapped
register that _ought_ to be treated as volatile. */
int *reg = (int *)0x4000;
#include <stdlib.h>
/* The parameter here _ought_ to be declared as volatile. */
int square(int *x) {
return *x * *x;
}
int main(void) {
*reg = 10;
return square(reg) == 100 ? EXIT_SUCCESS : EXIT_FAILURE;
}
</code>
(I think I got that right).
In parallel to your explanation with your original example, I think
you're suggesting that the implementation is non-conforming if
this program returns EXIT_FAILURE.
Does this code exhibit undefined behaviour? Unless I made a mistake
or missed something, I don't think so (ignoring overflow,
as usual).
So it seems peculiar to me that the programmer can render the
implementation non-conforming by omitting the volatile in the
declaration of square().
Hi guys! I'm trying to fade out a lightbox when clicking a button. I wrote the following code but it just closes the lightbox, without the fade out animation:
import wixWindow from 'wix-window';

export function button1_click(event) {
    wixWindow.lightbox.close("fade");
}
What am I doing wrong? Thanks in advance.
You can't set the closing animation that way, it must be done with the GUI.
Hi! First of, thank you for your reply. I've been trying to do it with the GUI, but I can't seem to get it working. Could you please provide me the code to do it? Thanks.
@b4443569 A lightbox consists of 2 elements:
1. lightbox
2. overlay
You can animate the closing of the lightbox element. But there's currently no way to animate the closing of the overlay.
@J. D. Hi J.D! Thank you for your answer, it works! Now, because it's impossible to animate the overlay, I've put the content inside the lightbox, but the problem is that it isn't as responsive as the overlay. Could you help me out with this? | https://www.wix.com/corvid/forum/community-discussion/problem-with-code-1 | CC-MAIN-2020-05 | refinedweb | 186 | 75.91 |
Python Programming, news on the Voidspace Python Projects and all things techie.
Python and Threading
In my Python programming so far I've managed to avoid threads altogether.
I learned a lot whilst working on ConfigObj with Nicola Larosa. He has a pathological hatred of threads, which I inherited by proxy.
In order to justify a prejudice like this you need to understand the issues. In his case the loathing almost certainly came from great pain in debugging thread related problems. I understand what problems threading could cause, but had no direct experience.
Working with IronPython and Windows Forms I've had to wrestle with threads a bit. In our production code Timers and Network Clients involve asynchronous callbacks which use threads. More to the point; in order to interact with our GUI a lot of the tests need to run on another thread.
We've had much fun working out timing and blocking issues. We're currently going through our test codebase and trying to remove as many 'voodoo sleeps' [1] as possible and actually resolve the issues they are attempting to work round.
Programming in IronPython and Windows Forms we're using a native GUI framework. My guess is that other Python GUI toolkits also use threads 'under the hood' for timer classes and the like. Because these frameworks are non-native, the threading doesn't normally interact with your Python code. Perhaps this is a good thing.
In the last couple of days, working with the boss on a CPython script, I've used the Python threading API for the first time.
The basics are very easy, but there are a couple of noticeable oddnesses.
import time
from threading import Thread
TIMEOUT = 110
def LongRunning():
time.sleep(100)
print 'Finished'
thread = Thread(target=LongRunning)
thread.start()
#
# Thread is now running
thread.join(TIMEOUT)
if thread.isAlive():
print "Oh dear - the thread didn't terminate normally."
print 'We timed out instead.'
else:
print 'Terminated normally.'
One slightly odd thing is that the first argument to the Thread constructor is reserved, so you can't use it. Huh ?
Secondly, there is no way to terminate a thread. My boss was most perplexed by this (he is used to threading APIs from other languages which do allow this). If I understand correctly (which is possible - but perhaps not likely), the reason for this is "it could leave your objects in an inconsistent state to terminate at a random point, so you shouldn't do it".
This does seem at odds with the normal Python philosophy of not telling the developer how he ought to do things.
In our case we were trying to test a long running loop by spinning it off on another thread. We got round it by monkey patching one of the functions the loop used to raise an exception. We redirect standard error around the exception to reduce the noise. Not ideal by any means...
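That trick can be sketched roughly like this (a reconstruction of the general technique with invented names, not our actual test code): the loop calls a helper each time round, the test replaces the helper with one that raises, and the thread's target catches that specific exception so the thread ends cleanly:

```python
import threading
import time

class StopLoop(Exception):
    """Raised via the monkey patch to break out of the while-True loop."""

class Helpers:
    @staticmethod
    def poll():
        time.sleep(0.01)

def long_running_loop():
    try:
        while True:
            Helpers.poll()   # looked up each iteration, so a patch takes effect
    except StopLoop:
        pass                 # terminated deliberately by the test

thread = threading.Thread(target=long_running_loop)
thread.start()
time.sleep(0.05)             # let the loop spin for a bit

def raising_poll():
    raise StopLoop()

Helpers.poll = raising_poll  # the monkey patch
thread.join(timeout=5)
print(thread.is_alive())     # False: the loop has ended
```

Raising a dedicated exception type (rather than something generic) means the except clause can't accidentally swallow a real error from the loop body.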
Like this post? Digg it or Del.icio.us it.
Posted by Fuzzyman on 2006-11-08 23:43:33 | |
Categories: Python, Hacking
Python Jobs on the HiddenNetwork
Big news: there are now Python jobs on the Hidden Network Jobs Board.
If you're searching for a Python job, or are looking to hire, this is the place.
This is great news. It means that Python jobs will be shown all across the Hidden Network. It's only just happened, so there are only a couple so far, but expect this to grow.
The HiddenNetwork is the job board started by the team responsible for The Daily WTF. Adverts are shown on a network of top programming blogs. That means that the posts get shown many thousands of times a day, and are read by top programmers from all around the world.
So is advertising on blogs an effective way of hiring ?
Well... a few months back I mentioned on this blog that we were hiring at Resolver Systems. An excellent developer, Christian Muirhead [1], applied for the job.
We've got an interesting mix of skills now, and have been very lucky with hires. With a small team it's very important that everyone fits in. I feel very lucky to work at Resolver, as they're all very good blokes. The team includes :
- Andrzej with web development experience in Java and Ruby
- Giles (the boss) who has done a lot of Java development for a large investment bank
- William, who has done C++ (and Objective C etc) programming (including working on a win32 compatibility layer for porting games to the Mac)
- Jonathan with a lot of GIS experience in C, C++, C#
- Christian a web developer with ASP.NET and Python
- Me
(My apologies to my esteemed colleagues if I have misrepresented them.)
As you can see, we're a diverse bunch. This makes for great pair programming as we've all learned different patterns and idioms and all have different experiences to draw from.
Anyway, I've wandered further away from the point than usual. The theory is that techies who read blogs are more likely to have a passion for programming. It's not just Python jobs, they also have Java, Ruby, .NET, PHP jobs and all the usual suspects. So if you're looking to hire someone who's a cut above the average then the Hidden Network could be what you're looking for. Watch out though, you might get someone like you, or even worse; someone like me.
Oh, by the way: I'm off to Italy for a few days. You'll have to cope without me until next week.
Posted by Fuzzyman on 2006-11-08 23:07:44 | |
Categories: Python, Website, General Programming, Work
Movable Python 2.0.0 Beta 2
I'm pleased to announce the release of Movable Python 2.0.0 Beta 2.
There are now updated versions for Python 2.2 - 2.5, the Mega-Pack [1], and the free trial version.
The most important changes in this release are :
- Fixes for Pythonwin. The 'run' button and 'grep' dialog now work.
- You can now specify a directory for config files.
- The environment that programs run in is no longer pre-populated with variables. The Movable Python variables are now in a module called movpy.
You can see the full changelog below.
You can download the updated version from :
Movable Python Download Pages
You can get the free trial version, from
Movable Python Demo & Other Files.
Plus many other features and bundled libraries.
What's New ?
The changes in version 2.0.0 Beta 2 include :
(Many thanks to Schipo and Patrick Vrijlandt for bug reports, fixes and suggestions.)
Updated to Python 2.4.4 and wxPython 2.7.1
Fixed the bug with pylab support.
Fixed problem with global name scope in the interactive interpreter.
Everything moved out of the default namespace and moved into a module called 'movpy'.
commandline != '' if '-c' was used
go_interactive = True if '-i' was set.
interactive = True if we are in an interactive session
interactive_mode is a function to enter interactive mode
interactive_mode(localvars=None, globalvars=None, IPOFF=False, argv=None)
movpyw = True if we are running under movpyw rather than movpy
The docs menu option will now launch.
Hopefully the next release will be 2.0.0 final, with a few minor changes and completed documentation
Posted by Fuzzyman on 2006-11-07 23:45:57 | |
Categories: Python, Projects
BlogAds Gadget Network
Voidspace is now part of the BlogAds Gadget Network, which is very cool.
BlogAds is an advertising network of specialist blogs. The Gadget Network features those with a focus on gadgets and techie subjects.
The adverts are pretty good, you can see them on the left sidebar. They can include an image and a good amount of texts.
There are certainly some higher traffic (and more expensive) blogs than mine, but according to the metrics on the network; an advert on Voidspace will get around 22 000 impressions in a week and cost you $10.
The adverts are site-wide, but they will mainly be seen by alpha geeks like you.
If you have any technical products to advertise, then the Gadget Network is a great place to do it, and especially Voidspace.
Posted by Fuzzyman on 2006-11-07 23:27:47 | |
Categories: Website, Computers
Movable Python Extras
There are now some Movable Python extras available for download.
These can be downloaded by anyone, but are especially useful for use with Movable Python. You can find them at :
Python files at Tradebit
The files available are :
- Python manuals (CHM files) for Python 2.3, 2.4 & 2.5
- PyWin32 Manual (CHM)
- Matplotlib (pylab) and Numarray for Python 2.4
- SPE 0.8.3c : the Python IDE for Python 2.3-2.5
Instructions on how to use SPE and matplotlib with Movable Python are included on the details pages for the files.
Posted by Fuzzyman on 2006-11-07 18:24:39 | |
Categories: Python, Projects
New Hard Drive
I've just bought (or at least paid for) a new hard drive.
The most cost effective size seemed to be 320 gigabytes. The two cheapest places I could find (for branded drives) were Dabs and Ebuyer. They were almost identically priced, but the service from Dabs has been better in the past.
The UK cost was the equivalent of about 40c per gigabyte.
Posted by Fuzzyman on 2006-11-04 14:59:45 | |
Python import and __main__ Oddness
Try creating a small Python script with the following code, and running it :
import imp
import sys
x = imp.new_module('__main__')
sys.modules['__main__'] = x
print x
What would you expect to see output ? In theory I suppose it ought to be a module object.
What happens (with Python 2.4.4) is that it prints None.
The reason for this is that when you replace sys.modules['__main__'] with another one, the reference count of the original __main__ module drops to zero. So garbage collection kicks in. The original of course is the script currently running.
Except shouldn't that result in a NameError? I guess the module is only partially garbage collected, hence the name isn't lost but its value is reset to None.
Hmmm... As the module is being executed it shouldn't be garbage collected at all, the interpreter should keep a reference to it [1]. And the moral of this story, don't do this at home.
Note
Python behaves differently if you try this in an interactive session. To see the same results as above you have to run this as a script.
Chris Siebenmann has written up an explanation.
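One way to see that premature garbage collection really is the culprit, and to avoid it, is to hold a reference to the original module before swapping it out. A sketch using types.ModuleType (the modern spelling of imp.new_module):

```python
import sys
import types

# Hold a reference so the real __main__ module cannot be garbage-collected
# when we swap a replacement into sys.modules.
original_main = sys.modules['__main__']

replacement = types.ModuleType('__main__')
sys.modules['__main__'] = replacement

# Because original_main keeps the old module alive, its globals survive,
# and this prints a module object rather than None.
print(replacement)

sys.modules['__main__'] = original_main  # put things back
```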
Posted by Fuzzyman on 2006-11-04 10:55:45 | |
Categories: Hacking, Python
This work is licensed under a Creative Commons Attribution-Share Alike 2.0 License.
23 July 2010 12:16 [Source: ICIS news]
LONDON (ICIS news)--Spolchemie plans to split its core businesses into several independent operating units, with the aim of attracting financial partners, divesting or even closing down unprofitable divisions, the troubled Czech Republic-based company said on Friday.
Spolchemie expects its lenders to approve a restructuring plan by the end of July, which involves extending standstill agreements on loans to the end of January 2011 in return for meeting the plan’s operational targets, CEO Francois Vleugels said.
If the restructuring plan is approved, the core epoxy resins, epichlorohydrin (ECH) and bisphenol A (BPA) business will form the main operating unit. Inorganics, focused on chlorine production, would be another.
Potassium permanganate production could form another group. However, Vleugels said: "This may be closed. It is the only producer in [...]"
“We have got six months to create a permanent, positive solution for Spolchemie, with a focus on technologies which act independently.”
He said cooperation with external entities could be in the form of marketing agreements or financial partnerships. “We will consider closing some units and divesting some, but not the core epichlorohydrin, epoxy resin and polyester divisions.”
Spolchemie suffered badly during the economic downturn and was forced to turn all its assets over to lenders in 2009, lay off one-fifth of its 900 staff and implement a rescue plan under new leadership.
Vleugels said Spolchemie experienced price hikes of 70% for BPA between June and the end of April, although these have now stabilised somewhat thanks to some [...]
“We could have sold 15% more if we’d had enough BPA,” he said. “This sent our working capital requirements through the roof. We’re still very tight on cash and have agreed a very aggressive repayment schedule with suppliers.”
Vleugels said he expects the company's second-quarter earnings before interest, tax, depreciation and amortisation (EBITDA) to be 50-60% higher than those in the first quarter.
Sales, which fell from €200m ($256.4m) in 2008 to €100m in 2009, are targeted to reach €160m-170m for 2010, he added.
Czech private equity group Viachem owns 63% of Spolchemie.
($1 = €0.78)
NPS:SPED:aldermen
From OLPC
Calendar 2012 Fiscal 2013
Review of City Solicitors budget
The city attorney expects more cases against families because of less state funding of social services.
See: Transcript of May 2012 Program and Services Meeting
"Unfortunately we will see more and more and more special education cases coming forward because of the breakdown in the state in funding social services programs for families. We’re responsible in large part for going in and even brushing a kid’s teeth in the morning so that they won’t have bad breath and won’t be made fun of and can have access to their programming when they come into school."
Goals in budget. Page 2 here
Outcome #3: Reduced education‐related litigation Target
- Strategy #1. Work with administrators and special educators
- Regular training Quarterly
- Assign staff members to answer daily education questions June 2012
- Strategy #2. Understand recent changes in education law
- Attend pertinent seminars Quarterly
- Communicate changes in law to clients Ongoing
- Network with other education law attorneys Monthly
Calendar 2011 Fiscal 2012
2011 Review of City Solicitor budget
No discussion of Special Education.
No mention of special education in the goals of the department or anywhere else.
Calendar 2010 Fiscal 2011
2010 Review of City Solicitors budget
No discussion of SPED.
Page 3 here:
Under accomplishments (page 3 or page 61 of document) it lists these two:
- Won appeal of contentious special education matter in federal court.
- Prevailed in case before the Bureau of Special Education Appeals concerning private placement.
I think this is the case they claim to have "prevailed in":
The school department education money was used to fight this case:
The child was in Learning Prep which is less expensive than the program Newton wanted, Reach. Newton lost but didn't pay for past years because the parents had not proved in advance that NPS was inappropriate. Newton pays part of the parent's legal fees but still extracts enough money to hire more city attorneys.
Calendar 2009 Fiscal 2010
2009 Review of City Solicitors budget
Describes a great number of cases and gives an example of using education funds for lawyers.
Page 8 "There are a greater number of special education cases that the School Department needs assistance in resolving or trying. They have a good working relationship with them and the idea is to do preventive law whenever possible. The numbers looked skewed in the salary account because the School Department is paying for 2 days a week for one of the attorneys."
Calendar 2008 Fiscal 2009
2008 Review of City Solicitors budget
"Mr. Funk persuaded the School Department to pay the Law Department an additional $15,000 per year for work they’ve been doing for Special Education cases. This would fund one attorney for one day a week. These additional funds allowed fewer cuts in the department. There has been a big influx of special needs students into the school system and cases brought against the School Department when parents were not happy with the services their children were receiving. Quite a bit of money was at stake in these cases especially if a private placement was made which would stay in effect for many years to come."
For the case he is talking about see:
In this case the family went pro se to ask for help when their child suffered server anxiety due to bullying. The private school they sent their child to is less expensive than public school with associated services. The parents lost and Newton used the money taken from the parents to hire more attorneys.
Calendar 2007 Fiscal 2008
Funding for Interface System software to track children's mental health
Ald. Weisbuch said this is a program focused on mental health services for school aged children. There are only 19 funded similar models through the federal government across the country, and Newton has one. The grant for this project was supposed to run out in February, but Margaret Hannah, the director, has been able to extend it to June. The School Dept. used to have control of this project and they agreed to fund some part of its growth period. Through further discussions, Ald. Weisbuch said that although the concept is fantastic, it might not necessarily be the model the City wants to adopt immediately. He asked that the discussion of mental health services for children be kept active and the Committee decide how best to deal with this item for now. Commissioner Naparstek thought this needed further investigation and study. The Committee moved to hold this item.
Comment on Riverside and youth services funding
They are told that it’s always medical and they have to go to Riverside. Under clinical services on the org chart, one has to work down 3 levels to get to the youth outreach function. Restoring some social workers to work with the kids in a non-medical way seems important.
Funding was restored.
Page 11 Impact of Special Education Cases
The substantive change in the Law Department is the tremendous increase in the number of Special Education cases. For many years, Ouida Young did all the Special Ed matters in the office as there were a relatively small number of cases, periodic advice, and some settlement agreements. In the last few years, Donnalyn Kahn has been brought in to take on quite a bit of the cases they're facing, and now Angela is added to the mix bringing the office to 3 attorneys working on Special Ed cases. ... He said that a philosophical change took place over the last few years whereby rather than settle with those families who were not satisfied with the plan that was proposed to them, the City is now saying they will do everything they can to do the right thing by these families, give them what they're legally entitled to, but no more. They can't afford it. As a result of that philosophy and a few other reasons, there are many, many more of these cases.
Pages collected under SPED namespace
BSEA:Decisions
Newton Public Schools
Barnstable Public Schools
Boston Public Schools
Northampton Public Schools | http://wiki.laptop.org/index.php?title=NPS:SPED:aldermen&redirect=no | CC-MAIN-2014-41 | refinedweb | 1,027 | 59.03 |
Code Monkey Projectiles
I’m a big fan of the Evil Overload List. It is a list of handy suggestions of things
not to do for anyone that wants to take over the world, such as “don’t
monologue”. Many of tips aren’t just for
evil overlords, but also are very applicable to software development. I believe the most important lesson from this
list for anyone in software development is: If my advisors ask "Why are you risking
everything on such a mad scheme?”, I will not proceed until I have a response
that satisfies them.
I consider having a good user story to be the most
important part of the development process. When requirements are handed to developers
they must understand the value it will give the end users. They need to
understand not only the solution that they are being asked to implement, but
the problem end users are having and how that problem impacts their
business. Digging into the reason behind each software change allows
developers to better understand the job the end users are trying to accomplish,
increases the developer’s domain knowledge, and allows them to become active
participants in recommending alternate solutions.
I simply will not put up with bad user stories. I have
to understand value behind any feature I’m asked to implement before I will
attempt to code it. Some developers don’t do this, and frankly that
scares me. Without a good user story there is a high potential for
introducing inconsistencies into the system and bloat the code to the point
that nobody understands how or why it works the way it does. This must be
avoided for the health of the product.
If any developer feels the need to improve the user
story, solution, or requirement they are presented with that developer needs to
be empowered, nay encouraged, to ask whatever questions are necessary to
improve that user story.
If there is a bad user story I will investigate further
until I am satisfied that I know why we are doing what we are doing. Some
people might get annoyed (or even offended) by me investigating
(second-guessing) their solutions/requirements. They may think that I am
just wasting time. They may try to squash the conversation, bypass the
explanation phase, and try to get me to jump to the coding.
There are a few ways they try to get me coding. The most common is to ask if “can” implement
their solution. They present it as a challenge
and are looking for a simple yes or no answer.
It’s a loaded question because they know their solution is possible to
implement. My response is, “I can make a
computer do anything I want it to. The
real question is, should I?” This “can
you” question sometimes comes in the form of asking the developer for an
estimate of a predetermined solution instead of asking them if they think it is
a good idea.
Another common way people try to bypass the user story is to
quote the “customer (or boss) is always right” rule. Having a stance that the customer/boss is
enlightened and it is somehow acceptable to keep developers in the dark is the
very definition of mushroom
management. Asking questions doesn’t
imply that their requested solution is wrong; I’m just asking them to share
their enlightenment.
I do understand the appeal of jumping to the solution. We praise the problem solver. They are treated as more virtuous that those
that simply gripe about problems and offer no constructive solutions for them. Although being a problem solver will likely
result in individual praise, there is no I in team. To build a good team and to be a good teacher
you need to ask good questions. You need
to let the team members at least have a chance to answer those questions before
doing all the thinking for them.
OK, enough rant. So
how can do you actual write a good user story?
I cover that in my next post, Reason Based User Stories.
Print | posted on Saturday, December 28, 2013 12:16 AM
Design by Bartosz Brzezinski
Design by Phil Haack Based On A Design By Bartosz Brzezinski | http://geekswithblogs.net/TimothyK/archive/2013/12/28/userstoryisthething.aspx | CC-MAIN-2020-10 | refinedweb | 708 | 61.87 |
How can I draw a raised border around an AWT component?
Created May 7, 2012
These insets define the area a container is reserving for its own use (such as drawing a decorative border). Layout managers must respect this area when positioning and sizing the contained components.
To create a raised, 3D border around whatever is contained within a Panel, define a subclass of panel and override its getInsets() and paint() methods. For example, you could define this border as being 5 pixels away from each edge of the container border, and reserving some extra room between it and the laid out components.
The class will look something like this:
public class BorderPanel extends Panel {
private static final Insets insets =
new Insets(10,10,10,10);
public Insets getInsets() {return insets;}
public void paint(Graphics g) {
Dimension size = getSize();
g.setColor(getBackground());
g.draw3DRect(
5,5,size.width-11, size.height-11, true);
}
}
To create the panel, you define a static Insets object that represents the space to reserve. Because that space won't change, you used a single static final instance of it. You'll return this instance anytime a layout manager (or anyone else) calls getInsets().
You then define a paint() method that gets the size of the
container into which it is painting, then draws a raised border
within that space. You can use the above class as follows:
Frame f = new Frame("Test");
f.setLayout(new GridLayout(1,0));
f.setBackground(Color.lightGray);
BorderPanel p = new BorderPanel();
p.setLayout(new GridLayout(1,0));
p.add(new Button("Hello"));
f.add(p);
f.setVisible(true);
f.pack(); | http://www.jguru.com/faq/view.jsp?EID=568847 | CC-MAIN-2020-29 | refinedweb | 270 | 55.64 |
Section [23.1.2], Table 69, of the C++ standard lists this function for all of the associative containers (map, set, etc):
a.insert(p,t);
where 'p' is an iterator into the container 'a', and 't' is the
item to insert. The standard says that “
t is
inserted as close as possible to the position just prior to
p.” (Library DR #233 addresses this topic,
referring to N1780.
Since version 4.2 GCC implements the resolution to DR 233, so
that insertions happen as close as possible to the hint. For
earlier releases the hint was only used as described below.
Here we'll describe how the hinting works in the libstdc++ implementation, and what you need to do in order to take advantage of it. (Insertions can change from logarithmic complexity to amortized constant time, if the hint is properly used.) Also, since the current implementation is based on the SGI STL one, these points may hold true for other library implementations also, since the HP/SGI code is used in a lot of places.
In the following text, the phrases greater than and less than refer to the results of the strict weak ordering imposed on the container by its comparison object, which defaults to (basically) “<”. Using those phrases is semantically sloppy, but I didn't want to get bogged down in syntax. I assume that if you are intelligent enough to use your own comparison objects, you are also intelligent enough to assign “greater” and “lesser” their new meanings in the next paragraph. *grin*
If the
hint parameter ('p' above) is equivalent to:
begin(), then the item being inserted should
have a key less than all the other keys in the container.
The item will be inserted at the beginning of the container,
becoming the new entry at
begin().
end(), then the item being inserted should have
a key greater than all the other keys in the container. The
item will be inserted at the end of the container, becoming
the new entry before
end().
neither
begin() nor
end(), then:
Let
h be the entry in the container pointed to
by
hint, that is,
h = *hint. Then
the item being inserted should have a key less than that of
h, and greater than that of the item preceding
h. The new item will be inserted between
h and
h's predecessor.
For
multimap and
multiset, the
restrictions are slightly looser: “greater than”
should be replaced by “not less than”and “less
than” should be replaced by “not greater
than.” (Why not replace greater with
greater-than-or-equal-to? You probably could in your head, but
the mathematicians will tell you that it isn't the same thing.)
If the conditions are not met, then the hint is not used, and the
insertion proceeds as if you had called
a.insert(t)
instead. (Note that GCC releases
prior to 3.0.2 had a bug in the case with
hint ==
begin() for the
map and
set
classes. You should not use a hint argument in those releases.)
This behavior goes well with other containers'
insert() functions which take an iterator: if used,
the new item will be inserted before the iterator passed as an
argument, same as the other containers.
Note also that the hint in this implementation is a one-shot. The older insertion-with-hint routines check the immediately surrounding entries to ensure that the new item would in fact belong there. If the hint does not point to the correct place, then no further local searching is done; the search begins from scratch in logarithmic time.
No, you cannot write code of the form
#include <bitset> void foo (size_t n) { std::bitset<n> bits; .... }
because
n must be known at compile time. Your
compiler is correct; it is not a bug. That's the way templates
work. (Yes, it is a feature.)
There are a couple of ways to handle this kind of thing. Please consider all of them before passing judgement. They include, in no chaptericular order:
A very large N in
bitset<N>.
A container<bool>.
Extremely weird solutions.
A very large N in
bitset<N>. It has been
pointed out a few times in newsgroups that N bits only takes up
(N/8) bytes on most systems, and division by a factor of eight is
pretty impressive when speaking of memory. Half a megabyte given
over to a bitset (recall that there is zero space overhead for
housekeeping info; it is known at compile time exactly how large
the set is) will hold over four million bits. If you're using
those bits as status flags (e.g.,
“changed”/“unchanged” flags), that's a
lot of state.
You can then keep track of the “maximum bit used” during some testing runs on representative data, make note of how many of those bits really need to be there, and then reduce N to a smaller number. Leave some extra space, of course. (If you plan to write code like the incorrect example above, where the bitset is a local variable, then you may have to talk your compiler into allowing that much stack space; there may be zero space overhead, but it's all allocated inside the object.)
A container<bool>. The
Committee made provision for the space savings possible with that
(N/8) usage previously mentioned, so that you don't have to do
wasteful things like
Container<char> or
Container<short int>. Specifically,
vector<bool> is required to be specialized for
that space savings.
The problem is that
vector<bool> doesn't
behave like a normal vector anymore. There have been
journal articles which discuss the problems (the ones by Herb
Sutter in the May and July/August 1999 issues of C++ Report cover
it well). Future revisions of the ISO C++ Standard will change
the requirement for
vector<bool>
specialization. In the meantime,
deque<bool>
is recommended (although its behavior is sane, you probably will
not get the space savings, but the allocation scheme is different
than that of vector).
Extremely weird solutions. If
you have access to the compiler and linker at runtime, you can do
something insane, like figuring out just how many bits you need,
then writing a temporary source code file. That file contains an
instantiation of
bitset for the required number of
bits, inside some wrapper functions with unchanging signatures.
Have your program then call the compiler on that file using
Position Independent Code, then open the newly-created object
file and load those wrapper functions. You'll have an
instantiation of
bitset<N> for the exact
N that you need at the time. Don't forget to delete
the temporary files. (Yes, this can be, and
has been, done.)
This would be the approach of either a visionary genius or a raving lunatic, depending on your programming and management style. Probably the latter.
Which of the above techniques you use, if any, are up to you and your intended application. Some time/space profiling is indicated if it really matters (don't just guess). And, if you manage to do anything along the lines of the third category, the author would love to hear from you...
Also note that the implementation of bitset used in libstdc++ has some extensions.
Bitmasks do not take char* nor const char* arguments in their constructors. This is something of an accident, but you can read about the problem: follow the library's “Links” from the homepage, and from the C++ information “defect reflector” link, select the library issues list. Issue number 116 describes the problem.
For now you can simply make a temporary string object using the constructor expression:
std::bitset<5> b ( std::string(“10110”) );
instead of
std::bitset<5> b ( “10110” ); // invalid | http://gcc.gnu.org/onlinedocs/gcc-4.8.2/libstdc++/manual/manual/associative.html | CC-MAIN-2016-44 | refinedweb | 1,298 | 62.17 |
I have tried to look at other error checks on 7/11, but none of them have worked for me. My code is this:
class Car(object):
condition = "new"
def init(self, model, color, mpg):
self = self
self.model = model
self.color = color
self.mpg = mpg
def display_car(self, model, color, mpg):
self = self
self.model = model
self.color = color
self.mpg = mpg
return "This is a %s %s with %s MPG." % (self.color, self.model, str(self.mpg))
my_car = Car("DeLorean", "silver", 88)
print my_car.condition
print my_car.model
print my_car.color
print my_car.mpg
print my_car.display_car(self, model, color, mpg)
I continue to get the error message "Oops, try again. Make sure you pass the self keyword to the display_car() method." Does ANYONE have a solution for my problem? I'm desperate! | https://discuss.codecademy.com/t/7-11-passing-error/24188 | CC-MAIN-2018-34 | refinedweb | 135 | 64.27 |
#include <stdbool.h>
#include <stdint.h>
#include <rte_atomic.h>
#include <rte_branch_prediction.h>
#include <rte_compat.h>
Go to the source code of this file.
RTE Seqcount
The sequence counter synchronizes a single writer with multiple, parallel readers. It is used as the basis for the RTE sequence lock.
Definition in file rte_seqcount.h.
A static seqcount initializer.
Definition at line 40 of file rte_seqcount.h.
Initialize the sequence counter.
Definition at line 53 of file rte_seqcount.h.
Begin a read-side critical section.
A call to this function marks the beginning of a read-side critical section, for
seqcount.
rte_seqcount_read_begin() returns a sequence number, which is later used in rte_seqcount_read_retry() to check if the protected data underwent any modifications during the read transaction.
After (in program order) rte_seqcount_read_begin() has been called, the calling thread reads the protected data, for later use. The protected data read must be copied (either in pristine form, or in the form of some derivative), since the caller may only read the data from within the read-side critical section (i.e., after rte_seqcount_read_begin() and before rte_seqcount_read_retry()), but must not act upon the retrieved data while in the critical section, since it does not yet know if it is consistent.
The protected data may be read using atomic and/or non-atomic operations.
After (in program order) all required data loads have been performed, rte_seqcount_read_retry() should be called, marking the end of the read-side critical section.
If rte_seqcount_read_retry() returns true, the just-read data is inconsistent and should be discarded. The caller has the option to either restart the whole procedure right away (i.e., calling rte_seqcount_read_begin() again), or do the same at some later time.
If rte_seqcount_read_retry() returns false, the data was read atomically and the copied data is consistent.
Definition at line 106 of file rte_seqcount.h.
End a read-side critical section.
A call to this function marks the end of a read-side critical section, for
seqcount. The application must supply the sequence number produced by the corresponding rte_seqcount_read_begin() call.
After this function has been called, the caller should not access the protected data.
In case rte_seqcount_read_retry() returns true, the just-read data was modified as it was being read and may be inconsistent, and thus should be discarded.
In case this function returns false, the data is consistent and the set of atomic and non-atomic load operations performed between rte_seqcount_read_begin() and rte_seqcount_read_retry() were atomic, as a whole.
Definition at line 151 of file rte_seqcount.h.
Begin a write-side critical section.
A call to this function marks the beginning of a write-side critical section, after which the caller may go on to modify (both read and write) the protected data, in an atomic or non-atomic manner.
After the necessary updates have been performed, the application calls rte_seqcount_write_end().
Multiple, parallel writers must use some external serialization.
This function is not preemption-safe in the sense that preemption of the calling thread may block reader progress until the writer thread is rescheduled.
Definition at line 201 of file rte_seqcount.h.
End a write-side critical section.
A call to this function marks the end of the write-side critical section, for
seqcount. After this call has been made, the protected data may no longer be modified.
Definition at line 232 of file rte_seqcount.h. | https://doc.dpdk.org/api-22.07/rte__seqcount_8h.html | CC-MAIN-2022-40 | refinedweb | 554 | 50.33 |
ava-fixtureava-fixture
This library helps you to write fixture tests: test-per-folder or test-per-file.
UsageUsage
For example, you are testing code that process files (e.g. a compiler, config-reader, etc).
You put each test case inside its own folder:
+ fixtures + cases + empty - somefiles + basic-case - someOtherFiles + single-line - ... + ...
You can run each test case like this:
import ava from 'ava'; import fixture from 'ava-fixture'; // Point to the base folder which contain the fixtures. // Use relative path starts from project root or absolute path const ftest = fixture(ava, 'fixtures/cases', 'fixtures/baselines', 'fixtures/results'); ftest.each((t, d) => { // d.caseName: 'empty', 'basic-case', 'single-line', etc // d.casePath: absolute path points to each test case folder // d.resultPath: absolute path points to each test result folder // Your test target reads from `d.casePath` and writes to `d.resultPath` target.process(d.casePath, d.resultPath) // d.match() will compare the result folder against the baseline folder return d.match() })
You can also use this library to run tests that only read files:
import ava from 'ava'; import fixture from 'ava-fixture'; // Point to the base folder which contain the fixtures. // Relative path starts from project root. const ftest = fixture(ava, 'fixture/cases'); ftest('test title', 'case-1', (t, d) => { // t is ava test assertion. t.is(d.casePath, 'absolut path the the case folder') const result = target.read(d.casePath) t.deepEqual(result, 'expected result') }); // test title can be omitted ftest('case-1', (t, d) => { // ... }) // go through each test ftest.each((t, d) => { // ... }) // or run certain test based on filter ftest.each(/some filter/, (t, d) => { // ... })
Other APIOther API
import test from 'ava'; import fixture from 'ava-fixture'; const ftest = fixture(test, 'fixture/cases'); ftest.only(...) ftest.skip(...) ftest.failing(...) ftest.only.each.failing(...)
For
before(),
beforeEach(),
after(),
afterEach(),
todo(), use
ava directly.
ContributeContribute
# right after clone npm install # begin making changes git checkout -b <branch> npm run watch # edit `webpack.config.es5.js` and `rollup.config.es2015.js` to exclude dependencies for the bundle if needed # after making change(s) git commit -m "<commit message>" git push # create PR
Npm CommandsNpm Commands
There are a few useful commands you can use during development.
# Run tests (and lint) automatically whenever you save a file. npm run watch # Run tests with coverage stats (but won't fail you if coverage does not meet criteria) npm run test # Manually verify the project. # This will be ran during 'npm preversion' so you normally don't need to run this yourself. npm run verify # Build the project. # You normally don't need to do this. npm run build # Run tslint # You normally don't need to do this as `npm run watch` and `npm version` will automatically run lint for you. npm run lint
Generated by
generator-unional@0.9.0 | https://libraries.io/npm/ava-fixture | CC-MAIN-2021-04 | refinedweb | 471 | 58.89 |
# React: Lifting state up is killing your app

> I now have a new shiny blog. Read this article with the latest updates there <https://blog.goncharov.page/react-lifting-state-up-is-killing-your-app>
Have you heard about "lifting state up"? I guess you have and that's the exact reason why you're here. How could it be possible that [one of the 12 main concepts listed in React official documentation](https://reactjs.org/docs/lifting-state-up.html) might lead to poor performance? Within this article, we'll consider a situation when it's indeed the case.
Step 1: Lift it up
------------------
I suggest we create a simple game of tic-tac-toe. For the game we'll need:
* Some game state. No real game logic to find out if we win or lose. Just a simple two-dimensional array filled with either `undefined`, `"x"` or `"0"`.
```
const size = 10
// Two-dimensional array (size * size) filled with `undefined`. Represents an empty field.
const initialField = new Array(size).fill(new Array(size).fill(undefined))
```
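A small caveat about `initialField` that's easy to miss (my note, not part of the original demo code): `fill` is called once, so it puts the *same* inner array into every row slot.

```javascript
// `fill` reuses one inner array, so every row is the same object.
const n = 3
const emptyField = new Array(n).fill(new Array(n).fill(undefined))

console.log(emptyField[0] === emptyField[1]) // true: all rows share one reference
```

It works out fine here only because every update below builds brand-new row arrays instead of mutating them in place, but it's worth knowing before you reach for `field[rowI][cellI] = ...`.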
* A parent container to host our game's state.
```
const App = () => {
  const [field, setField] = useState(initialField)
  return (
    <div>
      {field.map((row, rowI) => (
        <div>
          {row.map((cell, cellI) => (
            <Cell
              content={cell}
              setContent={(newContent) =>
                setField([
                  // Copy rows before our target row
                  ...field.slice(0, rowI),
                  [
                    // Copy cells before our target cell
                    ...field[rowI].slice(0, cellI),
                    newContent,
                    // Copy cells after our target cell
                    ...field[rowI].slice(cellI + 1),
                  ],
                  // Copy rows after our target row
                  ...field.slice(rowI + 1),
                ])
              }
            />
          ))}
        </div>
      ))}
    </div>
  )
}
}
```
* A child component to display a state of a single cell.
```
const randomContent = () => (Math.random() > 0.5 ? 'x' : '0')

const Cell = ({ content, setContent }) => (
  <div onClick={() => setContent(randomContent())}>{content}</div>
)
```
[Live demo #1](https://stackblitz.com/edit/lifting-state-up-is-killing-your-app-1)
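By the way, the nested spread inside `setContent` is nothing React-specific: it's a plain immutable update of a two-dimensional array. Pulled out into a pure function (a sketch of mine, not code from the demo), it's easy to sanity-check in isolation:

```javascript
// Returns a new field with one cell replaced; the input field is untouched.
const setCellImmutably = (field, rowI, cellI, newContent) => [
  ...field.slice(0, rowI),
  [...field[rowI].slice(0, cellI), newContent, ...field[rowI].slice(cellI + 1)],
  ...field.slice(rowI + 1),
]

const before = [['a', 'b'], ['c', 'd']]
const after = setCellImmutably(before, 1, 0, 'x')

console.log(after)        // [ [ 'a', 'b' ], [ 'x', 'd' ] ]
console.log(before[1][0]) // 'c': the original field is unchanged
```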
So far it looks good. A perfectly reactive field that you can interact with at the speed of light :) Let's increase the size. Say, to 100. Yeah, it's time to click on that demo link and change the `size` variable at the very top. Still fast for you? Try 200 or use [CPU throttling built into Chrome](https://twitter.com/chromiumdev/status/961537247240753152?lang=en). Do you now see a significant lag between the time you click on a cell and the time its content changes?
Let's change `size` back to 10 and add some profiling to investigate the cause.
```
const Cell = ({ content, setContent }) => {
  console.log('cell rendered')
  return <div onClick={() => setContent(randomContent())}>{content}</div>
}
```
[Live demo #2](https://stackblitz.com/edit/lifting-state-up-is-killing-your-app-2)
Yep, that's it. A simple `console.log` suffices, as it runs on every render.

So what do we see? Based on the number of "cell rendered" statements in our console (for `size` = N it should be N \* N), it seems like the entire field is re-rendered each time a single cell changes.
The most obvious thing to do is to add some keys as [React documentation suggests](https://reactjs.org/docs/lists-and-keys.html#keys).
```
<div>
  {field.map((row, rowI) => (
    <div key={rowI}>
      {row.map((cell, cellI) => (
        <Cell
          key={cellI}
          content={cell}
          setContent={(newContent) =>
            setField([
              ...field.slice(0, rowI),
              [
                ...field[rowI].slice(0, cellI),
                newContent,
                ...field[rowI].slice(cellI + 1),
              ],
              ...field.slice(rowI + 1),
            ])
          }
        />
      ))}
    </div>
  ))}
</div>
```
[Live demo #3](https://stackblitz.com/edit/lifting-state-up-is-killing-your-app-3)
However, after increasing `size` again we see that that problem is still there. If only we could see why any component renders… Luckily, we can with some help from amazing [React DevTools](https://reactjs.org/blog/2019/08/15/new-react-devtools.html). It's capable of recording why components get rendered. You have to manually enable it though.

Once it's enabled, we can see that all cells were re-rendered because their props changed, specifically the `setContent` prop.

Each cell has two props: `content` and `setContent`. If cell `[0][0]` changes, the content of cell `[0][1]` doesn't change. `setContent`, on the other hand, captures `field`, `cellI` and `rowI` in its closure. `cellI` and `rowI` stay the same, but `field` changes with every change of any cell.
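Here's the core of the problem in a few lines of plain JavaScript (my illustration): a new arrow function is a new object, and reference equality is all React ever checks when comparing props.

```javascript
// Each render of App builds the setContent arrow from scratch.
const makeSetContent = (field) => (newContent) => [...field, newContent]

const firstRender = makeSetContent(['x'])
const secondRender = makeSetContent(['x'])

console.log(firstRender === secondRender) // false: a brand-new closure every render
```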
Let's refactor our code and keep `setContent` the same.
To keep the reference to `setContent` the same we should get rid of the closures. We can eliminate the `cellI` and `rowI` closures by making our `Cell` explicitly pass `cellI` and `rowI` to `setContent`. As to `field`, we can utilize a neat feature of `setState` — [it accepts callbacks](https://reactjs.org/docs/hooks-reference.html#functional-updates).
```
const [field, setField] = useState(initialField)
// `useCallback` keeps reference to `setCell` the same.
const setCell = useCallback(
  (rowI, cellI, newContent) =>
    setField((oldField) => [
      ...oldField.slice(0, rowI),
      [
        ...oldField[rowI].slice(0, cellI),
        newContent,
        ...oldField[rowI].slice(cellI + 1),
      ],
      ...oldField.slice(rowI + 1),
    ]),
  [],
)
```
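The trick that makes this possible is the functional-update form of the setter. Modeled in plain JavaScript (a rough sketch of the contract, not React's actual implementation), it shows why the callback needs no `field` in its closure: the latest state is handed to it.

```javascript
// A toy setState that supports both a plain value and an updater callback.
let state = 0
const setState = (update) => {
  state = typeof update === 'function' ? update(state) : update
}

const increment = () => setState((old) => old + 1) // captures nothing but setState
increment()
increment()

console.log(state) // 2: each updater saw the freshest value
```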
That makes `App` look like this:
```
<div>
  {field.map((row, rowI) => (
    <div key={rowI}>
      {row.map((cell, cellI) => (
        <Cell
          key={cellI}
          content={cell}
          rowI={rowI}
          cellI={cellI}
          setContent={setCell}
        />
      ))}
    </div>
  ))}
</div>
```
Now `Cell` has to pass `cellI` and `rowI` to `setContent`.
```
const Cell = ({ content, rowI, cellI, setContent }) => {
  console.log('cell render')
  return (
    <div onClick={() => setContent(rowI, cellI, randomContent())}>
      {content}
    </div>
  )
}
```
[Live demo #4](https://stackblitz.com/edit/lifting-state-up-is-killing-your-app-4)
Let's take a look at the DevTools report.

What?! Why the heck does it say "parent props changed"? The thing is that every time our field is updated, `App` is re-rendered. Therefore its child components are re-rendered too. Ok. Does Stack Overflow say anything useful about React performance optimization? The internet suggests using `shouldComponentUpdate` or its close relatives: `PureComponent` and `memo`.
```
const Cell = memo(({ content, rowI, cellI, setContent }) => {
  console.log('cell render')
  return (
    <div onClick={() => setContent(rowI, cellI, randomContent())}>
      {content}
    </div>
  )
})
```
[Live demo #5](https://stackblitz.com/edit/lifting-state-up-is-killing-your-app-5)
Yay! Now only one cell is re-rendered once its content changes. But wait… Was there any surprise? We followed best practices and got the expected result.
An evil laugh was supposed to be here. As I'm not with you, please try as hard as possible to imagine it. Go ahead and increase `size` in [Live demo #5](https://stackblitz.com/edit/lifting-state-up-is-killing-your-app-5). This time you might have to go with a slightly bigger number. However, the lag is still there. Why???
Let's take a look at the DevTools report again.

There's only one render of `Cell` and it was pretty fast, but there's also a render of `App`, which took quite some time. The thing is that with every re-render of `App` each `Cell` has to compare its new props with its previous props. Even if it decides not to render (which is precisely our case), that comparison still takes time. O(1), but that O(1) occurs `size` \* `size` times!
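To make that cost concrete: by default `memo` does roughly a shallow comparison over the props object, something like the sketch below (my approximation, not React's source). It's cheap per cell, but it runs for every cell on every click.

```javascript
// Shallow prop comparison: same keys, reference-equal values.
const shallowEqual = (prev, next) => {
  const keys = Object.keys(prev)
  return (
    keys.length === Object.keys(next).length &&
    keys.every((key) => prev[key] === next[key])
  )
}

const setContent = () => {}
console.log(shallowEqual({ content: 'x', setContent }, { content: 'x', setContent })) // true: skip the render
console.log(shallowEqual({ content: 'x', setContent }, { content: '0', setContent })) // false: re-render this cell
```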
Step 2: Move it down
--------------------
What can we do to work around it? If rendering `App` costs us too much, we have to stop rendering `App`. It's not possible if keep hosting our state in `App` using `useState`, because that's exactly what triggers re-renders. So we have to move our state down and let each `Cell` subscribe to the state on its own.
Let's create a dedicated class that will be a container for our state.
```
class Field {
  constructor(fieldSize) {
    this.size = fieldSize
    // Copy-paste from `initialField`
    this.data = new Array(this.size).fill(new Array(this.size).fill(undefined))
  }

  cellContent(rowI, cellI) {
    return this.data[rowI][cellI]
  }

  // Copy-paste from old `setCell`
  setCell(rowI, cellI, newContent) {
    console.log('setCell')
    this.data = [
      ...this.data.slice(0, rowI),
      [
        ...this.data[rowI].slice(0, cellI),
        newContent,
        ...this.data[rowI].slice(cellI + 1),
      ],
      ...this.data.slice(rowI + 1),
    ]
  }

  map(cb) {
    return this.data.map(cb)
  }
}

const field = new Field(size)
```
Then our `App` could look like this:
```
const App = () => {
  return (
    <div>
      {/* As you can see we still need to iterate over our state to get indexes. */}
      {field.map((row, rowI) => (
        <div key={rowI}>
          {row.map((cell, cellI) => (
            <Cell key={cellI} rowI={rowI} cellI={cellI} />
          ))}
        </div>
      ))}
    </div>
  )
}
```
And our `Cell` can display the content from `field` on its own:
```
const Cell = ({ rowI, cellI }) => {
  console.log('cell render')
  const content = field.cellContent(rowI, cellI)
  return (
    <div onClick={() => field.setCell(rowI, cellI, randomContent())}>
      {content}
    </div>
  )
}
```
[Live demo #6](https://stackblitz.com/edit/lifting-state-up-is-killing-your-app-6)
At this point, we can see our field being rendered. However, if we click on a cell, nothing happens. In the logs we can see "setCell" for each click, but the cell stays blank. The reason here is that nothing tells the cell to re-render. Our state outside of React changes, but React doesn't know about it. That has to change.
How can we trigger a render programmatically?
With classes we have [forceUpdate](https://reactjs.org/docs/react-component.html#forceupdate). Does it mean we have to re-write our code to classes? Not really. What we can do with functional components is to introduce some dummy state, which we change only to force our component to re-render.
Here's how we can create a custom hook to force re-renders.
```
const useForceRender = () => {
  const [, setDummy] = useState(0)
  const forceRender = useCallback(() => setDummy((oldVal) => oldVal + 1), [])
  return forceRender
}
```
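Why `oldVal + 1` instead of setting some constant? React bails out of re-rendering when the new state is identical (by `Object.is`) to the old one, so the dummy value has to actually change on every call. A plain-JS model of that bail-out (my sketch, not React internals):

```javascript
let renders = 0
let dummy = 0

const setDummy = (update) => {
  const next = update(dummy)
  if (!Object.is(next, dummy)) renders++ // "re-render" only on a real change
  dummy = next
}

const forceRender = () => setDummy((oldVal) => oldVal + 1)

forceRender()
forceRender()
setDummy(() => dummy) // same value: no render happens

console.log(renders) // 2
```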
To trigger a re-render when our field updates we have to know when it updates. It means we have to be able to somehow subscribe to field updates.
```
class Field {
  constructor(fieldSize) {
    this.size = fieldSize
    this.data = new Array(this.size).fill(new Array(this.size).fill(undefined))
    this.subscribers = {}
  }

  _cellSubscriberId(rowI, cellI) {
    return `row${rowI}cell${cellI}`
  }

  cellContent(rowI, cellI) {
    return this.data[rowI][cellI]
  }

  setCell(rowI, cellI, newContent) {
    console.log('setCell')
    this.data = [
      ...this.data.slice(0, rowI),
      [
        ...this.data[rowI].slice(0, cellI),
        newContent,
        ...this.data[rowI].slice(cellI + 1),
      ],
      ...this.data.slice(rowI + 1),
    ]
    const cellSubscriber = this.subscribers[this._cellSubscriberId(rowI, cellI)]
    if (cellSubscriber) {
      cellSubscriber()
    }
  }

  map(cb) {
    return this.data.map(cb)
  }

  // Note that we subscribe not to updates of the whole field, but to updates of one cell only
  subscribeCellUpdates(rowI, cellI, onSetCellCallback) {
    this.subscribers[this._cellSubscriberId(rowI, cellI)] = onSetCellCallback
  }
}
```
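Strip away the class and what's left is a tiny per-cell publish/subscribe. Isolated into standalone code (a sketch of mine mirroring `Field`'s bookkeeping), it shows why only the changed cell gets notified:

```javascript
const subscribers = {}
const cellId = (rowI, cellI) => `row${rowI}cell${cellI}`

const subscribeCellUpdates = (rowI, cellI, cb) => {
  subscribers[cellId(rowI, cellI)] = cb
}
const notifyCell = (rowI, cellI) => {
  const cb = subscribers[cellId(rowI, cellI)]
  if (cb) cb()
}

let cellRenders = 0
subscribeCellUpdates(1, 2, () => cellRenders++)

notifyCell(1, 2) // the subscribed cell "re-renders"
notifyCell(0, 0) // no subscriber: nothing happens

console.log(cellRenders) // 1
```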
Now we can subscribe to field updates.
```
const Cell = ({ rowI, cellI }) => {
  console.log('cell render')
  const forceRender = useForceRender()
  useEffect(() => field.subscribeCellUpdates(rowI, cellI, forceRender), [
    forceRender,
  ])
  const content = field.cellContent(rowI, cellI)
  return (
    <div onClick={() => field.setCell(rowI, cellI, randomContent())}>
      {content}
    </div>
  )
}
```
[Live demo #7](https://stackblitz.com/edit/lifting-state-up-is-killing-your-app-7)
Let's play with `size` with this implementation. Try to increase it to the values that felt laggy before. And… It's time to open a good bottle of champagne! We got ourselves an app that renders one cell and one cell only when the state of that cell changes!
Let's take a look at the DevTools report.

As we can see, now only `Cell` is being rendered, and it's crazy fast.
What if I told you that the code of our `Cell` is now a potential source of a memory leak? As you can see, in `useEffect` we subscribe to cell updates, but we never unsubscribe. It means that even when a `Cell` is destroyed, its subscription lives on. Let's change that.
First, we need to teach `Field` what it means to unsubscribe.
```
class Field {
  // ...

  unsubscribeCellUpdates(rowI, cellI) {
    delete this.subscribers[this._cellSubscriberId(rowI, cellI)]
  }
}
```
Now we can apply `unsubscribeCellUpdates` to our `Cell`.
```
const Cell = ({ rowI, cellI }) => {
console.log('cell render')
const forceRender = useForceRender()
useEffect(() => {
field.subscribeCellUpdates(rowI, cellI, forceRender)
return () => field.unsubscribeCellUpdates(rowI, cellI)
}, [forceRender])
const content = field.cellContent(rowI, cellI)
return (
  <div onClick={() => field.setCell(rowI, cellI, randomContent())}>
    {content}
  </div>
)
}
```
[Live demo #8](https://stackblitz.com/edit/lifting-state-up-is-killing-your-app-8)
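The subscribe/notify/unsubscribe cycle above is plain JavaScript, so it can be exercised outside React. Here is a minimal sketch — the `MiniField` class and the render counter are illustrative stand-ins for the article's `Field` and `forceRender`, not part of the original code:

```javascript
// Minimal pub/sub field mirroring the article's Field class.
class MiniField {
  constructor() {
    this.cell = 'a'
    this.subscribers = {}
  }
  setCell(id, content) {
    this.cell = content
    const cb = this.subscribers[id]
    if (cb) cb() // notify only the subscriber for this cell
  }
  subscribe(id, cb) { this.subscribers[id] = cb }
  unsubscribe(id) { delete this.subscribers[id] }
}

const field = new MiniField()
let renders = 0

// "Mount": subscribe, like useEffect's body does.
field.subscribe('cell-0', () => { renders += 1 })

field.setCell('cell-0', 'b') // triggers the subscriber
field.setCell('cell-0', 'c') // triggers it again

// "Unmount": the effect cleanup removes the subscription...
field.unsubscribe('cell-0')
field.setCell('cell-0', 'd') // ...so no further callbacks fire

console.log(renders) // → 2
```

The counter stays at 2 after unsubscribing, which is exactly the guarantee the `useEffect` cleanup gives us: a destroyed `Cell` can no longer be called back.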
So what's the lesson here? When does it make sense to move state down the component tree? Never! Well, not really :) Stick to best practices until they fail, and don't do premature optimizations. Honestly, the case we considered above is somewhat specific; still, I hope you'll recall it if you ever need to display a really large list.
Bonus step: Real-world refactoring
----------------------------------
In the [live demo #8](https://stackblitz.com/edit/lifting-state-up-is-killing-your-app-8) we used global `field`, which should not be the case in a real-world app. To solve it, we could host `field` in our `App` and pass it down the tree using context.
```
const AppContext = createContext()
const App = () => {
// Note how we used a factory to initialize our state here.
// Field creation could be quite expensive for big fields.
// So we don't want to create it each time we render and block the event loop.
const [field] = useState(() => new Field(size))
return (
  <AppContext.Provider value={field}>
    {field.map((row, rowI) => (
      <div key={rowI}>
        {row.map((cell, cellI) => (
          <Cell key={cellI} rowI={rowI} cellI={cellI} />
        ))}
      </div>
    ))}
  </AppContext.Provider>
)
}
```
Now we can consume `field` from the context in our `Cell`.
```
const Cell = ({ rowI, cellI }) => {
console.log('cell render')
const forceRender = useForceRender()
const field = useContext(AppContext)
useEffect(() => {
field.subscribeCellUpdates(rowI, cellI, forceRender)
return () => field.unsubscribeCellUpdates(rowI, cellI)
}, [forceRender])
const content = field.cellContent(rowI, cellI)
return (
  <div onClick={() => field.setCell(rowI, cellI, randomContent())}>
    {content}
  </div>
)
}
```
[Live demo #9](https://stackblitz.com/edit/lifting-state-up-is-killing-your-app-9)
Hopefully, you've found something useful for your project. Feel free to communicate your feedback to me! I most certainly appreciate any criticism and questions. | https://habr.com/ru/post/471300/ | null | null | 2,239 | 53.27 |
On 11/24/07, Neil Toronto <ntoronto at cs.byu.edu> wrote:

> [I'm summarizing and paraphrasing] If a name isn't in globals, python
> looks in globals['__builtins__']['name']. Unfortunately, it may use a
> stale cached value for globals['__builtins__'].

As Greg pointed out, this isn't so good for sandboxes. But as long as you're changing dicts to be better namespaces, why not go a step further? Instead of using a magic key name (some spelling variant of builtin), make the fallback part of the dict itself. For example: use a defaultdict and set the __missing__ method to the builtin's __getitem__. Then neither python nor the frame needs to worry about tracking the builtin namespace, but the fallback can be reset (even on a per-function basis) by simply replacing the fallback method.

-jJ
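A sketch of that idea in present-day Python — using a plain dict subclass rather than defaultdict, since defaultdict's __missing__ calls its factory without passing the key; the class and attribute names here are illustrative, not from the proposal:

```python
import builtins

class NamespaceDict(dict):
    """A dict whose failed lookups fall through to a replaceable fallback."""

    def __init__(self, *args, fallback=None, **kwargs):
        super().__init__(*args, **kwargs)
        # Default fallback: the builtins namespace, as in the proposal.
        self.fallback = fallback or vars(builtins).__getitem__

    def __missing__(self, key):
        # dict.__getitem__ calls this only when `key` is absent.
        return self.fallback(key)

g = NamespaceDict(x=1)
assert g["x"] == 1                       # normal lookup
assert g["len"] is len                   # miss falls through to builtins
g.fallback = lambda key: "<blocked>"     # e.g. a sandbox swaps the fallback
assert g["len"] == "<blocked>"
```

Swapping `self.fallback` is all a sandbox (or a per-function namespace) would need to do to redirect failed lookups.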
Re: One other related Q Re: basic Q: Only one way to make vars live outside of the scope of a function without globals?
- From: "Joe Earnest" <jearnest3-SPAM@xxxxxxxxxxxxx>
- Date: Sun, 1 May 2005 07:39:07 -0600
Hi,
"Ken Fine" <kenfine@xxxxxxxxxxxxxxxx> wrote in message
news:d51qsr$rp8$1@xxxxxxxxxxxxxxxxxxxxxxxxxx
>I have one other related question: if I instantiate a bunch of objects --
> say, recordsets -- in the context of a sub or function, will the objects
> be
> exposed to the rest of the page? (It would be nice to have
> "OpenPageRecordsets" and "ClosePageRecordsets " so that the page logic
> would
> be succinct and clear.)
>
>
> "Ken Fine" <kenfine@xxxxxxxxxxxxxxxx> wrote in message
> news:d51m0g$o92$1@xxxxxxxxxxxxxxxxxxxxxxxxxx
>> Great answer, Joe, thank you so very much for your thoughtful reply.
>>
>> Could I ask one other favor of you? You mention a system/convention of
>> prefixing that describes the scope of the variable and other useful
>> attributes to know. Can you show a sample of this convention, or maybe
>> recommend a book that describes the convention that you use? I'm very
>> interested.
>>
>> I've been on a six-month tear of reading through the best comp sci/comp
>> engineering literature I can find, and it's pretty cool to see all of the
>> tips, tricks and philosophies I've digested translated into markedly
> better
>> code. Your system sounds like something worth internalizing.
>>
>> -KF
>>
>>
>> "Joe Earnest" <jearnest3-SPAM@xxxxxxxxxxxxx> wrote in message
>> news:uKp1D%23eTFHA.3344@xxxxxxxxxxxxxxxxxxxxxxx
>> > Hi,
>> >
>> > <kenfine@xxxxxxxxxxxxxxxx> wrote in message
>> > news:eGg2QRdTFHA.616@xxxxxxxxxxxxxxxxxxxxxxx
>> > > This is a basic question about the design and intent of functions in
>> > > programming languages like VBscript.
>> > >
>> > > I know how to write VBScript functions and to pass parameter
>> > > variables
>> in
>> > > and out of them.
>> > >
>> > > As I've become a smarter programmer, I'm inclined to translate the
> lousy
>> > > code that I wrote when I didn't know what was doing into more
>> granularized
>> > > and encapsulated functions and/or classes.
>> > >
>> > > I was revisiting some VBScript browser detection code that looked
>> > > something
>> > > like this:
>> > >
>> > > ' check something
>> > > ' set a variable based on result of check
>> > > ' check something else
>> > > ' set a different variable based on result of check
>> > > ' check something else
>> > > ' set a different variable based on result of check
>> > >
>> > > The code sets about a dozen different variables of interest. The code
>> was
>> > > not organized as a function. It's certainly easy enough to make it a
> sub
>> > > or
>> > > function and to call it, but the variables that are set internally in
>> the
>> > > function don't live outside of the scope of the function.
>> > >
>> > > I want someone to confirm that the "correct" way/only way to make
>> > > many
>> > > variables survive outside of the function is to return an object
>> > > (e.g.
>> an
>> > > array) of values upon the function's completion.
>> > >
>> > > The function could write global vars, but that isn't good practice.
>> > >
>> > > Am I correct in this, or is there another way that I'm missing?
>> > >
>> > > Thank you,
>> > > Ken Fine
>> >
>> > For my two cents worth ...
>> >
>> > You missed a thread a couple of weeks ago where Al Dunbar (one of the
> MVPs
>> > here) and I had a lengthy "discussion" over a slightly more complex
>> version
>> > of your question.
>> >
>> > Given the straightforward nature of your question, I believe that the
>> answer
>> > is clearly "no," and I believe that most VBS scripters would agree,
> though
>> > perhaps in different ways.
>> >
>> > An array return (in scripting, "object" usually refers to a COM object
>> > instance and a specific data subtype) is great for similar multiple
> items.
>> > But it is the most obscure return, since you don't have the benefit of
> the
>> > variable name to provide quick identification. And at times you may
> want
>> to
>> > return quite different values -- both strings and object instances, or
>> > different types of references -- from a single function.
>> >
>> > VBS (unlike JS and some other languages) maintains the ByRef/ByVal
>> argument
>> > distinction, and defaults to ByRef. ByRef arguments provide for
>> > returns
>> > directly to the calling script, without having to use global variables
> to
>> > achieve the return. Indeed, there is no functional difference between
> the
>> > transitory function-name return and a ByRef argument variable return.
>> ByRef
>> > arguments have been the traditional method in BASIC programming, since
>> early
>> > DOS days, to get multiple return items from a subprocedure.
>> >
>> > mainNumRtn= myFunction(useValue1, useValue2, rtnObjVar, rtnStrVar)
>> >
>> > WMI functions are replete with return arguments -- indeed the WMI
> registry
>> > access system only works that way.
>> >
>> > The "trick", if you will, in using ByRef argument returns is either to
>> > document the function or to use a scope-oriented and functional
>> > variable
>> > prefixing system (instead of a simple variable-type system, which may
> not
>> be
>> > too useful in a pure variant language such as VBS). I strongly prefer
>> real
>> > prefixing. I can tell by looking at the first character of the
>> > argument
>> > variable name that I assign whether it's passed ByVal or ByRef, and if
>> > ByRef, whether its preserved, destroyed, coerced to the data type and
>> range
>> > required by the function, requires precise assignment, or set and
>> returned.
>> > The VBS option to use ByRef arguments is very efficient. But to take
>> > advantage of it, you must be willing to coerce or destroy some argument
>> > values, as well as reset some for return. This requires comment or a
>> > meaningful prefixing system, both to preserve your sanity and for reuse
> of
>> > the function in future scripts.
>> >
>> > A ByRef return argument for a function can be analogized to a property
>> > return for a method function. If your writing classes, you should
>> consider
>> > associated property returns. Since most I now write my fundamental
>> > functions as WSC VBS, I again use global variables for multiple return
>> > values, though these are declared in the parent XML script as
>> > properties
>> and
>> > returned to the calling script as properties. As you note, it is
>> generally
>> > better not to mix scope and use global variables for multiple return
>> values,
>> > in straight script, since it makes it hard to reuse the functions that
> you
>> > write.
>> >
>> > Regards,
>> > Joe Earnest
>> >
The short answer is no. Like all other local variable assignments, object
instances go out of scope when the local procedure goes out of scope. For
this reason, common scripting objects (FSO, Shell, WshShell, etc.) are often
used "against the grain" of the general rule not to use global objects in
local procedures, and are instead declared at the outset and used throughout
the script. Again, if you ever decide to use WSC files as general "utility"
and "include" files, the WSC file can declare the common objects globally on
its instantiation and pass them to the calling script through properties,
avoiding the need for the calling script to declare and instantiate
additional common object instances. But again, to use your WSC file
throughout your script, at both global and local levels, *it* would be
declared as an object in the global code and that global variable would be
used, again "against the grain" of the general rule, in local procedures.
Most sources do not explain COM object scoping and duplication well. The
following is one of my standard "schpeals" based on my own experience, but
also on some terribly useful explanatory posts in the past by Michael Harris
(MVP), Alex Angelopoulos (MVP), Chris Barber and Alexander Mueller.
Regards,
Joe Earnest
-----
When a typical COM class object is "accessed" by the creation of an
"instance" of the object, a pointer to the IDispatch interface and a new
virtual "object" is created as an interface definition of parameters with
assigned memory. Memory is allocated in which the new object's members may
store data. For the typical COM class object, a new pointer and virtual
object are created for each instance of the underlying object accessed.
If the COM class object is a "singleton" object, however, only a single
instance will be created, which will be shared not only by all object
variable instances in the same script, but by all scripts or other processes
accessing the object. In the case of a singleton object, the initiation
process first checks to see if an instance of the underlying object has
already been instantiated and, if so, returns a pointer to the same initial
virtual object. Thus (per the MS documentation): ."
When all object access references are fully terminated, VBS runs a cleanup
process that deletes the pointers and begins the process of eliminating the
interface and memory allocation. An object will be disconnected only after
all references to it have been released. Windows may retain certain
references, depending on the object type and how it was instantiated. It is
possible, for example, to lock an object, when employing multiple object
references. Care must be taken with techniques and object types that allow
the object instance to continue beyond its scope in the script or even
beyond the termination of the script.
A review of posts relating to concerns with continued object connections and
"memory leaks" seems to indicate a number of reasonably distinct situations,
including, among others: (1) failure to properly exit a With statement
block by progressing through the End With statement; (2) multiple or
duplicate object references, where object variables in higher levels of
scope are not released; and (3) recursive procedures that create multiple
copies of an object. The most pernicious "memory leaks" appear to involve
EXE or special DLL objects, that are essentially applications, and which,
once initiated, continue independently, despite being released from the
object connections to the script. With these types of objects, it is always
best to try to force them to shut down through internal means, usually a
Quit method, before terminating the object reference.
The VBS process releasing the object references is part of the cleanup
procedures involved anytime that an object reference goes out of scope. For
purposes of an object variable as a reference to an existing object
instance, the going-out-of-scope cleanup is initiated for the existing
object reference whenever: (1) the variable is reset to Nothing or to a
different object reference, or even to a subsequent instance of the same
object reference; (2) the variable is reset to Empty or to a non-object
subtype or value; (3) the script is terminated; (4) a procedure in which the
object instance has been assigned to a local variable is exited; (5) the
inherent CreateObject function is used directly for a transitory instance of
the object, instead of assigning it to a variable; or (6) the CreateObject
function is referenced by a With statement block, and the script progresses
through the End With statement.
Setting unused object variables to Nothing is the "preferred" cross-platform
coding practice, and it is useful to release system resources more quickly,
when the object instance will not otherwise promptly go out of scope, but it
ultimately accomplishes nothing more in VBS than does any other method of
going out of scope.
When passing object instances to multiple scripts and/or multiple hosts, the
object instance will go out of scope whenever the originating script or host
is closed.
The inherent IsObject function simply returns the result of a check for the
appropriate index code in the variable list. It does not tell you if the
variable references any currently instantiated COM class object. For
example, an object variable reset to Nothing will still be returned as an
object variable, as will a variable whose object instance connection has
been terminated or otherwise lost. The intrinsic TypeName function can be
used as a secondary test in these situations. If the variable has been set
to Nothing, Typename will return Nothing; if the variable is connected to an
object instance, TypeName returns the specific object's ProgId, or the
default property subtype, if the object has a default property; and if the
variable is no longer connected to an object instance, TypeName returns the
generic Object status.
The IsObject function should still be used as a primary test for a specific
object, since both the inherent VarType and TypeName functions may return
the data subtype of the object's default property, depending upon the
specific object tested.
- References:
- basic Q: Only one way to make vars live outside of the scope of a function without globals?
- From: kenfine
- Re: basic Q: Only one way to make vars live outside of the scope of a function without globals?
- From: Joe Earnest
- Re: basic Q: Only one way to make vars live outside of the scope of a function without globals?
- From: Ken Fine
- One other related Q Re: basic Q: Only one way to make vars live outside of the scope of a function without globals?
- From: Ken Fine
user Hercynium <!-- location:latitude=42.05.03,longitude=-71.39.23 --> <p> Stats: <ul> <li>Real Name - Stephen R. Scaffidi</li> <li>CPAN Home Page - <a href=""></a></li> <li>LinkedIn Page - <a href=""></a></li> <li>Current Employer - TripAdvisor</li> <li>Current Position - Senior Software Engineer</li> <li>Currant Cake - Delicious!</li> </ul> <p> My PerlMonks <a href="">XP Tracker</a> - Just because it's there. </p> <p> Couldn't help following the <a href="">links</a> and taking the <a href="">tests...</a>.<br/> I blame this [DigitalKitty|nerd]...<br/> <blockquote> <a href="">NerdTests.com says I'm a Dorky Nerd God.</a> </blockquote> </p> <p> <h3>Some favorite quotes</h3> <blockquote> "Money frees you from doing things you dislike. Since I dislike doing nearly everything, money is handy."<br /> - Groucho Marx </blockquote> <blockquote> "Good code is its own best documentation. As you're about to add a comment, ask yourself, 'How can I improve the code so that this comment isn't needed?' Improve the code and then document it to make it even clearer."<br /> - Steve McConnell </blockquote> <blockquote> "Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live."<br /> - Martin Golding </blockquote> <blockquote> "We are just an advanced breed of monkeys on a minor planet of a very average star. But we can understand the Universe. That makes us something very special."<br /> - Stephen Hawking </blockquote> <blockquote> "Progress is made by lazy men looking for easier ways to do things"<br /> - Robert A. 
Heinlein </blockquote> </p> <p> <h3>The JAPH's Prayer</h3> <blockquote> Our modules, who art in CPAN<br/> Hallowed be thy namespace<br/> Thy pragmas come<br/> Thy code be run<br/> With strict as it does with warnings<br/> Give us this day our map and grep<br/> And forgive us our string evals<br/> As we forgive those who refuse to try Moose<br/> And lead us not into Ruby or Java<br/> But deliver us from Guido.<br/> </blockquote> </p> <p> <h3>And now, a song...</h3> <blockquote> Everyday I get up and pray to <a href="">Schwern</a><br/> And he increases the number of refs by exactly one<br/> Everybody's coming to YAPC::NA these days<br/> Last night there were perl hackers on my lawn<br/> <br/> CHORUS:<br/> Take the perl-heads bowling, take them bowling.<br/> Take the perl-heads bowling, take them bowling.<br/> <br/> <br/> Some people say that bowling alleys have len(@lanes) < $small (have big lanes x2)<br/> Some people say that bowling alleys all eq $the_same (all the same x2)<br/> There's not a line that goes here that soundex_match($line, 'same') (rhymes with same x2)<br/> Had a dream last night but I forgot what delete $dream{about} (what it was x2)<br/> <br/> goto CHORUS;<br/> <br/> Had a dream last night about use strict and warnings<br/> Had a dream I wanted to sleep() next to plastic<br/> Had a dream I wanted to bless your hashref<br/> Had a dream it was about undef<br/> </blockquote> </p>
java.lang.Object
  org.netlib.lapack.DSBGST
public class DSBGST
DSBGST is a simplified interface to the JLAPACK routine dsbgst.

* DSBGST reduces a real symmetric-definite banded generalized
* eigenproblem  A*x = lambda*B*x  to standard form  C*y = lambda*y,
* such that C has the same bandwidth as A.
*
* B must have been previously factorized as S**T*S by DPBSTF, using a
* split Cholesky factorization. A is overwritten by C = X**T*A*X, where
* X = S**(-1)*Q and Q is an orthogonal matrix chosen to preserve the
* bandwidth of A.
*
* Arguments
* =========
*
* VECT    (input) CHARACTER*1
*         = 'N':  do not form the transformation matrix X;
*         = 'V':  form X.
*
* UPLO    (input) CHARACTER*1
*         = 'U':  Upper triangle of A is stored;
*         = 'L':  Lower triangle of A is stored.
*
* N       (input) INTEGER
*         The order of the matrices A and B.  N >= 0.
*
* KA      (input) INTEGER
*         The number of superdiagonals of the matrix A if UPLO = 'U',
*         or the number of subdiagonals if UPLO = 'L'.  KA >= 0.
*
* KB      (input) INTEGER
*         The number of superdiagonals of the matrix B if UPLO = 'U',
*         or the number of subdiagonals if UPLO = 'L'.  KA >= KB >= 0.
*
* AB      (input/output) DOUBLE PRECISION array, dimension (LDAB,N)
*         On entry, the upper or lower triangle of the symmetric band
*         matrix A, stored in the first KA+1 rows of the array.
*         On exit, the transformed matrix X**T*A*X, stored in the same
*         format as A.
*
* LDAB    (input) INTEGER
*         The leading dimension of the array AB.  LDAB >= KA+1.
*
* BB      (input) DOUBLE PRECISION array, dimension (LDBB,N)
*         The banded factor S from the split Cholesky factorization of
*         B, as returned by DPBSTF, stored in the first KB+1 rows of
*         the array.
*
* LDBB    (input) INTEGER
*         The leading dimension of the array BB.  LDBB >= KB+1.
*
* X       (output) DOUBLE PRECISION array, dimension (LDX,N)
*         If VECT = 'V', the n-by-n matrix X.
*         If VECT = 'N', the array X is not referenced.
*
* LDX     (input) INTEGER
*         The leading dimension of the array X.
*         LDX >= max(1,N) if VECT = 'V'; LDX >= 1 otherwise.
*
* WORK    (workspace) DOUBLE PRECISION array, dimension (2*N)
*
* INFO    (output) INTEGER
*         = 0: successful exit
*         < 0: if INFO = -i, the i-th argument had an illegal value.
*
* =====================================================================
public DSBGST()
public static void DSBGST(java.lang.String vect, java.lang.String uplo, int n, int ka, int kb, double[][] ab, double[][] bb, double[][] x, double[] work, intW info) | http://icl.cs.utk.edu/projectsfiles/f2j/javadoc/org/netlib/lapack/DSBGST.html | CC-MAIN-2017-51 | refinedweb | 324 | 56.45 |
Re: De-referencing pointer to function-pointer
From: Jack Klein (jackklein_at_spamcop.net)
Date: 05/12/04
Date: Tue, 11 May 2004 22:43:08 -0500
On Tue, 11 May 2004 13:54:48 +0100, "Edd"
<eddNOSPAMHERE@nunswithguns.net> wrote in comp.lang.c:
> Jack Klein wrote:
> > On Tue, 11 May 2004 03:47:55 +0100, "Edd"
> > <eddNOSPAMHERE@nunswithguns.net> wrote in comp.lang.c:
>
> [ 8< - - - snip ]
>
> >> int main(void){
> >> double (*f)(double);
> >> ARRAY funcs;
> >> InitArray(&funcs, sizeof(sin));
> >
> > Either your compiler is broken or you are not invoking it as a C
> > compiler and making use of some implementation-defined extension. It
> > is a constraint violation to apply the sizeof operator to a function
> > designator, and requires a diagnostic.
> >
> > The expression "sizeof(sin)" has literally no meaning in C. I have no
> > idea what value your compiler generates when you apply sizeof to a
> > function. Do you?
> >
> > [snip]
>
> I see. Indeed I don't understand what my compiler generates under these
> circumstances! However my compiler does not complain about this code in the
> slightest, even when I turn on all warnings and support strict ANSI C. I'm
> using MinGW under win2k with this command line:
>
> gcc -Wall -ansi ptrtest.c -o ptrtest.exe
>
> I just tried this on my University system with gcc on unix and I got errors.
> Is this a problem with my home compiler, do you think -- should it warn me?
Not just should, but is required to. When a source program violates
syntax or a constraint, the C standard requires the compiler to issue
a diagnostic, although it does not specify the format of the
diagnostic.
I haven't used GCC ports much, nor recently, but I think you might
need to add -pedantic.
> >> This program crashes when I run it. Am I doing something undefined
> >> here? I can't see what's going wrong. I think it may be my
> >> understanding of function-pointer syntax is a little lacking, but
> >> what I've got still seems fine to me.
> >
> > Yes, you are doing something undefined here. Function and array names
> > are not converted to pointers when used as operands of the sizeof
> > operator.
> >
> > sizeof(array_name) yields the size, in bytes, of an array, not of a
> > pointer to the element type of the array.
> >
> > sizeof(function_name) would request the compiler to yield the size, in
> > bytes, of the function, not of a pointer to the function. But
> > functions have no sizes accessible to a C program, and that use of
> > sizeof is specifically illegal under the C standard.
>
> I see. Thanks for the clarification.
> This leads me on to the obvious follow-up question -- is there a way of
> achieving the desired result? I can use the alternative method below (which
> works correctly), but it's not quite as elegant as I would like:
>
> int main(void){
> double (*f)(double);
> void *vptr;
> ARRAY funcs;
> InitArray(&funcs, sizeof(void*));
No, no, no, no. There is no correspondence between pointers to object
types and pointers to functions in C. Even attempting to convert
between a function pointer and a pointer to void, in either direction,
is completely undefined.
Fortunately, you don't need to. You already have a perfect operand
here, just replace the line above with:
InitArray(&funcs, sizeof f);
f is already a pointer to function, not the name of a function, so
applying sizeof to it is just fine and dandy. Also, since f is an
object and not a type, the parentheses are not necessary, but harmless
if you prefer them.
> /* Add some functions to the funcs ARRAY */
> vptr = sin;
> AddElement(&funcs, &vptr);
> vptr = tan;
> AddElement(&funcs, &vptr);
> vptr = exp;
> AddElement(&funcs, &vptr);
> vptr = log;
> AddElement(&funcs, &vptr);
Leave out vptr completely; just omit it from your program. Now that
you have uses sizeof f to initialize your structure, you can just do:
AddElement(&funcs, sin);
AddElement(&funcs, tan);
...and so on.
No problem with using the name of a function without () as an argument
passed to another function. Unlike with the sizeof operator, this is
well defined and automatically converts the name of the function to a
pointer to the function.
> /* Get the ARRAY element at index 2 */
> f = (double (*)(double))*(void**)GetElement(&funcs, 2);
>
> /* This should now display "f(1.0) = 2.718..."? */
> printf("f(%lf) = %lf\n", 1.0, f(1.0));
>
> return 0;
> }
>
> Thanks for your reply,
> Edd
I have copied this from your original post:
> /* Get the address of the kth element */
> void *GetElement(ARRAY *a, unsigned k){
> return (char *)(a->base) + (k * a->elsz);
> }
The first thing I would do is change the return type to "const void
*", but that's not mandatory.
To retrieve a function pointer from your array, you can get rid of all
that almost indecipherable casting by doing this:
void *vp;
vp = GetElement(&funcs, 2);
memcpy(&f, vp, sizeof f);
The latter two lines can be combined, with rather less readability, to
eliminate the need for the pointer to void:
memcpy(&f, GetElement(&funcs, 2), sizeof f);
All the nasty casts are gone!
-- Jack Klein Home: FAQs for comp.lang.c comp.lang.c++ alt.comp.lang.learn.c-c++
Older requests can be found here:
Restored this discussion (bug in slippy maps still not fixed after years)
- Discussion restored from the archive, because the bug is still there in the old SlippyMap extension created for this wiki, which does not comply with basic requirements and isolation. We still need to fix this extension. For example, it's still impossible to render multiple slippy maps on the same page. The only alternative is to use static maps.
- The CSS and JavaScript code for the SlippyMap extension has not been fixed in years and contains numerous issues reported long ago, but it is not maintained. At one point it even blocked the migration to a newer version of MediaWiki due to a compatibility issue. The slippy-map extension used on Wikipedia (different versions depending on the language) is much better, faster, and maintained. MediaWiki will also soon standardize a new version that will work across wikis with fewer security issues and more accessibility. However, it depends on Wikibase, and this wiki still does not have that extension and cannot work with remote databases (such as Wikidata).
- So it would be good to revive the open-source code for the few extensions specific to this wiki, and finally fix all the severe issues reported. But this requires the personal involvement of this wiki's admins, who are in fact not maintaining this wiki at all and are just too busy working on the OSM database instead. There are unfortunately NO active admins on this wiki to fix what only they are allowed to fix. Other people just have to find workarounds, even if these are sometimes very slow or require a lot of manual edits for maintenance. — Verdy_p (talk) 15:35, 24 May 2018 (UTC)
Static map extension using HTTP, instead of HTTPS like this wiki
- Final note: the static map extension still uses MIXED HTTP content in this HTTPS wiki. This also causes issues and some people not seeing the static map rendered at all in their browsers using strict security rules (especially as the map tiles come from another domain than this wiki). The URLs used for linking to the static map requested should also use HTTPS. — Verdy_p (talk) 15:41, 24 May)
- I am in strong need of this ToC-depth-limiting functionality - I need to add some content to a page which would make the ToC very long, on account of many level 4 headings in the content. Can someone please add it as soon as convenient? --Contrapunctus (talk) 18:39, 4 May 2018 : - [1] - [2])
wiki dump
Is wiki dump service still available? This link is already dead. Kcwu (talk) 08:38, 17 January 2014 (UTC)
- It *still* isn't available. Who can get this fixed? --Tordanik 23:21, 2 April 2014 (UTC)
- reported at Mateusz Konieczny (talk) 16:17, 8 November 2017 (UTC)
I proposed a reorganisation of the wiki navigation, which is also somehow a larger restructuring project. I would be happy to get your opinions about it. --Cantho (talk) 05:22, 28 January 2014 (UTC)
wiki search
The search results that drop down from the search box while you type don't include redirecting pages anymore. Who can change that? --LordOfMaps (talk) 08:22, 22 April [3][4])
Wikibase
Hi all,
after having read this discussion (about machine-readability), and this one (about semantic wiki), this tagging mailing list thread (about tags organization, and especially this response), and after having found the interesting Taginfo/Taglists/Wiki project, what do we think about using Wikibase as a database for tags?
For example, the Taglists project is really useful, but the lack of a common database made the Wiki -> Taginfo -> Wiki workflow not so straightforward... --NonnEmilia (talk) 21:11, 4 September 2017 (UTC)
- I think it's a great idea to have a Wikibase instance for tags, to replace the current template-based content. Another possible use case for Wikibase would be content such as Template:Software that is intended to be machine-readable. Using this at the moment relies on hacks like my TTTBot (which is currently broken anyway, although that's pretty much my fault). A clean solution for this kind of semantic data, and replacing the need to manually synchronize wiki content with queries, would be amazing. While the Taginfo solution is alredy a big step forward, Wikibase seems like it could be a very good choice. --Tordanik 16:32, 7 September 2017 (UTC)
- I have linked your word "Wikibase" - I hope the right target. I did not not know about this project before. --aseerel4c26 (talk) 08:00, 14 April 2018 (UTC)
- I undid your edit to comment of other person. Link is Wikibase Mateusz Konieczny (talk) 04:56, 15 April 2018 (UTC)
- Support as long as we don't duplicate efforts with Wikidata. Non-mission critical semantic data should be offshored to Wikidata as much as possible. Pizzaiolo (talk) 10:32, 15 April 2018 (UTC)
- Support with Pizzaiolo's caveat; scope creep is to be avoided, as is duplicating the efforts another open community project. Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 10:44, 15 April 2018 (UTC)
- Note "However, due to complexity and dependencies, it requires some additional steps." and that Wikidata is quite hard to edit. Can you provide some examples how Wikibase would be useful, providing benefits that are larger than drawbacks? Mateusz Konieczny (talk) 15:52, 15 April 2018 (UTC)
- I strongly support using structured data to organize OSM tags, assuming this is a machine-readable tag metadata effort, and not a tag storage replacement. Wikibase can be hosted here, on this wiki, alongside the rest of content, but in separate namespaces (e.g. T for tag, and V for value). A few challenges/thoughts:
- Tags are strings (e.g. "name:en"), not integers (e.g. Q42). Wikibase needs to be customized to support that - to use a string as a primary key (and corresponding foreign keys), or have a "magical" property whose whose value must be set during creation, must be unique, and can never be changed. I do not know how difficult it would be to modify Wikibase for this (see [phabricator ticket)
- Storing "enum" values: For many tags, e.g. "religion", values are not arbitrary, but rather have a list of values. These values have similar requirements as tags above, but should exist in a separate namespace. NOTE: many of the values would simply be a redirect to Wikidata, e.g. each individual religion string should be mapped to it.
- Set up query service (similar to WDQS), allowing complex lookups into that data. Sophox is a good fit here, it can import this data relatively easily using existing tools.
- Modify main editing tools (iD, JOSM) to do lookups, fetch localized descriptions, and also get value suggestions and other recommended tags.
- P.S. I created a task for Wikibase to figure out the immutable keys.
- --Yurik (talk) 00:33, 16 April 2018 (UTC)
- update: Wikidata team suggested we use fake URLs to force uniqueness of IDs, e.g. we could store a URL back into key namespace -- as a sitelink, and wikibase engine won't allow multiple wikibase entries to share that same URL. We may need to customize UI, or possibly add some helper scripts to make it easier to manage. --Yurik (talk) 22:17, 22 May 2018 (UTC)
- Support – and not just for tags, either. There are a lot of efforts to parse content from this wiki, including calendar entries, user groups, and apps. Reliably extracting data from MediaWiki markup is notoriously tricky, though, so a machine-readable storage for such data would be very useful. On the wiki itself, this would also alleviate the need to manually sync infobox content across all translations of a page, which is currently a major cause of duplication and errors. --Tordanik 16:39, 25 April 2018 (UTC)
- @Tordanik It might make sense to use tabular datasets for this. Wikidata is a "facts" database, and it doesn't work very well for blobs of data. See here. --Yurik (talk) 17:05, 25 April 2018 (UTC)
- Most of my examples would involve moving content that's currently in infoboxes (e.g. Template:User group or Template:Software to Wikibase. This is very similar to what I'm seeing done with Wikidata. --Tordanik 22:12, 25 April 2018 (UTC)
- @Tordanik sure, that's a good usecase for Wikidata, not datasets. I wonder how difficult it would be to set up wikibase for multiple namespaces - because we wouldn't want to store tags and "free form" data in the same place, or it would quickly get very confusing. --Yurik (talk) 22:17, 25 April 2018 (UTC)
- note: Wikibase is not just meant to be used to query Wikidata. Other databases may be queried as well, including custom datastores (which could be hosted on this wiki in "files", or in or JSON pages (this wiki supports the JSON content model which can be used on any namespace). Newer versions of Mediawiki allow attaching any page with a custom CSS stylesheet as well. And with some security settings, you can also attach custom javascript (this is currently enabled only in "User:" namespace but they are read as executable scripts only when the user is connected to their own account on this wiki and where the Javascript is stored in his own user pages, and only provided that these pages are protected from editing by other people than the user himself. But we could have a javascript framework used on this wiki exposing a limited set of javascript and limited set of DOM API (for now this isolation framework is still not developed, so we still rely on wiki admins to install other CSS/Javascript in this wiki; and most addons used on Wikipedia and activable in their preferences are not usable on this wiki, as they still depend on core extensions not installed, notable Scribunto/Lua and Wikibase). — Verdy_p (talk) 15:51, 24 May 2018 (UTC)
Update: I have created a page outlining this proposal, and I have also gained access to an older version on the OSM Wiki, with which I will be experimenting. Please comment or update that page with details, e.g. how you would want to structure metadata, what properties we would need, etc. OpenStreetMap:Wikibase.
CC: @Aseerel4c26, @Mateusz Konieczny, @NonnEmilia, @Pigsonthewing, @Pizzaiolo, @Tordanik, @Verdy_p.
--Yurik (talk) 23:06, 16 August 2018 (UTC)
- There is now a demo site with the copy of the current wiki, and a Wikibase setup with a few examples. --Yurik (talk) 16:25, 20 August 2018 (UTC)
Thanks, looks great. I will comment on the page then... U30303020 (talk) 20:17, 20 August 2018 (UTC)
- Enabled in a read-only mode. Most frequent keys already imported, so Lua scripts can already be written to use that data. ReadWrite will be enabled shortly. See updated OpenStreetMap:Wikibase --Yurik (talk) 05:42, 18 September 2018 (UTC)
- Exciting stuff! One big thing I'm missing is a link to Wikidata. For instance, Item:Q104 should somehow link to w:d:Q787417, either with a sitelink or a dedicated property. Looking forward to playing around with this. Pizzaiolo (talk) 09:47, 18 September 2018 (UTC)
Use Listeria
Hi, Can we use listeria in wiki for generating lists ? @Yurik Is it possible Sophox help in creating list like Districts in Andhra Pradesh ? Something similar to [5]
- Using templates similar to
-
-
-- 04:18, 5 May 2018 (UTC) —Preceding unsigned comment added by Naveenpf (talk • contribs)
- @Naveenpf sure, it would make perfect sense to run Listeria from Sophox - the same technology, just a bigger dataset. Moreover, I wonder if the maintainer of Listeria could run it on this wiki too? Should be fairly trivial, as the wiki tech is the same as well :) --Yurik (talk) 04:25, 5 May 2018 (UTC)
<pre> and <code> should be LTR inside RTL context
Because the content of <pre> and <code> tags is almost always a piece of code or sth similar, then it need to be LTR and Left-Aligned.
Currently in Persian pages we need to explicitly add some css tags to adjust them.
for example, this is a Persian context which is RTL with a sample code: the same content. Here I used some css styles to adjust the display: not specific to this wiki. Note that the "pre" or "code" HTRML tags do not imply that the content is in English or Latin, they can as well contain plain Arabic text. But to change back to LTR in a RTL context, there's a better choice than long styles: Mediawiki predefines a class name (the effective styles are more complex than just direction and text-align styles, , there's also the behavior of margins, paddings, mirroring for graphics, directions for arrows, and there are rules about how these styles are inherited. Look at Template:Ar to see the class needed for correct inheritance.
- Does it mean that the correct way to include LTR in RTL context is to close the RTL block just before the LTR content and at the end of it start another RTL block? like this::
- Normally the example above is an inclusion within an unbroken sequence of Persian text, so you should not close ot and reopen it. Instead you should isolate the LTR inclusion, by effectively adding style or class to the "pre" block, where this switches to a non-Persian context which itself is isolable. Changing the direction and alignment is not enough, there should be a language tagging as well (this has en effect on the rendered font, as well as on semantic parsing for search engines, or for an automatic translator component used in browsers to make the proper guess): The Template:Fa (or Template:He, Template:Ar, Template:Ur, Template:Dv) does does both styles/class plus language tagging, and correctly uses the isolation mechanism; it also somrtimes tweaks a bit the fonts sizes rendered at typical resolution (because the default fize on the wiki is smaller than normal and only tuned for Latin, making Arabic difficult to read, so the font size should be inverted; some scripts also are tuned to use a wider line-height than the default 1.6 value, but HTML PRE blocks use a smaller line-height) ! — Verdy_p (talk) 13:21, 13 June 2018 (UTC)
- So I created a template for LTR inclusion: Template:Ltr.
-.
- What's your opinion? iriman (talk) 15:31, 14 June 2018 (UTC)
- That's the correct way to do that. Note that when you place a {{Fa}} template at top of a Persian page (named "Fa:*") you place it just after the {{Languages|Untranslated English title}} template, but you are not required to close it with a
</div>at end of page, because it is closed there implicitly by Mediawiki: this will ensloe the whole page in Persian. Then there you can use "{{Ltr}}Some block in English
</div>" in the middle of the page to enclose vertical English blocks, or {{Ltr|some text in English}} to enclose inline English text this avoids mirroring or reordering problems within the English extract embedded in an RTL paragraph.
- The only problem I see with "{{Ltr}}" is that it implies that what is inside is in English; you may use it for embedding other Latin-written languages but there's no optional "lang=*" parameter to override the "lang=en" which is generated. I think we could add it. — Verdy_p (talk) 04:16, 16 June 2018 (UTC)
What if we use {{En}} instead of {{Ltr}} and update the template of other LTR languages to work simillar to {{Ltr}}?
- You seem to have found why: {{En}} is already used for something else (adding an annotation on an non-English page after a link that the target is for now in English (and possibly not available in the language of the current page: this may be checked in order to replace the link target and then only remove the {{En}} inclusion). The {{En}} template (it has also several other aliases) is used since long on many pages of this wiki.
- Yes this may seem a bit insconsistant but not dramatic, and there's no emergency to fix that. In pagew with RTL language, using {{Ltr}} is almost always to include untranslatable technical elements or code in English, I think it will be exceptional if this ever refers to another LTR language and for now there's not ben any need for it (this may change later if there's a new goal to simplify some large maintenance, and in that case we can still create a cleanup task as there will be many edits to do before applying the change. For now simplicy is favored when it covers most practical cases (for the other exceptional cases, we can do them without using any template, or by setting an optional "lang=*" parameter to {{Ltr}} to override this default "lang=en" value). — Verdy_p (talk) 19:18, 25 June 2018 (UTC)
Yes, it's an extra effort that is not needed yet. thanks for your kind help. -- iriman (talk) 11:52, 27 June 2018 (UTC))
Map_Features not available - HTTP Error 500
Hi,
since a couple of days I am frequently facing issues to access the articel Map_Features. The issue occurs for the artikel in EN as well as for DE. Usually the error ERR_EMPTY_RESPONSE returns, today I got once HTTP error 500. I tried different devices PC and Android as well as different access points. In very rare cases the page is loaded sucessfully. Am I doing somenthing wrong or are others facing similar issues? --Meinf (talk) 15:56, 6 August 2018 (UTC)
- Well, the page seems fairly long and full of links. Could that be the cause for the problems? I could view both versions on my computer today but it took some time to load. Maybe the server was under heavy load when you requested the pages? U30303020 (talk) 22:15, 17 August 2018 (UTC)
269 categories for Belgium
This number doesn’t include categories such as category:Brussels that don’t use the template.
--Andrew (talk) 18:48, 4 September 2018 (UTC)
- Error? Seems ok to me. Tigerfell
(Let's talk) 20:38, 4 September 2018 )
Reworking
I expanded this article. It describes the general setup of the deletion process. My main source was my experience during clean up actions, so you may want to recheck that and assure that I got everything right. Thx. U30303020 (talk) 21:36, 17 September 2018 (UTC) | https://wiki.openstreetmap.org/wiki/Site_improvements | CC-MAIN-2018-39 | refinedweb | 3,305 | 58.11 |
Original title: StringBufferInputStream is deprecated, without suitable replacement
java.io.StringBufferInputStream
has been deprecated. However, I cannot find anything
to replace its full functionality.
Specifically, java.util.Properties.load() expects
an InputStream. I want it to read from a String.
StringBufferInputStream was previously the
perfect way to do this.
Unfortunately, the recommended replacement is
java.io.StringReader, which is subclassed from
Reader, and is NOT usable as an InputStream.
The ideal fix would be to make Reader
implement InputStream.
A lesser fix, would be to document in the
deprecation notes for StringBufferInputStream,
what to do if you really need an InputStream
derived from a String
InputStreamReader seems to go from InputStream to
Reader. Now you need something to go from
Reader to InputStream
(Review ID: 20734)
======================================================================
Posted Date : 2005-07-27 04:36:37.0
There is a suggested fix from xxxxx@xxxxx (see Comments).
Posted Date : 2005-11-28 17:59:30.0
A suggested fix from (company - Self , email - xxxxx@xxxxx -mlv.fr)
is attached to this report as filename 593432.txt (too large to paste here.)
Posted Date : 2005-11-28 17:59:30.0
======================================================================
A better solution would be to provide a load(Reader)/store(Writer) interface.
Posted Date : 2005-08-23 08:19:59.0
Contribution-Forum:
Posted Date : 2005-12-06 17:49:07.0
This is my problem EXACTLY. I have not found a
suitable replacement.
This particular problem also makes it difficult
to use the java.sql.PreparedStatement.setAsciiStream
method as it requires an InputStream.
ditto.
I cannot find a workaround.
This is particularly irksome, since I was forced
to use the Properties class by the deprecation
of getenv(). Come on, guys! If it ain't broke,
don't break it!!
Because there is no properties replacement method
that takes a reader it is very annoying. Either
consistantly eliminate inputStream and
outputStream EVERYWHERE replacing all methods that
use them with reader and writer equivelents, or
provide a way to switch BOTH BACK AND FORTH.
-G
I have the same problem...I can't go from Reader
to InputStream...
Does anyone have a workaround?
I have the problem of writing protocol handlers
that need to return an InputStream from getInputStream(),
but since StringBufferInputStream() is deprecated
with a suggestion to use StringReader, can't override
getInputStream to return StringReader. Please
provide a bridge.
Oh, I'm sorry: it's not a bug (of course), it's a
'request for enhancement' which is 3 1/2.
I really like the workaround
"leave the deprecated class in the code ..." (given in
4217782).
The API sais "This class does not properly convert
characters into bytes.". And I should continue to use it?
What does *that* mean?
This bug is 3 1/2 years old now. Impressive.
seems like there are exactly 12 people in the world needing
this... come on sun, make 12 people less unhappy.
** 13 ** people need a solution!
It's a lot more than 13!
Java Eng: Come on! We're comin up on 1.4 - it's time to
take a stand!
With that said, here's a freakishly bad hack just to avoid
the deprecation warning. It still suffers from the encoding
problem:
ByteArrayInputStream stream = new
ByteArrayInputStream(string.getBytes());
Properties props = new Properties();
props.load(stream);
stream.close();
do something about this. or maybe even give load method in
Properties to accept Reader.
Who wants a laugh? You could use only boolean properties,
and use boolean's getBoolean(String name) which does not
see if a string is true or false, but instead sees if a
system property is true or false. Get Integer does the
same thing, but with integers. Hey Sun, how about a
getString that gets the string value of a system property?
This is an absurd, this bug was introduced in Nov 24, 1997.
How the hell did sun manage to avoid fixing this. The
workaround is not good enough.
Please have some shame, and fix this. It's a core functionality
and its missing since 1997, let me repeat 1997.
I used the StringBufferInputStream with a charset for
special encoding.
with stringReader there goes no charset and all (500)
webpages were rendered unreadable.
I changed it back to StringBufferInputStream and it
worked fine.
i fear the version of the jdk where the class is droped
Does anyone have a work around for converting from Reader to InputStream
OK, now we have Tiger RC and no way of converting a StringReader into an InputStream is in sight. Do we really all have to write our own adapter class ReaderInputStream, that implements InputStream and whose constructor takes an Reader, that it reads from?
- Maybe i have to wait another seven years?
I too am astonished this did not get addressed in Tiger. I don't want a StringBufferInputStream, I just want to be able to use java.util.Properties.load( ) without funky workarounds. The ones I know:
Workarounds:
1. convert the String to its byte array, and pass that into a ByteArrayInputStream.
String s = ...;
Properties p = new Properties();
byte[] bArray = s.getBytes();
ByteArrayInputStream bais = new ByteArrayInputStream(bArray);
p.load(bais);
or
byte[] byteArray = myString.getBytes("ISO-8859-1"); // choose a charset
ByteArrayInputStream baos = new ByteArrayInputStream(byteArray);
2. When converting to a ByteArrayInputStream, this uses the platform's default character
encoding. This can potentially lose information. Another option is save to a
temporary file and read in from file. Be sure to enable delete on exit for the file so it doesn't
hang around.
3. put info in a permanent file, then read from that file with a FileInputStream. For example:
import java.io.FileInputStream;
import java.util.Properties;
public class PropertiesTest {
public static void main(String[] args) throws Exception {
// set up new properties object
// from file "myProperties.txt"
FileInputStream propFile = new FileInputStream(
"myProperties.txt");
// note this initializes the new properties object with the current set
Properties p = new Properties(System.getProperties());
p.load(propFile);
// set the system properties
System.setProperties(p);
// display new properties
System.getProperties().list(System.out);
}
}
try this
new InputStreamReader(new ByteArrayInputStream(Result.getBytes()),
thnx chengyukpong , this works fine
but are there any possible problems which could arise from using certain encoding/character table ?
this should not happen in my case, however, one never knows :)
in the comment of chengyukpong, you should use Result.getBytes("ISO-8859-1"), because property files are latin-1 encoded. | http://bugs.sun.com/bugdatabase/view_bug.do%3Fbug_id=4094886 | crawl-002 | refinedweb | 1,060 | 61.22 |
On Tue, 2004-07-06 at 19:45, Jumpei Aoki wrote: > Hello, > > I do a programming for hobby, > and I want to create a bookmark interchange software for my own study. > I think XBEL is a great format to use, and I wish to use this format, > but a few questions came along and I was wondering if you could help. > > 1) Is there are "namespaces" for these XBEL elements? > If so, is it ""? > If it does not exist, could I use the above as the namespace, > or do I have to leave the namespace out? There is no namespace. > 2) If I have enough skill, I would want to create a > freeware and distribute it over the net. > Is there are licenses for XBEL? In other words, > is there anything that I need to do if I use XBEL in my software? > I read but I could not find > any statements about licenses. The XBEL DTD is public domain. > 3) Am I free to extend XBEL? I don't think I would need to, but > if there is need, could I extend XBEL and add some other elements? Yes, you are free, and in fact XBEL already provides some handy slots for extension. -- Uche Ogbuji Fourthought, Inc. Perspective on XML: Steady steps spell success with Google - Use XML namespaces with care - Managing XML libraries - Commentary on "Objects. Encapsulation. XML?" - A survey of XML standards - | https://mail.python.org/pipermail/xml-sig/2004-July/010358.html | CC-MAIN-2017-43 | refinedweb | 235 | 83.05 |
Table of Content
Qt 5 product definition
This document summarizes the outcome of Qt5 product definition discussions at QtCS. This summary needs review and discussion, see the qt5-feedback mailing list.
Background
The Qt5 product definition sessions at QtCS covered and their naming conventions:
The naming QML import statement, C++ naming and library naming for different module types is shown in the table below:
Notes:
- Current documentation, web site etc. use the term “module” to refer to libraries such as QtGui and QtXmlPatterns. We’ll keep this terminology, so we don’t have to rename the existing materials
- Modules are not 1:1 the same as repositories. The users of Qt should not usually need to know that certain modules are developed together in the same repository.
- Modules usually have a C++ API and a QML API
- The QML import statement version number, the module version number in documentation, and the library’s so version number should be the same — unlike currently for example with QtWebKit
- Not all modules in a Qt release will have the same version number
- Experimental modules cannot be a part of Qt Essentials. They follow the naming conventions for add-on modules
Open questions
- Should we start from version 5.0 for all modules in Qt 5?5 is different from Qt4 (e.g. Qt Script). We should not move any modules from Qt Essentials of Qt 5.0 into Add-ons later during the Qt 5 series. Because of the Qt5 source compatibility target, we will not put the former Qt4. The C++ namespace is removed, and a complete API review should be done to make the API consistent with the rest of Qt Essentials.55 differs from Qt44:: | http://qt-project.org/groups/qt-contributors-summit-2011/wiki/Qt5ProductDefinition/revision/8575 | CC-MAIN-2013-48 | refinedweb | 285 | 53 |
Install OpenCV 3.0.0 in Windows 8 the easy way
276 14 73720
Uploaded on 11/26/2014
This is the first video in the OpenCV Moments series. You will learn how to install OpenCV 3 in Windows 8 without any effort.
This instructions are for those of you who are total beginners and don't need any special feature of the library, you just want to install it and start learning
Popular Videos 758
Submit Your Video
If you have some great dev videos to share, please fill out this form.
By anonymous 2017-09-20! :-)
TL;DR
To use OpenCV fully with Anaconda (and Spyder IDE), we need to:
cv2.pydto the Anaconda site-packages directory.
(Read on for the detail instructions...)
Prerequisite
Install Anaconda
Anaconda is essentially a nicely packaged Python IDE that is shipped with tons of useful packages, such as NumPy, Pandas, IPython Notebook, etc. It seems to be recommended everywhere in the scientific community. Check out Anaconda to get it installed. Youtub\Lib\site-packagesin my case) contains the Python packages that you may import. Our goal is to copy and paste the
cv2.pydfile to this directory (so that we can use the
import cv2in our Python codes.).
To do this, copy the
cv2.pydfile...
From this OpenCV directory (the beginning part might be slightly different on your machine):
To this Anaconda directory (the beginning part might be slightly different on your machine):
After performing this step we shall now be able to use
import cv2in Python code. BUT, we still need to do a little bit more work to get FFMPEG (video codec) to work (to enable us to do things like processing videos.)
Set Enviromental Variables.
Append
%OPENCV_DIR%\binto the User Variable
PATH.
For example, my
PATHuser variable looks like this...
Before:
After:
This is it we are done! FFMPEG is ready to be used!
Test to confirm
We need to test whether we can now do these in Anaconda (via Spyder IDE):
Test 1: Can we import OpenCV?
To confrim that Anaconda is now able to import the OpenCV-Python package (namely,
cv2), issue these in the IPython Console:
If the package
cv2is imported ok with no errors, and the
cv2version is printed out, then we are all good! Here is a snapshot:
import-cv2-ok-in-anaconda-python-2.png
Test 2: Can we Use the FFMPEG codec?
Place a sample
input_video.mp4video file in a directory. We want to test whether we can:
.mp4video file, and
.avior
.mp4etc.)
To do this we need to have a test python code, call it
test.py. Place it in the same directory as the sample
input_video.mp4file.
This is what
test.pymay look like (I've listed out both newer and older version codes here - do let us know which one works / not work for you!):
(Newer verison...)
(or the older version...)
This test is VERY IMPORTANT. If you'd like to process video files, you'd need to ensure that Anaconda / Spyder IDE can use the FFMPEG (video codec). It took me days to have got it working. But I hope it would take you much less time! :)
Note: one more very important tip when using the Anaconda Spyder IDE. Make sure you check the Current Working Directory (CWD)!!!
Conclusion
To use OpenCV fully with Anaconda (and Spyder IDE), we need to:
cv2.pydto the Anaconda site-packages directory.
Good luck!
Original Thread | https://dev-videos.com/videos/EcFtefHEEII/Install-OpenCV-300-in-Windows-8-the-easy-way | CC-MAIN-2018-13 | refinedweb | 574 | 75.61 |
10 July 2013 16:37 [Source: ICIS news]
Correction: In the ICIS story headlined "Crude futures extend gains on massive draw in US crude stocks" dated 10 July 2013, please read in the first paragraph ... much larger-than-expected draw ... instead of ... much larger-than-expected build. ... A corrected story follows.
LONDON (ICIS)--Crude futures extended gains on Wednesday after the US Energy Information Administration published its weekly stock report showing a much larger-than-expected draw in US crude oil stocks last week.?xml:namespace>
Before the report was published at 14:20 GMT, the front-month August NYMEX WTI contract was trading around $105.00/bbl and it gained 50 cents/bbl 20 minutes later, or 10 minutes after the report was published.
Similarly, the August ICE Brent contract was trading around $107.90/bbl, before the report was issued and it gained 35 cents/bbl to trade around $108.25/bbl, 10 minutes after the report was published.
Analysts’ predictions for this week’s US stock figures were that they would show a draw on crude stocks of about 3.10m bbl, a build on distillate of around 700,000 bbl and a build on gasoline of around 600 | http://www.icis.com/Articles/2013/07/10/9686547/corrected-crude-futures-extend-gains-on-massive-draw-in-us-crude-stocks.html | CC-MAIN-2015-11 | refinedweb | 202 | 64.81 |
Opened 9 years ago
Last modified 9 months ago
#4264 new defect
Default CSS should match default trac design
Description
The plugin is great - but the default design it ships with surely breaks the design of 99% of all trac installations. Wouldn't it be more suitable to ship it with a design matching the trac defaults (the current design could be shipped optionally in addition)? Not everybody is familiar enough with CSS to "port it back".
Besides that, I love the plugin. I installed it into a test environment - but because of the CSS issues I cannot roll it out to the production server (which would have been no problem if it had used the default trac style). Not that easy to figure out.
Attachments (3)
Change History (17)
comment:1 Changed 9 years ago by
comment:2 Changed 9 years ago by
Go see #4274 for a new CSS more suitable to the default trac theme and for the current design.
comment:3 Changed 9 years ago by
(In [5014]) MenusPlugin: - Added z-index. refs #4274 #4264
comment:4 Changed 9 years ago by
Thanx - fits much better with the standard trac layout now, though it does not look like the original one yet (I can attach what I have made up to now; maybe you want to take something from it).
Some idea: Splitting up the CSS into two files (essential.css and the personal style, i.e. trac.css and bluefish.css), together with two options for the trac.ini, would ease customization:
[tracmenu]
menu_stylesheet = trac  # trac|bluefish|external
#external_css = <URL>
The essential.css would always be included. The second CSS is then included depending on the setting given here: "trac" => the trac.css from the egg, "bluefish" => the bluefish.css from the egg. If set to "external", the URL specified by external_css is used. This way, when customizing the styles, there is no need to neutralize conflicting style definitions. Plus you can contribute alternative stylesheets easily. What do you think?
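To make the proposed selection logic concrete, here is a minimal sketch. Note this is only an illustration of the idea above: the option names (menu_stylesheet, external_css) follow the trac.ini example, but the helper function itself and the file names are assumptions, not the plugin's actual code.

```python
# Hypothetical sketch of the proposed stylesheet selection.
# In a real Trac plugin these values would come from trac.ini
# (e.g. via trac.config.Option) and be registered through
# add_stylesheet(); only the selection logic is shown here.
def select_stylesheets(menu_stylesheet="trac", external_css=None):
    """Return the list of stylesheets to include for the menu."""
    sheets = ["essential.css"]  # always included
    if menu_stylesheet in ("trac", "bluefish"):
        # one of the two stylesheets shipped in the egg
        sheets.append("%s.css" % menu_stylesheet)
    elif menu_stylesheet == "external" and external_css:
        # URL taken from the (uncommented) external_css option
        sheets.append(external_css)
    return sheets
```

The point is that exactly one "personal" stylesheet is ever loaded on top of essential.css, so a custom stylesheet never has to neutralize conflicting definitions from the shipped one.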
comment:5 Changed 9 years ago by
I just attached a more trac-like CSS. I'm not 100% satisfied with this yet, but maybe someone more familiar with CSS is faster than I am - so I thought I'd attach what I have already figured out... Still, there is a problem due to the missing "class=active" setting for menus defined by trac itself (see #4267), so you will still miss the activated menu item. But that has to be fixed somewhere in the Python code...
Changed 9 years ago by
Trac-Style CSS (to be used in addition to the "essential" settings)
comment:6 Changed 9 years ago by
(In [5021]) MenusPlugin: - More trac-alike look thanks to Izzy. refs #4290 #4264 #4274
comment:7 Changed 9 years ago by
Thanks for adapting! But I just found out there's (at least) one thing to be added:
.sf-menu li > a { border-left: none !important; }
This is due to the fact that the menu is shifted to float:left (instead of the original float:right), which is also the reason we needed to add the right border (which is already done in .sf-menu li). Without the above code, the border between two items appears doubled.
There are still some issues left, and I'm not sure whether they are to be solved in CSS or Python code: If you e.g. define a brand new item like this:
new_item.parent = top new_item.label = MyLabel new_item.hide_if_no_children = 1
Just to assign (move) some items below this, it is parsed into the mainnav like
<li><a>MyLabel</a></li>
i.e. an empty a tag. I don't know why, but at least in my installation it then misses the padding and right-hand border - unless I add
new_item.href= to above entries (it does not matter whether an URL is put here or not - the main thing seems to be the
a tag has an (href) attribute, whether it is empty (
href="") or not seems not to matter.
comment:8 Changed 9 years ago by
Besides: With your changes to my CSS, the "sub-indicator" is invisible for the not-current-selected menu item - unless the item is hovered. That is since the color of the image is too close to the background color of the item. In my original
trac.css I skipped part of yours, so the image was not displayed at all (but the ">>" instead) - so I didn't notice. Maybe the image (arrows-ffffff.png) has to be replaced?
comment:9 Changed 9 years ago by
Solution for comment:8 is in fact an updated PNG (I attached it), plus some changes to the CSS:
.sf-sub-indicator { background-position: 0 -69px !important; } a > .sf-sub-indicator { /* give all except IE6 the correct values */ top: .8em !important; background-position: 0 -69px !important; /* use translucent arrow for modern browsers*/ } /* apply hovers to modern browsers */ a:focus > .sf-sub-indicator, a:hover > .sf-sub-indicator, li:hover > a > .sf-sub-indicator, li.sfHover > a > .sf-sub-indicator { background-position: -10px -69px !important; /* arrow hovers for modern browsers*/ } /* point right for anchors in subs */ .sf-menu ul .sf-sub-indicator { background-position: -10px -33px !important; } .sf-menu ul a > .sf-sub-indicator { background-position: 0 -33px !important; } /* apply hovers to modern browsers */ .sf-menu ul a:focus > .sf-sub-indicator, .sf-menu ul a:hover > .sf-sub-indicator, .sf-menu ul li:hover > a > .sf-sub-indicator, .sf-menu ul li.sfHover > a > .sf-sub-indicator { background-position: -10px -33px !important; /* arrow hovers for modern browsers*/ } /* for the active ones, use a different arrow color */ li.active > a > .sf-sub-indicator { background-position: 0 -100px !important; } li.active a:hover > .sf-sub-indicator { background-position: -10px -100px !important; /* arrow hovers for modern browsers*/ }
Maybe you want to fine-tune a bit - but basically, this way it works again as expected: Lightgrey/White arrow for active/hover, and darkgrey/black for inactive/hover (yepp, this contrast is needed ;)
Changed 9 years ago by
comment:10 Changed 9 years ago by
So here comes the missing CSS for the right-pointing arrows:
/* Point right */ li.active ul li .sf-sub-indicator { background-position: 0 0 !important; } li.active ul li a:hover > .sf-sub-indicator, li.active ul li a:focus > .sf-sub-indicator, li.active ul li:hover > a > .sf-sub-indicator, li.active ul li.sfHover > a > .sf-sub-indicator { background-position: -10px 0 !important; }
But what I may have "messed up" is the special handling of MSIE. Here you may need to re-adjust - sorry, but here it is MS free zone ;)
If you want me to attach the complete resulting CSS file (so you don't have to mess around with all the patches), just let me know.
comment:11 Changed 9 years ago by
I just investigated the issue with HREF mentioned in comment:7, and I'm pretty sure this can be tracked down to a CSS issue. The reason for this to happen is: Most of the CSS references rely on pseudo-attributes as
:link and
:visited - which are only available if an URL is referenced. If it is not, those styles do not apply - so one must count that as WAD (Works As Designed - even if that's not WAI, As Intended).
This means: For self-defined menu items (aka tree_nodes) with no href defined we need to imply either an empty string or '#' (excluding
menu_orig items here of course). I already played with the code concerning this - but obviously it is exceeding my knowledge :( I only got as far as filtering out the original items, and select the self-defined without
href set:
def _get_menu(self, req, menu_name, nav_orig): ... menu_result = [] menu_orig_names = [] for item in menu_orig: menu_orig_names.append(item['name']) ... for option in sorted(...): ... if not 'href' in option and not 'href' in config_menu.get(name, []) and not name in menu_orig_names:
But what to update now? I tried
tree_node.update(config_menu.get(name, {'href':'#'})), but it had no effect. I tried setting
config_menu[name]['href'] = '' and
tree_node['href'] = '', also nothing...
Besides: setting
my_item.href =
in
trac.ini gets removed the next time
trac.ini is updated e.g. via the web interface (or
trac_admin when upgrading the environment), so this is not really a solution.
my_item.href = #
could work (not verified thoroughly) until we fixed that issue.
comment:12 Changed 9 years ago by
I've got a better idea for a solution, but can test it not before Monday (if you are faster, go ahead). The idea is close to my last comment. The last code line should go where you assign the
class=active based on whether the URL matches. Without an URL, there will be no match - so that line catches the ones, and applies not a "fake href", but a
class=noref. With that class assigned, we can fix the CSS where it should be fixed: In the stylesheet.
This way has several advantages over the "fake href": While the latter would cause the page to be reloaded when accidentally clicked on, this way nothing happens. And it does not even look like it is a link - which it in fact is not. So I guess that would be a clean way.
If you want me to do that, simply assign the ticket to me. I then will do it, and attach necessary patches here so you can verify, apply, and check them in.
Changed 9 years ago by
Patch for the latest CSS: Arrows and borders
comment:13 Changed 9 years ago by
Just added a patch against the latest
tracmenus.css (to be used with the also attached attachment:arrows-ffffff.png, which might need to be updated for the 8-bit indexed alpha png for MSIE). Please check and apply to the repository.
What it does:
- fixing the left border of the menu "boxes" (without this they are doubled)
- using black arrows for the white "tabs", incl. hover effect
- moving the right-arrow in submenus to a more convenient position
Of course I already tested it here in my environment ;)
Great idea, lot of us don't need a special css just a standard fittable with original trac theme. That's what i tried to do before having a flat bad looking list menu. | https://trac-hacks.org/ticket/4264 | CC-MAIN-2017-51 | refinedweb | 1,716 | 74.29 |
How can I update a IDocument object for an empty file? Everything I try throws an exception saying that a document can not be updated during a PSI transaction.
I've tried this
IProjectFile newFile = AddNewItemUtil.AddFile(folder, unitTest + ".cs"); IDocument doc = newFile.GetDocument(); doc.InsertText(0, "public class test {}");
And I've tried this
IProjectFile newFile = AddNewItemUtil.AddFile(folder, unitTest + ".cs"); IDocument doc = newFile.GetDocument(); pModule.GetPsiServices().Transactions.Execute("Write test class", () => { doc.InsertText(0, "public class test {}"); });
Please, can someone answer. I have googled, browse codeplex and github, searched these forums, but can not figure it out.
You can't modify the document during a PSI transaction - ReSharper is expecting the PSI (that is, the abstract syntax tree) to be modified, and it will resync the document once the PSI transaction is committed. So, you could add content via the PSI, using something like CSharpElementFactory, or you need to start a document transaction, using DocumentTransactionManager.CreateTransactionCookie(DefaultAction.Commit, "useful debugging id")
Thanks Matt,
I've had success adding things to an existing ICSharpFile because I can get a reference to it from ICSharpContextActionDataProvider.PsiFile, but when I add a new file to a project I'm stuck.
How do I get a reference to ICSharpFile from the IProjectFile reference above?
Once I have ICSharpFile I think I'll be able to do the rest on my own.
You can use the ToSourceFile() extension method on IProjectFile.
Okay, but ToSourceFile() returns IPsiSourceFile and ICSharpFile does not implement that interface.
Am I to cast from IPsiSourceFile to ICSharpFile?
I think I figured it out.
Is this the correct way to get the ICSharpFile reference?
Gah! Half an answer. Sorry. That will do the trick, but it's better to call one of the overloaded GetPsiFiles methods. These defer to the cached files, but will also make sure the cache is primed first. There are also a couple of helper extension methods such as GetPrimaryPsiFile that simplify the method call (primary is the main language of the file, e.g. C#. Secondary is the tree for a "code behind file", e.g. the generated C# of an aspx file. Dominant is either the primary file, or the language of the code behind file, if there isn't a primary). And then once you've got the IFile, you need to downcast that to ICSharpFile. | https://resharper-support.jetbrains.com/hc/en-us/community/posts/205991169-How-to-add-text-to-an-empty-file- | CC-MAIN-2020-16 | refinedweb | 393 | 58.28 |
Using read_csv in Pandas,
I have imported a huge data set with >500K rows, each row contains a taxonomic code with location and abundance values from a specific date and station. These values repeat for different stations over time. I cannot create a unique time stamp because time was not recorded, thus I only have the date.
My columns are : Cruise Name, Station number, Latitude, Longitude, Date(YY/MM/DD), Taxonomic Code, Abundance
I need to rearrange the data such that my columns will be the individual taxonomic codes (n>400) as the column name with abundance as values for those columns, and the rows will be occurrence with unique index consisting of location and date information. To further complicate this, I need to include zeros where there were no observations for the taxonomic codes for those particular samples
df['ID'] = df[['timestamp','cruise','station','lat','long','depth']].apply(lambda x: ','.join(map(str, x)), axis=1)
df3 = pd.DataFrame([df.ID,df.TaxonomicCode,df.Abundance]
ID oldtxc zoodns100
0 1977-02-13 00:00:00,MM7701,2,41.1833,-70.6667,... 101 114.95
1 1977-02-13 00:00:00,MM7701,2,41.1833,-70.6667,... 102 40118.18
species = df3['TaxonomicCode']
cruise=df3['ID']
taxa=np.unique(species)
locats = np.unique(cruise)
aa=pd.DataFrame(index=locats, columns=taxa)
aa=aa.fillna(0)
2 100 101 102 103 104 105 106 107 108 ... 4500 4504 4601 4604 4700 5000 5100 5101 5150 9114
1977-02-13 00:00:00,MM7701,2,41.1833,-70.6667,33.0 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
for d in range(len(df3)):
loc1 = df3.ID[d]
tax1 = df3.oldtxc[d]
locIndex = np.where(locats == loc1)[0][0]
taxIndex = np.where(taxa == tax1)[0][0]
aa[locIndex, taxIndex] = df3.zoodns100[d]
aa[row, column] = value
I can't understand your entire question, but I have a couple pointers that might help.
First, I don't think there's any reason for your statement:
aa=aa.fillna(0)
There's no benefit to preallocating all of those zeros, and it clutters your code.
I think instead it's more efficient for you to have something along the lines of:
aa=pd.DataFrame(index=locats, columns=taxa) for d in range(len(df3)) ... #build a Series (new_row) for Just one row ... aa = aa.append(new_row, ignore_index=True) #T/F depending on what you want
Also, you might want to reconsider your for loop. Pandas has an iterrows() function that you could use instead. You use it like this:
for row_index, row in df3.iterrows():
When it comes to concatenating, you may introduce new performance issues. There's a post here that discusses your options. But if you look, those are talking about millions, and yours are much less. So I think there's hope.
Along those lines, don't feel obligated to solve the entire problem in one iteration. That's another reason not to allocate everything in advance. If you have genuine performance issues, it might be possible to break it off in chunks. For example, every 1000 rows iterated, you could flush your current DataFrame to a .csv file, thereby releasing that memory. You might end up with 500 .csv files, but then a separate function would be able to read them all in. Assuming they are the only .csv files in the directory:
def concatinate_files(files_path): file_list= [] for file_ in os.listdir(files_path): if file_.endswith('.csv'): file_list.append(files_path + '/' + file_) combined_df = DataFrame() for file_name in file_list: df = pd.read_csv(file_name) combined_df = combined_df.append(df, ignore_index=False)
Hope that helps.
UPDATE 8/20 in response to your 'edit2'
Your problem in your most recent post is that if 'row' and 'column' are integers, than you are trying to use integer location indexing, but not calling the correct function (iloc). That causes columns to be appended. Try running this example code to see what I mean.
df = DataFrame(np.random.randn(4, 4)) df['1','2']=3 #not what you want print df df.iloc[1,2]=3 #what I think you mean print df
Again, this goes back to my original suggestion though. I don't think it's in your best interest to allocate the 419x27530 up front. I think some of your problems are from your mindset/insistence to try to fit things that way. Besides the preallocation, you mention that your data orientation is a problem, but I'm not clear on exactly how that is. It's perfectly valid to build your results as 27530x1, 27530x2 ... 27530x419 and then call DataFrame.Transpose (df.T) to get the 419x27530 orientation you want. | https://codedump.io/share/Kj1P3ctdbFmG/1/how-to-restructure-data-using-a-loop-without-crashing-computer-and-write-to-csv-in-python | CC-MAIN-2017-43 | refinedweb | 789 | 68.26 |
Hi again So I did give this interesting lxc-enter-namespace feature a try I found a few glitches that I thought might be of interest; I recall that I use a stock f20 box which probably is already behind as compared to the devel version root warhol ~/thierry # rpm -q libvirt libvirt-1.1.3.2-1.fc20.x86_64 So what I can see is that the exec’ed command seems to have a very limited PATH - if the feature is enabled at all: # virsh lxc-enter-namespace my-01 -- ls /etc/yum.repos.d/building.repo # virsh lxc-enter-namespace my-01 -- /usr/bin/ls /etc/yum.repos.d/building.repo /etc/yum.repos.d/building.repo So my comments are, - first that defining some even minimal PATH would help; - and second that in this first form, it would be great if virsh could write some error on stderr instead of being almost totally silent, it took me some time to figure that it kind of worked Does this mean we would lose any message sent on stderr ? — In the same conditions our own local tool would do this # lxcsu -ro my-01 -- ls /etc/yum.repos.d/building.repo /etc/yum.repos.d/building.repo Hope this helps — Thierry On 20 Jan 2014, at 19:03, Thierry Parmentelat <thierry parmentelat inria fr> wrote: > Oh, I had totally failed to spot that one.. > Thanks for the tip, I’ll give this a try :-) > > On 20 Jan 2014, at 18:59, Daniel P. Berrange <berrange redhat com>. >> >> Daniel >> -- >> |: -o- :| >> |: -o- :| >> |: -o- :| >> |: -o- :| > | https://www.redhat.com/archives/libvir-list/2014-January/msg00949.html | CC-MAIN-2015-14 | refinedweb | 263 | 55.58 |
The path element Vertical Line is used to draw a vertical line to a point in the specified coordinates from the current position.
It is represented by a class named VLineTo. This class belongs to the package javafx.scene.shape.
This class has a property of the double datatype namely −
Y − The y coordinate of the point to which a vertical is to be drawn from the current position.
To draw the path element vertical line, you need to pass a value to this property. This can be done either by passing it to the constructor of this class at the time of instantiation as follows −
LineTO line = new LineTo(x)
Or, by using its respective setter methods as follows −
setY(value);
To draw a vertical line to a specified point from the current position { } }
Create the path class object as follows −
//Creating a Path object Path path = new Path()
Create the MoveTo path element and set XY coordinates to the starting point of the line to the coordinates (100, 150). This can be done by using the methods setX() and setY() of the class MoveTo as shown below.
//Moving to the starting point MoveTo moveTo = new MoveTo(); moveTo.setX(100.0f); moveTo.setY(150.0f)
Create the path element vertical line by instantiating the class named VLineTo, which belongs to the package javafx.scene.shape as follows.
//Creating an object of the class VLineTo VLineTo vLineTo = new VLineTo();
Specify the coordinates of the point to which a vertical line is to be drawn from the current position. This can be done by setting the properties x and y using their respective setter methods as shown in the following code block.
//Setting the Properties of the vertical line element lineTo.setX(500.0f); lineTo.setY(150.0f);
Add the path elements MoveTo and VlineTo created in the previous steps to the observable list of the Path class as follows −
//Adding the path elements to Observable list of the Path class path.getElements().add(moveTo); path.getElements().add(VlineTo); draws a vertical line from the current point to a specified position using the class Path of JavaFX. Save this code in a file with the name − VLineToExample.java.
import javafx.application.Application; import javafx.scene.Group; import javafx.scene.Scene; import javafx.stage.Stage; import javafx.scene.shape.VLineTo; import javafx.scene.shape.MoveTo; import javafx.scene.shape.Path; public class VLineToExample extends Application { @Override public void start(Stage stage) { //Creating an object of the Path class Path path = new Path(); //Moving to the starting point MoveTo moveTo = new MoveTo(); moveTo.setX(100.0); moveTo.setY(150.0); //Instantiating the VLineTo class VLineTo vLineTo = new VLineTo(); //Setting the properties of the path element vertical line vLineTo.setY(10.0); //Adding the path elements to Observable list of the Path class path.getElements().add(moveTo); path.getElements().add(vLineTo); //Creating a Group object Group root = new Group(path); //Creating a scene object Scene scene = new Scene(root, 600, 300); //Setting title to the Stage stage.setTitle("Drawing a vertical line"); //Adding scene to the stage stage.setScene(scene); //Displaying the contents of the stage stage.show(); } public static void main(String args[]){ launch(args); } }
Compile and execute the saved java file from the command prompt using the following commands.
javac VLineToExample.java java VLineToExample
On executing, the above program generates a JavaFX window displaying a vertical line, which is drawn from the current position to the specified point, as shown below. | https://www.tutorialspoint.com/javafx/2dshapes_vlineto.htm | CC-MAIN-2019-47 | refinedweb | 579 | 54.22 |
LoPy4 lora nanogateway example not working
Good afternoon (o;
Found my old LoPy4 board and gave it a shot to test the TTN network with the lora nano gateway example from here:
So installed vscode and pymakr plugin and flashed the LoPy4 board to latest 1.20.3.b0..
as this is the only version where I get a WiFi connect message...
But it throws then an error message and the Lopy4 isn't pingable28,len:8 load:0x3fff0030,len:1992 load:0x40078000,len:12104 load:0x40080400,len:5032 entry 0x4008060c WiFi connected! Traceback (most recent call last): File "main.py", line 19, in <module> File "nanogateway.py", line 88, in start OSError: [Errno 202] EAI_FAIL Pycom MicroPython 1.20.3.b0 [v1.11-db33be7] on 2020-07-01; LoPy4 with ESP32
With 1.20.2.r4 I get the same EAI_FAIL error but after few minutes it responds to pings in second intervals:
entry 0x400a05bc WiFi connected! Traceback (most recent call last): File "main.py", line 19, in <module> File "nanogateway.py", line 88, in start OSError: [Errno 202] EAI_FAIL Pycom MicroPython 1.20.2.r4 [v1.11-ffb0e1c] on 2021-01-12; LoPy4 with ESP32
Is there any pybytes firmware or any nano gateway example that actually works?
Well if not I just throw away the boards...
cheers
richard
@davorin The version is slightly different, as it contains a few bug fixes. And there is no pybytes in it. Waiting for ntp.org sometimes happens.
@robert-hh
Hello Robert
Is this the same as the official 1.20.2.r4 I already flashed?
Hmm...just hangs there:.004] Starting LoRaWAN nano gateway with id: 30aea4fffe74c1b8 [ 4.568] WiFi connected to: TammiSiech [ 4.577] Syncing time with pool.ntp.org ...
Had to remove your pycom.pybytes_on_boot(False) as it isn't known...
AttributeError: 'module' object has no attribute 'pybytes_on_boot'
Went back to official 1.20.2.r4 and wiped FatFS....didn't hang so far...
but also don't see it connected to TTN at all....
thanks in advance
richard
@davorin Don't blame the board. It works well for me at least.
But as Lora gateway these are not well suited, simply because they support only a single channel.
Thank you for providing the core dump. It helps to trace software errors. May I ask you to try this firmware package:
I have the elf file for these, which allow me to locate the error.
Edit: This firmware uses LFS2.3, which is not backward compatible. So you have to reload all files. Anyhow, to have a clean start, it's better to do a full erase of the board before a new test.
@robert-hh said in LoPy4 lora nanogateway example not working:
import pycom
pycom.pybytes_on_boot(False)
Hello Robert
Ah this helped now...but getting a panic after it connects...hough don't see anyhting on the ttn console:
entry 0x400a05bc [ 3.082] Starting LoRaWAN nano gateway with id: 30aea4fffe74c1b8 [ 4.596] WiFi connected to: TammiSiech [ 4.605] Syncing time with pool.ntp.org ... [2932692.528] RTC NTP sync complete [2932693.481] Opening UDP socket to eu.thethings.network (52.169.76.255) port 1700... [2932693.481] Setting up the LoRa radio at 868.1 Mhz using SF7BW125 [2932693.481] LoRaWAN nano gateway online [2932693.481] You may now press ENTER to enter the REPL Pycom MicroPython 1.20.2.r4 [v1.11-ffb0e1c] on 2021-01-12; LoPy4 with ESP32 Type "help()" for more information. >>> Guru Meditation Error: Core 1 panic'ed (LoadProhibited). Exception was unhandled. Core 1 register dump: PC : 0x4010a8bb PS : 0x00060630 A0 : 0x800e0ff8 A1 : 0x3ffd91d0 A2 : 0x00000001 A3 : 0x00000001 A4 : 0x00000000 A5 : 0x00000001 A6 : 0x00000000 A7 : 0x00060023 A8 : 0x80109706 A9 : 0x3ffe73e0 A10 : 0x00000001 A11 : 0x00000001 A12 : 0x3f94f190 A13 : 0x3f94f190 A14 : 0x3ffe7570 A15 : 0x3ffe74d0 SAR : 0x00000018 EXCCAUSE: 0x0000001c EXCVADDR: 0x00000001 LBEG : 0x40094230 LEND : 0x4009425e LCOUNT : 0xffffffff ELF file SHA256: 0000000000000000000000000000000000000000000000000000000000000000 Backtrace: 0x4010a8bb:0x3ffd91d0 0x400e0ff5:0x3ffd91f0 0x400e1066:0x3ffd9210 0x400e1585:0x3ffd9240 0x400e17f7:0x3ffd9260 0x400e0199:0x3ffd92a0
Seems highly unreliable this LoPy board....
So into the trash can those boards and get a real LoRa gateway...
@davorin Use 1.20.2.r4 and disable pybytes.
import pycom
pycom.pybytes_on_boot(False)
The nanogateway software itself works. Take it from the github pages:, not from the doc pages.
I have it running here on a Lopy as a test, and as long as I do not reboot it manually, it runs. Once every few weeks I have to reboot it manually, when the local AP of my home network stalled and the LoPy does not reconnect by itself.
You have to be aware that this is a single channel gateway. if you want to use it, your node have to be tuned to that single channel. | https://forum.pycom.io/topic/6845/lopy4-lora-nanogateway-example-not-working | CC-MAIN-2022-33 | refinedweb | 787 | 77.94 |
If you use IBM Urbancode Deploy (as uDeploy is now called) at all, you will notice its simplicity. And rightly so as a deployment automation tool should not re-invent the way deployment automation is done. Urbancode Deploy simply organizes and collects deployment processes and steps. Once a deployment is successful, any good deployment automation solution should be able to repeat that deployment over and over again with no trouble.
On the other hand, the integrations with other tools, both as a consumer and a producer, provide the real value. And a valuable part of Urbancode Deploy’s integration capabilities is its REST API. There are 3 ways to get information about the API.
1) The documentation – this is unfortunately not your best bet. It is out of date and lacks a lot of necessary detail.
2) The Application WADL file – it does exists in the uDeploy server folder structure. But it is hard to decipher and also leaves out the json details.
3) Browser development tools – This is the method that has been the most successful for me. I use Chrome and its developer tools allow you to see the network traffic occuring as you navigate the uDeploy web pages. The uDeploy user interface heavily utilizes the REST API. But capturing the network traffic as you navigate, you can see the specific rest calls that are occuring and examine the json payload both in and out.
I sought to build something that exercises the uDeploy REST API. What I came up with is an example of how you can solve a common uDeploy requirement. The process of onboarding an application to uDeploy involves many mouse clicks and an understanding of how to navigate the user interface. As I said above, the concepts are easy once you understand them, but to onboard thousands of applications does not scale if you use the user interface. Plus you must train people on how to get their applications into uDeploy.
So I built a small website that captures some basic information about an application, its components, and its environments, and does the bulk of the setup work in uDeploy via the REST APIs. I wanted to be able to capture the information in a way that a development team would understand yet not need to know anything about uDeploy and its structure.
You can see a demo of this sample application at this link:.
I would welcome any feedback.
Thanks.
12 thoughts on “The uDeploy REST API”
Quick couple of questions if you have the time.
I am attempting to use the REST API, but having multiple issues.
My main issue is I am getting 2 errors.
First I am getting: The server committed a protocol violation. Section=ResponseStatusLine
and second I am getting a not authorized error.
To attempt to resolve the second, I changed my code to pass credentials and I still got the same error, so not sure what is going on here.
First one, not really sure what is causing the error yet.
Wondering if you have to pass credentials when calling the rest services, and if so, how did you accomplish this?
Also, I know its s big ask, but wondering if you would mind sharing the code you created to call the rest api of uDeploy. Working with samples that work is much better than going in blind.
Thanks up front for any help provided
First things first, let’s make using the REST API easier, that is if you are using Java. If you open up one of the uDeploy-* plugins (i.e. uDeploy-Environment), there is a jar file in the lib directory called uDeployRestClient.jar. This jar file encapsulates many of the common uDeploy REST calls and takes care of the process of calling the REST url with proper authentication.
You instantiate a new rest client like this:
UDRestClient udclient = new UDRestClient(url, clientUser, clientPassword);
You can then call any of its public methods like this one to create a new application as an example:
UUID appID = udclient.createApplication(Str_appName, Str_appDescription, Str_notificationSchem, bool_enforceCompleteSnapshots);
Unfortunately unless I have to use Java, I don’t. Just a preference. 😉
Here is working version of this in C# using the RestSharp library.
NOTE: I had used your advice and use Chrome tools to get the format of the JSON message and keys that were being passed. Once I had this it was a matter of getting the right format for serialization (note list of dictionary below).
Also have to use authentication and HTTPS override (not in code below) to get working.
You sent me down the right path, just needed to complete in a different language.
var client = new RestClient();
client.BaseUrl = “”;
client.Authenticator = new HttpBasicAuthenticator(“user”, “pass”);
var versionList = new Dictionary();
versionList.Add(“versionSelector”, “latestVersion”);
versionList.Add(“componentId”, “component_UUID”);
var propertiesList = new Dictionary();
var request = new RestRequest(Method.PUT);
request.RequestFormat = DataFormat.Json;
request.Resource = “rest/deploy/application/application_UUID/runProcess”;
request.AddBody(new
{
applicationId = “application_UUID”,
applicationProcessId = “applicationProcess_UUID”,
description = “”,
environmentId = “environment_UUID”,
onlyChanged = “false”,
properties = propertiesList,
scheduleCheckbox = “false”,
snapshotId = “”,
versions = new List<Dictionary> {versionList}
});
IRestResponse response = client.Execute(request);
Hi,
I have a question, Not sure if it is exactly related to the abovem but more to the integrartion with th Udeploy.
How ca i invoke an executable JAR file ebbedded with a udeploy process as an action?
Thank a lot i advance!
Hi,
I have a question not exactly about the topic but related – I would be happy if you could assist.
How can I create a Plugin integrated with the UDeploy that executes an executable JAR file?
Or – with out a plugin – add an action that does this operation?
Thanks in advance,
Freydie
Do you have the java source? You could easily call a java method within a plugin. Almost all of the downloadable plugins do this. If not, then you could call the executable jar file via java with something like this that I found via Google:
Runtime re = Runtime.getRuntime();
BufferedReader output;
try{
cmd = re.exec(“java -jar MyFile.jar” + argument);
output = new BufferedReader(new InputStreamReader(cmd.getInputStream()));
} catch (IOException ioe){
ioe.printStackTrace();
}
String resultOutput = output.readLine();
First, Thanks Darell for your quick response!
I do have the java code… I misunderstand how do I call a java method with the plugin?
What kind of a plugin should I create?
I sa there are various of plugins while there is a Shell option – Do you refer to that one?
Can oyu please refer me to an additional data regarding the way I create a plugin that invokes the java method\jar integrating with the UDeploy?
Thanks again for your kind help,
Freydie
I you look at any of the downloadable plugins, each plugin step is typically implemented using a Groovy script. You can essentially treat a Groovy script like java and put an import statement at the beginning, for example:
import javax.mail.*;
Then in the Groovy script you can create objects and use them like you would any other Java object, for example:
Session lSession = Session.getDefaultInstance(mprops,null);
MimeMessage msg = new MimeMessage(lSession);
You would include the jar file in the lib directory of the plugin and make sure that you add these jars to the classpath of the plugin via the element in the plugin.xml file, like this:
–
There are materials on building plugins if you are needing that knowledge.
HI Darrell,
Need help on uDeploy Restart Rest API. once the agent goes down, the rest API to restart the agent is not working. Is there any way to start the agent without manually connecting to host and doing it.
Thanks,
Raghu
Raghu,
What type of environment is the agent running on (Linux or Windows)?
could you share the source code of the GUI that makes the actual PUT request to the rest api please ?
Hi,
I’m trying to get the history of all past deployments for an environment. From the wadl file, it seemed like a valid query, which allows a GET at this path – /rest/deploy/applicationProcessRequest/table. I’m using HttpBuilder
custom library to make the REST calls. I captured the REST call made by my web browser for this purpose and the parameters passed in the query is following.
rowsPerPage=10
pageNumber=1
orderField=calendarEntry.scheduledDate
sortType=desc
filterFields=environment.id
filterValue_environment.id=dd53b03b-7cb4-4dfc-bc8b-a2b85e44d252
filterType_environment.id=eq
filterClass_environment.id=UUID
outputType=BASIC
outputType=LINKED
But I’m unable to get this working. Groovy is complaining about parameter values in filterFields=environment.id saying environment property does not exist (groovy.lang.MissingPropertyException). Also it is asking me to enclose complex
parameters in parenthesis, which I’m guessing are the parameter names with underscore above. I did that and that particular error seemed to go away. But I’m stuck with the filterFields parameter value. It looks like they are using
environment.id as parameter value to further reference the environment id inside the query even though I’m passing it in another parameter. Any help will be greatly appreciated. | https://drschrag.wordpress.com/2013/10/03/the-udeploy-rest-api/comment-page-1/ | CC-MAIN-2018-30 | refinedweb | 1,506 | 56.55 |
On Wed, Jul 7, 2010 at 8:48 PM, John Meacham <john at repetae.net> wrote: > Are you sure you are interpreting what 'die' should do properly? Your > code makes sense if die should decrement your life counter and continue > along, however if 'die' is meant to end your whole game, then there is > another implementation that does type check. > > John > You're absolutely right, I sen't the wrong code, here's the "correct" one and a little bit more explanation about what checkpoint does. The result of die makes sense for the checkPoint function since there are three cases for it: 1) The player died and has no remaining lifes. The game can't continue, I just return Noting in the die function and in checkpoint make the corresponding case. 2) The player died and has remaining lifes. The game can be retried with a life subtracted. I would need to tell checkpoint that I died and I want to retry, that's where I think the result is important, because of the next case. 3) The player didn't died, it finished the particular game and checkpoint m equals m. Here I would need to see if the result of the game was different from the result from die, and continue. instance GameMonad Game where extraLife = Game $ \l -> Just ((),l+1) getLives = Game $ \l -> Just (l,l) die = do n <- getLives if n <= 0 then Game $ \_ -> Nothing else Game $ \_ -> Just ("player died",n-1) checkPoint a = do n <- getLives case execGame a n of Nothing -> Game $ \_ -> Nothing Just c -> gameOn $ fst c where gameOn "player died" = a >>= \_ -> (checkPoint a) gameOn _ = a Obviously this fails to compile because I'm returning a String and it doesn't match with a either, but the idea of what I think I need to do is right there. Ivan Miljenovic told me to use error, and actually I though something like that. in STM retry combined with atomically does something similar as what I need checkpoint and die to do, and they use exceptions to accomplish it. 
I really think that's the solution I want, but then I have another question, when I 'throw' the exception in die and 'catch' it in checkpoint to call it again, is the number of lives gonna be lives - 1? Thanks for answering so quickly, Hector Guilarte Pd: Here's an example run of how my homework should work after is finished printLives :: ( GameMonad m , MonadIO m ) = > String -> m () printLives = do n <- getLives liftIO $ putStrLn $ s ++ " " ++ show n test1 :: ( GameMonad m , MonadIO m ) = > m () test1 = checkPoint $ do printLives " Vidas : " die liftIO $ putStrLn " Ganamos ! " lastChance :: GameMonad m = > m () lastChance = do n <- getLives if n == 1 then return () else die test2 :: ( GameMonad m , MonadIO m ) = > m String test2 = checkPoint $ do printLives " Inicio " n <- getLives if n == 1 then do liftIO $ putStrLn " Final " return " Victoria ! " else do checkPoint $ do printLives " Checkpoint anidado " lastChance extraLife printLives " Vida extra ! " die AND THE OUTPUT TO SOME CALLS ghci > runGameT test1 3 Vidas : 3 Vidas : 2 Vidas : 1 Nothing ghci > runGameT test2 3 Inicio 3 Checkpoint anidado 3 Checkpoint anidado 2 Checkpoint anidado 1 Vida extra ! 2 Inicio 1 Finish Just ( " Victoria ! " ,1) -- > John Meacham - ⑆repetae.net⑆john⑈ - > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > > -------------- next part -------------- An HTML attachment was scrubbed... URL: | http://www.haskell.org/pipermail/haskell-cafe/2010-July/080050.html | CC-MAIN-2014-15 | refinedweb | 560 | 64.64 |
#include <FXQuatf.h>
#include <FXQuatf.h>
Inheritance diagram for FX::FXQuatf:
[inline]
Construct.
Copy constructor.
Construct from components.
Construct from array of floats.
0.0f
Construct from axis and angle.
Construct from euler angles yaw (z), pitch (y), and roll (x).
Construct quaternion from two unit vectors.
Construct quaternion from three axes.
Construct quaternion from 3x3 matrix.
Adjust quaternion length.
Set quaternion from axis and angle.
Obtain axis and angle from quaternion.
Set quaternion from roll (x), pitch (y), yaw (z).
Set quaternion from yaw (z), pitch (y), roll (x).
Set quaternion from roll (x), yaw (z), pitch (y).
Set quaternion from pitch (y), roll (x),yaw (z).
Set quaternion from pitch (y), yaw (z), roll (x).
Set quaternion from yaw (z), roll (x), pitch (y).
Set quaternion from axes.
Get quaternion axes.
Obtain local x axis.
Obtain local y axis.
Obtain local z axis.
Exponentiate quaternion.
Take logarithm of quaternion.
Invert quaternion.
Invert unit quaternion.
Conjugate quaternion.
Construct quaternion from arc a->b on unit sphere.
Spherical lerp.
Multiply quaternions.
Rotation of a vector by a quaternion. | http://fox-toolkit.org/ref16/classFX_1_1FXQuatf.html | CC-MAIN-2017-22 | refinedweb | 178 | 73.44 |
Ok, well, some things are unclear to me about how cin works.
If we for example do this:
And if the input looks likeAnd if the input looks likeCode:#include <cstdlib> #include <iostream> using namespace std; int main() { int answer = 0; string s; cin >> answer; cin >> s; cout << s << endl; system("PAUSE"); return EXIT_SUCCESS; }
not_integer
Why wont cin >> s pick it up?
if the first cin >> answer fails, it pushes the first readed character back to input. so the next cin should pick it up.
but this isnt the case. why?
edit: i also came accros information that cin's bad and easily error prone. Should i avoid using it?
I want my programs to be safe ofcourse.
I come from a C background, thats why this is strange to me. | http://cboard.cprogramming.com/cplusplus-programming/127711-cplusplus-input-stream-cin.html | CC-MAIN-2015-27 | refinedweb | 131 | 82.44 |
+++
Fault Injection is the act of artificially changing the behavior of an existing executable code to simulate various faults. FI is very useful for validation of error handling code paths and for improving code coverage.
There are several types of fault injection. In runtime fault injection, the fault injecting test modifies the execution logic of the application under test (AUT), by injecting faults, triggered by specific runtime conditions. One could for example implement a FI test with the following semantic:
Throw an out-of-memory (OOM) exception, whenever the application calls method CreateWidget of class WidgetManager.
The FI terminology is as follows:
- AUT (application under test) – this is the tested application, in which faults are being injected;
- Fault Rule – The fault rule is a central construct in an FI test that determines WHEN faults get triggered and WHAT TYPES of faults get triggered. A fault rule consists of:
- a Method Signature, determining the method where the fault will be injected;
- a Fault Condition, determining when the specific fault should be triggered (e.g. every Nth call)
- a Fault — Determines the type of fault (e.g. throwing an exception, returning a specific value, etc.) that occurs when the fault condition is met.
- Fault Session – The fault session is a collection of fault rules that are applied to a given AUT.
TestApi provides a simple, but powerful runtime fault injection API for injecting faults in managed code. The API was originally designed and implemented by Bill Liu et al from the “Essential Business Server” team, and adapted to TestApi by Sam Terilli from our WPF XAML team. The following content provides a quick introduction to the API.
Sample AUT
Following is the code of a trivial AUT that we will use for demonstration purposes:
// // This is a sample application used for demonstration purposes. // using System; class MyApplication { static void Main(string[] args) { int a = 2; int b = 3; for (int i = 0; i < 10; i++) { Console.WriteLine("{0}) {1} + {2} = {3}", i, a, b, Sum(a, b)); } } private static int Sum(int a, int b) { return a + b; } }
The result of running this application is of course:
> MyApplication.exe
0) 2 + 3 = 5
1) 2 + 3 = 5
2) 2 + 3 = 5
3) 2 + 3 = 5
4) 2 + 3 = 5
5) 2 + 3 = 5
6) 2 + 3 = 5
7) 2 + 3 = 5
8) 2 + 3 = 5
9) 2 + 3 = 5
A Simple Fault Injection Test
Now, let’s try to inject a fault in the AUT. Let’s assume that we want to modify the return value of Sum. Here’s how we can accomplish that:
// // Simple fault injection test // using System; using System.Diagnostics; using Microsoft.Test.FaultInjection; public class FaultInjectionTest { public static void Main() { // // Set up a fault rule to return –1000 the second time Sum is called. // string method = "MyApplication.Sum(int,int)"; ICondition condition = BuiltInConditions.TriggerOnNthCall(2); IFault fault = BuiltInFaults.ReturnValueFault(-1000); FaultRule rule = new FaultRule(method, condition, fault); // // Establish a session, injecting the faults defined by the fault rule(s) // FaultSession session = new FaultSession(rule); ProcessStartInfo psi = session.GetProcessStartInfo(@".\MyApplication.exe"); // // Launch the target process and observe faults // Process p = Process.Start(psi); p.WaitForExit(); } }
Fairly straightforward. Upon running the test (which will itself spawn the AUT) we observe the following output:
> FaultInjectionTest.exe
0) 2 + 3 = 5
1) 2 + 3 = -1000
2) 2 + 3 = 5
3) 2 + 3 = 5
4) 2 + 3 = 5
5) 2 + 3 = 5
6) 2 + 3 = 5
7) 2 + 3 = 5
8) 2 + 3 = 5
9) 2 + 3 = 5
As intended, we injected a runtime fault in MyApplication.exe, which resulted in Sum returning –1000 the second time it got called.
Under The Covers
Under the covers, the managed code Fault Injection API uses the CLR profiling API to modify the prologue of the intercepted method at runtime in order to inject the desired fault. The injected prologue instructions essentially call a method in the library, which then dispatches the call to the specified fault.
Because faults are injected at runtime, the code of the original application is not modified in any way. There is a certain performance degradation, which depends on the number of the injected faults.
The fault injection API provides a variety of built-in conditions and faults (in the BuiltInConditions and BuiltInFaults classes respectively). Users of the API can also create custom conditions and faults (by implementing the ICondition and IFault interfaces respectively). The API also provides a set of classes that expose the ability to fine tune and monitor the injected faults.
In addition, the API provides a facility to set “global faults”, which is useful for server application testing, where one application typically consists of and recycles many different processes.
I have attached the sample above, which should get you up and running with fault injection.
I am trying to use the FaultInjection libraries to simulate data access failures.
Is this proper method signature to get the ExecuteNonQuery() method on DbCommand?
Dim method = "System.Data.Common.DbCommand.ExecuteNonQuery()"
Thanks.
Yep, this should work.
Let me know if you have trouble and we will help.
I had success when using the fault injection libraries on some sample code I had written, but now that I’m trying to use it against our intended target code it doesn’t seem to be working.
I want to use the ReturnValueFault on an internal static method which returns an int. The method is within an internal sealed class with the SecuritySafeCritical attribute set.
I’m creating the fault rule like this:
FaultRule rule = new FaultRule("static FullNamespace.ClassName.GetDeviceId(uint, IntPtr, IntPtr, IntPtr)", BuiltInConditions.TriggerOnEveryCall, BuiltInFaults.ReturnValueFault(1000));
I also tried prepending out to IntPtr as these are specified as out parameters in the method signature. I didn’t have success either way. Am I doing something wrong here?
My other thought is that this class and method are in a library/dll, and not within the exe code itself which is the process I’m starting a fault injection session for.
Any help is greatly appreciated. So far I’ve been very happy with the low cost of entry for using this API and the good documentation.
Hi David, can you try to replace "IntPtr" with "System.IntPtr" in the parameters list of the method? the data type of the parameter need to be fully qualified name too.
let me know if it still doesn’t work.
I tried System.IntPtr and had no luck here either. Is it an issue if the binary I am trying to inject into is signed?
no, the issue is not related to the sign. i noticed that you mention about ‘out parameter’. please make sure that ‘out’ is also part of the method signature. i.e your method signature should be:
static FullNamespace.ClassName.GetDeviceId(uint, out System.IntPtr,out System.IntPtr,out System.IntPtr)
i wrote a dummy app with the same method signature above, and the fault injection api works for me without problem. if this still doesn’t work for you. most likely reason here is that the method signature still not correct somehow, i.e it does not match what clr expecting in the runtime). a few thing you can duble check:
1. make sure the method got invoked.
2. make sure the method signature is correct or no typo (i often see people has typo in the method signature by mistakes).
3. in the %temp% folder, there should have some log, check if there are some error.
I have tried with the out parameter and also it did not work. I coulnd’t find any logs in %temp% folder. I realized today that the AUT is a native program that runs the code from the managed assemblies by hosting a CLR runtime. Is this my problem, is there a way to still use TestApi fault injection or will I have to use some native fault injection library instead?
ha. that’s the problem. when i saw the IntPtr parameter at the first place, i suspected that your aut might be using navtive code or p/invoke stuff which is out scope of this tool that targets to managed code only. in you case, you have to use native fault injection tool or windows API hooker stuff.
good luck.
Darn, this would have been the perfect solution. Thanks for your help!
I am just getting around to responding to Ivan’s response to my original comment.
Ivan, I am unable to get ExecuteNonQuery() to throw an exception for me. Here is all my code from within a (slightly verbose) unit test:
Dim method = "System.Data.Common.DbCommand.ExecuteNonQuery()"
‘Dim condition = BuiltInConditions.TriggerOnNthCall(2)
Dim condition = BuiltInConditions.TriggerOnEveryCall
Const ExceptionMessage As String = "This is a fault-injected exception."
Dim fault = BuiltInFaults.ThrowExceptionFault(New DataException(ExceptionMessage))
Dim faultRule = New FaultRule(method, condition, fault)
Dim s = New FaultSession(faultRule)
Dim rp = TestDataFactory.CreateYrtReinsurancePolicy(PolicyNumberFactory.CreateUnique())
Dim isExceptionThrown As Boolean
Try
rp.Save()
Catch ex As DataException
If ex.Message.Equals(ExceptionMessage) Then
isExceptionThrown = True
End If
End Try
Assert.IsTrue(isExceptionThrown)
Note that I gave up on the NthCall and went with EveryCall just to see if I could get it to work.
Any ideas?
if this is your complete version of the code, the fault injection won’t work, because i only see you set FaultSession, but not call Launch method. referring to the document/samples, in order to have fault injection works, you have to provide an executable which execute your target API (ExecuteNonQuery, in your case), and you set the fault injection rule and call launch method to launch your executable.
I have a private version to better support unit test, but probably take a while to release publicly. for now, this is the high level work flow (suppose you want to inject fault into function Foo()).
1. write an app to call Foo().
2. set fault injeciton rule and session, as you did above.
4. call Launch method to run your app. and when your app call Foo(), the fault will be injected.
let me know if you have any questions.
The problem with that workflow is that I would need to write a process wrapper for each and every unit test that I write. I think my solution file will get out of control quickly.
Given that, I think that I will wait for your private version.
Thanks for the clarification. I just didn’t infer that launching an additional process was necessary.
Hi,
When i try to add the FaultInjectionEngine.dll as a reference to my code in VisualC#, it is giving an error which says, "Please make sure that the file is accessible and that it is a valid assembly or COM object". Please help me on this.
In your VS project, you should add reference to TestApiCore.dll, not faultinjectionengine.dll.
Yah did the same, Still the same error crops in.
hmm, that’s strange. if you haven’t, can you try to create a brand new project and add testapicore.dll?
what’s the version of your VS, OS, 32bit or 64 bit?
also check my blog which has step by step instructions. you can see if you missed any important one.
Hi
I am planning to use FI API's to introduce faults in to my application hosted on IIS 7.0.
Can you please guide me on this.. ?
Thanks
Re: fault injection in IIS 7.0:
blogs.msdn.com/…/using-fault-injection-api-for-web-application-or-windows-service.aspx
Hi, I am new to this, I copied the sample solution given here and tried on visual studio 2010 . It simply isn't working .. I mean the fault injection test is not injecting the fault at all.
Please help.
Hi , when i try to run the same sample project on VS 2010 (after conversion) on winXP(SP3) the fault injection simply doesnt work. Please help.
I have tried the sample code on VS 2008 on a windows 7. It does work there though I follow the same steps.
how to create custom fault and custom condition?
please help
for vs2010 or .net 4.0 application, you need to enable one env variable. see my blog for steps:
blogs.msdn.com/…/if-it-still-not-working.aspx
the following code snipet to show how to custom the condition. the condition is to trigger fault if the current machine is management server (regkey settings):
[Serializable]
class MyCondition : ICondition
{
public bool Trigger(IRuntimeContext context)
{
RegistryKey masterKey = Registry.LocalMachine.CreateSubKey("SOFTWARE\Microsoft\FaultInjection");
string reg = "";
if (masterKey == null)
{
Console.WriteLine("Null Masterkey!");
}
else
{
reg = masterKey.GetValue("ServerType").ToString();
Console.WriteLine("MyKey = {0}", masterKey.GetValue("ServerType"));
}
masterKey.Close();
return reg.Equals("Management", StringComparison.OrdinalIgnoreCase);
}
}
Wait a minute. I thought you said
"It uses the .NET Profiler API to dynamically instrument the binaries as they are running so that the binaries are only altered in memory and not on the hard disk. "
But from the examples it seems you always have launch the application under test itself using the test apis/exe but can't actually target an already launched application and inject code into it, even though that's what you claimed.
@kannan
That's correct. Notice that the claim and the code don't contradict each other.
The API allows you to dynamically inject code in a binary, without having to do any post-build instrumentation steps. At the same time, in order to inject code you have to set up the execution environment appropriately to enable profiling for the process that gets created when the exe gets run. I.e., in the general case, you have to restart the process that you want to inject in.
I hope this clarifies things,
Ivo
Thank you for sharing this very nice post, please keep continue the sharing of this types of information. Here we are waiting for more
@Performance Injector:
Thanks for the nice words! 🙂
We are looking into extending the TestApi facilities further.
i hate this it didnt give me what i need
@none
What didn't work? Can I help?
what if Class A implement Interface B, with method C. should I injection to specify to inject "A.C" or "B.C"? | https://blogs.msdn.microsoft.com/ivo_manolov/2009/11/25/introduction-to-testapi-part-5-managed-code-fault-injection-apis/ | CC-MAIN-2017-09 | refinedweb | 2,368 | 65.73 |
log1pl (3) - Linux Man Pages
log1pl: logarithm of 1 plus argument
NAME
log1p, log1pf, log1pl - logarithm of 1 plus argument
SYNOPSIS
#include <math.h> double log1p(double x); float log1pf(float x); long double log1pl(long double x);Link with -lm.
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
log1p():
- _ISOC99_SOURCE || _POSIX_C_SOURCE
>= 200112L
|| _XOPEN_SOURCE >= 500
|| /* Since glibc 2.19: */ _DEFAULT_SOURCE
|| /* Glibc versions <= 2.19: */ _BSD_SOURCE || _SVID_SOURCE
log1pf(), log1pl():
- _ISOC99_SOURCE || _POSIX_C_SOURCE
>= 200112L
|| /* Since glibc 2.19: */ _DEFAULT_SOURCE
|| /* Glibc versions <= 2.19: */ _BSD_SOURCE || _SVID_SOURCE
DESCRIPTIONThese functions return a value equivalent to
log (1 + x)
The result is computed in a way that is accurate even if the value of x is near zero.
RETURN VALUEOn success, these functions return the natural logarithm of (1For an explanation of the terms used in this section, see attributes(7).
CONFORMING TOC99, POSIX.1-2001, POSIX.1-2008.
BUGSBefore version 2.22, the glibc implementation did not set errno to EDOM when a domain error occurred.
Before version 2.22, the glibc implementation did not set errno to ERANGE when a range error occurred.
SEE ALSOexp(3), expm1(3), log. | https://www.systutorials.com/docs/linux/man/3-log1pl/ | CC-MAIN-2020-16 | refinedweb | 185 | 59.4 |
Re: Date/Time Validation in Forms
Have a look at DateTimeField from YUI. In any case, you can always create a FormValidator so it can validate several components. BTW, you should really change the fields to be Integer and not String. Eyal Golan egola...@gmail.com Visit: LinkedIn:
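The "validate several components" suggestion can be made concrete. The cross-field check such a FormValidator would run mostly boils down to strict date parsing; the sketch below is self-contained plain Java — the Wicket wiring (AbstractFormValidator, getDependentFormComponents()) is left out, and the yyyy-MM-dd format and the class name DateCheck are assumptions for illustration only:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;

public class DateCheck {

    // Returns true only when the input is a real calendar date in yyyy-MM-dd form.
    public static boolean isValidDate(String input) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd");
        fmt.setLenient(false); // reject impossible dates like 2009-02-31 instead of rolling over
        try {
            fmt.parse(input);
            return true;
        } catch (ParseException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isValidDate("2009-08-19")); // true: a real date
        System.out.println(isValidDate("2009-02-31")); // false: February has no 31st
    }
}
```

Inside a real validator, validate(Form) would run a check like this on the components' converted input and call error(component) on failure.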
Re: Internal error parsing wicket:interface = :6
Thanks, Alex. So you are sure this is not a problem with our web application, but rather users who change such URLs manually or have bookmarked non-static ones? Tom Alex Objelean wrote: In wicket 1.3.x you would get just WicketRuntimeException... by default you will be redirected to default
RE: Newbie question: fileupload AJAX progressbar ?
You need

  @Override
  protected WebRequest newWebRequest(HttpServletRequest servletRequest) {
      return new UploadWebRequest(servletRequest);
  }

in your Application's class. I think you should definitely read the API doc (see UploadProgressBar)! Stefan
RE: Newbie question: fileupload AJAX progressbar ?
Hi, Add this in your application class SVRWebApplication:

  @Override
  protected WebRequest newWebRequest(HttpServletRequest servletRequest) {
      return new UploadWebRequest(servletRequest);
  }

Best regards! Jing -Original Message- From: Ashika Umanga Umagiliya
Re: Internal error parsing wicket:interface = :6
If they had bookmarked these URLs, they would get the SessionExpired page... I'm pretty sure they have tried to hack the URL. Alex Objelean Thomas Singer-4 wrote: Thanks, Alex. So you are sure this is no problem of our web application, but rather users who change such URLs manually or have
Re: Newbie question: fileupload AJAX progressbar ?
Thanks Stefan, That solved my problem. Since UploadProgreeBar is a component of 'wicket-extensions', i refered documentation at which is kind of updated (versoin 1.2) . I had to download documentation for 1.4 from the maven repository.
RE: Newbie question: fileupload AJAX progressbar ?
Hi Ashika, I pointed you to the documentation because I was not sure if using UploadWebRequest has any side effects. Does not seem so. Stefan -Original Message- From: Ashika Umanga Umagiliya [mailto:auma...@biggjapan.com] Sent: Wednesday, 19 August 2009 09:10 To:
Re: Ajax form submits in a Wicket portlet on Liferay
Having investigated the issue further, I noticed that the same code is working in some portlets and does not work in others. Therefore I created 2 portlets with exactly the same content except for the plugin name and the portlet-name: * in the first case the plugin name is wexample-portlet and
Question about threads inside wicket pages
Greetings all, Please refer to image at : I am going to invoke a web service using an Axis2 client, asynchronously. To get back the results, I am using a callback handler in Axis2. Within my page, I am going to create the Callback object (axisCallbackHandler in
Re: Session listener
I think overriding WebApplication#sessionDestroyed should do the trick. On Wed, Aug 19, 2009 at 12:26 PM, David Leangenwic...@leangen.net wrote: Hi! What's the best way to get notified of a session timeout event from within a Wicket App when I don't have access to the deployment descriptor?
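The hook being suggested works as a template method: Wicket's WebApplication calls sessionDestroyed with the expiring session's id, and your application subclass overrides it. The sketch below models that shape in self-contained Java; BaseApplication and fireSessionDestroyed are stand-ins invented here for demonstration, not Wicket API:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for org.apache.wicket.protocol.http.WebApplication.
class BaseApplication {
    // Called when the servlet container destroys a session; default is a no-op.
    protected void sessionDestroyed(String sessionId) {
    }

    // Stand-in for the framework notifying the application of the timeout.
    public void fireSessionDestroyed(String sessionId) {
        sessionDestroyed(sessionId);
    }
}

public class MyApplication extends BaseApplication {
    private final List<String> expired = new ArrayList<String>();

    @Override
    protected void sessionDestroyed(String sessionId) {
        expired.add(sessionId); // e.g. release per-session resources, audit the timeout
    }

    public List<String> getExpired() {
        return expired;
    }

    public static void main(String[] args) {
        MyApplication app = new MyApplication();
        app.fireSessionDestroyed("abc123");
        System.out.println(app.getExpired()); // [abc123]
    }
}
```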
SubmitLink in List/DataView in Form not working?
Hi All, I almost finished 2 Wicket sites successfully without any problems, and now I'm busy with a third one... but I'm lost... and can't find a solution in the list so far... I have a SubmitLink in a ListView, together with several input fields (see HTML below). In code I have: item.add(new
Re: Full integration Wicket - Blazeds. Is it possible?
Make your own filter and implement these methods this way: Ryan Gravener On Mon, May 4, 2009 at 1:01 PM, Fernando Wermusfernando.wer...@gmail.com wrote: Hi all, I am working with flex and wicket and I would like to get a full
Re: Ajax form submits in a Wicket portlet on Liferay
I renamed the plugin from server-utest-portlet to serverUtest-portlet to remove any other '-' characters. Still the issue is present. By the way, is it a Wicket limitation that the Liferay portlet plugin names should not contain '-' characters other than the one in the '-portlet' postfix? I am
swf into wicket page
Hello Friends, Can we embed an SWF in a Wicket component (Panel or Wicket page)? I have an SWF that shows a chart for dynamic data. I want to show my .swf file in a Wicket page or Wicket panel. If possible, please give me sample code. I need an urgent reply -- Thanks & regards, Gerald A
Re: swf into wicket page
Here's how I did it:

Java

  public class VideoPlayer extends Panel {
      public VideoPlayer(String id, final Bedrijf bedrijf, final Video video) {
          super(id);
          add(new WebComponent("player") {
              @Override
              protected void
Modal window and SSL
Hello all, We are having a problem when we work in an SSL environment. Whenever we open a modal popup window, IE shows its annoying message that the user is trying to open both secure and non-secure content. We changed the IE settings and the message is gone. But I want to understand what's
How to get Project Folder's Relative Path ?
I want to get the relative path of my current Wicket project folder. I have used the System.getProperty("user.dir") method, but it returns the path of Tomcat's bin directory. I want to access files of my project through a relative path, so that I can access these files on other systems as well. I have my
Re: SubmitLink in List/DataView in Form not working?
Shame on me: when I copied the raw HTML from the designers into my page, I forgot to remove a 'dummy' form object, so there was a nested form 2009/8/19 Martijn Lindhout mlindh...@jointeffort.nl: Hi All, I almost finished 2 Wicket sites succesfully without any problems, and now I'm busy
AjaxTabbedPanel with different forms on each tab
Hi, I have an AjaxTabbedPanel in a ModalWindow. There are two tabs, each containing a panel with a simple form. Looking at the html produced, the AjaxTabbedPanel moves the form element from the place in the panel to near the top of the tabbed panel html hierarchy and changes the id. All form
How do I get my DropDownChoice to refresh the list of choices with Ajax?
I have a page with a dropdown list on it with authors. If I enter a new book I want my fields to be filled based on the ISBN. This works just fine. But now I want those fields to be filled with data I collect from the internet, meaning that it isn't always already in my database, and therefor
Re: Question about threads inside wicket pages
I'm not 100% sure, but I'm pretty sure that it would depend on your servlet container more than Wicket. The threads for handling requests are spun up by the servlet container before Wicket is ever handed the request. And typically these threads are pooled - so it wouldn't be *destroyed*. But
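The reuse claim is easy to demonstrate outside any container: a fixed-size executor hands successive tasks to the same worker thread, much as a servlet container recycles request threads. The demo below is illustrative only, not how any particular container is implemented:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolDemo {

    // Runs two tasks on a single-worker pool and reports whether the same
    // thread served both, like a servlet container reusing request threads.
    public static boolean sameThreadServedBoth() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1); // one pooled worker

        // Each task reports which thread ran it, as a request handler could.
        Callable<String> whoRanMe = new Callable<String>() {
            public String call() {
                return Thread.currentThread().getName();
            }
        };

        String first = pool.submit(whoRanMe).get();
        String second = pool.submit(whoRanMe).get();
        pool.shutdown();

        // The pooled thread is reused, not destroyed between "requests".
        return first.equals(second);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sameThreadServedBoth()); // true
    }
}
```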
Re: Shall we have(embed) a swf in wicket component(Panel or in wicket page).
You can use the SWFObject that is in the wiki. You can also use LightWindow to show the SWFObject. Look for the SWFObject JavaScript to make it run, and the examples. Take care that this panel cannot be put in a ModalWindow; I tried and failed. Anyway, I pasted it below: public class SWFObject extends
Re: Modal window and SSL
it just means that you are on an https page but it links to some http resources, e.g. images or javascripts. So make sure that when you are on an https page, all your resources are brought in via relative URLs and do not start with http://. One specific example is that you can be on an https page but you
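Concretely, the IE warning disappears once resource references inherit the page's scheme instead of hard-coding http://; the paths and file names below are invented for illustration:

```html
<!-- hard-coded scheme: triggers IE's mixed-content warning on an https page -->
<script src="http://www.example.com/js/modal.js"></script>
<img src="http://www.example.com/img/close.gif"/>

<!-- relative URLs: inherit https from the page, no warning -->
<script src="/js/modal.js"></script>
<img src="/img/close.gif"/>
```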
Re: Newbie question: fileupload AJAX progressbar ?
I've just copied the example upload page from into my application with the same result: the upload works but the progress bar is NOT updated. So, anything else needed besides overriding 'newWebRequest()'? I'm using wicket,
Wicket and RSS
I've got a simple RSS feed I'm trying to display on a page. Is wicketstuff-rome dead? Is there a better way? I'm not a maven user (not even familiar with it) - so are there any jars I can download somewhere for wicketstuff-rome, if it's not dead? Appreciate the help, thanks.
how to debug Wicket Application
Hi, I am new to Wicket. I tried to modify the Wicket in Action examples and got this exception: WicketMessage: Error attaching this container for rendering: [MarkupContainer [Component id = content, page = wicket.in.action.chapter03.section_3_1.dbdiscounts.web.Index, path =
Re: how to debug Wicket Application
You can start by setting a breakpoint and running the application in debug mode. -Matej On Wed, Aug 19, 2009 at 7:39 PM, Oleg Ruchovetsoruchov...@gmail.com wrote: Hi , I am new in wicket. I try to modify wicket in action examples and got such exception: WicketMessage: Error attaching
Re: how to debug Wicket Application
OK, thank you for the quick response. I am using Eclipse and run my application on the Jetty web server. How can I configure the application to be in debug mode? I mean, what kind of configuration should I do? On Wed, Aug 19, 2009 at 8:41 PM, Matej Knopp matej.kn...@gmail.com wrote: You can start
Re: Modal window and SSL
I remember this issue being fixed already: did someone change ModalWindow in the meantime? On 19.08.2009 at 17:13, Igor Vaynberg wrote: it just means that you are on a https page but it links to some http resources, eg images or javascripts.
Re: how to debug Wicket Application
Use the Start.java that came with the Wicket quickstart to start Jetty. From Eclipse, just choose 'Debug As > Java Application' rather than 'Run As'. This will put it in debug mode and you can set breakpoints, etc. -- Jeremy Thomerson On Wed, Aug 19, 2009 at
Re: AjaxTabbedPanel with different forms on each tab
Tim, You have to place your modal window within a form. It's mandatory if you want to use Forms on a modal. You have to write code similar to the following:

  <form wicket:id="outerForm">
      <div wicket:id="modalWithTabs">modal placeholder</div>
  </form>

  Form outerForm = new Form("outerForm");
Re: Modal window and SSL
Thanks Igor and Peter. Peter, we did change the JS that builds the ModalWindow. I'll look into it. Thanks, Eyal Golan egola...@gmail.com Visit: LinkedIn: P Save a tree. Please don't print this e-mail unless it's really
Re: DropDown where (type of model property) != (type of choices) -igor On Wed, Aug 19, 2009 at 3:17 PM, Eirik Lygreeirik.ly...@gmail.com wrote: I'm looking for the best way to implement a drop-down, where the type of the model property is not the type of the lists: For
Re: Session listener
What's the best way to get notified of a session timeout event from within a Wicket App when I don't have access to the deployment descriptor? I think overriding WebApplication#sessionDestroyed should do the trick. Perfect! Thank you.
Re: Newbie question: fileupload AJAX progressbar ?
Hi Robin, Are you using Safari? I saw in the following threads that it's not working properly with Safari.
RE: javascript effects before an ajax call
How can you make the script sources that you reference in your web page show up after the component wicket script includes? (assuming you don't want to do this inline) I would like to override the Wicket.replaceOuterHtml function because I would like to do some jquery dom post processing.
Changing the Model Value does not change the Form Parameters in POST
Please help. Changing the Model Value does not change the Form Parameters in POST. How can I change the parameters that are passed in POST requests?
Re: Question about threads inside wicket pages
Hi Jeremy, I tried to call Page.info() inside the new thread created by axis2client, and it gives the message: EXCEPTION: you can only locate or create sessions in the context of a request cycle. I guess this means I must change the page data inside the same thread, right? Thanks in
[announce] Wicket 1.4.1
Apache Wicket 1.4.1 Released The Apache Wicket project is proud to announce the first maintenance release of Apache Wicket 1.4. Download Apache Wicket 1.4.1 - You can download the release here: Or use
Re: swf into wicket page
Hi Martijn, I have written what you gave me but it shows that Bedrijf cannot be resolved to a type, Video cannot be resolved to a type. create a class for Bedrijf and Video As I am new to Wicket I don't know how to write it; please tell me how to do it. Thanks & Regards, Gerald A On Wed, Aug 19, 2009 at
How to get Relative Path ?
Is there a way in wicket, to get relative path of current web project ? Thanks...
Re: [announce] Wicket 1.4.1
congratulations. On Thu, Aug 20, 2009 at 1:21 PM, Igor Vaynberg igor.vaynb...@gmail.comwrote: Apache Wicket 1.4.1 Released The Apache Wicket project is proud to announce the first maintenance release of Apache Wicket 1.4. Download Apache Wicket 1.4.1 | https://www.mail-archive.com/search?l=users%40wicket.apache.org&q=date:20090819&o=newest | CC-MAIN-2021-25 | refinedweb | 2,208 | 64.3 |
I am following this example but it is not that useful:
anyhow I am getting a run-time error that says: The application is not configured yet.
but I made an application object.
the error happens at node = new Node();
what am I missing?
this is my class:
using System;
using Urho.Audio;
using Urho;
using Urho.Resources;
using Urho.Gui;
using System.Diagnostics;
using System.Globalization;
namespace Brain_Entrainment
{
public class IsochronicTones : Urho.Application
{
/// Scene node for the sound component.
Node node;
/// Sound stream that we update.
BufferedSoundStream soundStream;
public double Frequency { get; set; }
public double Beat { get; set; }
public double Amplitude { get; set; }
public float Bufferlength { get; set; }
const int numBuffers = 3;
//protected IsochronicTones(ApplicationOptions options = null) : base(options) {}
public IsochronicTones(ApplicationOptions AppOption) : base(AppOption)
{
    Amplitude = 1;
    Frequency = 100;
    Beat = 0;
    Bufferlength = Int32.MaxValue;
}

public void play()
{
    Start();
}
protected override void OnUpdate(float timeStep)
{
UpdateSound();
base.OnUpdate(timeStep);
}

protected override void Start()
{
    base.Start();
    CreateSound();
}
void CreateSound()
{
// Sound source needs a node so that it is considered enabled
node = new Node();
SoundSource source = node.CreateComponent<SoundSource>();
soundStream = new BufferedSoundStream();
// Set format: 44100 Hz, sixteen bit, mono
soundStream.SetFormat(44100, true, false);
// Start playback. We don't have data in the stream yet, but the SoundSource will wait until there is data,
// as the stream is by default in the "don't stop at end" mode
source.Play(soundStream);
}
void UpdateSound()
{
// Try to keep 1/10 seconds of sound in the buffer, to avoid both dropouts and unnecessary latency
float targetLength = 1.0f / 10.0f;
float requiredLength = targetLength - Bufferlength;//soundStream.BufferLength;
float w = 0;
if (requiredLength < 0.0f)
return;
uint numSamples = (uint)(soundStream.Frequency * requiredLength);
if (numSamples == 0)
return;
// Allocate a new buffer and fill it with a simple two-oscillator algorithm. The sound is over-amplified
// (distorted), clamped to the 16-bit range, and finally lowpass-filtered according to the coefficient
var newData = new short[numSamples];
for (int i = 0; i < numSamples; ++i)
{
float newValue =0;
if (Beat == 0)
{
newValue = (float)(Amplitude * Math.Sin(Math.PI * Frequency * i / 44100D));
}
else
{
w = (float)(1D * Math.Sin(i * Math.PI * Beat / 44100D));
if (w < 0)
{
w = 0;
}
newValue = (float)(Amplitude * Math.Sin(Math.PI * Frequency * i / 44100D));
}
//accumulator = MathHelper.Lerp(accumulator, newValue, filter);
newData[i] = (short)newValue;
}
// Queue buffer to the stream for playback
soundStream.AddData(newData, 0, newData.Length);
}
}
}
Answers
anyone ?
?
The "The application is not configured yet." error usually means you are trying to initialize an Urho object before the app has started. For example, it may happen if you initialize a field using the "inline" syntax. I don't see any reason why this exception should be thrown in your case, so if you could share the whole project, I'd try to reproduce it. Or at least could you please paste the stack trace.
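The inline-initialization pitfall described in this answer can be sketched like this (a hedged illustration assuming UrhoSharp's Application and Node types; the class name mirrors the one in the question):

```csharp
using Urho;

public class IsochronicTones : Urho.Application
{
    // BAD: a field initializer runs when the object is constructed,
    // before the engine is configured, so this line would throw
    // "The application is not configured yet."
    // Node node = new Node();

    Node node;

    public IsochronicTones(ApplicationOptions options) : base(options) { }

    protected override void Start()
    {
        base.Start();
        node = new Node(); // OK: the engine is initialized by the time Start() runs
    }
}
```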
here is the full project: | https://forums.xamarin.com/discussion/comment/288939 | CC-MAIN-2020-29 | refinedweb | 469 | 51.04 |
Secrets of Maintainable Codebases
So today, I’d like to take a different tack in talking about maintainable code. Rather than discuss the code per se, I want to discuss the codebase as a whole. What are the secrets to maintainable codebases? What properties do they have, and what can you do to create these properties?
In my travels as a consultant, I see a so many codebases that it sometimes seems I’m watching a flip book show of code. On top of that, I frequently find myself explaining concepts like the cost of code ownership, and regarding code as, for lack of a better term, inventory. From the perspective of those paying the bills, maintainable code doesn’t mean “code developers like to work with” but rather “code that minimizes spend for future changes.”
Yes, that money includes developer labor. But it also includes concerns like deployment effort, defect cycle time, universality of skills required, and plenty more. Maintainable codebases mean easy, fast, risk-free, and cheap change. Here are some characteristics in the field that I use when assessing this property. Some of them may seem a bit off the beaten path.
A High-Value Test Suite
You had to see this coming, so I’ll get it out of the way right off the bat. You want unit tests. That notion is up there with death and taxes among life’s inevitabilities. And yet, so often we miss the mark with interim valuations.
People tend to look for presence of unit tests or the number of unit tests at first, and then mature to look for unit test coverage. I don’t, personally, find any of these metrics highly predictive of maintainable codebases. (Though the absence of any automated testing generally correlates with low maintainability).
Surprising? Not if you consider how easily people can spray low-value unit tests at an existing codebase. Want coverage? Write a bunch of tests that invoke that gigantic Singleton at the heart of your application, swallow all exceptions, and don’t assert anything. You can probably get your coverage up near 30% in just a few hours, while doing absolutely nothing to improve your maintainability.
I look for two basic things when doing a snap-evaluation of a test suite: if I delete a line of production code, does a test go red, and do the unit tests have high code churn? This indicates a high-value suite via Goldilocks triangulation. I know that any line in the codebase was created according to a plan and I know that the tests themselves are not unduly adding to the maintenance burden.
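To make the "delete a line of production code, does a test go red" criterion concrete, here is a sketch in xUnit-style C# (the AuthService type is a hypothetical stand-in, not from any particular codebase):

```csharp
// Low-value test: executes code, swallows exceptions, asserts nothing.
// It stays green no matter what you delete from the production code.
[Fact]
public void Login_Exercises_Code()
{
    try { new AuthService().Login("user", "secret"); } catch { }
}

// Higher-value test: deleting or breaking the production line that
// validates the password turns this red, because the outcome is asserted.
[Fact]
public void Login_WithWrongPassword_Fails()
{
    var result = new AuthService().Login("user", "wrong");
    Assert.False(result.Succeeded);
}
```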
High Cohesion, Low Coupling
Here, I offer a broad concern, but that’s because I look for it thematically, in all facets of the codebase. The easiest way that I can express these concepts in a narrative sense is as follows.
- High Cohesion: does stuff that needs to change together occur together?
- Low Coupling: do you avoid making otherwise independent concerns dependent?
This represents a surprisingly holistic concern when talking about codebases. Developers will want to have a sense for these concepts when creating classes or even methods. But it rolls up from there as well. When eyeing more strategic application concerns, the team should consider cohesion and coupling for namespaces, projects, and even disparate applications as well. Does your codebase take on external service dependencies cavalierly or sparingly?
Low cohesion tends to drive risk and reputation related problems. If I use my couch to sleep, and I also put wheels on it and use it to get to work, my life gets weird in a hurry, and I offer explanations to others that make no sense. This happens because my couch is doing two disparate things. “Sorry I was late to the office — one of the legs broke off of my couch.” In code that becomes, “sorry you couldn’t log in for a while — we messed up the CSS for a maintenance screen.”
High coupling tends to create business capability nightmares. “Oh, yeah, I can’t move out of my apartment ever. It turns out that I cemented my couch to my refrigerator and now the whole thing won’t fit through the door.” Fusing weird things together in your code will make “no way” the answer to “can we add feature X” way too often.
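In code terms, coupling often looks like a class constructing its own dependencies, cementing otherwise independent concerns together. A hypothetical C# sketch (the types here are illustrative only):

```csharp
// Tightly coupled: the report logic is cemented to one storage technology.
public class ReportService
{
    public Report Build(int id)
    {
        var db = new SqlConnection("...");  // can't swap or test storage independently
        // ... query and build the report ...
        return new Report();
    }
}

// Loosely coupled: the dependency is an abstraction supplied from outside.
public class DecoupledReportService
{
    private readonly IReportStore store;
    public DecoupledReportService(IReportStore store) { this.store = store; }

    public Report Build(int id) => store.Load(id);
}
```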
Low Time to Comprehension
The last secret that I’ll mention covers more ground than it may seem at first. Frequently, I’ll see advice on maintainable code that includes things like, “indent consistently” or, perhaps more profoundly, “take the time to think of good names.” Yes, and yes. Definitely both of these pieces of advice will serve you well.
But the broader concern here, particularly at the codebase level, has to do with the time it takes to get from zero to comprehension. Generic pieces of advice about code readability address the wide audience of developers and assume a relatively fluid influx and efflux of team members. But the maintainability of a particular codebase, maintained by a particular team at a particular point in time, will hinge heavily on how quickly that team understands the code.
Obviously, a critical factor here is the development labor effort. If I get a defect in my queue and trace it to some part of the code, then I’ll have a fix more quickly when I understand the code more quickly. But low time to comprehend extends way beyond this simple concern. Absent barriers to comprehension, knowledge transfer and communication become simpler. The team benefits via more stability in general maintenance and development and via substantially increased likelihood of collaboration.
To put it another way, you’ll spend a lot less money on a codebase where no one says, “oh, that’s in Bob’s code… you’re going to have to take that up with Bob.” Even assuming Bob can fix his own, inscrutable stuff quickly, the cost of ownership of that code includes the caveat, “and it gets a lot more expensive if Bob ever takes a vacation.”
You Maintain Codebases, Not Code
It may seem, to some extent, as though I’m quibbling. But I really don’t think that’s the case at all. Nobody maintains a method or six files of code — they maintain applications.
Certainly applications consist of code, and certainly that code being maintainable helps the application’s maintenance footprint. And you should strive to keep any bit of code that you write clean and maintainable.
But don’t lose sight of the real goal here. Understand that you need to think about the application and that you need to think about the cost of owning that think in a whole variety of ways. When you start thinking that way, you’ll find yourself in on the secrets of maintainable codebases. | https://daedtech.com/secrets-maintainable-codebases/ | CC-MAIN-2022-40 | refinedweb | 1,139 | 62.88 |
#include <TreeIntVectSet.H>
For explanations of these functions please look at IntVectSet class when the documentation doesn't appear here.
Further details of how non-recursive TreeNode design works:
(for a 2D tree)
(m_tree) + -- 0
(a)+ - 0 1 1 + 1 <------you are here
+ - + - 0 0 1 0 1 0 0
for the node indicated, the 'index' vector would contain
index=[ 0 1 3 ...............] parents=[&m_tree &a ..................]
or directly referred to as m_tree.nodes[1].nodes[3]
the tree indicates a covering of an index space in either 1 or 0. 1 is stored in the tree by pointing at the static 'full' data member, 0 is stored as a 0.
every 'nodes' member of a TreeNode object can be either
0, &full, or a pointer.
The interpretation of the tree depends on what m_spanBox is. nodes[i] indicates whether the i'th quadrant of the parent Box is full, empty, or needs to be parsed deeper.
References clearTree(), and m_tree.
trade internals of two objects
or
and
and
and
not
not
not
returns true if
Primary sorting criterion: numPts(). Secondary sorting criterion: lexigraphical ordering of the IntVects, taken in the order traversed by TreeIntVectSetIterator. In a total tie, returns false.
Returns Vector<Box> representation of this IntVectSet.
Returns Vector<Box> representation of this IntVectSet.
Referenced by operator<<().
Returns Vector<Box> representation of this IntVectSet.
Returns Vector<Box> representation of this IntVectSet.
somewhat expensive, but shouldn't be ;-)
a proper faster chop function that does it all in-place and saves the giant memory cost.
expensive
fast if iref is power of 2
fast if iref is power of 2
fast if iref is power of 2
fast if iref is power of 2
slow operation
Referenced by ~TreeIntVectSet().
Referenced by TreeIntVectSet(), and ~TreeIntVectSet().
Referenced by nextNode(). | http://davis.lbl.gov/Manuals/CHOMBO-SVN/classTreeIntVectSet.html | CC-MAIN-2018-43 | refinedweb | 296 | 57.98 |
ServiceStack is an independent, self-sufficient, light-weight framework built on top of ASP.NET for building robust end-to-end web applications.
In the previous edition of the DNC .NET Magazine (Issue 15, Nov-Dec 2014), we introduced ServiceStack and saw how easily one can build web apps and web APIs using this great framework. The same article can also be accessed online over here.
If you haven’t read the previous article yet, I would encourage you to read it and check the sample code before continuing further. Code sample of the previous article contains a Student Reports application that performs basic operations on a database and shows data on Razor views. In this article, we will continue adding the following features to the same sample:
Security is an integral part of any enterprise application. Every framework that allows us to create great apps should also provide ways to secure the app for legitimate users. ServiceStack comes with an easy and effective solution for applying security on its components.
In the period of last 2 years or so, OAuth has gotten a lot of attention. We see a number of popular public sites allowing users to authenticate using their existing accounts on Google, Facebook, Twitter or a similar site. ServiceStack offers this to us for free of cost; with minimum setup required to get it up and running.
To enable authentication and authorization in a ServiceStack app, we don’t need to install a new NuGet package to the app. The feature is included in core package of the framework. The framework includes a number of auth providers; following are most commonly used ones:
We can write our own custom authentication provider by either implementing ServiceStack.Auth.IAuthProvider interface or by extending ServiceStack.Auth.AuthProvider class. It is also possible to write our own OAuthProvider by extending the class ServiceStack.Auth.OAuthProvider.
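As a sketch (not code from this article), a custom credentials-based provider only needs to extend CredentialsAuthProvider and override TryAuthenticate; the user-store lookup below is a hypothetical placeholder:

```csharp
public class MyCredentialsAuthProvider : CredentialsAuthProvider
{
    public override bool TryAuthenticate(IServiceBase authService,
                                         string userName, string password)
    {
        // Replace with a real user-store lookup; this check is illustrative only.
        return MyUserStore.IsValid(userName, password);
    }
}
```

Such a provider would then be registered in the AuthFeature's provider array, in place of (or alongside) the built-in CredentialsAuthProvider shown below.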
In the sample application, we will add credentials based authentication and Twitter authentication. Let’s start with credentials authentication.
To enable credentials authentication and to provide users to register, we need to add the following statements to Configure method in AppHost.cs:
Plugins.Add(new AuthFeature(() => new AuthUserSession(), new IAuthProvider[]{
new CredentialsAuthProvider()
}));
Plugins.Add(new RegistrationFeature());
The CredentialsAuthProvider needs a repository to store/retrieve user information. The framework has a set of built-in repositories that work with some of the most widely used data storage mechanisms. Following are the repositories available:
These repositories serve most of the common cases, and you can write your own repository by implementing ServiceStack.Auth.IUserAuthRepository. In the sample application, since we are already using SQL Server for application data; let us use the same database for storing user data. So OrmLiteAuthRepository is our obvious choice.
The OrmLiteAuthRepository class is defined in ServiceStack.Server assembly. Let’s add it using NuGet.
Install-package ServiceStack.Server
The OrmLiteAuthRepository needs an IDbConnectionFactory object to perform its action. All we need to do to use OrmLiteAuthRepository is create an object with an IDbConnectionFactory and register it with the IoC container of ServiceStack, Funq. Following are the statements to be added to Configure method to perform this task:
var repository = new OrmLiteAuthRepository(ormLiteConnectionFactory);
container.Register(repository);
The OrmLiteAuthRepository instance needs some of tables to work with. As we don’t have the tables already created in our DB, we can ask the repository to create them for us. For this, the repository has a method called DropAndReCreateTables. As the name suggests, it would drop and re-create all the tables. So we need to add a condition to check if one of the tables is already created. This might not be the best possible check, but works for the demo.
using (var db = ormLiteConnectionFactory.Open())
{
if(!db.TableExists("UserAuth")){
repository.DropAndReCreateTables();
}
}
Exploring Authentication Endpoints
Now if you run the application and open the metadata page of ServiceStack, you should be able to see some new endpoints automatically added for us.
Figure 1: New Endpoints
Let’s register a user using the endpoint. Open Fiddler or, Chrome’s REST Client plugin or any HTTP debugger of your choice. I am use Fiddler to play with the APIs I build. We will send a POST request to the Register endpoint listed in Figure-2.
To know how to use this endpoint, click the JSON link (You can click any of the corresponding links; we will use the JSON endpoint in our app). It shows the options and format of the data to be posted to the endpoint.
Figure 2: Data format and options
In your HTTP debugging tool, compose a POST request following the structure in figure 3. Figure 4 shows how to do it in Fiddler.
Figure 3: POST Request
After executing this request, you would get the following response indicating successful registration from the server:
Figure 4: Response in Fiddler
If you check the UserAuth table now, it has a row with the data you just entered:
Figure 5: Newly added user in UserAuth table
If you set autoLogin to true in the JSON data sent with the request, you would also receive a session ID along with the response. I intentionally didn't do it so that I can show how to login using the auth API.
To login using the API, we need to send a POST request to /auth/credentials endpoint (as we are using credentials provider). You can check the request specification in the documentation. Following is the request sent to login API from Fiddler:
Figure 6: POST Request for login
If your credentials are correct, you will get a success response from server containing session ID.
Figure 7: Successful response with SessionID
Similarly for logout, we need to send a request to the /auth/logout endpoint to get the user logged out of the app.
Applying Authentication on a page and Creating Login Form
Let’s impose authentication rules on the Marks API. It can be done by simply adding Authenticate attribute to the MarksRequestDto class.
[Authenticate]
public class MarksRequestDto
{
//Properties of the class
}
Now if you try to load the marks page of any student, you would be redirected to the login page. As we don’t have a view for login yet, the browser will display an error. Let’s add a simple login page to the application. We need a DTO class and a service to respond to the request.
public class LoginService : Service
{
public object Get(LoginDto request)
{
var session = this.GetSession();
if (session.IsAuthenticated)
{
var redirectionUrl = (request.Redirect == string.Empty || request.Redirect == null) ? "students" : request.Redirect;
return this.Redirect(redirectionUrl);
}
return request;
}
}
[Route("/login", Verbs="GET")]
public class LoginDto
{
public string Redirect { get; set; }
}
Add a view and name it LoginDto.cshtml. Add the following mark-up to the page:
@using StudentReports.DTOs
@inherits ViewPage<LoginDto>
@{
    ViewBag.Title = "Login";
}
<form id="loginForm" action="/auth/credentials" method="POST">
    Username: <input type="text" name="UserName" />
    Password: <input type="password" name="Password" />
    <input type="submit" value="Login" />
</form>
@section Scripts{
    <script src="/App/loginPage.js"></script>
    <script>activateLoginPage();</script>
}
This page needs some JavaScript to handle login. Add a JavaScript file to the application inside a new folder App and name it loginPage.js. Add the following script to this file:
(function (window) {
function activateLoginPage() {
$("#loginForm").submit(function (e) {
var formData = $(this).serialize();
var loginUrl = $(this).attr("action");
$.ajax({
url: loginUrl,
type: "POST",
data: formData
}).then(function (data) {
location.replace(decodeURIComponent(location.href.substr(location.href.indexOf("=") + 1)));
}, function (e) {
console.log(e);
alert(e.statusText);
});
e.preventDefault();
});
}
window.activateLoginPage = activateLoginPage;
}(window));
Build and run your application after saving these changes. Change the URL path to /login. You will see the login page appearing on the screen. Login using the credentials you used to register; it should take you to the home page after logging in. And now you should be able to view marks of students as well.
Note: I am not creating the register form as part of this article. Creating a register form is similar to the Login form except the fields and API to be used. I am leaving this task as an assignment to the reader.
Let’s add an option for the users to login using their twitter accounts. For this, you need to register your app on twitter app portal and get your consumer key and consumer secret. Once you get the keys from twitter, add following entries to your app settings section:
/students"/>
/auth/twitter"/>
"/>
"/>
Names of keys shouldn’t be modified. Finally, you need to modify the Authentication feature section in AppHost.cs as:
Plugins.Add(new AuthFeature(() => new AuthUserSession(), new IAuthProvider[]{
new CredentialsAuthProvider(),
new TwitterAuthProvider(new AppSettings())
}));
Now, we are all set to use twitter authentication. Save all files, build and run the application. Open a browser and navigate to /auth/twitter.
This will take you to twitter’s login page and will ask you to login. Once you are logged in, it will take you to the following page:
If you click authorize app in the above page, it would take you to the home page and you will be able to access the pages that require authentication.
Similarly, you can add any other OAuth provider to authenticate a user. You can learn more about OAuth providers in ServiceStack on their wiki page.
To logout, you need to send a request to the /auth/logout API.
This API removes user details from the session. To make it easier, you can add a link on your page.
Adding a login/logout link with name of the logged in user needs a bit of work. This is because we get user information from session in the service. We can read the user’s name from session and pass it to view through response DTO. Following snippet shows how to read username and assign it to the response DTO:
public object Get(MarksRequestDto dto)
{
var username = this.GetSession().UserName == null ? "" : this.GetSession().UserName.ToTitleCase();
//Logic inside the method
return new MarksGetResponseDto()
{
Id = student.StudentId,
Name = student.Name,
Class = student.CurrentClass,
Marks = new List<Marks>() { marks },
Username = username
};
}
In the Layout page, you can use this property to show username and login/logout link. Following is the snippet:
@if (Model.Username != "" && Model.Username != null)
{
    <span>Hi @Model.Username!</span>
    <a href="/auth/logout">Logout</a>
}
else
{
    <a href="/login">Login</a>
}
If you login using local account it would display your local username, and if you use an OAuth provider, it would display username of your social account.
Authentication makes sure that the user is known to the site and authorization checks if the user belongs to the right role to access a resource. To restrict users accessing a resource based on the roles, we need to apply the RequiredRole attribute on the request DTO class. Following snippet demonstrates this:
[Authenticate]
[RequiredRole("Admin")]
public class MarksRequestDto
{
//properties in the DTO class
}
Apply this attribute to the MarksRequestDto class. Now if you try accessing the marks page for any student, the app won’t allow you (and it redirects to home page because of the changes we did to login page). You can assign role to a user using /assignroles API, but this API is restricted to users with Admin role. So let’s create a new user with admin role when the application starts. Add the following snippet inside AppHost.cs after creating the tables:
if (repository.GetUserAuthByUserName("Admin") == null)
{
repository.CreateUserAuth(new UserAuth()
{
UserName = "Admin",
FirstName = "Admin",
LastName = "User",
DisplayName = "Admin",
Roles = new List<string>() { RoleNames.Admin }
}, "adminpass");
}
Login using these credentials and now you should be able to access the marks page.
A user can be assigned with a set of permissions too and the permissions can be checked before allowing the user to access an API. To set permissions to the admin in above snippet, assign a string list to the Permissions property:
repository.CreateUserAuth(new UserAuth()
{
UserName = "Admin",
FirstName = "Admin",
LastName = "User",
DisplayName = "Admin",
Roles = new List<string>() { RoleNames.Admin },
Permissions = new List<string>() { "AllEdit" }
}, "adminpass");
Following is the attribute to be applied on the API to check for the permission:
[RequiredPermission("AllEdit")]
Now you can assign a role to an existing user. To do that, login using the admin credentials and invoke the /assignroles POST API with the following data:
{"username":"ravi", "roles":"Admin"}
After this step, you will be able to access the marks page when you login using ravi as username.
In rich client based applications where you have a lot of client side script to be loaded during page load, the loading time of the page gets affected by the number of round trips made to the server to load all the files. Using techniques like bundling and minification, these files can be combined and shortened to reduce the total size and number of downloads from the server, making the page load faster.
ServiceStack supports bundling and minification. To enable bundling in the ServiceStack app, we need to install the Bundler Nuget package.
· Install-package Bundler
This package adds a new file Bundler.cs and a folder bundler to the project. If you expand the bundler folder, it contains a node_modules folder and some executable cmd and exe files. This may raise a question in your mind: why would we need Node.js modules in a .NET application? The bundler.js file in this folder is Node.js code that uses these modules to perform the operations. This package is capable of other things too, including compiling LESS to CSS and CoffeeScript to JavaScript.
loginPage.js
marksRequestPage.js
By default, the package looks inside Content and Scripts folders for bundle files, but we can change it in the bundler.cmd file. Open the bundler.cmd file and modify the node command as follows:
if "%*" == "" (
node bundler.js ../App
)
Now if you run the cmd file, you will get two files:
· app.js: Contains combined content of the files mentioned in the bundle file
· app.min.js: Contains combined and minified content of the files mentioned in the bundle file
From now on, whenever you make a change to one of these files, you need to run the bundler.cmd file. This step seems tedious, so let's automate it with the build process. Open the properties of the project and specify the following command under Build Events -> Post-build event command line:
"$(ProjectDir)bundler\bundler.cmd"
From now on, whenever you build the project, bundler.cmd would also run and it would produce two files in the App folder. One of these files would contain concatenated code from both files and the other would contain concatenated and minified code. To refer to the resultant file of the bundle, specify the following statement in Layout.cshtml after reference of jQuery:
@Html.RenderJsBundle("~/App/app.js.bundle", ServiceStack.Html.BundleOptions.MinifiedAndCombined)
The RenderJsBundle extension method is defined in the Bundler.cs file. You can play around with the other options available in the BundleOptions enum by changing the value in the code and observing the behavior.
When you run the code after this change, you will see the above line replaced with a script tag referencing the combined and minified bundle file.
Markdown is tag-less HTML. Meaning, you can compose HTML using plain text by following a set of conventions, without writing even a single HTML tag. It is widely used these days to compose README files, post rich text on forums like Stack Overflow, and even to write professional blog posts. If you have never tried markdown, check stackedit.io.
ServiceStack’s razor view engine supports Markdown views. You can use model binding syntax that you use in regular Razor pages and compose HTML with no tags. Add a new page to Views folder and name it AboutDto.md. Add the following code to it:
Student Marks Reports of XYZ School
## About Student Reports
This app is to view and manage reports of students of XYZ School.
*Contact school management for any questions on report of your kid.*
This page was last updated on @Model.ModifiedDate by @Model.Name
Notice that we are using HTML just for applying bootstrap styles. Content of the page is free from tags. In the last statement, we are using a couple of properties from the model object. We can create a layout page for markdown and use it as base template for all markdown files. It is important to remember that Razor layout pages cannot be combined with markdown views and vice versa.
We need DTOs and service for the view. Following snippet contains the code of these classes:
[Route("/about")]
public class AboutDto
{
}
public class AboutResponse
{
public string Name { get; set; }
public DateTime ModifiedDate { get; set; }
}
public class AboutService : Service
{
public object Get(AboutDto dto)
{
return new AboutResponse() { Name = "Ravi", ModifiedDate = DateTime.Today };
}
}
Build and run the application and browse to /about. You will see the following screen in your browser:
As stated in the first article, ServiceStack can be hosted on IIS, in a standalone application, or even on Mono. We don't need to make any significant change in the way we write code to build a self-hosted ServiceStack application. The difference is only in the way we define the AppHost class.
Create a new Console application and install the ServiceStack NuGet package to it. Add a new class and change the name to EchoService. Add the following code to this file:
public class EchoService: Service
{
public object Get(EchoDto echoObj)
{
return "Hello, " + echoObj.Name;
}
}
[Route("/echo/{Name}")]
public class EchoDto
{
public string Name { get; set; }
}
It is a simple service that takes a name and echoes a hello message. To host this service, we need an AppHost class. Add a new class to the application and name it AppHost. Add following code to it:
public class AppHost : AppSelfHostBase
{
public AppHost():base("ServiceStack hosted on console app", typeof(EchoService).Assembly)
{
}
public override void Configure(Funq.Container container)
{
}
}
The difference between AppHost in web application and Self-hosted application is the base class of AppHost class. Here, we have AppSelfHostBase instead of AppHostBase as the base class. I will leave the Configure method empty here as we are not going to add a lot of logic to this application.
Now the only thing left to do is starting the server on a specified port. To do this, add the following code to the Main method in the Program class:
static void Main(string[] args)
{
var portno = 8888;
new AppHost().Init().Start(string.Format(":{0}/",portno));
Console.WriteLine("ServiceStack started on port no: {0}. Press any key to stop the server.", portno);
Console.ReadLine();
}
Run the application. You will see a console screen with a message on it. Open a browser and type the following URL:
It will display the EchoService. Now change the URL to:
You will see a hello message on the browser.
As we saw, ServiceStack is an awesome stack of technologies that makes it easier to build and secure an end-to-end application and host it on any platform depending on infrastructure provided by the enterprise. The technology has a number of other features including Logging, Caching, Filters, Validation and support for message queues. Check the official wiki pages for more details on these topics.
Download the entire source code from our GitHub Repository at bit.ly/dncm16-servicestack | https://www.dotnetcurry.com/aspnet/1086/service-stack-security-bundling-self-hosting | CC-MAIN-2018-39 | refinedweb | 3,236 | 56.15 |
/proc/version
open issue documentation: edit and move to FAQ.
IRC, freenode, #hurd, around 2010-09
<pinotree> (also, shouldn't /proc/version say something else than "Linux"?) <youpi> to make linux tools work, no :/ <youpi> kfreebsd does that too <pinotree> really? <youpi> yes <youpi> (kfreebsd, not freebsd) <pinotree> does kbsd's one print just "Linux version x.y.z" too, or something more eg in a second line? <pinotree> (as curiosity) <youpi> % cat /proc/version <youpi> Linux version 2.6.16 (des@freebsd.org) (gcc version 4.3.5) #4 Sun Dec 18 04:30:00 CET 1977 <pinotree> k
IRC, freenode, #hurd, 2013-06-04
<safinaskar> ?@?#@?$?@#???!?!?!?!??!?!?!?! why /proc/version on gnu system reports "Linux version 2.6.1 (GNU 0.3...)"? <braunr> safinaskar: because /proc/version is a linux thing <braunr> applications using it don't expect to see anything else than linux when parsing <braunr> think of it as your web brower allowing you to set the user-agent <safinaskar> braunr: yes, i just thought about user-agent, too <safinaskar> braunr: but freebsd doesn't report it is linux (as well as i know) <braunr> their choice <braunr> we could change it, but frankly, we don't care <safinaskar> so why "uname" says "GNU" and not "Linux"? <braunr> uname is posix <braunr> note that /proc/version also includes GNU and GNU Mach/Hurd versions <safinaskar> if some program read the word "Linux" from /proc/version, it will assume it is linux. so, i think it is bad idea <braunr> why ? <safinaskar> there is no standard /proc across unixen <braunr> if a program reads /proc/version, it expects to be run on linux <safinaskar> every unix implement his own /proc <safinaskar> so, we don't need to create /proc which is fully compatible with linux <braunr> procfs doesn't by default <safinaskar> instead, we can make /proc, which is partially compatible with linux <braunr> debiansets the -c compatibility flag <braunr> that's what we did <safinaskar> but /proc/version should really report kernel name and its version <braunr> why ? <braunr> (and again, it does) <safinaskar> because this is why /proc/version created <pinotree> no? <braunr> on linux, yes <braunr> pinotree: hm ? <safinaskar> and /proc/version should not contain the "Linux" word, because this is not Linux <braunr> pinotree: no to what ? 
:) <braunr> safinaskar: *sigh* <braunr> i explained the choice to you <pinotree> safinaskar: if you are using /proc/version to get the kernel name and version, you're doing bad already <braunr> disagree if you want <braunr> but there is a point to using the word Linux there <pinotree> safinaskar: there's the proper aposix api for that, which is uname <safinaskar> pinotree: okey. so why we ever implement /proc/version? <braunr> it's a linux thing <braunr> they probably wanted more than what the posix api was intended to do <safinaskar> okey, so why we need this linux thing? there is a lot of linux thing which is useful in hurd. but not this thing. because this is not linux. if we support /proc/version, we should not write "Linux" to it <pinotree> and even on freebsd their linprocfs (mounted on /proc) is not mounted by default <braunr> 10:37 < braunr> applications using it don't expect to see anything else than linux when parsing <braunr> 10:37 < braunr> think of it as your web brower allowing you to set the user-agent <braunr> safinaskar: the answer hasn't changed <safinaskar> pinotree: but they don't export /proc/version with "Linux" word in it anyway <pinotree> safinaskar: they do <safinaskar> pinotree: ??? their /proc/version contain Linux? 
<pinotree> Linux version 2.6.16 (des@freebsd.org) (gcc version 4.6.3) #4 Sun Dec 18 04:30:00 CET 1977 <kilobug> safinaskar: it's like all web browsers reporting "mozilla" in their UA, it may be silly, but it's how it is for compatibility/historical reasons, and it's just not worth the trouble of changing it <pinotree> that's on a debian gnu/kfreebsd machine <pinotree> and on a freebsd machine it is the same <braunr> safinaskar: you should understand that parsing this string allows correctly walking the rest of the /proc tree <pinotree> and given such filesystem on freebsd is called "linprocfs", you can already have a guess what it is for <kilobug> safinaskar: saying "Linux version 2.6.1" just means "I'm compatible with Linux 2.6.1 interfaces", like saying "Mozilla/5.0 (like Gecko)" in the UA means "I'm a modern browser" <safinaskar> so, is there really a lot of programs which expect "Linux" word in /proc/version even on non-linux platforms? <braunr> no <braunr> but when they do, they do
/proc/self
IRC, freenode, #hurd, around 2010-09
<youpi> jkoenig: is it not possible to provide a /proc/self which points at the client's pid? <pinotree> looks like he did 'self' too, see rootdir_entries[] in rootdir.c <youpi> but it doesn't point at self <antrik> youpi: there is no way to provide /proc/self, because the server doesn't know the identity of the client <youpi> :/ <antrik> youpi: using the existing mechanisms, we would need another magic lookup type <antrik> an alternative idea I discussed with cfhammer once would be for the client to voluntarily provide it's identity to the server... but that would be a rather fundamental change that requires careful consideration <antrik> also, object migration could be used, so the implementation would be provided by the server, but the execution would happen in the client... but that's even more involved :-) <youpi> but we've seen how much that'd help with a lot of other stuff <antrik> I'm not sure whether we discussed this on the ML at some point, or only on IRC <youpi> it "just" needs to be commited :) <antrik> in either case, it can't hurt to bring this up again :-)
discussion, IRC, freenode, #hurd, 2013-09-07.
Look at
[glibc]/hurd/lookup-retry.c for how
FS RETRY MAGICAL
lookups work.
root group
IRC, freenode, #hurd, around October 2010
<pinotree> the only glitch is that files/dirs have the right user as owner, but always with root group
/proc/[PID]/stat being 400 and not 444, and some more
IRC, freenode, #hurd, 2011-03-27
<pochu> is there a reason for /proc/$pid/stat to be 400 and not 444 like on Linux? <pochu> there is an option to procfs to make it 444 like Linux <pochu> jkoenig: ^ <jkoenig> pochu, hi <jkoenig> /proc/$pid/stat reveals information which is not usually available on Hurd <jkoenig> so I made it 400 by default to avoid leaking anything <pochu> is there a security risk in providing that info? <jkoenig> probably not so much, but it seemed like it's not really a descision procfs should make <jkoenig> I'm not sure which information we're speaking about, though, I just remember the abstract reason. <pochu> things like the pid, the memory, the priority, the state... <pochu> sounds safe to expose <jkoenig> also it's 0444 by default in "compatible" mode <jkoenig> (which is necessary for the linux tools to work well) <pochu> yeah I saw that :) <pochu> my question is, should we change it to 0444 by default? if there are no security risks and this improves compatibility, sounds like a good thing to me <pochu> we're already 'leaking' part of that info through e.g. ps <jkoenig> I think /proc should be translated by /hurd/procfs --compatible by default (I'm not sure whether it's already the case) <jkoenig> also I'm not sure why hurd-ps is setuid root, rather than the proc server being less paranoid, but maybe I'm missing something. <pochu> jkoenig: it's not, at least not on Debian <pochu> youpi: hi, what do you think about starting procfs with --compatible by default? <pochu> youpi: or changing /proc/$pid/stat to 0444 like on Linux (--compatible does that among a few other things) <youpi> I guess you need it for something? 
<pochu> I'm porting libgtop :) <youpi> k <pochu> though I still think we should do this in procfs itself <youpi> ymmv <jkoenig> pochu, youpi, --compatible is also needed because mach's high reported sysconf(_SC_CLK_TCK) makes some integers overflow (IIRC) <youpi> agreed <jkoenig> luckily, tools which use procfs usually try to detect the value /proc uses rather than rely on CLK_TCK <jkoenig> (so we can choose whatever reasonable value we want)
IRC, freenode, #hurd, 2011-03-28
<antrik> jkoenig: does procfs expose any information that is not available to everyone through the proc server?... <antrik> also, why is --compatible not the default; or rather, why is there even another mode? the whole point of procfs is compatibility... <jkoenig> antrik, yes, through the <pid>/environ and (as mentionned above) <pid>/stat files, but I've been careful to make these files readable only to the process owner <jkoenig> --compatible is not the default because it relaxes this paranoia wrt. the stat file, and does not conform to the specification with regard to clock tick counters <antrik> what specification? <jkoenig> the linux proc(5) manpage <jkoenig> which says clock tick counters are in units of 1/sysconf(_SC_CLK_TCK) <antrik> so you are saying that there is some information that the Hurd proc server doesn't expose to unprivileged processes, but linux /proc does? <jkoenig> yes <antrik> that's odd. I wonder what the reasoning behind that could be <antrik> but this information is available through Hurd ps? <antrik> BTW, what exactly is _SC_CLK_TCK supposed to be? <pinotree> jkoenig: hm, just tried with two random processes on linux (2.6.32), and enrivon is 400 <pinotree> (which makes sense, as you could have sensible informations eg in http_proxy or other envvars) <jkoenig> antrik, CLK_TCK is similar to HZ (maybe clock resolution instead of time slices ?) <jkoenig> sysconf(3) says "The number of clock ticks per second." <jkoenig> antrik, I don't remember precisely what information this was, but ps-hurd is setuid root. <jkoenig> anyway, if you run procfs --compatible as a user and try to read foo/1/stat, the result is an I/O error, which is the result of the proc server denying access. <antrik> but Linux /proc acutally uses HZ as the unit IIRC? or is _SC_CLK_TCK=HZ on Linux?... <jkoenig> I expect they're equal. 
<jkoenig> in practice procps uses heuristics to guess what value /proc uses (for compatibility purposes with older kernels) <jkoenig> I don't think HZ is POSIX, while _SC_CLK_TCK is specifies as the unit for (at least) the values returned by times() <jkoenig> s/specifies/specified/ <jkoenig> antrik, some the information is fetched directly from mach by libps, and understandably, the proc server does not give the task port to anyone who asks. <antrik> well, as long as the information is exposed through ps, there is no point in hiding it in procfs... <antrik> and I'm aware of the crazy guessing in libproc... I was actually mentoring the previous procfs implementation <antrik> (though I never got around to look at his buggy code...) <jkoenig> ok
IRC, freenode, #hurd, 2011-07-22
<pinotree> hm, why /proc/$pid/stat is 600 instead of 644 of linux? <jkoenig> pinotree, it reveals information which, while not that sensitive, would not be available to users through the normal proc interface. <jkoenig> (it's available through the ps command which is setuid root) <jkoenig> we discussed at some point making it 644, IIRC. <pinotree> hm, then why is it not a problem on eg linux? <jkoenig> (btw you can change it with the -s option.) <jkoenig> pinotree, it's not a problem because the information is not that sensitive, but when rewriting procfs I preferred to play it self and consider it's not procfs' job to decide what is sensitive or not. <jkoenig> IIRC it's not sensitive but you need the task port to query it. <jkoenig> like, thread times or something. <pinotree> status is 644 though <jkoenig> but status contains information which anyone can ask to the proc server anyway, I think.
/proc/mounts,
/proc/[PID]/mounts
IRC, freenode, #hurd, 2011-07-25
< pinotree> jkoenig: btw, what do you think about providing empty /proc/mounts and /proc/$pid/mounts files? < jkoenig> pinotree, I guess one would have to evaluate the consequences wrt. existing use cases (in other words, "I have absolutely no clue whatsoever about whether that would be desirable" :-) < jkoenig> pinotree, the thing is, an error message like "/proc/mounts: No such file or directory" is rather explicit, whereas errors which would be caused by missing data in /proc/mounts would maybe be harder to track < braunr> this seems reasonable though < braunr> there already are many servers with e.g. grsecurity or chrooted environments where mounts is empty < pinotree> well, currently we also have an empty mtab < braunr> pinotree: but what do you need that for ? < braunr> pinotree: the init system ? < pinotree> and the mnt C api already returns no entries (or it bails out, i don't remember) < pinotree> not a strict need
A mtab translator now exists.
IRC, freenode, #hurd, 2013-09-20
<pinotree> teythoon: should procfs now have $pid/mounts files pointing to ../mounts? <teythoon> pinotree: probably yes
/proc/[PID]/auxv
Needed by glibc's
pldd tool (commit
11988f8f9656042c3dfd9002ac85dff33173b9bd).
/proc/[PID]/exe
Needed by glibc's
pldd tool (commit
11988f8f9656042c3dfd9002ac85dff33173b9bd).
/proc/self/exe
id:"alpine.LFD.2.02.1110111111260.2016@akari". Needed by glibc's
stdlib/tst-secure-getenv.c.
Also used in
[GCC]/libgfortran/runtime/main.c:
store_exe_path.
Is it generally possible to use something like the following instead? Disadvantage is that every program using this needs to be patched.
#include <dlfcn.h> [...] Dl_info DLInfo; int err = dladdr(&main, &DLInfo); if (err == 0) [...] /* Pathname of shared object that contains address: DLInfo.dli_fname. */ /* Filter it through realpath. */
This is used in
[LLVM]/lib/Support/Unix/Path.inc.
IRC, OFTC, #debian-hurd, 2013-11-10
<mjt> Hello. Does hurd have /proc/self/exe equivalent, to "re-exec myself" ? <youpi> no, only argv[0] <mjt> busybox uses /proc/self/exe by default to re-exec itself when running one of its applets, or failing that, tries to find it in $PATH. I guess it doesn't work on hurd... :) <mjt> and argv0 is unreliable <youpi> some discussion on the hurd wiki talks about using Dl_info DLInfo <youpi> which contains DLInfo.dli_fname <youpi> err, I mean, callling dladdr(&main, &DLInfo); <youpi> this is kernel-agnostic, provided one uses glibc <mjt> um. -ldl. nice for static linking <mjt> gcc t.c -ldl -static <mjt> ./a.out <mjt> fname=AVA� �j <mjt> bah :) <mjt> (it just prints dli_fname) <teythoon> :/ <youpi> ah, yes, that won't work with static linking <teythoon> fixing /proc/self is on my todo list, it shouldn't be too hard <youpi> since in that case it's the exec server which sets the process up, not dl.so <teythoon> but we do not have the exe link either <mjt> (the above test run was on linux not on hurd, fwiw_ <mjt> )
/proc/[PID]/fd/
IRC, freenode, #hurd, 2012-04-24
<antrik> braunr: /proc/*/fd can be implemented in several ways. none of them would require undue centralisation <antrik> braunr: the easiest would be adding one more type of magic lookup to the existing magic lookup mechanism <antrik> wait, I mean /proc/self... for /proc/*/fd it's even more straighforward -- we might even have a magic lookup for that already <pinotree> i guess the ideal thing would be implement that fd logic in libps <antrik> pinotree: nope. it doesn't need to ask proc (or any other server) at all. it's local information. that's what we have the magic lookups for <antrik> one option we were considering at some point would be using the object migration mechanism, so the actual handling would still happen client-side, but the server could supply the code doing it. this would allow servers to add arbitrary magic lookup methods without any global modifications... but has other downsides :-) <gnu_srs> youpi: How much info for /proc/*/fd is possible to get from libps? Re: d-h@ <youpi> see my mail <youpi> I don't think there is an interface for that <youpi> processes handle fds themselves <youpi> so libps would have to peek in there <youpi> and I don't remember having seen any code like that <gnu_srs> 10:17:17< antrik> wait, I mean /proc/self... for /proc/*/fd it's even more straighforward -- we might even have a magic lookup for that already <gnu_srs> pinotree: For me that does not ring a bell on RPCs. Don't know what magic means,, <youpi> for /proc/self/fd we have a magic lookup <youpi> for /proc/pid/fd, I don't think we have <gnu_srs> magic lookup* <gnu_srs> magic lookup == RPC? 
<youpi> magic lookup is a kind of answer to the lookup RPC <youpi> that basically says "it's somewhere else, see there" <youpi> the magic FD lookup tells the process "it's your FD number x" <youpi> which works for /proc/self/fd, but not /proc/pid/fd <civodul> youpi, gnu_srs: regarding FDs, there the msg_get_fd RPC that could be used <civodul> `msgport' should have --get-fd, actually <youpi> civodul: I assumed that the reason why msgport doesn't have it is that it didn't exist <youpi> so we can get a port on the fd <youpi> but then how to know what it is? <civodul> youpi: ah, you mean for the /proc/X/fd symlinks? <civodul> good question <civodul> it's not designed to be mapped back to names, indeed :-) <antrik> youpi: yeah, I realized myself that only /proc/self/fd is trivial <antrik> BTW, in Linux it's nor real symlinks. it's magic, with some very strange (but useful in certain situations) semantics <antrik> not real symlinks <antrik> it's very weird for example for fd connected to files that have been unlinked. it looks like a broken symlink, but when dereferencing (e.g. with cp), you get the actual file contents...
/proc/[PID]/maps
IRC, OFTC, #debian-hurd, 2012-06-20
<pinotree> bdefreese: the two elfutils tests fail because there are no /proc/$pid/maps files <pinotree> that code is quite relying on linux features, like locating the linux kernel executables and their modules, etc <pinotree> (see eg libdwfl/linux-kernel-modules.c) <pinotree> refactor elfutils to have the linux parts executed only on linux :D <bdefreese> Oh yeah, the maintainer already seems really thrilled about Hurd.. Did you see ? <pinotree> kurt is generally helpful with us (= hurd) <pinotree> most probably there he is complaining that we let elfutils build with nocheck (ie skipping the test suite run) instead of investigate and report why the test suite failed
/proc/self/maps
IRC, freenode, #hurd, 2014-02-22
<ignaker> i'm trying to implement proc/maps <ignaker> actually I can't well evaluate complexity of tasks. However, I appreciate your comments <braunr> the complexity can be roughly estimated from the number of components involved <braunr> proc/maps involves procfs, ports, virtual memory, and file systems <braunr> the naive implementation would merely be associating names to memory objects, and why not, but a more complete one would go ask file system servers about them <braunr> perhaps more <braunr> although personally i'd go for the naive one because less dependencies usually means better reliability <braunr> something similar to task_set_name could do the job
/proc/[PID]/mem
Needed by glibc's
pldd tool (commit
11988f8f9656042c3dfd9002ac85dff33173b9bd).
/proc/[PID]/cwd
IRC, freenode, #hurd, 2012-06-30
* pinotree has a local work to add the /proc/$pid/cwd symlink, but relying on "internal" (but exported) glibc functions
CPU Usage
IRC, freenode, #hurd, 2013-01-30
<gnu_srs> Hi, htop seems to report CPU usage correct, but not top, is that a known issue? <youpi> does your /proc have the -c flag? <gnu_srs> /hurd/procfs -c <youpi> I don't remember which way it works, but iirc depending on whether -c is there or not, it will work or not <youpi> problem being that nothing says linux' clock is 100Hz, but a lot of programs assume it <gnu_srs> seems like htop gets it right though <youpi> possibly just by luc <youpi> k
IRC, freenode, #hurd, 2013-01-31
:(
Kernel PID
IRC, freenode, #hurd, 2013-09-25
-04
<braunr> youpi: i fixed procfs on ironforge and exodar to be started as procfs -c -k 3 <braunr> without -k 3, many things as simple as top and uptime won't work | https://www.gnu.org/software/hurd/hurd/translator/procfs/jkoenig/discussion.html | CC-MAIN-2021-31 | refinedweb | 3,419 | 63.73 |
I think this is a little misunderstanding. Yes I mean the WebDAV ACL spec features, but I we
don't have implemented this into mod_dav. We implemented it into our own module (Catacomb)
and therefore we need to extent mod_dav to handle the ACP.
- Markus
> -----Ursprüngliche Nachricht-----
> Von: Greg Stein [mailto:gstein@gmail.com]
> Gesendet: Sonntag, 21. Februar 2010 21:05
> An: dev@httpd.apache.org
> Betreff: Re: ACL changes in mod_dav
>
> This is pretty cool. I'm assuming you're referring to the WebDAV ACL
> spec features?
>
> Every time that I started to look into the issue, I ran into one basic
> issue: how to notify the multiple processes that the ACLs around a
> particular namespace have changed. How did you handle that?
>
> Cheers,
> -g
>
> On Sat, Feb 20, 2010 at 23:23, <markus.litz@dlr.de> wrote:
> > Hi,
> >
> > I have added ACL features to the mod_dav module. Could you tell me the
> > correct way to get this changes reviewed and into to official
> > mod_dav-source?
> >
> > Thanks,
> > Markus
> > | http://mail-archives.apache.org/mod_mbox/httpd-dev/201002.mbox/%3C0DD6935185ABCB4BB0CCA4F0283CA77D78BF84@exbe3.intra.dlr.de%3E | CC-MAIN-2014-10 | refinedweb | 169 | 67.96 |
Remix is a new full stack JavaScript framework that gets rid of static site generation and also in other areas, does a few things differently than what we're used to from other frameworks. It relies on React to render the UI and if you're familiar with Next.js you can certainly spot a lot of similarities. However, there are also clear distinctions, like nested routes, handling of data fetching and data saving as well as error handling. Let's take a look at those concepts and how they compare to other techniques, currently used in popular frameworks.
The easiest way to get started with a new Remix project is by installing it through npx and following the interactive prompts:
npx create-remix@latest
Once we're done with that, our project's structure is already set up for us and we're good to go. If we compare Remix to Next.js, we'll see that with Remix we're also
writing client side and server ide code inside our route files. However, Remix gives us a little more control to fine-tune things like caching and this also shows in having
two separate files for handling requests —
entry.client and
entry.server that represent our entry points and therefore determine
what's run first on the server and client respectively. We also get a
root.tsx which holds the root component to our app and renders the
<html>, first
<meta> tags, and so on.
We also see a framework-specific
remix.config.js which allows to to configure a lot off different details about oour application, such as the default public directory, development ports and much more.
A very neat mechanism in Remix is the ability to render parts of a page based on the route we're on. When thinking of other frameworks, this would come down to
either a separate route with its own
index file or specifically matching a route to one or more components. We're used to binding parts of our URL to different routes or components already — the way Remix does it, is best described as nested layouts, where
each part of the URL is by definition bound to a route file. Here's an example from their official site.
The route shown above (
/sales/invoices) would therefore be represented by three files
routes/sales.jsx
routes/sales/index.jsx
routes/sales/invoices.jsx
Our first file is the wrapper that gets called first and based on the rest of the URL decides which "sub-components" should be rendered. The initial state
would be
routes/sales/index.jsx and when navigating to /invoices, our wrapper pulls in the code from
routes/sales/invoices.jsx.
The way this is realized in code is not through regular components, but through an
<Outlet /> which is part of
react-router-dom and allows for this mapping
of nested layouts to routes (route components) rather than regular components.
Under the hood this allows Remix to preload the different parts on a page, which makes it really fast and lets us eliminate loading states as much as possible. There are probably some more interesting things we can do here that I haven't fully explored yet.
Styling components is fairly straightforward with Remix, because it's very close to how it works on the web since forever. Remix brings
its own
LinksFunction which can be used to import CSS files on a per-route basis. That's also where we have to be a little careful and separate
our CSS into global CSS that should be available to every route and specific CSS that will not be loaded outside a certain route at all.
import stylesUrl from "../styles/index.css"; export let links: LinksFunction = () => { return [{ rel: "stylesheet", href: stylesUrl }]; }; export default function IndexRoute() { return <div>Index Route</div>; }
Once again Remix relies heavily on how the web already works, so if we wanted to use preprocessors or frameworks like Tailwind, we'd want to pass the compiled resources paths to Remix, just like we would with vanilla CSS files.
To get data inside a route component in Remix, we can use a
loader, which is just an async function that returns the requested data.
Inside our components, we can then access it through a hook called
useLoaderData.
import { useLoaderData } from "remix"; export let loader = async () => { return getData(); // does the heavy lifting, DB calls etc. and returns data } // Component function starts here export default function Component() { let allData = useLoaderData(); // data is now available inside our component }
Note that the function is always called
loader by convention and is only executed server-side, which means we also have
access to all
node features and libraries to connect to databases and fetch data, like we're used to on the server.
If we're passing parameters to our routes, like a dynamic URL often times requires, the loader also has access to that, by passing in the request parameters like this
export let loader = async ({ params }) => { return params.slug }
If we want to send new user-generated data back to the backend, to save it to a database for example, Remix lets us use so-called
actions.
Actions rely on forms for the actual data input and are also only executed server-side, despite being in your route file.
The functions are — again by convention — called
action and can also trigger (return) a redirect. Let's look at an example.
export let action = async ({ request }) => { let formData = await request.formData() let title = formData.get("title") let slug = formData.get("slug") await createPost({ title, slug }) // actual call to store data... return redirect("/home") } export default function NewPost() { return ( <Form method="post"> <p> <label>Post Title: <input type="text" name="title" /></label> </p> <p> <label>Post Slug: <input type="text" name="slug" /></label> </p> <button type="submit">Create Post</button> </Form> );
We see that the
action function takes the
request as a parameter and thereby has access to everything our form sends over to the server.
From there we're free to use any
node code to store our data.
The way Remix handles errors is quite unique, as it allows us to create
ErrorBoundarys that will be shown in case something with our route
components didn't work as expected and an error is thrown. That way, if we're using Remix's nested routes, we might see a single
export function ErrorBoundary({ error }) { return ( <html> <head> <title>Something went wrong!</title> <Meta /> <Links /> </head> <body> ... anything we want to let the user know goes here </body> </html> ); }
Implementing an error boundary is as simple as adding an
ErrorBoundary function to our route components as shown above.
At the time of this writing, Remix has really only been released yesterday, so there is still a lot to learn and some things might even change drastically with newer versions.
If you're looking for more resources and want to dive deeper, there's a fantastic tutorial on building a blog and a small dad jokes applicaiton in the Remix docs. | https://allround.io/articles/a-brief-introduction-to-remix-js | CC-MAIN-2021-49 | refinedweb | 1,178 | 58.52 |
{
my @nums = (0..9,'a'..'z','A'..'Z');
my %nums = map { $nums[$_] => $_ } 0..$#nums;
sub to_base
{
my $base = shift;
my $number = shift;
return $nums[0] if $number == 0;
my $rep = ""; # this will be the end value.
while( $number > 0 )
{
$rep = $nums[$number % $base] . $rep;
$number = int( $number / $base );
}
return $rep;
}
sub fr_base
{
my $base = shift;
my $rep = shift;
my $number = 0;
for( $rep =~ /./g )
{
$number *= $base;
$number += $nums{$_};
}
return $number;
}
}
[download]
You examples becomes
my $UniqueID = to_base( 62, $$ ) . to_base( 62, time );
[download]
print to_base( 16, 28 );
[download]
If you need your original syntax for some reason, just add
sub GenerateBase
{
my $base = shift;
return (
sub { to_base( $base, $_[0] ) },
sub { fr_base( $base, $_[0] ) },
);
}
[download]
Then the user has the choice of syntax at no cost.
In reply to Re: Base Conversion Utility
by ikegami
in thread Base Conversion Utility
by. | http://www.perlmonks.org/index.pl?parent=720148;node_id=3333 | CC-MAIN-2017-26 | refinedweb | 144 | 80.11 |
- NAME
- Synopsis
- Array Filters
- Standard Filters
- Author
- License and Legal
NAME
Template::Liquid::Filters - Default Filters Based on Liquid's Standard Set
Synopsis
Filters are simple methods that modify the output of numbers, strings, variables and objects. They are placed within an output tag
{{ }} and are denoted by a pipe character
|.
# product.title = "Awesome Shoes" {{ product.title | upcase }} # Output: AWESOME SHOES
In the example above,
product is the object,
title is its attribute, and
upcase is the filter being applied.
Some filters require a parameter to be passed.
{{ product.title | remove: "Awesome" }} # Output: Shoes
Multiple filters can be used on one output. They are applied from left to right.
{{ product.title | upcase | remove: "AWESOME" }} # SHOES
Array Filters
Array filters change the output of arrays.
join
Joins elements of the array with a certain character between them.
# Where array is [1..6] {{ array | join }} => 1 2 3 4 5 6 {{ array | join:', ' }} => 1, 2, 3, 4, 5, 6
first
Get the first element of the passed in array
# Where array is [1..6] {{ array | first }} => 1
Get the last element of the passed in array.
# Where array is [1..6] {{ array | last }} => 6
You can use last with dot notation when you need to use the filter inside a tag.
{% if product.tags.last == "sale"%} This product is on sale! {% endif %}
Using last on a string returns the last character in the string.
{{ product.title | last }}
Standard Filters
These are the current default filters. They have been written to behave exactly like their Ruby Liquid counterparts accept where Perl makes improvment irresistable.
date
Reformat a date...
...by calling the object's
strftimemethod. This is tried first so dates may be defined as a DateTime or DateTimeX::Lite object...
# On my dev machine where obj is an object built with DateTime->now() {{ obj | date:'%c' }} => Dec 14, 2009 2:05:31 AM
...with the POSIX module's strftime function. This is the last resort and flags may differ by system so... Buyer beware.
#
MATHFAIL! }} => 1
round
Rounds the output to the nearest integer or specified number of decimals.
{{ 4.6 | round }} => 5 {{ 4.3 | round }} => 4 {{ 4.5612 | round: 2 }} => 4.56
money
Formats floats and integers as if they were money.
{{ 4.6 | money }} => $4.60 {{ -4.3 | money }} => -$4.30 {{ 4.5612 | money }} => $4.56
You may pass a currency symbol to override the default dollar sign (
$).
{{ 4.6 | money:'€' }} => €4.60
stock_price
Formats floats and integers as if they were stock prices.
{{ 4.6 | stock_price }} => $4.60 {{ 0.30 | stock_price }} => $0.3000 {{ 4.5612 | stock_price }} => $4.56
You may pass a currency symbol to override the default dollar sign (
$).
{{ 4.6 | stock_price:'€' }} => €4.60
abs
Returns the absolute value of a number.
{{ 4 | abs }} => 4 {{ -4 | abs }} => 4
ceil
Rounds an integer up to the nearest integer.
{{ 4.6 | ceil }} => 5 {{ 4.3 | ceil }} => 5
floor
Rounds an integer down to the nearest integer.
{{ 4.6 | floor }} => 4 {{ 4.3 | floor }} => 4
default
Sets a default avlue for any variable with no assigned value. Can be used with strings, arrays, and hashes.
The default value is returne if the cariable resolves to
undef or an empty string (
''). A string containing whitespace characters will not resolve to the default value.
{{ customer.name | default: "customer" }} => "customer". | http://web-stage.metacpan.org/pod/Template::Liquid::Filters | CC-MAIN-2020-05 | refinedweb | 545 | 69.79 |
>>
It's well known that documentation is lacking in open source projects, especially in programming libraries. Now, there are shining counter-examples, but in my experience it's safe to bet that docs will suck on any given project.
Now, I'd say that's due to two things: a) lack of time and laziness (writing docs is more boring than writing features) and b) libraries are directed to other programmers/technical users, which means they ship technical documentation (changelogs, build docs, function references) rather than end-user documentation (i.e. *how* to use it).
I don't know if there is a career opening here, but producing decent docs is a useful skill anyway. In most programming related jobs, two things matter a lot: getting the job done and producing maintainable code. Doc-writing skills help in both.
Regarding events, start a new topic and we can help explain the concept. In reality, it is simple (deceptively so), but also very powerful.
Re: Selection/Picking
Thanks for your input. What you say makes a lot of sense and I agree. I like the theories of a more agile software development paradigm to sync requirements with users as progress is generated, but strong documentation seems to go the other way toward the waterfall theory. There is probably a theoretically perfect ratio of features/documentation, but I like to err on the side of a bit too much documentation for my own sake. With a distributed development environment, it seems to me that it may actually facilitate greater efficiency if everyone can more quickly tie into what is happening with a project just by reading the current documentation. Of course, I have no real-world experience... I just work on school projects that stress certain aspects pertaining to the topic of the day.
This kinda goes off on a tangent, since I just wrote about documentation for a specific project, rather than for learning concepts and implementation of a language. My thoughts are that more people could make use of a doc that covers easy-to-learn language concepts and implementations than what is posted for documentation about a specific project. This might make it worth-while to invest in good documentation with a language/platform. Historically, over the past 50 years, if it's easy to use it takes over the market regardless of the fact that it's really inferior in many aspects.
Anyhow, every seasoned programmer, and 3rd year CS student for that matter, probably knows what I'm rambling about already, so these keystrokes are most likely destined for the bit-bucket.
Re: Selection/Picking
Well, if you want to make money writing about OpenTK the best course of action might be writing a book (which will have to get printed) about using OpenGL/AL in the managed world. If your definition of "career" is not related to money, there sure is room in pretty much every open-source project for someone writing quality programmer guides.
Most programmers know it's more efficient to use [insert name of your favorite search engine here] to find an answer, rather than looking inside a book or making a forum post. There's very few answers you cannot find on the net, and most likely when writing about GL|AL you'll end up reading C documentation|specifications|tutorials and rephrase that for C#.
Edit: If you want to give it a try, write a FBO tutorial for OpenTK. :) It will be a valuable experience, there's a huge difference between knowing something and teaching something.
Re: Selection/Picking
Does FBO stand for Frame Buffer Object? Sounds like that would take some serious clock-ticks for me to stamp out a tutorial for that. I'd have to delve into some straight-faced, screaming, burning, and twisting OpenGL before I could even start. #=) The hash, sharp, tick-tac-toe/whatever sign is what my hair looks like after the ride. =)
I put a nice intro on an Nitro-RC Monster-Truck site about my own truck. Someone called me Charles Dickens... I take that as a compliment, LOL. Anyhow, I hope the stuff that sings out of my fingertips on onto this keyboard is almost worth the time it takes to read. My tuts wouldn't be boring!
I guess my biggest prob with other-folk's tuts is that I'm inexperienced, but this stuff is fun so it will come to me... the more I learn, the more fun it becomes, similar to compounding interest on greenbacks, only I'm still trying to figure out how to calculate technological inflation.
Re: Selection/Picking
I hope nobody bothers that this discussion has gone a little offtopic from the initial post, but I guess it's ok since the question has been covered.
Yep, FBO refers to the Framebuffer Object extension used for off-screen rendering. It is neither covered in the red nor orange book, but is for example useful for shadow mapping or rendering to a g-buffer. There seem to be no quality tutorials around which cover the topic and reading OpenGL extension specs isn't really easy to digest.
Not sure if a tutorial has to be funny, personally I do prefer the 'dry' texts that stay ontopic rather than trying to entertain the reader and lose focus. (If you're looking for entertainment there are better things to do than reading a page about pushing bytes around)
If you want a hint how to gather experience quickly: do several small projects after the other rather than a single huge one. You will make mistakes (which are inevitable to build a judgement what's good or bad choices) and a small project is easier to abandon than something you spent months working on. Nothing in the world is perfect.
Except OpenTK. It simply gets even more perfect with every version number increase. :P
Picking code example
Fiddler, did this picking example ever get written? If not, can you point me to some source that includes picking?
Glu-class
This is perhaps a very simple question, but I don't find the Glu-class in the OpenTK class library, further the namespace in my classlibrary isn't OpenTK.OpenGL but OpenTK.Graphics.OpenGL. What am I doing wrong? I'm using Mono.Net/C#, and added the OpenTK.dll assemby as a reference to my Mono.Net project. I know the answer will propably be very simple, but I don't find it at the moment.
-----------------------------------------
my apologies for my bad english.
The box said: "Requires Windows XP or better."
So i installed Linux.
Re: Selection/Picking
Glu has been deprecated upstream and lives in OpenTK.Compatibility.dll now.
Also, do consider opening a new thread instead of hijacking an existing one. :)
Re: Here is more robust code
When I use this code (and the previous versions of it) something strange happens: When I click on a certain area in my GameWindow it selects an object, but it's like the object are mirrored around an axis parallel with the x-axis. That means that when I select a far object in the scene, it sometimes returns a near object. I don't say that there is any problem with this code, but It seems my overriden methods of my GameWindow aren't compatible with the code posted by JTalton. To find a solution, I will put the most important pieces of my GameWindow here:
-----------------------------------------
my apologies for my bad english.
The box said: "Requires Windows XP or better."
So i installed Linux.
Re: Selection/Picking
Could you please post the Zip sample code? | http://www.opentk.com/node/213?page=3 | CC-MAIN-2016-30 | refinedweb | 1,277 | 61.97 |
Q: HOW DO I... determine a file's size by using only ANSI functions?
A: Determining the size of a file is a function that is specific to each computer environment and, as such, wasn't addressed directly in the ANSI C standard. It is, however, possible to write such a function using only the tools ANSI has provided. C Snippet #1 shows how.
/* ** FLENGTH.C - a simple function using all ANSI-standard functions ** to determine the size of a file. ** ** Public domain by Bob Jarvis. */ #include <stdio.h> #include <io.h> long flength(char *fname) { FILE *fptr; long length = 0L; fptr = fopen(fname, "rb"); if(fptr != NULL) { fseek(fptr, 0L, SEEK_END); length = ftell(fptr); fclose(fptr); } return length; } #ifdef TEST void main(int argc, char *argv[]) { printf("Length of %s = %ld\n", argv[0], flength(argv[0])); } #endif /* TEST */
More C Snippets
- Determining a file's size
- Rounding floating-point values
- Sorting an array of strings
- Computing the wind chill factor
- Timers and default actions
- Copying overlapping strings
- Really random numbers | http://www.drdobbs.com/cpp/c-snippet-1/219100141 | CC-MAIN-2013-48 | refinedweb | 172 | 56.05 |
:Hi all, : :Since the old ndisulator code (which is in DragonFly) doesn't recognize :my wireless card I've taken a shot at converting the current ndisulator :code of FreeBSD to DragonFly. : :The code now correctly identifies my wireless card however the :win-driver want to map io space. In the function where ndisulator does :this it recursively iterate over all devices and resources to select the :correct device to map to. The driver crashes in this code. : :Attached is a minimal example of the code which crashes (it crashes in :the SLIST_FOREACH [macro]call) and a dump of the output just prior to :crashing and after the crash. : :Since I don't now much about hardware I'm not sure whether it is normal :that pci1 is a child of pci0, and I also would have thought the :resource_list pointer of pci1 (which it crashes on) would be more in :sequentual to the pointer of the resource_list pointers of agp0 and pcib1. : :Hope one of you can help. : :Martin I think its a bug in pci_get_resource_list(). struct resource_list * pci_get_resource_list (device_t dev, device_t child) { struct pci_devinfo * dinfo = device_get_ivars(child); struct resource_list * rl = &dinfo->resources; if (!rl) return (NULL); return (rl); } That should probably be: struct resource_list * pci_get_resource_list (device_t dev, device_t child) { struct pci_devinfo *dinfo = device_get_ivars(child); if (dinfo == NULL) return (NULL); return (&dinfo->resources); } Try that and see what you get. The PCI heirarchy is a real mess. The busses can hang off of each other, e.g. one pci bus can be a device on another pci bus. It all gets fed into the cpu somehow :-) -Matt | http://leaf.dragonflybsd.org/mailarchive/kernel/2007-05/msg00006.html | CC-MAIN-2014-10 | refinedweb | 268 | 69.72 |
Our Objective: To write a python script that turns any image to ASCII Art format.
Before heading onwards to the script, make sure to understand the basics of the project. I have explained everything in detail so that you don’t need to hop on to different sources. So let’s quickly dive into our tutorial:
What is ASCII Art?
You must have seen such an image somewhere! Right? This is ASCII Art. So now you can already relate better to what is ASCII Art. It stands for American Standard Code for Information Interchange.
ASCII/ Word/Keyboard/ Text-based art is a form of art that is made out of computer characters. It involves making text images with the combination of 95 printable characters defined by the ASCII Standard. The images produced can be simple emoticons but can even involve some great artwork.
Pre-requisites
- Make sure to make a new folder to store all the related files of this particular project. This way it’ll become easy to manage and locate.
- pywhatkit module: This module is responsible for converting any image to ASCII art in no time. Just install it like any other module, type in the following command in the terminal:
pip install pywhatkit
- An image which you wish to convert into ASCII art. Store the image within the same folder to ease out the location path.
Code to Convert Image to ASCII Art
- In your editor, set up a new project and then create a file with the name index.py.
- Now import the required module, i.e., pywhatkit. Make sure to install it beforehand using the above mentioned command. I have imported pywhatkit as pw, so in the later part of this program, I’ll just write pw when using pywhatkit.
import pywhatkit as pw
- Our program will now give us a message that shows us that the conversion is being done. The message reads “Converting your given image to ASCII Art”. You can always customise the message or even completely skip it.
print("Converting your given image to ASCII Art")
- Finally now lets call the method “image_to_ascii_art” from the module pywhatkit. It will convert the given image to its ASCII Art form within a few miliseconds. The converted image needs to get stored somewhere. So two parameters are given along with this method.
- The first parameter represents the source path or the path of the image that you want to convert.
- The second parameter depicts the name of the text file where the converted ASCII image gets stored.
pw.image_to_ascii_art('Capture.PNG', 'ASCII.txt')
Here the name of my source image is ‘Capture.PNG’ and the target text file name is ‘ASCII.txt’. It will automatically create a text file with the target name where the ASCII image will be stored.
- And lastly you can just print a message to show that the task has been completed.
print("Task Completed")
The converted image will always gets stored within the same folder. If you want to change your destination path, you can always set it in the target path like the following:
pw.image_to_ascii_art('Capture.PNG', 'D:/TECHBIT/ASCII.txt')
You must be wondering about the output of our script. So let’s have a look at that as well.
Taking User Input
Here we’ve asked the user to input the path of the image that he/she wants to convert and also the destination path where he wants to save his converted image.
import pywhatkit as pw print("Welcome! Convert any image to ASCII Art!") src_file = input("Enter your saved image path/name here: ") tar_file = input("Enter your output file name/path here: ") print("Converting your given image to ASCII Art") pw.image_to_ascii_art(src_file, tar_file) print("Task Completed")
So that’s how easy it was to turn an image into ASCII art form. A two lines of code is all it takes to accomplish the task. Pywhatkit library has many more fun methods one can try. I have already tried 2-3 methods from this library. One is converting text to Handwriting, and another is Automate WhatsApp using Pywhatkit, which are also really fun and interesting projects to try.
I have also created a video tutorial of the same, so don’t forget to check that out. Make sure to subscribe us on YouTube.
Please do share your valuable suggestions, I highly appreciate your honest feedback!
Do checkout my other blogposts as well:
If you like the content, please give us a like/follow on the following platforms: | https://techbit.in/programming/python-code-to-turn-any-image-to-ascii-art/ | CC-MAIN-2022-21 | refinedweb | 754 | 65.42 |
Table of Contents
booleanType and boolean Values
Object
String
finalVariables
The Java programming language is a statically typed language, which means that every variable and every expression has a type that is known at compile time.
The Java programming language is also a strongly typed language, because types limit the values that a variable (§4.12) can hold or that an expression can produce, limit the operations supported on those values, and determine the meaning of the operations. Strong static typing helps detect errors at compile time.
There are two kinds of types in the Java programming language: primitive types (§4.2) and reference types (§4.3). There are, correspondingly, two kinds of data values that can be stored in variables, passed as arguments, returned by methods, and operated on: primitive values and reference values.
Primitive values do not share state with other primitive values.
The numeric types are the integral types and the floating-point types.
The integral types are byte, short, int, and long, whose values are 8-bit, 16-bit, 32-bit and 64-bit signed two's-complement integers, respectively, and char, whose values are 16-bit unsigned integers representing UTF-16 code units (§3.1).
The floating-point types are float, whose values include the 32-bit IEEE 754 floating-point numbers, and double, whose values include the 64-bit IEEE 754 floating-point numbers.
The boolean type has exactly two values: true and false.
The values of the integral types are integers in the following ranges:
For byte, from -128 to 127, inclusive
For short, from -32768 to 32767, inclusive
For int, from -2147483648 to 2147483647, inclusive
For long, from -9223372036854775808 to 9223372036854775807, inclusive
For char, from '\u0000' to '\uffff' inclusive, that is, from 0 to 65535
The Java programming language provides a number of operators that act on integral values:
The comparison operators, which result in a value of type boolean:
The relational operators <, <=, >, and >= (§15.20.1)
The equality operators == and != (§15.21.1)
The numerical operators, which result in a value of type int or long:
The unary plus and minus operators + and - (§15.15.3, §15.15.4)
The multiplicative operators *, /, and % (§15.17)
The additive operators + and - (§15.18)
The increment operator ++, both prefix (§15.15.1) and postfix (§15.14.2)
The decrement operator --, both prefix (§15.15.2) and postfix (§15.14.3)
The signed and unsigned shift operators <<, >>, and >>> (§15.19)
The bitwise complement operator ~ (§15.15.5)
The integer bitwise operators &, ^, and | (§15.22.1)
The conditional operator ? : (§15.25)
The cast operator (§15.16), which can convert from an integral value to a value of any specified numeric type
The string concatenation operator + (§15.18.1), which, when given a String operand and an integral operand, will convert the integral operand to a String representing its value in decimal form, and then produce a newly created String that is the concatenation of the two strings
Other useful constructors, methods, and constants are predefined in the classes Integer, Long, and Character.
If an integer operator other than a shift operator has at least one operand of type long, then the operation is carried out using 64-bit precision, and the result of the numerical operator is of type long. If the other operand is not long, it is first widened (§5.1.5) to type long by numeric promotion (§5.6).
Otherwise, the operation is carried out using 32-bit precision, and the result of the numerical operator is of type int. If either operand is not an int, it is first widened to type int by numeric promotion.
Any value of any integral type may be cast to or from any numeric type. There are no casts between integral types and the type boolean.
See §4.2.5 for an idiom to convert integer expressions to boolean.
The integer operators do not indicate overflow or underflow in any way.
An integer operator can throw an exception (§11 (Exceptions)) for the following reasons:
Any integer operator can throw a NullPointerException if unboxing conversion (§5.1.8) of a null reference is required.
The integer divide operator / (§15.17.2) and the integer remainder operator % (§15.17.3) can throw an ArithmeticException if the right-hand operand is zero.
Example 4.2-1. Integer Operations
class Test {
    public static void main(String[] args) {
        int i = 1000000;
        System.out.println(i * i);
        long l = i;
        System.out.println(l * l);
        System.out.println(20296 / (l - i));
    }
}
This program produces the output:
-727379968
1000000000000
and then encounters an ArithmeticException in the division by l - i, because l - i is zero. The first multiplication is performed in 32-bit precision, whereas the second multiplication is a long multiplication. The value -727379968 is the decimal value of the low 32 bits of the mathematical result, 1000000000000, which is a value too large for type int.
The floating-point types are float and double, which are conceptually associated with the single-precision 32-bit and double-precision 64-bit format IEEE 754 values and operations as specified in IEEE Standard for Binary Floating-Point Arithmetic, ANSI/IEEE Standard 754-1985 (IEEE, New York).
The finite nonzero values of any floating-point value set can all be expressed in the form s · m · 2^(e - N + 1), where s is +1 or -1, m is a positive integer less than 2^N, and e is an integer between Emin = -(2^(K-1) - 2) and Emax = 2^(K-1) - 1, inclusive, where N and K are parameters that depend on the value set (summarized in Table 4.2.3-A).
Where one or both extended-exponent value sets are supported by an implementation, then for each supported extended-exponent value set there is a specific implementation-dependent constant K, whose value is constrained by Table 4.2.3-B; this value K in turn dictates the values of Emin and Emax.
IEEE 754 allows multiple distinct NaN values for each of its floating-point formats, but the Java SE platform treats NaN values of a given type as though collapsed into a single canonical value, and hence this specification normally refers to an arbitrary NaN as though to a canonical value.
However, version 1.3 of the Java SE platform introduced methods enabling a program to distinguish between NaN values: the Float.floatToRawIntBits and Double.doubleToRawLongBits methods.
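The value-set formula above can be checked directly for float, which uses the parameters N = 24 and K = 8. The sketch below is our own illustration (the class name is invented, not part of the specification); it verifies that the largest finite value and the smallest positive normal value predicted by the formula match the constants in the standard library:

```java
// A check of the value-set formula for the float value set (N = 24, K = 8).
class ValueSetCheck {
    public static void main(String[] args) {
        // With e = Emax = 2^(K-1) - 1 = 127 and the largest significand
        // m = 2^N - 1, the formula gives (2^24 - 1) * 2^(127 - 24 + 1),
        // i.e. (2 - 2^-23) * 2^127, which is exactly Float.MAX_VALUE.
        double largest = (2.0 - Math.pow(2, -23)) * Math.pow(2, 127);
        System.out.println(largest == Float.MAX_VALUE);            // true

        // With e = Emin = -(2^(K-1) - 2) = -126 and m = 2^(N-1), the
        // smallest positive normal value is 2^-126, i.e. Float.MIN_NORMAL.
        System.out.println(Math.pow(2, -126) == Float.MIN_NORMAL); // true
    }
}
```

Both products are exact in double arithmetic (scaling by a power of two loses no precision), so the equality tests are reliable.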
The numerical comparison operators <, <=, >, and >= return false if either or both operands are NaN (§15.20.1).
The equality operator == returns false if either operand is NaN. In particular, (x<y) == !(x>=y) will be false if x or y is NaN.
The inequality operator != returns true if either operand is NaN (§15.21.1). In particular, x!=x is true if and only if x is NaN.
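These NaN comparison rules can be observed in a short program; the sketch below is our own illustration, not part of the specification:

```java
// Demonstrates the comparison behavior of NaN operands.
class NaNDemo {
    public static void main(String[] args) {
        double nan = 0.0 / 0.0;                // a floating-point NaN
        System.out.println(nan < 1.0);         // false: all orderings with NaN are false
        System.out.println(nan >= 1.0);        // false: so (x<y) == !(x>=y) fails here
        System.out.println(nan == nan);        // false: == is false if either operand is NaN
        System.out.println(nan != nan);        // true: x != x holds exactly when x is NaN
        System.out.println(Double.isNaN(nan)); // true: the library test for NaN
    }
}
```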
The Java programming language provides a number of operators that act on floating-point values:
The comparison operators, which result in a value of type boolean:
The relational operators <, <=, >, and >= (§15.20.1)
The equality operators == and != (§15.21.1)
The numerical operators, which result in a value of type float or double:
The unary plus and minus operators + and - (§15.15.3, §15.15.4)
The multiplicative operators *, /, and % (§15.17)
The additive operators + and - (§15.18.2)
The increment operator ++, both prefix (§15.15.1) and postfix (§15.14.2)
The decrement operator --, both prefix (§15.15.2) and postfix (§15.14.3)
The conditional operator ? : (§15.25)
The cast operator (§15.16), which can convert from a floating-point value to a value of any specified numeric type
The string concatenation operator + (§15.18.1), which, when given a String operand and a floating-point operand, will convert the floating-point operand to a String representing its value in decimal form (without information loss), and then produce a newly created String by concatenating the two strings
Other useful constructors, methods, and constants are predefined in the classes Float, Double, and Math.
If at least one of the operands to a binary operator is of floating-point type, then the operation is a floating-point operation, even if the other is integral. If at least one of the operands to a numerical operator is of type double, the operation is carried out using 64-bit floating-point arithmetic, and the result of the numerical operator is a value of type double; if the other operand is not a double, it is first widened (§5.1.5) to type double by numeric promotion (§5.6). Otherwise, the operation is carried out using 32-bit floating-point arithmetic, and the result of the numerical operator is a value of type float.
Any value of a floating-point type may be cast to or from any numeric type. There are no casts between floating-point types and the type boolean.
See §4.2.5 for an idiom to convert floating-point expressions to boolean.
A floating-point operation that overflows produces a signed infinity.
A floating-point operation that underflows produces a denormalized value or a signed zero.
A floating-point operation that has no mathematically definite result produces NaN.
Example 4.2.4-1. Floating-point Operations
This program produces the output:
overflow
This example demonstrates, among other things, that gradual underflow can result in a gradual loss of precision.
The results when i is 0 involve division by zero, so that z becomes positive infinity, and z * 0 is NaN, which is not equal to 1.0.
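The overflow, underflow, and NaN rules stated above can also be exercised in isolation; the sketch below is our own small illustration, not the specification's example program:

```java
// Illustrates how floating-point operations produce infinities,
// denormalized values, signed zeros, and NaN.
class FloatRules {
    public static void main(String[] args) {
        double big = Double.MAX_VALUE;
        System.out.println(big * 2);     // Infinity: overflow produces a signed infinity
        System.out.println(-big * 2);    // -Infinity

        double tiny = Double.MIN_NORMAL;
        System.out.println(tiny / 2);    // a denormalized value: gradual underflow
        System.out.println(Double.MIN_VALUE / 2); // 0.0: underflow rounds to signed zero

        System.out.println(0.0 / 0.0);   // NaN: no mathematically definite result
        System.out.println(Double.POSITIVE_INFINITY
                         - Double.POSITIVE_INFINITY);           // NaN
    }
}
```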
The boolean type represents a logical quantity with two possible values, indicated by the literals true and false (§3.10.3).
The boolean operators are:
The relational operators == and != (§15.21.2)
The logical complement operator ! (§15.15.6)
The logical operators &, ^, and | (§15.22.2)
The conditional-and and conditional-or operators && (§15.23) and || (§15.24)
The conditional operator ? : (§15.25)
The string concatenation operator + (§15.18.1), which, when given a String operand and a boolean operand, will convert the boolean operand to a String (either "true" or "false"), and then produce a newly created String that is the concatenation of the two strings
Boolean expressions determine the control flow in several kinds of statements:
The if statement (§14.9)
The while statement (§14.12)
The do statement (§14.13)
The for statement (§14.14)
A boolean expression also determines which subexpression is evaluated in the conditional ? : operator (§15.25).
Only boolean and Boolean expressions can be used in control flow statements and as the first operand of the conditional operator ? :.
An integer or floating-point expression x can be converted to a boolean value, following the C language convention that any nonzero value is true, by the expression x != 0. An object reference obj can be converted to a boolean value, following the C language convention that any reference other than null is true, by the expression obj != null.
The sample code:

class Point { int[] metrics; }
interface Move { void move(int deltax, int deltay); }

declares a class type Point, an interface type Move, and uses the array type int[] (an array of int) to declare the field metrics of the class Point.
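The conversion idioms can be seen in a short program; the sketch below is our own illustration (the class name is invented), not part of the specification:

```java
// Demonstrates the C-style "truthiness" idioms for numbers and references.
class BooleanIdioms {
    public static void main(String[] args) {
        int x = 42;
        boolean nonzero = x != 0;       // nonzero number converts to true
        System.out.println(nonzero);    // true

        Object obj = null;
        boolean present = obj != null;  // non-null reference converts to true
        System.out.println(present);    // false

        // Going the other way, a boolean can be mapped to an int explicitly:
        int flag = nonzero ? 1 : 0;
        System.out.println(flag);       // 1
    }
}
```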
An object is a class instance or an array.
The reference values (often just references) are pointers to these objects, and a special null reference, which refers to no object.
A class instance is explicitly created by a class instance creation expression (§15.9).
An array is explicitly created by an array creation expression (§15.10.1).
A new class instance is implicitly created when the string concatenation operator + (§15.18.1) is used in a non-constant expression (§15.28), resulting in a new object of type String (§4.3.3).
New objects of the types Boolean, Byte, Short, Character, Integer, Long, Float, and Double may be implicitly created by boxing conversion (§5.1.7).
Example 4.3.1-1. Object Creation

class Point {
    int x, y;
    Point() { System.out.println("default"); }
    Point(int x, int y) { this.x = x; this.y = y; }

    /* A Point instance is explicitly created at class initialization time: */
    static Point origin = new Point(0,0);

    /* A String can be implicitly created by a + operator: */
    public String toString() { return "(" + x + "," + y + ")"; }
}

class Test {
    public static void main(String[] args) {
        /* A Point is explicitly created using newInstance: */
        Point p = null;
        try {
            p = (Point)Class.forName("Point").newInstance();
        } catch (Exception e) {
            System.out.println(e);
        }

        /* An array of Point is explicitly created
           using an array creation expression: */
        Point a[] = { new Point(0,0), new Point(1,1) };

        /* Strings are implicitly created by + operators: */
        System.out.println("p: " + p);
        System.out.println("a: { " + a[0] + ", " + a[1] + " }");

        /* An array of String is explicitly created
           using an array creation expression: */
        String sa[] = new String[2];
        sa[0] = "he";
        sa[1] = "llo";
        System.out.println(sa[0] + sa[1]);
    }
}
This program produces the output:
default
p: (0,0)
a: { (0,0), (1,1) }
hello
The operators on references to objects are:
Field access, using either a qualified name (§6.6) or a field access expression (§15.11)
Method invocation (§15.12)
The cast operator (§5.5, §15.16)
The string concatenation operator + (§15.18.1), which, when given a String operand and a reference, will convert the reference to a String by invoking the toString method of the referenced object (using "null" if either the reference or the result of toString is a null reference), and then will produce a newly created String that is the concatenation of the two strings
The instanceof operator (§15.20.2)
The reference equality operators == and != (§15.21.3)
The conditional operator ? : (§15.25)
There may be many references to the same object. Most objects have state, stored in the fields of objects that are instances of classes or in the variables that are the components of an array object. If two variables contain references to the same object, the state of the object can be modified using one variable's reference to the object, and then the altered state can be observed through the reference in the other variable.
Example 4.3.1-2. Primitive and Reference Identity

class Value { int val; }

class Test {
    public static void main(String[] args) {
        int i1 = 3;
        int i2 = i1;
        i2 = 4;
        System.out.print("i1==" + i1);
        System.out.println(" but i2==" + i2);
        Value v1 = new Value();
        v1.val = 5;
        Value v2 = v1;
        v2.val = 6;
        System.out.print("v1.val==" + v1.val);
        System.out.println(" and v2.val==" + v2.val);
    }
}
This program produces the output:
i1==3 but i2==4
v1.val==6 and v2.val==6

because v1.val and v2.val reference the same instance variable (§4.12.3) in the one Value object created by the only new expression, while i1 and i2 are different variables.
Each object is associated with a monitor (§17.1), which is used by synchronized methods (§8.4.3) and the synchronized statement (§14.19) to provide control over concurrent access to state by multiple threads (§17 (Threads and Locks)).
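A small sketch of our own (the class name is invented, not part of the specification) shows both forms of monitor use guarding a shared counter:

```java
// Two threads update a counter; the monitor of 'd' serializes the updates.
class MonitorDemo {
    private int count = 0;

    // A synchronized instance method locks the monitor of 'this'.
    public synchronized void increment() { count++; }

    // A synchronized statement locks the same monitor explicitly.
    public int get() { synchronized (this) { return count; } }

    public static void main(String[] args) throws InterruptedException {
        MonitorDemo d = new MonitorDemo();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 10000; i++) d.increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 10000; i++) d.increment(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(d.get());   // 20000: no updates are lost
    }
}
```

Without the synchronized keyword the two read-modify-write sequences could interleave and lose updates.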
The class Object is a superclass (§8.1.4) of all other classes.
All class and array types inherit (§8.4.8) the methods of class Object, which are summarized as follows:
The method clone is used to make a duplicate of an object.
The method equals defines a notion of object equality, which is based on value, not reference, comparison.
The method finalize is run just before an object is destroyed (§12.6).
The method getClass returns the Class object that represents the class of the object.
A Class object exists for each reference type. It can be used, for example, to discover the fully qualified name of a class, its members, its immediate superclass, and any interfaces that it implements.
The type of a method invocation expression of getClass is Class<? extends |T|>, where T is the class or interface that was searched for getClass (§15.12.1) and |T| denotes the erasure of T (§4.6).
A class method that is declared synchronized (§8.4.3.6) synchronizes on the monitor associated with the Class object of the class.
The method hashCode is very useful, together with the method equals, in hashtables such as java.util.HashMap.
The methods wait, notify, and notifyAll are used in concurrent programming using threads (§17.2).
The method toString returns a String representation of the object.
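Several of these inherited methods can be exercised on ordinary instances; the sketch below is our own illustration, not part of the specification:

```java
// Exercises getClass, equals, hashCode, and toString on plain objects.
class ObjectMethods {
    public static void main(String[] args) {
        String s = "hello";

        // getClass returns the run-time Class object of the instance.
        Class<? extends String> c = s.getClass();
        System.out.println(c.getName());                  // java.lang.String

        // equals is value comparison for String; == is reference comparison.
        String t = new String("hello");
        System.out.println(s.equals(t));                  // true
        System.out.println(s == t);                       // false

        // Equal objects must have equal hash codes.
        System.out.println(s.hashCode() == t.hashCode()); // true

        // The default toString includes the class name.
        System.out.println(new Object().toString().startsWith("java.lang.Object@"));
    }
}
```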
Instances of class String represent sequences of Unicode code points.
A String object has a constant (unchanging) value.
String literals (§3.10.5) are references to instances of class String.
The string concatenation operator + (§15.18.1) implicitly creates a new String object when the result is not a constant expression (§15.28).
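The last two points are visible with reference comparisons; the sketch below is our own illustration (the class name is invented), not part of the specification:

```java
// Literals are shared references; non-constant concatenation creates a
// fresh String object at run time.
class StringIdentity {
    public static void main(String[] args) {
        String a = "hel" + "lo";   // constant expression: folded at compile time
        String b = "hello";        // literal: same interned instance as a
        System.out.println(a == b);       // true

        String prefix = "hel";     // a variable makes the concatenation non-constant
        String c = prefix + "lo";  // evaluated at run time: a newly created String
        System.out.println(c == b);       // false: a distinct object
        System.out.println(c.equals(b));  // true: same character sequence
    }
}
```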
Two reference types are the same compile-time type if they have the same binary name (§13.1) and their type arguments, if any, are the same, applying this definition recursively.
When two reference types are the same, they are sometimes said to be the same class or the same interface.
Two reference types are the same run-time type if:
They are both class or both interface types, are defined by the same class loader, and have the same binary name (§13.1), in which case they are sometimes said to be the same run-time class or the same run-time interface.
They are both array types, and their component types are the same run-time type (§10 (Arrays)).
The scope of a type variable declared as a type parameter is specified in §6.3.
Every type variable declared as a type parameter has a bound. If no bound is declared for a type variable, Object is assumed. If a bound is declared, it consists of either:
a single type variable T, or
a class or interface type T possibly followed by interface types I1 & ... & In.
It is a compile-time error if any of the types I1 ... In is a class type or type variable.
The erasures (§4.6) of all constituent types of a bound must be pairwise different, or a compile-time error occurs.
A type variable must not at the same time be a subtype of two interface types which are different parameterizations of the same generic interface, or a compile-time error occurs.
The order of types in a bound is only significant in that the erasure of a type variable is determined by the first type in its bound, and that a class type or type variable may only appear in the first position.
The members of a type variable X with bound T & I1 & ... & In are the members of the intersection type (§4.9) T & I1 & ... & In appearing at the point where the type variable is declared.
Example 4.4-1. Members of a Type Variable
package TypeVarMembers;

class C {
    public    void mCPublic()    {}
    protected void mCProtected() {}
              void mCPackage()   {}
    private   void mCPrivate()   {}
}

interface I {
    void mI();
}

class CT extends C implements I {
    public void mI() {}
}

class Test {
    <T extends C & I> void test(T t) {
        t.mI();          // OK
        t.mCPublic();    // OK
        t.mCProtected(); // OK
        t.mCPackage();   // OK
        t.mCPrivate();   // Compile-time error
    }
}
The type variable T has the same members as the intersection type C & I, which in turn has the same members as the empty class CT, defined in the same scope with equivalent supertypes. The members of an interface are always public, and therefore always inherited (unless overridden). Hence mI is a member of CT and of T. Among the members of C, all but mCPrivate are inherited by CT, and are therefore members of both CT and T.

If C had been declared in a different package than T, then the call to mCPackage would give rise to a compile-time error, as that member would not be accessible at the point where T is declared.
A parameterized type may be an invocation of a generic class or interface which is nested. For example, if a non-generic class C has a generic member class D<T>, then C.D<Object> is a parameterized type. And if a generic class C<T> has a non-generic member class D, then the member type C<String>.D is a parameterized type, even though the class D is not generic.
Wildcards may be given explicit bounds, just like regular type variable declarations. An upper bound is signified by the following syntax, where B is the bound:

    ? extends B

Unlike ordinary type variables declared in a method signature, no type inference is required when using a wildcard. Consequently, it is permissible to declare lower bounds on a wildcard, using the following syntax, where B is a lower bound:

    ? super B
The wildcard ? extends Object is equivalent to the unbounded wildcard ?.
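As a sketch of how these bounds are used in practice (the class and method names below are illustrative, not from the specification), upper and lower bounds commonly appear together in the "producer-extends, consumer-super" idiom:

```java
import java.util.ArrayList;
import java.util.List;

class CopyDemo {
    // src produces elements of (at least) type T: use "? extends T".
    // dst consumes elements of type T: use "? super T".
    static <T> void copy(List<? super T> dst, List<? extends T> src) {
        for (T t : src) {
            dst.add(t);
        }
    }

    public static void main(String[] args) {
        List<Integer> ints = new ArrayList<>(List.of(1, 2, 3));
        List<Number> nums = new ArrayList<>();
        copy(nums, ints);          // T inferred as Integer; Number is a supertype
        System.out.println(nums);  // prints [1, 2, 3]
    }
}
```

The lower bound lets the destination be a list of any supertype of the element type, exactly as in the Reference constructor shown later in this section.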
Two type arguments are provably distinct if one of the following is true:

- Neither argument is a type variable or wildcard, and the two arguments are not the same type.
- One type argument is a type variable or wildcard, with an upper bound (from capture conversion (§5.1.10), if necessary) of S; the other type argument T is not a type variable or wildcard; and neither |S| <: |T| nor |T| <: |S| (§4.8, §4.10).
- Each type argument is a type variable or wildcard, with upper bounds (from capture conversion, if necessary) of S and T; and neither |S| <: |T| nor |T| <: |S|.
A type argument T1 is said to contain another type argument T2, written T2 <= T1, if the set of types denoted by T2 is provably a subset of the set of types denoted by T1 under the reflexive and transitive closure of the following rules (where <: denotes subtyping (§4.10)):

- ? extends T <= ? extends S if T <: S
- ? super T <= ? super S if S <: T
- ? super T <= ? extends Object
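A hedged illustration of how containment surfaces in ordinary assignments (variable and class names are my own): since Integer <: Number, a List<Integer> value may be used where a List<? extends Number> is expected.

```java
import java.util.List;

class ContainmentDemo {
    static double sum(List<? extends Number> ns) {
        double total = 0;
        for (Number n : ns) {
            total += n.doubleValue();
        }
        return total;
    }

    public static void main(String[] args) {
        List<Integer> ints = List.of(1, 2, 3);
        // Integer <: Number, so ? extends Integer <= ? extends Number,
        // and the widening reference assignment below is permitted:
        List<? extends Number> nums = ints;
        System.out.println(sum(nums));   // prints 6.0
    }
}
```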
The relationship of wildcards to established type theory is an interesting one, which we briefly allude to here. Wildcards are a restricted form of existential types. Given a generic type declaration G<T extends B>, G<?> is roughly analogous to Some X <: B. G<X>.

Historically, wildcards are a direct descendant of the work by Atsushi Igarashi and Mirko Viroli.

Wildcards differ in certain details from the constructs described in the aforementioned paper, in particular in the use of capture conversion (§5.1.10) rather than the close operation described there.
Example 4.5.1-1. Unbounded Wildcards
import java.util.Collection;
import java.util.ArrayList;

class Test {
    static void printCollection(Collection<?> c) {  // a wildcard collection
        for (Object o : c) {
            System.out.println(o);
        }
    }

    public static void main(String[] args) {
        Collection<String> cs = new ArrayList<String>();
        cs.add("hello");
        cs.add("world");
        printCollection(cs);
    }
}
Note that using
Collection<Object> as the type of the
incoming parameter,
c, would not be nearly as
useful; the method could only be used with an argument expression that
had type
Collection<Object>, which would be
quite rare. In contrast, the use of an unbounded wildcard allows any
kind of collection to be passed as an argument.
Here is an example where the element type of an array is parameterized by a wildcard:
public Method getMethod(Class<?>[] parameterTypes) { ... }
Example 4.5.1-2. Bounded Wildcards

The constructor of java.lang.ref.Reference uses a lower-bounded wildcard:
Reference(T referent, ReferenceQueue<? super T> queue)
Here, the referent can be inserted into any queue
whose element type is a supertype of the type
T of
the referent;
T is the lower bound for the
wildcard.
Let C be a generic class or interface declaration with type parameters A1,...,An. If the type of a member or constructor declared in C is T, then its type in the parameterized type C<T1,...,Tn> is T[A1:=T1,...,An:=Tn].
Let m be a member or constructor declaration in D, where D is a class extended by C or an interface implemented by C. Let D<U1,...,Uk> be the supertype of C<T1,...,Tn> that corresponds to D. Then the type of m in C<T1,...,Tn> is the type of m in D<U1,...,Uk>.
Let D be a (possibly generic) class or interface declaration in C. Then the type of D in C<T1,...,Tn> is D where, if D is generic, all type arguments are unbounded wildcards.
This is of no consequence, as it is impossible to access a member of a parameterized type without performing capture conversion (§5.1.10).
Type erasure is a mapping from types (possibly including parameterized types and type variables) to types (that are never parameterized types or type variables). We write |T| for the erasure of type T. The erasure mapping is defined as follows:
- The erasure of a parameterized type (§4.5) G<T1,...,Tn> is |G|.
- The erasure of a nested type T.C is |T|.C.
- The erasure of an array type T[] is |T|[].
- The erasure of a type variable (§4.4) is the erasure of its leftmost bound.
- The erasure of every other type is the type itself.
Type erasure also maps the signature (§8.4.2) of a constructor or method to a signature that has no parameterized types or type variables. The erasure of a constructor or method signature s is a signature consisting of the same name as s and the erasures of all the formal parameter types given in s.
The return type of a method (§8.4.5) and the type parameters of a generic method or constructor (§8.4.4, §8.8.4) also undergo erasure if the method or constructor's signature is erased.
The erasure of the signature of a generic method has no type parameters.
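A small illustration of erasure at run time (the class name is illustrative): because type arguments are erased, all parameterizations of a generic class share a single Class object.

```java
import java.util.ArrayList;

class ErasureDemo {
    static boolean sameRuntimeClass() {
        ArrayList<String> strings = new ArrayList<>();
        ArrayList<Integer> ints = new ArrayList<>();
        // |ArrayList<String>| and |ArrayList<Integer>| are both ArrayList,
        // so both objects have the same run-time class.
        return strings.getClass() == ints.getClass();
    }

    public static void main(String[] args) {
        System.out.println(sameRuntimeClass());   // prints true
    }
}
```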
Because some type information is erased during compilation, not all types are available at run time. Types that are completely available at run time are known as reifiable types.
A type is reifiable if and only if one of the following holds:

- It refers to a non-generic class or interface type declaration.
- It is a parameterized type in which all type arguments are unbounded wildcards (§4.5.1).
- It is a primitive type (§4.2).
- It is an array type (§10.1) whose element type is reifiable.
- It is a nested type where, for each type T separated by a ".", T itself is reifiable.
For example, if a generic class X<T> has a generic member class Y<U>, then the type X<?>.Y<?> is reifiable because X<?> is reifiable and Y<?> is reifiable. The type X<?>.Y<Object> is not reifiable because Y<Object> is not reifiable.
An intersection type is not reifiable.
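One practical consequence, sketched with illustrative names: the instanceof operator requires a reifiable type, so a parameterized type may be tested only in its unbounded-wildcard form.

```java
import java.util.ArrayList;
import java.util.List;

class ReifiableDemo {
    static boolean isList(Object o) {
        // List<?> is reifiable (all type arguments are unbounded wildcards),
        // so it may be used with instanceof.
        return o instanceof List<?>;
        // return o instanceof List<String>;  // compile-time error: not reifiable
    }

    public static void main(String[] args) {
        System.out.println(isList(new ArrayList<String>()));  // prints true
        System.out.println(isList("not a list"));             // prints false
    }
}
```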
The decision not to make all generic types reifiable is one of the most crucial, and controversial design decisions involving the type system of the Java programming language.
Ultimately, the most important motivation for this decision is compatibility with existing code. In a naive sense, the addition of new constructs such as generics has no implications for pre-existing code. In practice, however, real programs are composed of several compilation units, some of which are provided by the Java SE platform (such as elements of java.lang or java.util). The minimum requirement, then, is platform compatibility - that any program written for the prior version of the Java SE platform continues to function unchanged in the new version.
One way to provide platform compatibility is to leave existing platform functionality unchanged, only adding new functionality. For example, rather than modify the existing Collections hierarchy in java.util, one might introduce a new library utilizing generics. Under such a scheme, however, clients of a library cannot migrate to its generified version until the supplier's library is updated, and if two modules are mutually dependent, the changes must be made simultaneously.

Clearly, platform compatibility, as outlined above, does not provide a realistic path for adoption of a pervasive new feature such as generics.
To facilitate interfacing with non-generic legacy code, it is possible to use as a type the erasure (§4.6) of a parameterized type (§4.5) or the erasure of an array type (§10.1) whose element type is a parameterized type. Such a type is called a raw type.
More precisely, a raw type is defined to be one of:
The reference type that is formed by taking the name of a generic type declaration without an accompanying type argument list.
An array type whose element type is a raw type.
A non-static member type of a raw type R that is not inherited from a superclass or superinterface of R.
A non-generic class or interface type is not a raw type.
To see why a non-static type member of a raw type is considered raw, consider the following example:
class Outer<T> {
    T t;
    class Inner {
        T setOuterT(T t1) {
            t = t1;
            return t;
        }
    }
}
The type of the member(s) of Inner depends on the type parameter of Outer. If Outer is raw, Inner must be treated as raw as well, as there is no valid binding for T.
Another implication of the rules above is that a generic inner class of a raw type can itself only be used as a raw type:
class Outer<T> {
    class Inner<S> {
        S s;
    }
}
It is not possible to access Inner as a partially raw type (a "rare" type):

    Outer.Inner<Double> x = null;  // illegal
    Double d = x.s;

because Outer itself is raw, hence so are all its inner classes including Inner, and so it is not possible to pass any type arguments to Inner.
The superclasses (respectively, superinterfaces) of a raw type are the erasures of the superclasses (superinterfaces) of any of the parameterizations of the generic type.
The type of a constructor (§8.8), instance method (§8.4, §9.4), or non-static field (§8.3) of a raw type C that is not inherited from its superclasses or superinterfaces is the raw type that corresponds to the erasure of its type in the generic declaration corresponding to C.

The type of a static method or static field of a raw type C is the same as its type in the generic declaration corresponding to C.

It is a compile-time error to pass type arguments to a non-static type member of a raw type that is not inherited from its superclasses or superinterfaces.
It is a compile-time error to attempt to use a type member of a parameterized type as a raw type.
This means that the ban on "rare" types extends to the case where the qualifying type is parameterized, but we attempt to use the inner class as a raw type:
Outer<Integer>.Inner x = null; // illegal
This is the opposite of the case discussed above. There is no practical justification for this half-baked type. In legacy code, no type arguments are used. In non-legacy code, we should use the generic types correctly and pass all the required type arguments.
The use of raw types is allowed only as a concession to compatibility of legacy code. The use of raw types in code written after the introduction of generics into the Java programming language is strongly discouraged. It is possible that future versions of the Java programming language will disallow the use of raw types.
To make sure that potential violations of the typing rules are always flagged, some accesses to members of a raw type will result in compile-time unchecked warnings. The rules for compile-time unchecked warnings when accessing members or constructors of raw types are as follows:
At an assignment to a field: if the type of the Primary in the field access expression (§15.11) is a raw type, then a compile-time unchecked warning occurs if erasure changes the field's type.
At an invocation of a method or constructor: if the type of the class or interface to search (§15.12.1) is a raw type, then a compile-time unchecked warning occurs if erasure changes any of the formal parameter types of the method or constructor.
No compile-time unchecked warning occurs for a method call when the formal parameter types do not change under erasure (even if the return type and/or throws clause changes), for reading from a field, or for a class instance creation of a raw type.
Note that the unchecked warnings above are distinct from the unchecked warnings possible from unchecked conversion (§5.1.9), casts (§5.5.2), method declarations (§8.4.1, §8.4.8.3, §8.4.8.4, §9.4.1.2), and variable arity method invocations (§15.12.4.2).
The warnings here cover the case where a legacy consumer uses a generified library. For example, the library declares a generic class Foo<T extends String> that has a field f of type Vector<T>, but the consumer assigns a vector of integers to e.f where e has the raw type Foo. The legacy consumer receives a warning because it may have caused heap pollution (§4.12.2) for generified consumers of the generified library.
(Note that the
legacy consumer can assign a
Vector<String>
from the library to its own
Vector variable without
receiving a warning. That is, the subtyping rules
(§4.10.2) of the Java programming language make it possible for
a variable of a raw type to be assigned a value of any of the type's
parameterized instances.)
The warnings from
unchecked conversion cover the dual case, where a generified consumer
uses a legacy library. For example, a method of the library has the
raw return type
Vector, but the consumer assigns
the result of the method invocation to a variable of
type
Vector<String>. This is unsafe, since
the raw vector might have had a different element type than
String,
but is still permitted using unchecked conversion in order to enable
interfacing with legacy code. The warning from unchecked conversion
indicates that the generified consumer may experience problems from
heap pollution at other points in the program.
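A minimal sketch of this dual case, assuming a hypothetical legacy method legacyList whose return type is the raw type List (all names here are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

class LegacyDemo {
    // Stand-in for a method compiled before generics: its return type is raw List.
    @SuppressWarnings({"rawtypes", "unchecked"})
    static List legacyList() {
        List l = new ArrayList();
        l.add("hello");
        return l;
    }

    @SuppressWarnings("unchecked")
    static String firstString() {
        // Unchecked conversion: the raw List converts to List<String> with a
        // warning (suppressed here); nothing checks the element type at run time.
        List<String> ls = legacyList();
        return ls.get(0);
    }

    public static void main(String[] args) {
        System.out.println(firstString());   // prints hello
    }
}
```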
Example 4.8-1. Raw Types
class Cell<E> {
    E value;
    Cell(E v)     { value = v; }
    E get()       { return value; }
    void set(E v) { value = v; }

    public static void main(String[] args) {
        Cell x = new Cell<String>("abc");
        System.out.println(x.value);  // OK, has type Object
        System.out.println(x.get());  // OK, has type Object
        x.set("def");                 // unchecked warning
    }
}
Example 4.8-2. Raw Types and Inheritance

import java.util.*;

class NonGeneric {
    Collection<Number> myNumbers() { return null; }
}

abstract class RawMembers<T> extends NonGeneric
                             implements Collection<String> {
    static Collection<NonGeneric> cng =
        new ArrayList<NonGeneric>();

    public static void main(String[] args) {
        RawMembers rw = null;
        Collection<Number> cn = rw.myNumbers();  // OK
        Iterator<String> is = rw.iterator();     // Unchecked warning
        Collection<NonGeneric> cnn = rw.cng;     // OK, static member
    }
}

In this program (which is not meant to be run), the raw type RawMembers inherits iterator() from Collection, the erasure of Collection<String>, so the call rw.iterator() has the raw return type Iterator and gives rise to an unchecked warning. In contrast, the static member cng retains its full parameterized type even when accessed through the raw type, and the method myNumbers, inherited from the non-generic superclass NonGeneric, keeps its declared return type Collection<Number>.
Raw types are closely related to wildcards; both are based on existential types. Historically, raw types preceded wildcards; they were first described in the paper Making the Future Safe for the Past: Adding Genericity to the Java Programming Language, in Proceedings of the ACM Conference on Object-Oriented Programming, Systems, Languages and Applications (OOPSLA 98), October 1998.
An intersection type takes the form T1 & ... & Tn (n > 0), where Ti (1 ≤ i ≤ n) are types. Among the class types listed in the bound there must exist a class Ck such that Ck <: Ci for any i (1 ≤ i ≤ n), or a compile-time error occurs.

For 1 ≤ j ≤ n, if Tj is a type variable, then let Tj' be an interface whose members are the same as the public members of Tj; otherwise, if Tj is an interface, then let Tj' be Tj. (Other kinds of types, such as array types, cannot appear in an intersection type.)
The subtype and supertype relations are binary relations on types.
The supertypes of a type are obtained by reflexive and transitive closure over the direct supertype relation, written S >1 T, which is defined by rules given later in this section. We write S :> T to indicate that the supertype relation holds between S and T.

S is a proper supertype of T, written S > T, if S :> T and S ≠ T.
T is a direct subtype of S, written T <1 S, if S >1 T.
Subtyping does not extend through parameterized types: T <: S does not imply that C<T> <: C<S>.
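A short illustrative sketch (names are my own) of this invariance, contrasted with array covariance and the wildcard form that recovers a useful relationship:

```java
import java.util.ArrayList;
import java.util.List;

class InvarianceDemo {
    static int wildcardSize() {
        // List<Number> ln = new ArrayList<Integer>();  // compile-time error:
        // Integer <: Number does NOT imply List<Integer> <: List<Number>.

        // A bounded wildcard recovers a relationship:
        // List<Integer> <: List<? extends Number>.
        List<? extends Number> ln = new ArrayList<Integer>(List.of(1, 2));
        return ln.size();
    }

    public static void main(String[] args) {
        System.out.println(wildcardSize());   // prints 2

        // Arrays, by contrast, are covariant: Integer[] <: Number[].
        Number[] na = new Integer[] { 1, 2 };
        System.out.println(na.length);        // prints 2
    }
}
```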
The following rules define the direct supertype relation among the primitive types:
The direct supertypes of an intersection type T1 & ... & Tn are Ti (1 ≤ i ≤ n).
The direct supertypes of a type variable are the types listed in its bound.
A type variable is a direct supertype of its lower bound.
The direct supertypes of the null type are all reference types other than the null type itself.
The direct supertype relation among array types is defined by additional rules (§4.10.3).
Example 4.11-1. Usage of a Type

import java.util.Random;
import java.util.Collection;
import java.util.ArrayList;

class MiscMath<T extends Number> {
    int divisor;

    MiscMath(int divisor) {
        this.divisor = divisor;
    }

    float ratio(long l) {
        try {
            l /= divisor;
        } catch (Exception e) {
            if (e instanceof ArithmeticException)
                l = Long.MAX_VALUE;
            else
                l = 0;
        }
        return (float)l;
    }

    double gausser() {
        Random r = new Random();
        double[] val = new double[2];
        val[0] = r.nextGaussian();
        val[1] = r.nextGaussian();
        return (val[0] + val[1]) / 2;
    }

    Collection<Number> fromArray(Number[] na) {
        Collection<Number> cn = new ArrayList<Number>();
        for (Number n : na) cn.add(n);
        return cn;
    }

    <S> void loop(S s) { this.<S>loop(s); }
}
In this example, types are used in declarations of the following:

- Imported types (§7.5); here the type Random, imported from the type java.util.Random of the package java.util, is declared
- Fields, which are the class variables and instance variables of classes (§8.3), and constants of interfaces (§9.3); here the field divisor in the class MiscMath is declared to be of type int
- Method parameters (§8.4.1); here the parameter l of the method ratio is declared to be of type long
- Method results (§8.4); here the result of the method ratio is declared to be of type float, and the result of the method gausser is declared to be of type double
- Constructor parameters (§8.8.1); here the parameter of the constructor for MiscMath is declared to be of type int
- Local variables (§14.4, §14.14); the local variables r and val of the method gausser are declared to be of types Random and double[] (array of double)
- Exception parameters (§14.20); here the exception parameter e of the catch clause is declared to be of type Exception
- Type parameters (§4.4); here the type parameter of MiscMath is a type variable T with the type Number as its declared bound
- In any declaration that uses a parameterized type; here the type Number is used as a type argument (§4.5.1) in the parameterized type Collection<Number>.
and in expressions of the following kinds:

- Class instance creations (§15.9); here a local variable r of method gausser is initialized by a class instance creation expression that uses the type Random
- Generic class (§8.1.2) instance creations (§15.9); here Number is used as a type argument in the expression new ArrayList<Number>()
- Array creations (§15.10.1); here the local variable val of method gausser is initialized by an array creation expression that creates an array of double with size 2
- Generic method (§8.4.4) or constructor (§8.8.4) invocations (§15.12); here the method loop calls itself with an explicit type argument S
- Casts (§15.16); here the return statement of the method ratio uses the float type in a cast
- The instanceof operator (§15.20.2); here the instanceof operator tests whether e is assignment-compatible with the type ArithmeticException
A variable is a storage location and has an associated type, sometimes called its compile-time type, that is either a primitive type (§4.2) or a reference type (§4.3).
A variable's value is changed by an assignment (§15.26) or by a prefix or postfix ++ (increment) or -- (decrement) operator (§15.14.2, §15.15.1, §15.15.2). Compatibility of the value of a variable with its type is guaranteed by the design of the Java programming language, as long as a program does not give rise to compile-time unchecked warnings (§4.12.2). Default values (§4.12.5) are compatible and all assignments to a variable are checked for assignment compatibility (§5.2), usually at compile time, but, in a single case involving arrays, a run-time check is made (§10.5).
A variable of a primitive type always holds a primitive value of that exact primitive type.
Note that a variable is not guaranteed to always refer to a subtype of its declared type, but only to an object whose class is a subclass or subinterface of the declared type. This is due to the possibility of heap pollution discussed below.
A variable of type Object[] can hold a reference to an array of any reference type. A variable of type Object can hold a null reference or a reference to any object, whether it is an instance of a class or an array.
It is possible that a variable of a parameterized type will refer to an object that is not of that parameterized type. This situation is known as heap pollution.
Heap pollution can only occur if the program performed some operation involving a raw type that would give rise to a compile-time unchecked warning (§4.8, §5.1.9, §5.5.2, §8.4.1, §8.4.8.3, §8.4.8.4, §9.4.1.2, §15.12.4.2), or if the program aliases an array variable of non-reifiable element type through an array variable of a supertype which is either raw or non-generic.
For example, the code:
List l = new ArrayList<Number>();
List<String> ls = l;  // Unchecked warning
gives rise to a compile-time unchecked warning, because it is not possible to ascertain, either at compile time (within the limits of the compile-time type checking rules) or at run time, whether the variable l does indeed refer to a List<String>, since at run time objects carry no information about the type arguments used to create them.
In a simple example as given above, it may appear
that it should be straightforward to identify the situation at compile
time and give an error. However, in the general (and typical) case,
the value of the variable
l may be the result of an
invocation of a separately compiled method, or its value may depend
upon arbitrary control flow. The code above is therefore very
atypical, and indeed very bad style.
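As a concrete sketch of how such a warning can foreshadow heap pollution (class and method names are illustrative), the raw alias below pollutes a List<Integer>, and the failure only surfaces at a later, apparently innocent read:

```java
import java.util.ArrayList;
import java.util.List;

class PollutionDemo {
    @SuppressWarnings({"rawtypes", "unchecked"})
    static boolean pollute() {
        List<Integer> li = new ArrayList<>();
        List raw = li;           // legal: raw-type alias of the same list
        raw.add("oops");         // unchecked warning; heap pollution happens here
        try {
            Integer i = li.get(0);  // compiler-inserted cast to Integer fails
            return i == null;       // not reached
        } catch (ClassCastException e) {
            return true;            // pollution is detected only at this later read
        }
    }

    public static void main(String[] args) {
        System.out.println(pollute());   // prints true
    }
}
```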
Furthermore, the fact that
Object
[] is a
supertype of all array types means that unsafe aliasing can occur
which leads to heap pollution. For example, the following code
compiles because it is statically type-correct:
static void m(List<String>... stringLists) {
    Object[] array = stringLists;
    List<Integer> tmpList = Arrays.asList(42);
    array[0] = tmpList;                // (1)
    String s = stringLists[0].get(0);  // (2)
}
Heap pollution occurs at (1) because a component in
the
stringLists array that should refer to a
List<String> now refers to
a
List<Integer>. There is no way to detect
this pollution in the presence of both a universal supertype
(
Object
[]) and a non-reifiable type (the declared type of
the formal
parameter,
List<String>
[]). No
unchecked warning is justified at (1); nevertheless, at run time, a
ClassCastException will occur at (2).
A compile-time unchecked warning will be given at any invocation of the method above because an invocation is considered by the Java programming language's static type system to create an array whose element type, List<String>, is non-reifiable (§15.12.4.2). If and only if the body of the method was type-safe with respect to the variable arity parameter, then the programmer could use the SafeVarargs annotation to silence warnings at invocations (§9.6.4.7).

Note also that the declared type of the array variable could be java.util.Collection[] - a raw element type - and the body of the method above would compile without warnings or errors and still cause heap pollution. And if the Java SE platform defined, say, Sequence as a non-generic supertype of List<T>, then using Sequence as the type of array would also cause heap pollution.
The variable will always refer to an object that is an instance of a class that represents the parameterized type.
The value of ls in the example above is always an instance of a class that provides a representation of a List.
Assignment from an expression of a raw type to a variable of a parameterized type should only be used when combining legacy code which does not make use of parameterized types with more modern code that does.
If no operation that requires a compile-time unchecked warning to be issued takes place, and no unsafe aliasing occurs of array variables with non-reifiable element types, then heap pollution cannot occur. Note that this does not imply that heap pollution only occurs if a compile-time unchecked warning actually occurred. It is possible to run a program where some of the binaries were produced by a compiler for an older version of the Java programming language, or from sources that explicitly suppressed unchecked warnings. This practice is unhealthy at best.
Conversely, it is possible that despite executing code that could (and perhaps did) give rise to a compile-time unchecked warning, no heap pollution takes place. Indeed, good programming practice requires that the programmer satisfy herself that despite any unchecked warning, the code is correct and heap pollution will not occur.
There are seven kinds of variables:
A class variable is a field declared using the keyword static within a class declaration (§8.3.1.1), or with or without the keyword static within an interface declaration (§9.3).
An instance variable is a field declared within a class declaration without using the keyword static (§8.3.1.1).
Array components are unnamed variables that are created and initialized to default values (§4.12.5) whenever a new object that is an array is created (§10 (Arrays), §15.10.2). The array components effectively cease to exist when the array is no longer referenced.
Method parameters (§8.4.1) name argument values passed to a method.
For every parameter declared in a method declaration, a new parameter variable is created each time that method is invoked (§15.12). The new variable is initialized with the corresponding argument value from the method invocation. The method parameter effectively ceases to exist when the execution of the body of the method is complete.
Constructor parameters (§8.8.1) name argument values passed to a constructor.
For every parameter declared in a constructor declaration, a new parameter variable is created each time a class instance creation expression (§15.9) or explicit constructor invocation (§8.8.7) invokes that constructor. The new variable is initialized with the corresponding argument value from the creation expression or constructor invocation. The constructor parameter effectively ceases to exist when the execution of the body of the constructor is complete.
An exception parameter is created each time
an exception is caught by a
catch clause of a
try statement
(§14.20).
The new variable is initialized with the actual object
associated with the exception (§11.3,
§14.18). The exception parameter
effectively ceases to exist when execution of the block
associated with the
catch clause is complete.
Local variables are declared by local variable declaration statements (§14.4).
Whenever the flow of control enters a block
(§14.2) or
for statement
(§14.14), a new variable is created for
each local variable declared in a local variable declaration
statement immediately contained within that block or
for
statement.
A local variable declaration statement may contain an expression which initializes the variable. The local variable with an initializing expression is not initialized, however, until the local variable declaration statement that declares it is executed. (The rules of definite assignment (§16 (Definite Assignment)) prevent the value of a local variable from being used before it has been initialized or otherwise assigned a value.) The local variable effectively ceases to exist when the execution of the block or for statement is complete.
Were it not for one exceptional situation, a
local variable could always be regarded as being created when
its local variable declaration statement is executed. The
exceptional situation involves the
switch statement
(§14.11), where it is possible for control
to enter a block but bypass execution of a local variable
declaration statement. Because of the restrictions imposed by
the rules of definite assignment (§16 (Definite Assignment)),
however, the local variable declared by such a bypassed local
variable declaration statement cannot be used before it has been
definitely assigned a value by an assignment expression
(§15.26).
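The bypassed-declaration situation can be sketched as follows (an illustrative example, not from the specification): the declaration of n is skipped entirely when control jumps to case 2, yet n may still be used there once it has been definitely assigned.

```java
class SwitchScopeDemo {
    static String demo(int k) {
        String result;
        switch (k) {
            case 1:
                int n = 10;  // this declaration is bypassed when k == 2
                result = "n=" + n;
                break;
            case 2:
                n = 20;      // legal: n is in scope, and this assignment
                             // definitely assigns it before the use below
                result = "n=" + n;
                break;
            default:
                result = "none";
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(demo(1));   // prints n=10
        System.out.println(demo(2));   // prints n=20
    }
}
```

Reading n in case 2 before the assignment would be a compile-time error, since n would not be definitely assigned there.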
Example 4.12.3-1. Different Kinds of Variables

class Point {
    static int numPoints;    // numPoints is a class variable
    int x, y;                // x and y are instance variables
    int[] w = new int[10];   // w[0] is an array component
    int setX(int x) {        // x is a method parameter
        int oldx = this.x;   // oldx is a local variable
        this.x = x;
        return oldx;
    }
}

A constant variable is a final variable of primitive type or type String that is initialized with a constant expression (§15.28). Whether a variable is a constant variable or not may have implications with respect to class initialization (§12.4.1), binary compatibility (§13.1, §13.4.9), and definite assignment (§16 (Definite Assignment)).
Three kinds of variable are implicitly declared final: a field of an interface (§9.3), a local variable declared as a resource of a try-with-resources statement (§14.20.3), and an exception parameter of a multi-catch clause (§14.20.1).

Example 4.12.4-1. Final Variables

In the program:

class Point {
    int x, y;
    int useCount;
    Point(int x, int y) { this.x = x; this.y = y; }
    static final Point origin = new Point(0, 0);
}
the class
Point declares a
final class variable
origin. The
origin variable holds a reference to an object that
is an instance of class
Point whose coordinates are
(0, 0). The value of the variable
Point.origin can
never change, so it always refers to the same
Point
object, the one created by its initializer. However, an operation on
this
Point object might change its state - for
example, modifying its
useCount or even,
misleadingly, its
x or
y
coordinate.
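A runnable sketch of this distinction (names are illustrative): the final variable can never be reassigned, but the object it refers to remains mutable.

```java
class FinalDemo {
    static class Point {
        int x, y;
        int useCount;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    static final Point ORIGIN = new Point(0, 0);

    public static void main(String[] args) {
        // ORIGIN = new Point(1, 1);  // compile-time error: cannot assign a final variable
        ORIGIN.useCount++;            // OK: the variable is final, the object is mutable
        System.out.println(ORIGIN.useCount);   // prints 1
    }
}
```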
Certain variables that are not declared final are instead considered effectively final; for example, a local variable whose declarator has an initializer (§14.4) is effectively final if it is never assigned again after its initialization.

Every variable in a program must have a value before its value is used. Each class variable, instance variable, or array component is initialized with a default value when it is created (§15.9, §15.10.2):
- For type byte, the default value is zero, that is, the value of (byte)0.
- For type short, the default value is zero, that is, the value of (short)0.
- For type int, the default value is zero, that is, 0.
- For type long, the default value is zero, that is, 0L.
- For type float, the default value is positive zero, that is, 0.0f.
- For type double, the default value is positive zero, that is, 0.0d.
- For type char, the default value is the null character, that is, '\u0000'.
- For type boolean, the default value is false.
- For all reference types (§4.3), the default value is null.
Each method parameter (§8.4.1) is initialized to the corresponding argument value provided by the invoker of the method (§15.12).
Each constructor parameter (§8.8.1) is initialized to the corresponding argument value provided by a class instance creation expression (§15.9) or explicit constructor invocation (§8.8.7).
An exception parameter (§14.20) is initialized to the thrown object representing the exception (§11.3, §14.18).
A local variable (§14.4, §14.14) must be explicitly given a value before it is used, by either initialization (§14.4) or assignment (§15.26), in a way that can be verified using the rules for definite assignment (§16 (Definite Assignment)).
Example 4.12.5-1. Initial Values of Variables
class Point {
    static int npoints;
    int x, y;
    Point root;
}

class Test {
    public static void main(String[] args) {
        System.out.println("npoints=" + Point.npoints);
        Point p = new Point();
        System.out.println("p.x=" + p.x + ", p.y=" + p.y);
        System.out.println("p.root=" + p.root);
    }
}
This program prints:
npoints=0
p.x=0, p.y=0
p.root=null
illustrating the default initialization of npoints, which occurs when the class Point is prepared (§12.3.2), and the default initialization of x, y, and root, which occurs when a new Point is instantiated. See §12 (Execution) for a full description of all aspects of loading, linking, and initialization of classes and interfaces, plus a description of the instantiation of classes to make new class instances.
In the Java programming language, every variable and every expression has a type that can be determined at compile time. The type may be a primitive type or a reference type. Reference types include class types and interface types. Reference types are introduced by type declarations, which include class declarations (§8.1) and interface declarations (§9.1). We often use the term type to refer to either a class or an interface.
In the Java Virtual Machine, every object
belongs to some particular class: the class that was mentioned in the
creation expression that produced the object
(§15.9), or the class whose
Class object was
used to invoke a reflective method to produce the object, or the
String class for objects implicitly created by the string
concatenation operator
+ (§15.18.1). This
class is called the class of the object. An
object is said to be an instance of its class and
of all superclasses of its class.
Every array also has a
class. The method
getClass, when invoked for an
array object, will return a class object (of class
Class) that
represents the class of the array
(§10.8).
The compile-time type of a variable is always declared, and the compile-time type of an expression can be deduced at compile time. The compile-time type limits the possible values that the variable can hold at run time. If a run-time value is not null, it refers to an object or array whose class is compatible with the compile-time type: for a class type C, the value is a reference to an instance of C or of a subclass of C; for an interface type, the value is a reference to an instance of a class that implements (§8.1.5) that interface.
Sometimes a variable or
expression is said to have a "run-time type". This refers to the class
of the object referred to by the value of the variable or expression
at run time, assuming that the value is not
null.
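A brief illustrative sketch (names are my own) of the difference between a compile-time type and a run-time class:

```java
class RuntimeTypeDemo {
    static Class<?> runtimeClassOf(Object o) {
        // The compile-time type of the parameter is Object, but getClass
        // reports the class of the object it refers to at run time.
        return o.getClass();
    }

    public static void main(String[] args) {
        Object o = "hello";                     // compile-time type: Object
        System.out.println(runtimeClassOf(o));  // prints class java.lang.String
    }
}
```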
The correspondence between compile-time types and run-time types is incomplete for two reasons:
At run time, classes and interfaces are loaded by the Java Virtual Machine using class loaders. Each class loader defines its own set of classes and interfaces. As a result, it is possible for two loaders to load an identical class or interface definition but produce distinct classes or interfaces at run time. Consequently, code that compiled correctly may fail at link time if the class loaders that load it are inconsistent.
See the paper Dynamic Class Loading in the Java Virtual Machine, by Sheng Liang and Gilad Bracha, in Proceedings of OOPSLA '98, published as ACM SIGPLAN Notices, Volume 33, Number 10, October 1998, pages 36-44, and The Java Virtual Machine Specification for more details.

Second, all the parameterizations of a generic type (§8.1.2, §9.1.2) share a single run-time representation.
Under certain conditions, it is possible that a variable of a parameterized type refers to an object that is not of that parameterized type. This situation is known as heap pollution (§4.12.2). The variable will always refer to an object that is an instance of a class that represents the parameterized type.
Example 4.12.6-1. Type of a Variable versus Class of an Object

interface Colorable {
    void setColor(byte r, byte g, byte b);
}

class Point { int x, y; }

class ColoredPoint extends Point implements Colorable {
    byte r, g, b;
    public void setColor(byte rv, byte gv, byte bv) {
        r = rv; g = gv; b = bv;
    }
}

class Test {
    public static void main(String[] args) {
        Point p = new Point();
        ColoredPoint cp = new ColoredPoint();
        p = cp;
        Colorable c = cp;
    }
}
In this example:
The local variable
p of the
method
main of class
Test
has type
Point and is initially assigned a
reference to a new instance of
class
Point.
The local variable
cp
similarly has as its type
ColoredPoint, and
is initially assigned a reference to a new instance of
class
ColoredPoint.
The assignment of the value
of
cp to the variable
p
causes
p to hold a reference to
a
ColoredPoint object. This is permitted
because
ColoredPoint is a subclass
of
Point, so the
class
ColoredPoint is assignment-compatible
(§5.2) with the
type
Point. A
ColoredPoint
object includes support for all the methods of
a
Point. In addition to its particular
fields
r,
g,
and
b, it has the fields of
class
Point, namely
x
and
y.
The local variable
c has as
its type the interface type
Colorable, so it
can hold a reference to any object whose class
implements
Colorable; specifically, it can
hold a reference to a
ColoredPoint.
Note that an expression such as
new
Colorable() is not valid because it is not possible to
create an instance of an interface, only of a class. However, the
expression
new Colorable() { public void
setColor... } is valid because it declares an anonymous
class (§15.9.5) that implements
the
Colorable interface. | http://docs.oracle.com/javase/specs/jls/se8/html/jls-4.html | CC-MAIN-2015-32 | refinedweb | 8,241 | 56.05 |
Monty Hall and Bayes
Overview
A bit of a brain-wrinkler, the Monty Hall is one of the more famous problems to come out of probability theory.
You probably know the setup:
- 3 Doors: A, B, C - 1 has a prize, 2 have a dud - You pick a door - Monty Hall opens a second door, revealing a dud - Do you switch to the open door?
You might have memorized that the optimal solution is “always switch.” But memorizing the intuition, all you have to remember is the phrase “You picked A and he chose B.”
We’ll get into the math of it in a minute, but using this phrase, let’s examine all of the 3 scenarios for where the prize is.
It was actually behind A. If you picked A and he chose B, then his B choice would have been at random between two bad doors.
It was actually behind B. If you picked A and he chose B then he would have revealed the prize, which he wouldn’t do.
If it was actually behind C. If you picked A and he chose B, then he would have picked the only door that wouldn’t reveal the prize.
Also worth noting that it doesn’t matter which of the 3 doors we pick at the beginning– this strategy holds regardless.
With Tables
Working through this problem using our table method involves careful population of the various cells.
For starters, our “Data” that we observed will continue to be the phrase “You picked A and he chose B.” Similarly, the Hypothesis is “Which door contains the prize?”
from IPython.display import Image Image('images/monty_table_1.PNG')
- Because all 3 doors are equally likely at first, Column
Bis trivial.
- Column
Cfollows from our scenario checking above
- Column
Dand
Eare straight-up math, just like our last notebook.
Looking at this table, it’s clear that the “Switch” strategy yields a win 2⁄3 of the time.
Variations
There are a couple interesting variations worth exploring to cement your intuition for this problem.
100 Doors
In a short YouTube video on the topic, Numberphile restates the problem, but instead of having 3 doors and revealing 1, we instead have 100 doors and the host reveals 98 of them.
They go on to say that this is intuitively more palatable because you can “feel” the possibility of door 37 being correct “concentrating” around it.
Image('images/monty_numberphile.PNG')
Interestingly, though, reconstructing this table to account for the 98 doors that are eliminated (
B...Y), we can see that the resulting “switch and win” still yields the same 1⁄3 : 2⁄3 relationship
Image('images/monty_table_2.PNG')
4 Doors, Open 1
So the number of opened doors doesn’t change our strategy as long as it comes down to a decision between Switch and Stay, but what if we trim down to more than 2?
Remembering our throughline “Data observation” of “You picked A and he chose B”, we construct a new table.
Again, column
B is trivial. And
D and
E are, again, plug-and-chug. The interesting part of this problem comes in how we fill out Column
C. Going through the options.
The prize is behind A. “You picked A and he chose B.” Then B was a random choice, because any of the 3 remaining doors wouldn’t have a car. The prize was behind B. “You picked A and he chose B.” Again, impossible. The prize was behind C. “You picked A and he chose B.” So Monty can either select B or D and still not show a car. The prize was behidn D. Same as above, but with C or D.
Image('images/monty_table_3.PNG')
An interesting consequence of this is that even though we don’t enjoy the same “two times more likely to win” property as before, we still have a better shot at finding a prize if we elect to switch and randomly pick from the remaining doors. | https://napsterinblue.github.io/notes/stats/bayes/monty_hall/ | CC-MAIN-2021-04 | refinedweb | 667 | 72.66 |
1:16 5 channel speed king rc car for wholesale
US $4.31-4.9 / Box
240 Boxes (Min. Order)
Shantou Chenghai Gang Sheng Trade Co., Ltd.
Cold region usage JIAHE cold resistant permanent sealing tape as SUPER KING/EVA SHENG/QICHANG/AONE type
US $58.5-63.0 / Carton
100 Cartons (Min. Order)
Linyi Jia He Adhesive Tape Co., Ltd.
Filled Duvet throw blanket Goose down Fashion Filling Mattress
US $19.86-25.74 / Piece
100 Pieces (Min. Order)
Zhejiang Shengli Down Products Co., Ltd.
Fruit King 3 Taiwan's Mario Slot Game Machine Kits / Mario Slot Coin Operated Game Machine
US $390-420 / Piece
5 Pieces (Min. Order)
DA SHENG TECHNOLOGY ENTERPRISE CO., LTD.
Unisex Lava Stone Pave Zircon Imperial Crown Copper Bead Charm Bracelet
US $0.59-0.65 / Piece
1 Piece (Min. Order)
Yiwu He Sheng Commercial & Trading Co., Ltd.
Bed room furniture podwer coat wood slat Twin/ Full/ Queen/ King metal bed frame
US $19-63 / Set
300 Sets (Min. Order)
Cao County Jinde Sheng Arts And Crafts Co., Ltd.
Home furniture electric smart bed with massage function
US $850.0-2100.0 / Piece
1 Piece (Min. Order)
Huizhou Sialiy Intelligent Furniture Co., Ltd.
Sweet Color Gift Vacuum Cup Insulation Thermos Airpot Lovely Flask
US $2.0-3.3 / Piece
10 Pieces (Min. Order)
Yiwu Xin Sheng Import & Export Co., Ltd.
gps king [2G, 3G, 4G, OBD] plug-and-play
US $1-90 / Piece
1 Piece (Min. Order)
Shenzhen Hua Sheng Telematics Co., Ltd.
Apiculture beekeeping equipment Bee Queen King Cage Beekeeping Kit Bamboo Multifunction Prisoner Hive Tools
US $0.78-1.99 / Piece
10 Pieces (Min. Order)
Changge City Shenglong Apiculture Co., Ltd.
China make steel/metal artificial tree outdoor king coconut palm tree for beach decoration
US $300-400 / Meter
1 Piece (Min. Order)
Guangzhou Shengjie Artificial Plants Ltd.
Factory hot sale king costumes adult
US $3-8 / Piece
200 Pieces (Min. Order)
Jiangshan City Sheng Wei Arts And Crafts Co., Ltd.
HSM Professional ISO CE Gold King Metal Detector
US $200-300 / Set
2 Sets (Min. Order)
Gongyi City Hua Sheng Ming Heavy Industry Machinery Factory
european sofa cum bed set for living room furniture place
US $720-830 / Set
1 Set (Min. Order)
Foshan LanPai Furniture Manfacture Co., Ltd.
2018 New Arrival Jewelry Gold Chain With A Gold Diamond Cute King Baby Angel Pendant Necklace
US $4-7 / Piece
12 Pieces (Min. Order)
Guangzhou Shengyou Imp. & Exp. Company Ltd.
3 in 1 digital measuring tool with roller mode/sonic mode/string mode
US $8.5-9.5 / Pieces
1000 Pieces (Min. Order)
Jinhua Hong Sheng Electronics Technology Co., Ltd.
ye sheng hei jia lun wholesale high quality herb blackcurrant and Ribes nigrum L
US $34-65 / Bag
1 Bag (Min. Order)
Fuyang King Year Trading Co., Ltd.
Military Dormitory Sing Double Twin King Queen Full size Metal bunk bed frame
US $39-59 / Set
10 Sets (Min. Order)
Shenyang Jiezhida Modern Office Furniture Co., Ltd.
Egypt king fashion cz pendants necklaces fine jewelry sterling silver 925 pharaohs pendant 2018
US $0.28-0.81 / Gram
50 Grams (Min. Order)
Guangzhou Sheng Lei Shi Jewelry Limited
KG206 Wireless Home Theater System
100 Sets (Min. Order)
Kwen Sheng (King Gold) Machinery Electric Co
Car Sun Shade and Dustproof Auto Dashboard Mat Cover for GEELY EC72018 EC8 King Kong SX7 GX7 SC7 GC7 eagle
US $3.5-10.0 / Piece
1 Piece (Min. Order)
Guangzhou Tai Jia Sheng Trading Co., Ltd.
Stainless steel pre rinse faucet for burguer king kitchen,deck mounted kitchen pre rinse faucet
US $70-120 / Piece
1 Piece (Min. Order)
Shenzhen Ming Sheng Kitchen Equipment Co., Ltd.
Natural crystal Goku clear crystal hand carved monkey king for decor
US $32.0-38.0 / Piece
1 Piece (Min. Order)
Donghai County Sheng Quan Jewellery Co., Ltd.
The Animal Paradise of cute tigers and the lion king block building toys for children
US $4.61-5.99 / Piece
24 Pieces (Min. Order)
Shantou Chenghai Guang Qi Sheng Toys Firm
Wholesale Custom Bertoia Diamond Metal Wire Dining Chair Hotel Furniture Modern Style
US $38.57-125.98 / Pieces
10 Pieces (Min. Order)
Guangzhou Sheng Mao Metal Profiles Co., Ltd.
from Bulang Mountain 2007yr yunnan unfermented compressed puerh tea 357g
US $17.65-19.17 / Piece
1 Piece (Min. Order)
Yunnan Deng Xuan Import&Export Trade Co., Ltd.
Thick Winter Men Socks with Airplanes
US $0.59-1.89 / Pair
300 Pairs (Min. Order)
Fuzhou Xing Sheng Ge Trade Co., Ltd.
Gas mode engine valves For Bajaj autorickshaw TVS King three wheeler Spare Parts
US $0.1-10.0 / Pairs
200 Pairs (Min. Order)
Shijiazhuang Gaosheng Auto Parts Make Ltd.
2018 fashion instant ginseng ginger tea
US $4.0-15.4 / Box
1 Box (Min. Order)
Weihui Kangsheng Co., Ltd.
Chinese factory motor cycle parts model train king helicopter in low price
US $0.2-0.99 / Piece
1 Piece (Min. Order)
Hui Sheng Smart Technology (Shenzhen) Co., Ltd.
Quality Guaranteed Newest Model 925 Diamond Lion King Crown Silver Pendant
US $4.2-11.4 / Piece
15 Pieces (Min. Order)
Guangzhou Sheng Lei Shi Jewelry Limited
import fast delivery electric mountain bicycle for Europe market
US $450-500 / Set
1 Piece (Min. Order)
Xianghe Qiangsheng Electric Tricycle Factory
wholesale glass cow figurine for home decoration cow king size bed
US $1.0-1.8 / Unit
100 Units (Min. Order)
Funing Yao Sheng Trading Co., Ltd.
environment friendly best quality used corrugated carton flexo printing machine/flexographic printing machine(ruian kings brand)
US $100-200 / Piece
3 Pieces (Min. Order)
Hebei Shengli Paper Chest Equipment Manufacturing Co., Ltd.
Personalized color king size plastic cigarette case
US $0.4-0.8 / Piece
1000 Pieces (Min. Order)
Fuzhou Sheng Leaf Import And Export Co., Ltd.
Live Mud Crabs Including King, Light Green, Light Brown etc
Shin Sheng International Development
0.0%
Halloween Party Supply King Crowns Adult Prince Full Round Crystal Crown
US $0.52-0.86 / Set
500 Sets (Min. Order)
Hubei Pu Sheng Trading Co., Ltd.
1800 Series premium 3pc microfibrer embossed bedding sheet set wrinkle free
US $5-7.5 / Set
300 Sets (Min. Order)
Nantong Xinsheng Textile Garment Co., Ltd.
Frozen Indo Pacific King Mackerel
US $800-1000 / Ton
5 Tons (Min. Order)
Xiamen Victory Seafoods Co., Ltd.
Excellent quality garden king tiller parts,garden power tiller price
US $202.0-202.0 / Pieces
50 Pieces (Min. Order)
Zhejiang E-Shine Machinery Manufacturing Co., Ltd.
- About product and suppliers:
Q: Why did King Tutankhamun become . Q: Which son of King William became King? A: King John of England was born on 24th December 1166 and became king on 6th April 1199 aged thirty-two . Q: How do you become a . | https://www.alibaba.com/showroom/king-sheng.html | CC-MAIN-2019-13 | refinedweb | 1,117 | 69.79 |
Centroid Decomposition of Tree
Get FREE domain for 1st year and build your brand new site
Reading time: 30 minutes
Centroid Decomposition is a divide and conquer technique which is used on trees.
The centroid decomposition of a tree is another tree defined recursively as:
- Its root is the centroid of the original tree.
- Its children are the centroid of each tree resulting from the removal of the root(centroid) from the original tree.
Centroid of a Tree
Given a tree with N nodes, a centroid is a node whose removal splits the given tree into a forest of trees, where each of the resulting tree contains no more than N/2 nodes.
Algorithm
Finding the centroid of a tree
One way to find the centroid is to pick an arbitrary root, then run a depth-first search computing the size of each subtree, and then move starting from root to the largest subtree until we reach a vertex where no subtree has size greater than N/2. This vertex would be the centroid of the tree i.e. Centroid is a node v such that,
maximum(N - S(v), S(u1, S(u2, .. S(um) <= N/2 where ui is ith child of v, and S(u) is the size(number of nodes in tree) of subtree rooted at u.
Algorithm
- Select arbitrary node v
- Start a DFS from v, and setup subtree sizes
- Re-position to node v (or start at any arbitrary v that belongs to the tree)
- Check mathematical condition of centroid for v
1. If condition passed, return current node as centroid
2. Else move to adjacent node with
greatestsubtree size, and back to step 4.
Decomposing the Tree to get the new "Centroid Tree"
On removing the centroid, the original given tree decomposes into a number of different trees, each having no of nodes < N/2 . We make this centroid the root of our centroid tree and then recursively decompose each of the new trees formed and attach their centroids as children to our root. Thus , a new centroid tree is formed from the original tree.
Algorithm
- Make the centroid as the root of a new tree (which is centroid tree).
- Recursively decompose the trees in the resulting forest.
- Make the centroids of these trees as children of the centroid which last split them.
example: For the following tree:
Number of children of any node of a tree can be O(logN) in the worst case. So in each find_centroid operation which takes O(N) time in worst case, we divide the tree into forest of O(logN) subtrees, so we recur at most O(logN) times. So overall centroid decomposition of a tree takes O(N logN) time in the worst case.
Implementation
Code in Python3
import sys class Tree: def __init__(self, n): self.size = n + 1 self.cur_size = 0 self.tree = [[] for _ in range(self.size)] self.iscentroid = [False] * self.size self.ctree = [[] for _ in range(self.size)] def dfs(self, src, visited, subtree): visited[src] = True subtree[src] = 1 self.cur_size += 1 for adj in self.tree[src]: if not visited[adj] and not self.iscentroid[adj]: self.dfs(adj, visited, subtree) subtree[src] += subtree[adj] def findCentroid(self, src, visited, subtree): iscentroid = True visited[src] = True heavy_node = 0 for adj in self.tree[src]: if not visited[adj] and not self.iscentroid[adj]: if subtree[adj] > self.cur_size//2: iscentroid = False if heavy_node == 0 or subtree[adj] > subtree[heavy_node]: heavy_node = adj if iscentroid and self.cur_size - subtree[src] <= self.cur_size//2: return src else: return self.findCentroid(heavy_node, visited, subtree) def findCentroidUtil(self, src): visited = [False] * self.size subtree = [0] * self.size self.cur_size = 0 self.dfs(src, visited, subtree) for i in range(self.size): visited[i] = False centroid = self.findCentroid(src, visited, subtree) self.iscentroid[centroid] = True return centroid def decomposeTree(self, root): centroid_tree = self.findCentroidUtil(root) print(centroid_tree, end=' ') for adj in self.tree[centroid_tree]: if not self.iscentroid[adj]: centroid_subtree = self.decomposeTree(adj) self.ctree[centroid_tree].append(centroid_subtree) self.ctree[centroid_subtree].append(centroid_tree) return centroid_tree def addEdge(self, src, dest): self.tree[src].append(dest) self.tree[dest].append(src) if __name__ == '__main__': tree = Tree(15) # A tree with 15 nodes tree.addEdge(1, 2) tree.addEdge(2, 3) tree.addEdge(3, 4) tree.addEdge(3, 5) tree.addEdge(3, 6) tree.addEdge(5, 7) tree.addEdge(5, 8) tree.addEdge(6, 9) tree.addEdge(7, 10) tree.addEdge(9, 11) tree.addEdge(10, 12) tree.addEdge(11, 13) tree.addEdge(11, 14) tree.addEdge(14, 15) print("DFS traversal of generated Centroid tree is:") 
tree.decomposeTree(1)
Output
DFS traversal of generated Centroid tree is: 3 2 1 4 7 5 8 10 12 11 9 6 13 14 15
Complexity
Time Complexity
- DFS of tree takes O(N) time.
- Find_Centroid of tree takes O(N) time.
- Centroid Decomposition of tree takes overall O(N logN) time.
Space Complexity
- DFS of tree takes O(N) space.
- Adjacency list of tree takes O(N2) auxiliary space in worst case.
- Overall time space used by Centroid Decomposition algorithm is O(N2)
Application
Centroid Decomposition of tree is a vastly used divide and conquer technique in competitive programming. Here is a list of problems where we use centroid decomposition to solve the problem: | https://iq.opengenus.org/centroid-decomposition-of-tree/ | CC-MAIN-2021-43 | refinedweb | 888 | 60.31 |
How to make an internet clock with NTP, Pi Zero W and RasPiO InsPiRing
I’ve been messing about with the Pi Zero W and one of my RasPiO InsPiRing boards to make a colourful clock that keeps accurate time using NTP (Network Time Protocol). Because the Zero W has WiFi onboard, it’s perfect for things like this. It’s quite a visual thing, so I made a video about it…
Here’s the Code
If you want a walk-through of the code, I made a little walk-through video of it, but decided to keep that separate because not everybody would want that level of detail. You can find that after the code.
import time from time import strftime, sleep from datetime import datetime import apa # RasPiO InsPiRing driver class numleds = 24 # number of LEDs in our display brightness = 0xE5 # 224-255 or 0xE0-0xFF ledstrip = apa.Apa(numleds) # initiate an LED strip ledstrip.flush_leds() # initiate LEDs ledstrip.zero_leds() ledstrip.write_leds() print ('Press Ctrl-C to quit.') try: while True: timenow = datetime.now().strftime('%H%M%S.%f').rstrip('0') hour = int(timenow[0:2]) if hour >= 12: hour = hour - 12 minute = int(timenow[2:4]) second = float(timenow[4:]) print(hour, minute, second) ledstrip.led_set(hour*2, brightness, 0, 0, 255) # 3 Red LEDs for the hour ledstrip.led_set(hour*2+1, brightness, 0, 0, 255) if hour == 0: ledstrip.led_set(23, brightness, 0, 0, 255) else: ledstrip.led_set(hour*2-1, brightness, 0, 0, 255) precise_minute = float(minute + second/60.0) ledstrip.led_set(int(precise_minute / 2.5) , brightness, 0, 255, 0) # green minute ledstrip.led_set(int(second / 2.5) , brightness, 255, 0, 0) # blue second ledstrip.write_leds() # Now blank the LED values for all LEDs in use, so that if any values # change in the next loop iteration, we've cleaned up behind us ledstrip.led_set(int(second / 2.5) , brightness, 0, 0, 0) ledstrip.led_set(int(precise_minute / 2.5) , brightness, 0, 0, 0) ledstrip.led_set(hour*2+1, brightness, 0, 0, 0) ledstrip.led_set(hour*2 , brightness, 0, 0, 0) ledstrip.led_set(hour*2-1, brightness, 0, 0, 0) time.sleep(0.03) # limit the number of cycles to ~33 fps if minute == 59 and int(second) == 59: # Red wipe hourly for i in range(numleds): ledstrip.led_set(i , brightness, 0, 0, 255) ledstrip.write_leds() sleep(0.03) sleep(0.25) for i in range(numleds): ledstrip.led_set(i , brightness, 0, 0, 0) ledstrip.write_leds() elif minute == 29 and int(second) == 59: # Green wipe half-hourly for i in range(numleds): ledstrip.led_set(i , brightness, 0, 255, 0) ledstrip.write_leds() sleep(0.03) sleep(0.25) for i in range(numleds): ledstrip.led_set(i , 
brightness, 0, 0, 0) ledstrip.write_leds() # Blue wipe quarter-hourly elif (minute == 14 or minute == 44) and int(second) == 59: for i in range(numleds): ledstrip.led_set(i , brightness, 255, 0, 0) ledstrip.write_leds() sleep(0.03) sleep(0.25) for i in range(numleds): ledstrip.led_set(i , brightness, 0, 0, 0) ledstrip.write_leds() finally: print("/nAll LEDs OFF - BYE!/n") ledstrip.zero_leds() ledstrip.write_leds()
I’ll pre-empt those who are going to tell me that the animations (53-81) could be made into a single function. I know that. I just haven’t got ‘a round tuit’ yet. Here’s the code walk-through video…
If you find this interesting, you’ll probably like the RasPiO InsPiRing KickStarter. Please pop over and have a look.
Looks lovely, that. Practical, blinky and colourful. :-D
Ta. Yep. Ticks all my boxes.
LEDS
BRIGHT
BLINKY
COLOURFUL
PROGRAMMABLE
What more could anybody ever want?
why not using the object returned by datetime.now() ? it already provides .hour, .minute and .second as numeric. No need for string manipulations.
You’re right. Thanks for pointing that out. I dug a bit deeper into datetime() I didn’t know those were hiding there :)
I’ll make some tweaks to the code.
What is the licence for your code?
Did you share your library already?
Line 41-49: Why don’t you clear all pixel?
Line 49 is wrong compared to Line 30.
There is so much to say or fix in this code… of only you open source it. :-)
I’m not ready to publish the class yet. When I do it will be on a CC-BY-NA-4 I think (not fully decided).
I don’t think you watched the second video where I explained how the code works and how the “blanking” works. Although I didn’t mention that these are not WS2812 so do not have to be pulsed regularly. They are APA102 and stay set at last setting until they get the next setting. 41-49 resets the led values in the list to zero, but that is not sent to the LEDs until the next iteration of line 39. It’s only done so we can clean up after ourselves if in the next frame we’re writing to the next LED in the sequence.
The code works. | http://raspi.tv/2017/how-to-make-an-internet-clock-with-ntp-pi-zero-w-and-raspio-inspiring | CC-MAIN-2018-39 | refinedweb | 839 | 77.23 |
Some issues about Qt programming in building MySQL driver
I have download the Qt5.5.0 and installed it with "source component" checked. and MySQL installed either. When I tried to build the driver I got the following error
QtSql/private/qsqldriver_p.h: No such file or directory
#include <QtSql/private/qsqldriver_p.h>
And when I cd to the directory I found that there are three lib files
libqsqlite.so libqsqlmysql.so libqsqlpsql.so
I try to execute 'ldd libqsqlmysql.so' and I got the following response
**
libmysqlclient_r.so.16 => not found
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007fdbee109000)
libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007fdbeded0000)
libnsl.so.1 => /lib/x86_64-linux-gnu/libnsl.so.1 (0x00007fdbedcb6000)
libssl.so.10 => not found
libcrypto.so.10 => not found
**
How can I solve this?
- SGaist Lifetime Qt Champion
Hi and welcome to devnet,
Please start by searching the forum a bit, this question has been asked many times already. You need to install the mysql dev package for your distribution then you should be able to build the plugin.
Thank you. There may be another problem. Actually there are some problems in MySQL Somehow, I'll try.
- SGaist Lifetime Qt Champion
There's a known bug that has been fixed for 5.5.1, check that before going too deep.
Thank you. After I installed Qt4.8.6 via apt-get. Everything went smoothly.
- SGaist Lifetime Qt Champion
Not the most clean solution, you are downgrading Qt to a series that has seen its last release a few month ago. | https://forum.qt.io/topic/59598/some-issues-about-qt-programming-in-building-mysql-driver | CC-MAIN-2019-35 | refinedweb | 266 | 70.8 |
Overloading Javascript Functions Using A Sub-Function Approach
I've been looking through a lot of jQuery source code lately and one of the things that I see being done all over the place is function overloading. Function overloading is the practice in which a function can take different sets of arguments. In a strict language like Java, overloaded functions are typically defined with physically different method signatures; in looser languages like ColdFusion and Javascript - where you can't define parallel variables with the same name - function overloading is typically done through argument inspection. I wondered, however, if we could use Function behavior in Javascript to create a "best of both worlds" type solution.
In Javascript, Functions are objects; granted, they are very special objects that can be used in conjunction with the "()" operator. But, just as any other objects in Javascript, Functions can have properties associated with them. I wanted to see if we could use these function-level properties to create multiple function signatures that all existed under the same function name.
To see what I'm talking about, I've created a function, randRange(), that can take the following method signatures:
- randRange( max )
- randRange( min, max )
In the first invocation, the min is assumed to be zero. In the second invocation, there is no need for assumption as both limits are supplied. Using function-level properties, I am going to define the above two functions using completely different functions off of the core randRange() object:
<!DOCTYPE html>
<html>
<head>
	<title>Overloading Javascript Functions - Sub-Function Approach</title>
	<script type="text/javascript">

		// I am the core randRange() function whose signature can
		// be overloaded with a variable number of arguments.
		function randRange(){

			// Check to see how many arguments we have in order to
			// determine which function implementation to invoke.
			if (arguments.length == 2){

				// Two parameters available.
				return( randRange.twoParams.apply( this, arguments ) );

			} else {

				// One parameter available.
				return( randRange.oneParams.apply( this, arguments ) );

			}

		}


		// I am the single-argument implementation. Notice that
		// I am a property of the core function object.
		randRange.oneParams = function( max ){

			// We are going to assume that the min is zero - pass
			// control off to the two-param implementation.
			return( randRange.twoParams( 0, max ) );

		};


		// I am the double-argument implementation. Notice that
		// I am a property of the core function object.
		randRange.twoParams = function( min, max ){

			return( min + Math.floor( Math.random() * (max - min) ) );

		};


		// -------------------------------------------------- //
		// -------------------------------------------------- //


		// Try a few different approaches.
		console.log( "One: ", randRange( 10 ) );
		console.log( "One: ", randRange( 50 ) );
		console.log( "One: ", randRange( 100 ) );

		console.log( "Two: ", randRange( 100, 110 ) );
		console.log( "Two: ", randRange( 100, 150 ) );
		console.log( "Two: ", randRange( 100, 200 ) );

	</script>
</head>
<body>
	<!-- Intentionally left blank. -->
</body>
</html>
When we run this code, we get the following console output:
One: 6
One: 18
One: 44
Two: 109
Two: 143
Two: 138
As you can see in the above code, I am defining the randRange() function. But then, I am defining the single and double parameter implementations as properties off of the core randRange() object:
randRange()
randRange.oneParams = function( max )
randRange.twoParams = function( min, max )
Now, the individual method signatures don't have to worry about any kind of arguments-based logic; all the routing logic is factored out and encapsulated within the core randRange() method. This feels like a really clean separation of concerns that leaves the final implementations extremely focused and easy to understand.
In this particular demo, my routing logic depends only on the number of arguments. You could easily augment this, however, to include type checking for methods using the same number of arguments. You could even use the core method to transform several different signatures into one, unified invocation. In any case, I think the factoring-out of argument-specific logic feels really good.
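For example, a hypothetical variation (not part of the original demo) might route on argument type as well as argument count. Here, the single-String signature and its "min-max" format are invented purely for illustration:

```javascript
// I am the core randRange() function. This time, the routing
// logic checks both the argument count and the argument type.
function randRange(){

	if (arguments.length == 2){

		// Two parameters available.
		return( randRange.twoParams.apply( this, arguments ) );

	} else if (typeof( arguments[ 0 ] ) == "string"){

		// One String parameter available.
		return( randRange.stringParam.apply( this, arguments ) );

	} else {

		// One numeric parameter available.
		return( randRange.oneParams.apply( this, arguments ) );

	}

}

randRange.oneParams = function( max ){
	return( randRange.twoParams( 0, max ) );
};

randRange.twoParams = function( min, max ){
	return( min + Math.floor( Math.random() * (max - min) ) );
};

// Hypothetical string signature: randRange( "5-10" ).
randRange.stringParam = function( range ){
	var parts = range.split( "-" );
	return(
		randRange.twoParams(
			parseInt( parts[ 0 ], 10 ),
			parseInt( parts[ 1 ], 10 )
		)
	);
};
```

The nice part is that adding a new signature only touches the routing logic in the core function - each implementation stays small and focused.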
Reader Comments
I think you meant looser languages. PHP is the loser language :P
This is cool!
@Eric,
Ha ha ha, thanks for the critical catch - this has now been corrected.
@Pradeep,
Thanks, I'm glad you like it.
Another way of doing it (shorter version) -
function randRange(){
(randRange[arguments.length] || randRange[2]).apply(this, arguments);
}
randRange[0] = function () {
alert('Error: No parameters passed!');
return 0;
};
randRange[1] = function (max) {
return( randRange[2]( 0, max ) );
};
randRange[2] = function (min, max) {
return(
min +
Math.floor( Math.random() * (max - min) )
);
};
randRange();
randRange(1);
randRange(1, 2);
randRange(1, 2, 3); // calls the no-param version
Anyone see any problems using this approach?
Oops .. the comment in the last line of code above should read "calls the 2-param version" .. which I think is a better implementation considering that the randRange() function essentially wants to deal with maximum of 2 arguments .. any more should be ignored.
@All,
Besides number of arguments, there's also overloading by type of arguments. jQuery("a[name]") does one thing, jQuery(this) does something else and jQuery(function(){}) does something else.
This sort-of argues in favor of defining a hash of subfunctions, doesn't it? With a 2 dimensional hash and the typeof operator, you could deal with the combinatorial explosion of multiple argument types quite naturally:
subfuncs["boolean"]["boolean"]
subfuncs["boolean"]["string"]
subfuncs["boolean"]["number"]
subfuncs["string"]["number"]
subfuncs["string"]["boolean"]
etc.
It's like Java signatures, but managed out of a hash.
Nice how you get people thinking, Ben.
A (very) quick prototype of a cleaner way of doing (strict) arguments check for both - type and count -
Function.prototype.overload = function () {
this.variants = this.variants || {};
var len = arguments.length, args = (Array.prototype.slice.call(arguments)),
id = args.slice(0,len-1).join(',');
this.variants[id] = this.variants[id] || args[len-1];
};
Function.prototype.overloaded = function () {
var len = arguments.length, args = (Array.prototype.slice.call(arguments)),
id = [];
for (var i=0, len=args.length; i<len; i++) {
id.push(typeof(args[i]));
}
id = id.join(',');
var fn = randRange.variants[id];
if (randRange.variants && fn) {
fn.apply(fn, arguments);
}
};
function randRange(){
randRange.overloaded.apply(randRange, arguments);
}
randRange.overload(
'string', 'boolean', 'number',
function (mystr, mybool, mynum) {
alert(['String:'+mystr, 'boolean:'+mybool, 'Number:'+mynum].join('\n'));
}
);
randRange.overload(
'string', 'number', 'boolean',
function (mystr, mynum, mybool) {
alert(['String:'+mystr, 'Number:'+mynum, 'boolean:'+mybool].join('\n'));
}
);
randRange('abc', 100, true);
randRange('pqr', false, -1);
randRange(false, 'str', -1); // no "variant" matches .. call ignored.
This can also be enhanced to take in metadata about arguments like "mandatory/optional", default values, etc. -
randRange.overload(
'string[Default Value]', 'number:-1', 'boolean:optional',
function (mystr, mynum, mybool) {
...
}
);
What do you guys think?
@EtchEmKay, @Steve,
These are some very interesting ideas. This really is like moving back to a strict method signatures. I have to run to catch a plane, but I'll let this sink in a bit. Some very clever stuff going on here.
I've done this a couple times in cfscript, when I wanted a conditional argument. I've also done something similar in cfc methods when a function contains most of the logic I want already, but I want to interact with it in different ways.
The advantage to cfc methods is that you can self document a bit better and if you choose you can use named arguments in your calls to tell "the next guy" what you're doing.
This is really cool - I love overloading and overriding in Javascript.
Just looking at some of the suggestions - you could also put the functions in an array, and then call the function based on the arguments.length which would be in the corresponding place in the array of functions...
function randRange(){
return randRange.Params[arguments.length-1].apply(this, arguments)
}
randRange.Params = [
function( max ){
return( randRange.Params[1]( 0, max ) );
},
function( min, max ){
return(min + Math.floor( Math.random() * (max - min) ))
}
];
This obviously has a little problem with error handling - ie: when there's no arguments - but that's easy enough to cater for.
So was the idea purely to work out a clean separation of functional intent for overloading? Otherwise why not keep is simple like so?
function randRange(arg1, arg2)
{
if (arg1 === undefined && arg2 === undefined) { return; }
else if (arg2 === undefined) { return (0 + Math.floor(Math.random() * (arg1 - 0))); }
else { return (arg1 + Math.floor(Math.random() * (arg2 - arg1))); }
}
Just wondering. Still a fun read as always Ben.
@Grant,
Yeah, ColdFusion's ability to use both ordered and named arguments is something that opens up a lot of options for us.
@Wayne,
Very true; also, something I hadn't thought of before is that you should be able to just call the method recursively. Meaning, the one-param method doesn't have to call the two-param method directly; rather, it can simply call the core, randRange() method and the core method can take care of re-routing to the two-param handler. It adds a bit more processing, but it might be a cleaner implementation??
@Adam,
Yes, I was really just trying to factor out the routing logic. In this kind of example, the difference is negligible; but, the internal behavior of a function can change dramatically depending on the arguments. In such a case, I think it will be quite nice to not have to worry about branching logic within a large function. When I get back to NY (I'm at #BFLEX right now), I'll come up with a better example.
Nice thoughts on the subject,
A good use-case for overloading is when you are dealing with Google Maps, making a constructor for the marker with assorted options (lat, lng, icon, text, function)
Seem to recall reading a post by jresig on the array approach, along the lines of the example
Looking forward to your further exploration
@Atleb,
Cool post by Resig. He's using like some sort of Wrapper pattern where each layer defines a different method signature (and then passes the control off to a deeper functional layer if the current signature doesn't match).
I have an idea for an example, but it will have to wait till Monday.
I wanted to take my other post on animate-powered easing and augment it to use two different signatures: one with "duration", one without:
The post isn't really about easing, but it *is* an example of how factoring-out the branching logic of overloaded functions can create an important separation of concerns that leads to a clean, cohesive execution. | https://www.bennadel.com/blog/2008-overloading-javascript-functions-using-a-sub-function-approach.htm | CC-MAIN-2022-05 | refinedweb | 1,711 | 57.27 |
Created on 2019-11-08 16:38 by Marco Sulla, last changed 2019-11-11 11:41 by Marco Sulla. This issue is now closed.
Sometimes I’m lazy and I would test code copy-pasted from the internet or from other sources directly in the interpreter, in interactive mode. But if the code contains completely blank lines, the copy-paste fails. For example:
def f():
    print("Marco")

    print("Sulla")
does not work, but
def f():
    print("Marco")
    print("Sulla")
yes. Notice that in a script the first code block is perfectly valid and works.
This does not happen with Jupyter console, aka IPython. Jupyter implements bracketed paste mode, so it distinguishes between normal input and pasted input.
Jupyter also offers:
- autoindent
- save code blocks in one history entry: this way, if you write a function, for example, and you press the up key, the whole function will be loaded, and not its last line.
- auto-reloading of modules. It should be disabled by default and enabled by a flag, and could auto-reload a module if its code changes.
- save code to console. All the code written in the current interactive session could be saved to the clipboard. It could be triggered by F12.
- history: it could be a new built-in function. if called without parameters, it could show the history, with lines numbered. If called with a number, it will paste the corresponding history line to the console
- pretty printing and source inspection. IMHO pprint.pprint() and inspect.getsource() are so useful in interactive mode that they could be added to builtins.
- syntax coloring. It should be disabled by default, and could be enabled by a flag or a config.
- bracket matching. See above.
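The history() idea maps fairly directly onto what the readline module already exposes. A rough sketch (a history() builtin is the proposal here, not an existing API, and readline is Unix-only):

```python
import readline

def history(line=None):
    """Hypothetical history() builtin, sketched on top of readline."""
    n = readline.get_current_history_length()
    if line is None:
        # Show the history, with lines numbered
        for i in range(1, n + 1):
            print(i, readline.get_history_item(i))
    else:
        # Show the chosen entry; actually re-injecting it into the next
        # prompt would need readline.set_pre_input_hook() + insert_text()
        print(readline.get_history_item(line))
```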
I think that implementing all of this in CPython is really hard. I suppose that maybe many things are not possible for compatibility between platforms, or can't be available everywhere, like syntax coloring.
In interactive mode, python.exe interacts with a console (dumb terminal) through the std streams, using \n as a special character. It gets input from stdin, sends output to stdout and errors to stderr. The terminal, not Python, handles line editing and history retrieval. Once a statement is entered and executed, Python has no memory of it.
On Linux, one can use the readline and ncurses modules for somewhat enhanced interaction.
IPython is GUI-based. Python already come with a GUI-based IDE, IDLE, which has many of the features you list - autoindent, statement history, save, line numbers in the editor, syntax coloring, and some source inspection. Code with blank lines within statement can be pasted into an editor window and run either with or without clearing the shell workspace.
There are other alternatives with similar features, but this is not the place to discuss them. The point is that there is no need to completely rewrite current text-based interactive mode.
Terry, I think you were extremely over-eager, almost aggressively so, to close this feature request, especially since your reasons given are rather bogus: IPython isn't based on a GUI, it works in a text mode console too, including on Windows.
You say "there is no need to completely rewrite current text-based interactive mode". You are probably right: there probably is *no need* to completely rewrite the current implementation to add at least some, if not all, of the requested features.
For example, I would be shocked if it wasn't absolutely trivial for the current implementation to add auto-indenting following a colon. That feature alone would be a win for usability.
Given that Brett already said that the main obstacle to this feature request was lack of somebody interested and able to do the work (as opposed to a policy that we want the default REPL to be weak and unfriendly), I think you were premature in closing this so quickly. It's not like it has been languishing for years.
Marco: there's no need for these to be "slowly" introduced. If the features are worth having in the default REPL, they're worth having as soon as possible, without us artificially slowing the process down. It will be hard enough to get somebody willing and able to do the work without telling them to dribble the features out slowly as well. Trust me on this, the hard part of Python development is getting feature requests implemented *at all*, not that they come too quickly!
You might like to try building these features on top of the pure-Python interactive REPL:
or perhaps try adding them to IDLE.
If and when you have something positive to show, you could try re-opening this task with a concrete proof-of-concept using the code module, or perhaps a PR for IDLE.
Well, maybe too many feature requests in a single report. I'll report them separately, with more rationale.
Steven: currently I'm developing `frozendict` as part of CPython. About IDLE, IDLE can't be used on a server without a GUI. Furthermore, I *really* hope that IDLE is simply a GUI wrapper of REPL, with some additional features.
> For example, I would be shocked if it wasn't absolutely trivial
> for the current implementation to add auto-indenting following
> a colon. That feature alone would be a win for usability.
That would be a non-trivial change in Windows. I think it's at least possible using the high-level console API. It could be implemented with the pInputControl parameter of ReadConsoleW in combination with WriteConsoleW. This is how the CMD shell implements tab completion for file paths. That said, many of the proposed UI enhancements cannot be implemented in Windows using the high-level console API.
IPython used to depend on readline. (5.0 switched to prompt_toolkit instead.) In Windows this was via the pyreadline package, which uses the low-level console API via ctypes. pyreadline is apparently abandoned (last updated in 2015). Maybe CPython could incorporate a fork of pyreadline that fixes bugs (Unicode support in particular) and updates it to use a C extension instead of ctypes.
@Eryk: why a separate C extension and not a patch to `readline`?
Steven, I think *you* were the one over-anxious to be dismissive. In the title Marco refers to "Jupyter console (IPython)" features and in his opening, to "Jupyter console, aka IPython". Jupyter Console is, I read, QT based. IPython/Jupyter Notebooks are GUI-based also.
However, I agree that single process terminal IPython, such as illustrated near the top of
is the better reference for comparison. As pictured, it requires a color terminal with full screen edit.
An important constraint is that instead of users talking to the UI program with menu clicks, they must use "‘magic’ commands" entered after the '>>>' prompt instead of code. Guido specifically vetoed the idea of allowing these into IDLE, so he must not want them in the REPL either.
Skipping the rest of your post, I will just restate why I closed this issue.
1. It introduces too many features not directly related. The existing unix-only completion uses two modules. I suspect some of the other features would also need new modules. (But Marco, please don't rush to immediately open 8 new issues.)
Furthest removed, no new module needed: Adding builtins that can be used in REPL as it is. I suspect the answer to that proposal would be to use a PYTHONSTARTUP module with code such as "import pprint as _; pprint = _.pprint"
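For anyone unfamiliar with that workaround, a PYTHONSTARTUP file is just a Python script run before the first interactive prompt; something like this gives you pprint() and getsource() at the REPL today (the file name is arbitrary):

```python
# Save as e.g. ~/.pythonstartup.py and run:  export PYTHONSTARTUP=~/.pythonstartup.py
# Everything imported here is then available at the >>> prompt of every session.
from pprint import pprint        # pretty-printing without "import pprint" each time
from inspect import getsource    # view source of functions/classes interactively
```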
2. It introduces policy issues that might require a policy change, and that I think should better be discussed on, say, pydev. Steven and I have obviously gotten different policy impressions from Brett and Guido respectively. I will try to ask on pydev what the current REPL feature policy is, and what people think it should be. For instance, how far do we want to go is adding features that only work on a subset of systems?
3. I believe that some of the concrete proposals have even more implementation problems than Marco himself acknowledged. Difficult does not mean prohibited, but coredevs are mostly not interested in working on UIs and it has been Guido's stated-to-me policy that 'advanced' UI/IDE features (not defined) should be left to other projects.
One communication problem is that once python is running in interactive mode, the only thing the user can send to it is lines of Python code. Consider pasting multiline code with blank lines. The console feeds pasted lines *1 at a time* to interactive Python, the same as if the user typed them. So Python does not know if they were typed or pasted. Nor does it know that there might be a next line already waiting, and if so, what it is (dedented?). If it does not execute on receiving '\n', when will it?
There are also communication issues in the other direction. When REPL sends a prompt, everything up to and including a prompt is somehow marked read-only. But autoindents must be left erasable so a user can dedent.
If that can be solved, including on Windows, I would like IDLE's autoindent code moved to an stdlib module (and polished, if need be) to be used by REPL, IDLE, and anyone else who wishes. The main function would map a sequence of lines to a PEP-8 compliant indent for the next line.
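As a toy illustration of that "main function", here is a deliberately simplified indent suggester (nothing like IDLE's real code, which also has to handle brackets, strings and comments):

```python
def next_indent(line, width=4):
    """Suggest the indent (in spaces) for the line following `line`."""
    indent = len(line) - len(line.lstrip())
    stripped = line.rstrip()
    if stripped.endswith(":"):
        return indent + width              # a colon opens a new block
    first = stripped.split()[:1]
    if first in (["return"], ["pass"], ["break"], ["continue"], ["raise"]):
        return max(indent - width, 0)      # dedent after block-ending statements
    return indent                          # otherwise keep the current level

print(next_indent("def f():"))      # 4
print(next_indent("    return x"))  # 0
```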
Syntax-coloring opens a much bigger can of worms. It requires full screen editing to overwrite existing input. It also needs to be configurable as part of a configuration system. IPython requires the user to write "a dictionary mapping Pygments token types to strings defining the style." in a python file with other needed code in a place where IPython can find it. IDLE lets one modify an existing scheme by clicking on an element in sample code and then a new color and let IDLE handle the storage.
Marco, you said "I *really* hope that IDLE is simply a GUI wrapper of REPL". Nope. In a single process, python cannot simultaneous execute a file in batch mode and input lines in interactive mode. IDLE uses "exec(code, simulated_main_namespace)" and I suspect other simulated shells do something similar. An advantage is that 'code' can represent complete multiline statements or even an entire module.
@Terry:
> Jupyter Console is, I read, QT based
Nope. It's shell based by default. You can open it also as a QT app, like IDLE, but by default `jupyter console` is via terminal.
> they must use "‘magic’ commands" entered after the '>>>' prompt
> instead of code. Guido specifically vetoed the idea
Indeed I'm against too, and I wrote it. And if you read my proposals, I do not suggest any magic word
> the answer to that proposal would be to use a PYTHONSTARTUP module
> with code such as "import pprint as _; pprint = _.pprint"
I know this, but it should be the default behaviour, IMHO. I mean, you can invoke `help()` in the REPL but also in a `.py`. It makes no sense, but you can do it, and you don't have to import a separate module first.
> The console feeds pasted lines *1 at a time* to interactive Python
This is fixed by many terminal editors, like `vi`, with bracketed paste mode, as I wrote.
> When REPL sends a prompt, everything up to and including a prompt is
> somehow marked read-only.
A workaround could be simulate input by user. Ugly but effective.
> Syntax-coloring [...] requires full screen editing
???
> [Syntax-coloring] also needs to be configurable
This could be delayed, IMHO, or not implemented at all. If you don't like the colors, you can always not use them :D It will suffice that the colors are as readable as possible, including for the majority of color-blind people.
Created on 2009-11-09 23:02 by jasper, last changed 2010-09-22 13:41 by jasper. This issue is now closed.
While trying to get Python 2.6 working on OpenBSD/sgi (64-bit port)
I ran into the following during build:
OverflowError: signed integer is greater than maximum
I ran the command that triggered this by hand with -v added:
(sgi Python-2.6.3 40)$ export PATH; PATH="`pwd`:$PATH"; export
PYTHONPATH; PYTHONPATH="`pwd`/Lib"; export DYLD_FRAMEWORK_PATH;
DYLD_FRAMEWORK_PATH="`pwd`"; export EXE; EXE=""; cd
./Lib/plat-openbsd4; ./regen
python$EXE -v ../../Tools/scripts/h2py.py -i '(u_long)'
/usr/include/netinet/in.h
# installing zipimport hook
import zipimport # builtin
# installed zipimport hook
# /usr/obj/ports/Python-2.6.3/Python-2.6.3/Lib/site.pyc matches
/usr/obj/ports/Python-2.6.3/Python-2.6.3/Lib/site.py
import site # precompiled from
/usr/obj/ports/Python-2.6.3/Python-2.6.3/Lib/site.pyc
'import site' failed; traceback:
Traceback (most recent call last):
File "/usr/obj/ports/Python-2.6.3/Python-2.6.3/Lib/site.py", line 61,
in <module>
import sys
OverflowError: signed integer is greater than maximum
import encodings # directory
/usr/obj/ports/Python-2.6.3/Python-2.6.3/Lib/encodings
# /usr/obj/ports/Python-2.6.3/Python-2.6.3/Lib/encodings/__init__.pyc
matches /usr/obj/ports/Python-2.6.3/Python-2.6.3/Lib/encodings/__init__.py
import encodings # precompiled from
/usr/obj/ports/Python-2.6.3/Python-2.6.3/Lib/encodings/__init__.pyc
Python 2.6.3 (r263:75183, Nov 6 2009, 09:50:33)
[GCC 3.3.5 (propolice)] on openbsd4
Type "help", "copyright", "credits" or "license" for more information.
# /usr/obj/ports/Python-2.6.3/Python-2.6.3/Lib/re.pyc matches
/usr/obj/ports/Python-2.6.3/Python-2.6.3/Lib/re.py
import re # precompiled from
/usr/obj/ports/Python-2.6.3/Python-2.6.3/Lib/re.pyc
Traceback (most recent call last):
File "../../Tools/scripts/h2py.py", line 24, in <module>
import sys, re, getopt, os
File "/usr/obj/ports/Python-2.6.3/Python-2.6.3/Lib/re.py", line 104,
in <module>
import sys
OverflowError: signed integer is greater than maximum
# clear __builtin__._
# clear sys.path
# clear sys.argv
# clear sys.ps1
# clear sys.ps2
# clear sys.exitfunc
# clear sys.exc_type
# clear sys.exc_value
# clear sys.exc_traceback
# clear sys.last_type
# clear sys.last_value
# clear sys.last_traceback
# clear sys.path_hooks
# clear sys.path_importer_cache
# clear sys.meta_path
# clear sys.flags
# clear sys.float_info
# restore sys.stdin
# restore sys.stdout
# restore sys.stderr
# cleanup __main__
# cleanup[1] zipimport
# cleanup[1] signal
# cleanup[1] exceptions
# cleanup[1] _warnings
# cleanup sys
# cleanup __builtin__
# cleanup ints: 3 unfreed ints
# cleanup floats
(sgi plat-openbsd4 41)$
There have been several patches applied:
Although none seem to be relevant as far as I can see.
Please find attached the build log and the configure log.
And the build log on OpenBSD/sgi.
Thanks for filing the report! Some questions:
If you configure with the --with-pydebug option, and also do whatever
else (if anything) is necessary to remove the -O2 flag from the
compilation steps, does the build failure still occur?
What's the minimal Python code required to cause the failure. Is it
enough to launch the interpreter and then just do 'import sys'?
Judging by the error message, it looks as though the OverflowError is
being set in the 'convertsimple' function in Python/getargs.c: the
relevant code looks something like:
case 'i': {/* signed int */
int *p = va_arg(*p_va, int *);
long ival;
if (float_argument_error(arg))
return converterr("integer<i>", arg, msgbuf, bufsize);
ival = PyInt_AsLong(arg);
if (ival == -1 && PyErr_Occurred())
return converterr("integer<i>", arg, msgbuf, bufsize);
else if (ival > INT_MAX) {
PyErr_SetString(PyExc_OverflowError,
"signed integer is greater than maximum");
return converterr("integer<i>", arg, msgbuf, bufsize);
}
But this code is part of Python's general argument parsing mechanism, so
is called from many many places; we really need some way of figuring out
where it's getting called from when the build fails. Still with a
--with-pydebug build, could you try using gdb (or an equivalent) to set
a breakpoint on the PyErr_SetString line in the (ival > INT_MAX) branch,
then do whatever is required to trigger the failure and report the
backtrace at that breakpoint?
After properly compiling with -O0, it actually gets a lot further in the
build. It crashes elsewhere though:
PYTHONPATH=/usr/obj/ports/Python-2.6.3/fake-sgi/usr/local/lib/python2.6
./python -Wi -tt
/usr/obj/ports/Python-2.6.3/fake-sgi/usr/local/lib/python2.6/compileall.py
-d /usr/local/lib/python2.6 -f -x 'bad_coding|badsyntax|site-packages'
/usr/obj/ports/Python-2.6.3/fake-sgi/usr/local/lib/python2.6
Floating point exception (core dumped)
Attached is the full build log with the backtrace of that core file.
Hmm. I don't understand that backtrace at all. It seems to say that
the conversion of this particular double value (2.34e17) to long is
causing some kind of arithmetic exception. I'd assume overflow, except
that the configure script says sizeof(long) == 8, and a 64-bit long
should be plenty large enough to hold the result of the conversion.
Is it possible that the configure script is somehow ending up with the
wrong value for SIZEOF_LONG? Or do C longs definitely have width 64 on
this platform?
this little test program:
#include <stdio.h>
#include <unistd.h>
int main(int argc, char*argv[])
{
printf("short = %d\n", sizeof(short));
printf("int = %d\n", sizeof(int));
printf("float = %d\n", sizeof(float));
printf("long = %d\n", sizeof(long));
printf("double = %d\n", sizeof(double));
printf("long long = %d\n", sizeof(long long));
printf("double long = %d\n", sizeof(double long));
return 0;
}
gives the following values on mips64:
short = 2
int = 4
float = 4
long = 8
double = 8
long long = 8
double long = 16
is there any other thing I should check?
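From the Python side, the same widths can be cross-checked inside the interpreter (a generic check, not from the original thread):

```python
import struct
import sys

print(struct.calcsize("l"))   # sizeof(long) as compiled into this CPython
print(struct.calcsize("i"))   # sizeof(int)
print(sys.maxsize > 2**32)    # True on a 64-bit build
```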
Sorry, I'm running out of ideas. The traceback is truly baffling.
I'm not sure why you're configuring with --with-fpectl. Does removing
this make any difference?
Maybe you could also try copying _PyHash_Double into a small test program,
calling it with an input of 2.34e17 (get rid of the Python-specific calls
in the if-branch, which I believe isn't taken in this case anyway) and see
if you can reproduce the FP signal there.
Removing --with-fpectl makes no difference.
I'll try the _PyHash_Double-thing later this weekend.
This problem has been diagnosed as a problem in the mips64 port of OpenBSD. mips64 systems need to emulate some floating point instructions in software, depending on the cpu type. In this case we hit an instruction for which the emulation was incomplete. The floating point exception actually signals this.
Work is being done to fix this.
Many thanks for the update. I'll close this as not a Python bug, then.
(Unless there's an easy and nonintrusive workaround...)
FYI, the issue has been fixed now in the mips64 port of OpenBSD by "replacing the previous/old floating point completion code with a C interface to the MI softfloat code, implementing all MIPS IV specified floating point
operations." | https://bugs.python.org/issue7296 | CC-MAIN-2020-45 | refinedweb | 1,226 | 51.95 |
Valgrind in QtCreator: console input failing
- Don Slowik
I have a simple HelloWorld type project using QtCreator as a code editor. Here is the source:
@
#include <iostream>
#include <string>
int main() {
std::cout << "Hello World!" << std::endl;
std::cout << "Enter your name: ";
std::string name;
std::cin >> name;
std::cout << "Hello " + name << std::endl;
return 0;
}
@
If I check the 'Run in terminal' box in the project Run Settings, it runs fine, opening a separate terminal window (not the Application Output window in QtCreator), which cout/cin happily write/read to/from. If I de-select the aforementioned 'Run in terminal' box and click Run, it opens the Application Output window in QtCreator, writes to it, but is unable to read from it. This is all fine with me - I just make sure to always select 'Run in terminal' for such console applications; GDB and everything runs fine with I/O directed there.
Now the problem is that when I click 'Analyze' and select either Valgrind Memory Analyzer or Valgrind Function Profiler, then click the Start button, the Application Output window opens (within QtCreator) rather than the separate terminal window opening, REGARDLESS of whether I have checked the aforementioned 'Run in terminal' box! So it is again unable to read any user input.
How to use cout/cin to write/read to/from this simple program running under Valgrind?
- Don Slowik
Is cin/cout I/O not possible while running Valgrind?
- SGaist Lifetime Qt Champion
Hi and welcome to devnet,
You should try bringing this question to the Qt Creator mailing list; you'll find Qt Creator's developers/maintainers there. This forum is more user oriented.
In my graph algorithms course we have been discussing breadth-first search and depth-first search algorithms and are now transitioning to directed acyclic graphs (DAGs) and topological sorting. In class we discussed one method of topological sorting that uses depth-first search. Before writing an article on topological sorting in Python, I programmed 2 algorithms for doing depth-first search in Python that I want to share. One is a recursive Python function and the other is a non-recursive solution that introduces a
Stack Data Structure to implement the stack behavior that is inherent to a recursive function. I already coded C# versions of depth-first search and breadth-first search, but I am learning Python along with learning algorithms, so I want to share examples of depth-first search in Python as well.
Adjacency Matrix an Directed Graph
Below is a simple graph I constructed for topological sorting, and thought I would re-use it for depth-first search for simplicity. I am representing this graph in code using an adjacency matrix via a Python Dictionary.
adjacency_matrix = {1: [2, 3], 2: [4, 5], 3: [5], 4: [6], 5: [6], 6: [7], 7: []}
Depth-First Search Recursive Function in Python
Given the adjacency matrix and a starting vertex of 1, one can find all the vertices in the graph using the following recursive depth-first search function in Python.
def dfs_recursive(graph, vertex, path=[]):
    path += [vertex]
    for neighbor in graph[vertex]:
        if neighbor not in path:
            path = dfs_recursive(graph, neighbor, path)
    return path

adjacency_matrix = {1: [2, 3], 2: [4, 5], 3: [5], 4: [6], 5: [6], 6: [7], 7: []}
print(dfs_recursive(adjacency_matrix, 1))  # [1, 2, 4, 6, 7, 5, 3]
I included the variable,
path, for 2 reasons. First, it is keeping a list of vertices already visited so that the function does not visit a vertex twice. Second, it shows the path that the depth-first search algorithm took to find all the vertices. Since we are using a
list as opposed to a
set in Python to keep track of visited vertices, the search to see if a vertex has already been visited has a linear runtime as opposed to constant runtime. I did that for simplicity, but I wanted to mention it.
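For what it's worth, here is the same recursive traversal with a set for constant-time membership tests (and without the mutable default argument, which would otherwise make a second call start with the first call's path); dfs_recursive_set is my name, not from the original post:

```python
def dfs_recursive_set(graph, vertex, visited=None, path=None):
    # visited: a set for O(1) membership tests; path: a list preserving order
    if visited is None:
        visited, path = set(), []
    visited.add(vertex)
    path.append(vertex)
    for neighbor in graph[vertex]:
        if neighbor not in visited:
            dfs_recursive_set(graph, neighbor, visited, path)
    return path

adjacency_matrix = {1: [2, 3], 2: [4, 5], 3: [5], 4: [6], 5: [6], 6: [7], 7: []}
print(dfs_recursive_set(adjacency_matrix, 1))  # [1, 2, 4, 6, 7, 5, 3]
```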
Notice how the depth-first seach algorithm dives deep into the graph and only backtracks when it comes to a deadend. It dives deep going from 1 -> 2 -> 4 -> 6 -> 7, and then backtracks to go from 2 -> 5, and then backtracks again to go from 1 -> 3.
Depth-First Search Non-Recursive Function in Python
The Python code for the non-recursive depth-first function is similar to the recursive function, except that a
Stack Data Structure is necessary to provide the stack functionality inherently present in the recursive function.
def dfs_iterative(graph, start):
    stack, path = [start], []
    while stack:
        vertex = stack.pop()
        if vertex in path:
            continue
        path.append(vertex)
        for neighbor in graph[vertex]:
            stack.append(neighbor)
    return path

adjacency_matrix = {1: [2, 3], 2: [4, 5], 3: [5], 4: [6], 5: [6], 6: [7], 7: []}
print(dfs_iterative(adjacency_matrix, 1))  # [1, 3, 5, 6, 7, 2, 4]
The path taken is different because the vertices are pushed onto the
Stack Data Structure in a different order. In this case, the depth-first search function dives deep to the right 1 -> 3 -> 5 -> 6 -> 7, and then backtracks to go from 1 -> 2 -> 4.
Conclusion
Next time I will use a form of depth-first search to do a topological sort on this directed acyclic graph (DAG). Since the algorithm I want to use for the topological sort is a derivative of depth-first search, it made sense to code this first in Python. Again, you can see depth-first search in C# and breadth-first search in C# in previous articles.
I hope this is useful. You can find me on twitter as @KoderDojo. | https://www.koderdojo.com/blog/depth-first-search-in-python-recursive-and-non-recursive-programming | CC-MAIN-2021-39 | refinedweb | 652 | 55.88 |
I'm currently trying to make an RP game. All I need to do is break a few while loops, but a certain number of them each time. I can't figure out how to do it.
@AllAwesome497 no. I'm wondering how to break 2 while loops. like at the end. I need to break 3 while loops on that if statement. only if that happens
@KianAlford like others have suggested, break out of each loop separately. You could use a controlled flag variable, e.g.:

isAlive = True
while isAlive:
    while isAlive and doSomething:
        # to break, set isAlive = False, then use a break statement
        ...
    # if necessary, use "if not isAlive: break" immediately after the
    # inner loop to prevent code below it (but still inside the outer
    # loop) from running
Add multiple
break statements is a solution to go down a controlled number of steps:
x = True
while True:
    # stuff
    while True:
        # stuff
        if x == True:
            break
    if x == True:
        break
# more stuff
Alternatively, if you are not doing anything after the loop, you could add `import sys` to the top of the file and use `sys.exit()`, which ends the program.
@AllAwesome497 i also need it to break only that set of while statements. not the entire code.
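One pattern that does exactly this, breaking out of every nested loop at once without ending the whole program like sys.exit() would, is to wrap the loops in a function and return (the names here are made up):

```python
def run_game():
    while True:                   # outer loop
        while True:               # inner loop
            player_died = True    # stand-in for the real game-over condition
            if player_died:
                return "game over"   # return exits *all* enclosing loops

print(run_game())  # game over
```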
@MATTHEWBECHTEL i dont think you know that i know how to break while statements. i know how to do that. im just stuck on a double break
@ash15khng tried that. does not work sadly.
mainly cause its not in the same if statement.
it acts as if it were in the other while statements still even tho its farther down the code.
Try using a flag variable: just make it True whenever you want to break a loop, and have the flag checked on each iteration. I think it might help.
@KianAlford look, I am not familiar with Python. Can you brief me a little more about your problem in a general way? I might give you another suggestion also.
In python, you can use the break statement:
for example:
Please mark this as the correct answer thanks!
@PYer wait what if it isnt the right answer :thonk:
@AllAwesome497 I need to break multiple while loops inside one if statement
@KianAlford maybe add an if statement to all of them that breaks it if a certain condition is met?:
Ex:
that may work.
@AllAwesome497 no because of those variables before they are not defined but only once then
@KianAlford add `dead=False` at the beginning of it
@AllAwesome497 it is though
@PYer, first of all, it wasn't, second of all, it was a joke.
@AllAwesome497 what is the answer? he/she asked on how to break out of a while loop. i showed how to
@PYer He wants to break multiple loops within 1 if statement
@AllAwesome497 ah. do sys.exit()
@PYer I already said that. Although i just did a few minutes ago so u may not have seen it. was not on the reply thread | https://repl.it/talk/ask/breaking-while-loops/14405?order=votes | CC-MAIN-2020-24 | refinedweb | 497 | 82.34 |
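Putting the thread's flag suggestion together for the three-nested-loop case in the original question (a sketch; the loop bodies are placeholders):

```python
dead = False
while not dead:                  # loop 1
    while not dead:              # loop 2
        while True:              # loop 3
            dead = True          # the one event that should end everything
            break                # exits loop 3
        if dead:
            break                # exits loop 2
    # loop 1's own condition sees dead == True and stops

print("all three loops exited")
```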
import "encoding/xml"
Package xml implements a simple XML 1.0 parser that understands XML name spaces.
package main

import (
	"encoding/xml"
	"fmt"
	"log"
	"strings"
)

type Animal int

const (
	Unknown Animal = iota
	Gopher
	Zebra
)

func (a *Animal) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
	var s string
	if err := d.DecodeElement(&s, &start); err != nil {
		return err
	}
	switch strings.ToLower(s) {
	default:
		*a = Unknown
	case "gopher":
		*a = Gopher
	case "zebra":
		*a = Zebra
	}
	return nil
}

func (a Animal) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
	var s string
	switch a {
	default:
		s = "unknown"
	case Gopher:
		s = "gopher"
	case Zebra:
		s = "zebra"
	}
	return e.EncodeElement(s, start)
}

func main() {
	blob := `
<animals>
  <animal>gopher</animal>
  <animal>armadillo</animal>
  <animal>zebra</animal>
  <animal>unknown</animal>
  <animal>gopher</animal>
  <animal>bee</animal>
  <animal>gopher</animal>
  <animal>zebra</animal>
</animals>`
	var zoo struct {
		Animals []Animal `xml:"animal"`
	}
	if err := xml.Unmarshal([]byte(blob), &zoo); err != nil {
		log.Fatal(err)
	}

	census := make(map[Animal]int)
	for _, animal := range zoo.Animals {
		census[animal] += 1
	}

	fmt.Printf("Zoo Census:\n* Gophers: %d\n* Zebras: %d\n* Unknown: %d\n",
		census[Gopher], census[Zebra], census[Unknown])
}
package main

import (
	"encoding/xml"
	"fmt"
	"log"
	"strings"
)

type Size int

const (
	Unrecognized Size = iota
	Small
	Large
)

func (s *Size) UnmarshalText(text []byte) error {
	switch strings.ToLower(string(text)) {
	default:
		*s = Unrecognized
	case "small":
		*s = Small
	case "large":
		*s = Large
	}
	return nil
}

func (s Size) MarshalText() ([]byte, error) {
	var name string
	switch s {
	default:
		name = "unrecognized"
	case Small:
		name = "small"
	case Large:
		name = "large"
	}
	return []byte(name), nil
}

func main() {
	blob := `
<sizes>
  <size>small</size>
  <size>regular</size>
  <size>large</size>
  <size>unrecognized</size>
  <size>small</size>
  <size>normal</size>
  <size>small</size>
  <size>large</size>
</sizes>`
	var inventory struct {
		Sizes []Size `xml:"size"`
	}
	if err := xml.Unmarshal([]byte(blob), &inventory); err != nil {
		log.Fatal(err)
	}

	counts := make(map[Size]int)
	for _, size := range inventory.Sizes {
		counts[size] += 1
	}

	fmt.Printf("Inventory Counts:\n* Small: %d\n* Large: %d\n* Unrecognized: %d\n",
		counts[Small], counts[Large], counts[Unrecognized])
}
marshal.go read.go typeinfo.go xml.go
const (
	// Header is a generic XML header suitable for use with the output of Marshal.
	// This is not automatically added to any output of this package,
	// it is provided as a convenience.
	Header = `<?xml version="1.0" encoding="UTF-8"?>` + "\n"
)
HTMLAutoClose is the set of HTML elements that should be considered to close automatically.
See the Decoder.Strict and Decoder.Entity fields' documentation.
HTMLEntity is an entity map containing translations for the standard HTML entity characters.
See the Decoder.Strict and Decoder.Entity fields' documentation.
Escape is like EscapeText but omits the error return value. It is provided for backwards compatibility with Go 1.0. Code targeting Go 1.1 or later should use EscapeText.
EscapeText writes to w the properly escaped XML equivalent of the plain text data s.

When marshaling the exported fields of a struct:

- a field implementing Marshaler is written by calling its MarshalXML method.
- a field implementing encoding.TextMarshaler is written by encoding the result of its MarshalText method as text.
If a field uses a tag "a>b>c", then the element c will be nested inside parent elements a and b. Fields that appear next to each other that name the same parent will be enclosed in one XML element.
If the XML name for a struct field is defined by both the field tag and the struct's XMLName field, the names must match.
See MarshalIndent for an example.
Marshal will return an error if asked to marshal a channel, function, or map.
MarshalIndent works like Marshal, but each XML element begins on a new indented line that starts with prefix and is followed by one or more copies of indent according to the nesting depth.
If Unmarshal encounters a field type that implements the Unmarshaler interface, Unmarshal calls its UnmarshalXML method to produce the value from the XML element. Otherwise, if the value implements encoding.TextUnmarshaler, Unmarshal calls that value's UnmarshalText method. Whitespace is trimmed and ignored.
Unmarshal maps an XML element or attribute value to an integer or floating-point field by setting the field to the result of interpreting the string value in decimal. There is no check for overflow. Whitespace is trimmed and ignored.
An Attr represents an attribute in an XML element (Name=Value).
A CharData represents XML character data (raw text), in which XML escape sequences have been replaced by the characters they represent.
Copy creates a new copy of CharData.
A Comment represents an XML comment of the form <!--comment-->. The bytes do not include the <!-- and --> comment markers.
InputOffset returns the input stream byte offset of the current decoder position. The offset gives the location of the end of the most recently returned token and the beginning of the next token.
RawToken is like Token but does not verify that start and end elements match and does not translate name space prefixes to their corresponding URLs.
A Directive represents an XML directive of the form <!text>. The bytes do not include the <! and > markers.
Copy creates a new copy of Directive.
An Encoder writes XML data to an output stream.
NewEncoder returns a new encoder that writes to w.
Flush flushes any buffered XML to the underlying writer. See the EncodeToken documentation for details about when it is necessary.
Indent sets the encoder to generate XML in which each element begins on a new indented line that starts with prefix and is followed by one or more copies of indent according to the nesting depth.
A Name represents an XML name (Local) annotated with a name space identifier (Space). In tokens returned by Decoder.Token, the Space identifier is given as a canonical URL, not the short prefix used in the document being parsed.
A ProcInst represents an XML processing instruction of the form <?target inst?>
Copy creates a new copy of ProcInst.
A StartElement represents an XML start element.
func (e StartElement) Copy() StartElement
Copy creates a new copy of StartElement.
func (e StartElement) End() EndElement
End returns the corresponding XML end element.
A SyntaxError represents a syntax error in the XML input stream.
func (e *SyntaxError) Error() string
A TagPathError represents an error in the unmarshaling process caused by the use of field tags with conflicting paths.
func (e *TagPathError) Error() string
A Token is an interface holding one of the token types: StartElement, EndElement, CharData, Comment, ProcInst, or Directive.
CopyToken returns a copy of a Token.

UnmarshalXML decodes a single XML element beginning with the given start element. One common implementation strategy is to unmarshal into a separate value with a layout matching the expected XML using d.DecodeElement, and then to copy the data from that value into the receiver. Another common strategy is to use d.Token to process the XML object one token at a time. UnmarshalXML may not use d.RawToken.
UnsupportedTypeError is returned when Marshal encounters a type that cannot be converted into XML.
func (e *UnsupportedTypeError) Error() string
☞ Mapping between XML elements and data structures is inherently flawed: an XML element is an order-dependent collection of anonymous values, while a data structure is an order-independent collection of named values. See package json for a textual representation more suitable to data structures.
Package xml imports 12 packages and is imported by 17748 packages. Updated 2019-12-05.
The wprintf() function is defined in <cwchar> header file.
wprintf() prototype
int wprintf( const wchar_t* format, ... );
The wprintf() function writes the wide string pointed to by format to stdout. The wide string format may contain format specifiers starting with % which are replaced by the values of variables that are passed to the wprintf() function as additional arguments.
wprintf() Parameters
- format: pointer to a null-terminated wide string that is written to stdout. It may contain embedded format specifiers.
- ...: additional arguments whose values are substituted for the format specifiers.
wprintf() Return value
- If successful, the wprintf() function returns number of characters written.
- On failure it returns a negative value.
Example: How wprintf() function works?
#include <cwchar>
#include <clocale>

int main()
{
    wint_t x = 5;
    wchar_t name[] = L"André ";
    setlocale(LC_ALL, "en_US.UTF-8");

    wprintf(L"x = %d \n", x);
    wprintf(L"Hello %ls \n", name);

    return 0;
}
When you run the program, the output will be:
x = 5
Hello André
mount_udf -- mount a UDF file system
mount_udf [-v] [-o options] [-C charset] special | node
The mount_udf utility attaches the UDF file system residing on the device
special to the global file system namespace at the location indicated by
node.
The options are as follows:
-o Options are specified with a -o flag followed by a comma separated
string of options. See the mount(8) man page for possible
options and their meanings. The following UDF specific options
are available:
-v Be verbose about mounting the UDF file system.
-C charset
Specify local charset to convert Unicode file names.
cdcontrol(1), mount(2), unmount(2), fstab(5), mount(8)
The mount_udf utility first appeared in FreeBSD 5.0.
FreeBSD 5.2.1 March 23, 2002 FreeBSD 5.2.1 | http://nixdoc.net/man-pages/FreeBSD/man8/mount_udf.8.html | CC-MAIN-2013-20 | refinedweb | 128 | 54.52 |
signature URL structure Url : URL
This structure provides functions for parsing URLs, extracting URL constituents, constructing URLs from constituents and resolving relative URLs. Absolute and relative URLs are supported. URL parsing conforms to RFCs 1738 and 1808 with one exception: We allow a URL to contain a single-letter device constituent before the path, which is a mostly-conservative common extension. Device letters enable the embedding of Windows-style path and file names within the set of URLs.
See also: OS.Path, HASHABLE, ORDERED
import structure Url from "x-alice:/lib/system/Url" import signature URL from "x-alice:/lib/system/URL-sig"
signature URL =
sig
    eqtype url
    type t = url

    type scheme    = string option
    type authority = string option
    type device    = char option
    type path      = string list
    type query     = string option
    type fragment  = string option

    exception Malformed
    exception NotLocal

    val empty : url

    val setScheme        : url * scheme -> url
    val setAuthority     : url * authority -> url
    val setDevice        : url * device -> url
    val makeAbsolutePath : url -> url
    val makeRelativePath : url -> url
    val setPath          : url * path -> url
    val setQuery         : url * query -> url
    val setFragment      : url * fragment -> url

    val getScheme        : url -> scheme
    val getAuthority     : url -> authority
    val getDevice        : url -> device
    val isAbsolutePath   : url -> bool
    val getPath          : url -> path
    val getQuery         : url -> query
    val getFragment      : url -> fragment

    val fromString       : string -> url
    val toString         : url -> string
    val toStringRaw      : url -> string
    val toLocalFile      : url -> string

    val isAbsolute       : url -> bool
    val resolve          : url -> url -> url

    val equal            : url * url -> bool
    val compare          : url * url -> order
    val hash             : url -> int
end
The type of parsed URLs. Values of this type represent absolute as well as relative URLs. The equivalence test using = is reliable for absolute URLs only. URL values are always normalized.
The types of the respective URL constituents. Option types are used to indicate the presence or absence of optional individual constituents. Device letters are always normalized to their lower-case equivalent, that is, are always in the range #"a" ... #"z" if present. The absent path is equivalent to the empty path, represented by nil. Path constituents can contain empty strings. The last element of a path constituent being the empty string represents the absence of a file name constituent (that is, the string representation of the path constituent ends in a slash). Constituents use no encoding except for the query, which has to encode #"=" and #"&". The only URL constituent for which there is no explicit type defined is the flag whether the path constituent is absolute or relative, that is, whether its string representation starts with a slash or not.
indicates that a string is not a well-formed URL in string representation, or that a URL constituent has no well-formed string representation.
raised by toLocalFile to indicate that a URL does not have a local file name equivalent.
represents the empty URL, which has all constituents absent resp. empty. Its string representation is the empty string.
return a URL with the corresponding constituent replaced. If x is SOME _, this causes the constituent to be present in the result, if it is NONE, the constituent is absent in the result. Raise Malformed if x is not a valid value for the constituent.
return a URL equivalent to url except that its path constituent is absolute resp. relative.
return a URL with the corresponding constituent replaced. If x is SOME _, this causes the constituent to be present in the result, if it is NONE, the constituent is absent in the result.
return the corresponding constituents of url. For optional constituents, return SOME _ if the constituent is present, NONE otherwise.
returns true if the path constituent of url represents an absolute path, that is, a path whose string representation starts with a slash, false otherwise.
parses s as a URL in string representation, raising Malformed if it is not well-formed. The resulting URL is normalized and returned.
converts url into its string representation. Characters within constituents are encoded as required by RFC 1738.
converts url into its string representation, without encoding any characters. Should only be used to construct messages and if the URL is known not to contain any control characters.
if possible, converts url to a local file name, else raises the NotLocal exception.
returns true if url represents an absolute URL, false otherwise. An URL is absolute if at least one of scheme or device is present or if the path constituent is an absolute path or starts with ".", ".." or the character #"~".
resolves relUrl with respect to baseUrl and returns the resulting URL. baseUrl should be an absolute URL, although this is not required.
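For comparison only (this is not Alice ML): RFC 1808/3986-style relative resolution, as performed by resolve, behaves much like Python's standard urllib.parse.urljoin. A small illustration, with made-up example URLs:

```python
from urllib.parse import urljoin

base = "http://example.org/a/b/c"

resolved = {
    "d": urljoin(base, "d"),                    # sibling file
    "../x": urljoin(base, "../x"),              # up one directory
    "/root": urljoin(base, "/root"),            # absolute path, same authority
    "http://other.net/p": urljoin(base, "http://other.net/p"),  # already absolute
}
for rel, result in resolved.items():
    print(f"{rel!r:24} -> {result}")
```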
returns true if url1 and url2 represent the same URL, false otherwise. Is identical to
url1 = url2
is equivalent to
String.compare (toString url1, toString url2)
returns a hash value for url. | http://www.ps.uni-saarland.de/alice/manual/library/url.html | CC-MAIN-2018-47 | refinedweb | 793 | 62.88 |
Rose::HTML::Form::Repeatable - Repeatable sub-form automation.
package Person;

use base 'Rose::Object';

use Rose::Object::MakeMethods::Generic
(
  scalar => [ 'name', 'age' ],
  array  => 'emails',
);
...

package Email;

use base 'Rose::Object';

use Rose::Object::MakeMethods::Generic
(
  scalar =>
  [
    'address',
    'type' => { check_in => [ 'home', 'work' ] },
  ],
);
...

package EmailForm;

use base 'Rose::HTML::Form';

sub build_form
{
  my($self) = shift;

  $self->add_fields
  (
    address     => { type => 'email', size => 50, required => 1 },
    type        => { type => 'pop-up menu', choices => [ 'home', 'work' ],
                     required => 1, default => 'home' },
    save_button => { type => 'submit', value => 'Save Email' },
  );
}

sub email_from_form { shift->object_from_form('Email') }
sub init_with_email { shift->init_with_object(@_) }
...

package PersonEmailsForm;

use base 'Rose::HTML::Form';

sub build_form
{
  my($self) = shift;

  $self->add_fields
  (
    name        => { type => 'text', size => 25, required => 1 },
    age         => { type => 'integer', min => 0 },
    save_button => { type => 'submit', value => 'Save Person' },
  );

  ##
  ## The important part happens here: add a repeatable form
  ##

  # A person can have zero or more emails
  $self->add_repeatable_form(emails => EmailForm->new);

  # Alternate ways to add the same repeatable form:
  #
  # Name/hashref pair:
  # $self->add_repeatable_form(emails => { form_class => 'EmailForm' });
  #
  # Using the generic add_form() method:
  # $self->add_form
  # (
  #   emails =>
  #   {
  #     form_class    => 'EmailForm',
  #     default_count => 0,
  #     repeatable    => 1,
  #   }
  # );
  #
  # See the documentation for Rose::HTML::Form's add_forms() and
  # add_repeatable_forms() methods for more information.
}

sub init_with_person
{
  my($self, $person) = @_;

  $self->init_with_object($person);

  # Delete any existing email forms and create
  # the appropriate number for this $person
  my $email_form = $self->form('emails');
  $email_form->delete_forms;

  my $i = 1;

  foreach my $email ($person->emails)
  {
    $email_form->make_form($i++)->init_with_email($email);
  }
}

sub person_from_form
{
  my($self) = shift;

  my $person = $self->object_from_form(class => 'Person');

  my @emails;

  foreach my $form ($self->form('emails')->forms)
  {
    push(@emails, $form->email_from_form);
  }

  $person->emails(@emails);

  return $person;
}
Rose::HTML::Form::Repeatable provides a convenient way to include zero or more copies of a nested form. See the nested forms section of the Rose::HTML::Form documentation for some essential background information.
Rose::HTML::Form::Repeatable works like a wrapper for an additional level of sub-forms. The Rose::HTML::Form::Repeatable object itself has no fields. Instead, it has a list of zero or more sub-forms, each of which is named with a positive integer greater than zero.
The synopsis above contains a full example. In it, the
PersonEmailsForm contains zero or more EmailForm sub-forms under the name "emails".
Each repeated form must be of the same class. A repeated form can be generated by cloning a prototype form or by instantiating a specified prototype form class.
A repeatable form decides how many of each repeated sub-form it should contain based on the contents of the query parameters (contained in the params attribute for the parent form). If there are no params, then the default_count determines the number of repeated forms.
Repeated forms are created in response to the init_fields or prepare methods being called. In the synopsis example, the
person_from_form method does not need to create, delete, or otherwise set up the repeated email sub-forms because it can sensibly assume that the init_fields and/or prepare methods have been called already. On the other hand, the
init_with_person method must configure the repeated email forms based on the number of email addresses contained in the
Person object that it was passed.
On the client side, the usual way to handle repeated sub-forms is to make an AJAX request for new content to add to an existing form. The make_form method is designed to do exactly that, returning a correctly namespaced Rose::HTML::Form-derived object ready to have its fields serialized (usually through a template) into HTML which is then inserted into the existing form on a web page.
This class inherits from and follows the conventions of Rose::HTML::Form. Inherited methods that are not overridden will not be documented a second time here. See the Rose::HTML::Form documentation for more information.
Constructs a new Rose::HTML::Form::Repeatable object based on PARAMS, where PARAMS are name/value pairs. Any object method is a valid parameter name.
Get or set the name of the default Rose::HTML::Form-derived class of the repeated form. The default value is Rose::HTML::Form.
Get or set the default number of repeated forms to create in the absence of any parameters. The default value is zero.
Get or set a boolean value that indicates whether or not it's OK for a repeated form to be empty. (That is, validation should not fail if the entire sub-form is empty, even if the sub-form has required fields.) Defaults to false.
In addition to doing all the usual things that the base class implementation does, this method creates or deletes repeated sub-forms as necessary to make sure they match the query parameters, if present, or the default_count if there are no parameters that apply to any of the sub-forms.
Given a list of OBJECTS or name/value pairs PARAMS, initialize each sub-form, taking one object from the list and passing it to a method called on each sub-form. The first object is passed to the first form, the second object to the second form, and so on. (Form order is determined by the order forms are returned from the forms method.)
Valid parameters are:
A reference to an array of objects with which to initialize the form(s). This parameter is required if PARAMS are passed.
The name of the method to call on each sub-form. The default value is
init_with_object.
Given an integer argument greater than zero, create, add to the form, and return a new numbered prototype form clone object.
Create, add to the form, and return a new numbered prototype form clone object whose rank is one greater than the the highest-ranking existing sub-form.
Return a list (in list context) or reference to an array (in scalar context) of objects corresponding to the list of repeated sub-forms. This is done by calling a method on each sub-form and collecting the return values. Name/value parameters may be passed. Valid parameters are:
The name of the method to call on each sub-form. The default value is
object_from_form.
This method does the same thing as the init_fields method, but calls through to the base class prepare method rather than the base class init_fields method.
Get or set the Rose::HTML::Form-derived object used as the prototype for each repeated form.
Get or set the name of the Rose::HTML::Form-derived class used by the prototype_form_clone method to create each repeated sub-form. The default value is determined by the default_form_class class method.
Get or set the specification for the Rose::HTML::Form-derived object used as the prototype for each repeated form. The SPEC can be a reference to an array, a reference to a hash, or a list that will be coerced into a reference to an array. In the absence of a prototype_form, the SPEC is dereferenced and passed to the
new() method called on the prototype_form_class in order to create each prototype_form_clone.
Returns a clone of the prototype_form, if one was set. Otherwise, creates and returns a new prototype_form_class object, passing the prototype_form_spec to the constructor.
John C. Siracusa (siracusa@gmail.com)
Copyright (c) 2010 by John C. Siracusa. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. | http://search.cpan.org/~jsiracusa/Rose-HTML-Objects-0.618/lib/Rose/HTML/Form/Repeatable.pm | CC-MAIN-2014-52 | refinedweb | 1,215 | 52.6 |
Differences Between PIG vs MapReduce
Pig is a scripting language used for exploring large data sets. Pig Latin is a Hadoop extension that simplifies Hadoop programming by providing a high-level data processing language. Because Pig is a scripting language, we can achieve the same functionality by writing very few lines of code. MapReduce is a solution for scaling data processing. MapReduce is not a program; it is a framework for writing distributed data processing programs. Programs written using the MapReduce framework have successfully scaled across thousands of machines.
PIG
Pig is a high-level dataflow language. Pig works with any version of Hadoop.
Components of Pig
- Pig Latin — a language used to express data flows
- Pig Engine — an engine on top of Hadoop
Advantages of PIG
- Removes the need for users to tune Hadoop
- Insulates users from changes in Hadoop interfaces.
- Increases productivity.
- In one test 10 lines of Pig Latin ≈ 200 lines of Java
- What takes 4 hours to write in Java takes about 15 minutes in Pig Latin
- Open system to non-Java programmers
If we know Hive and Pig, there is no need to worry about the code when the Hadoop version is upgraded to a higher version.

For example: if the Hadoop version is upgraded from 2.6 to 2.7, Pig works with either version; there is no need to worry about whether the code still works in the higher version.
Features of PIG
Pig Latin is a data flow language
- Provides support for data types – long, float, char array, schemas, and functions
- Is extensible and supports User-Defined Functions
- Metadata not required, but used when available
- Operates on files in HDFS
- Provides common operations like JOIN, GROUP, FILTER, SORT
PIG Usage scenario
- Weblog processing
- Data processing for web search platforms
- Ad hoc queries across large data sets
- Rapid prototyping of algorithms for processing large data sets
Who uses Pig
- Yahoo, one of the heaviest users of Hadoop, runs 40% of all its Hadoop jobs in Pig.
- Twitter is another well-known user of Pig.
MapReduce
- In the past, processing increasingly larger datasets was a problem. All your data and computation had to fit on a single machine. To work on more data, you had to buy a bigger, more expensive machine.
- So, what is the solution to processing a large volume of data when it is no longer technically or financially feasible to do on a single machine?
- MapReduce is a solution for scaling data processing.
MapReduce has three stages/phases
The steps below are executed in sequence.
- Mapper phase
Input from the HDFS file system.
- Shuffle and sort
Input to shuffle and sort is the output of the mapper.
- Reducer
Input to the reducer is the output of shuffle and sort.
MapReduce understands the data only as key-value pairs.
- The main purpose of the map phase is to read all of the input data and transform or filter it. The transformed or filtered data is further analyzed by business logic in the reduce phase, although a reduce phase is not strictly required.
- The main purpose of the reduce phase is to apply business logic to answer a question and solve a problem.
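As a rough illustration, the three phases above can be simulated in a few lines of plain Python (a toy sketch, not Hadoop code; the function names here are invented for the illustration):

```python
from collections import defaultdict

def map_phase(lines):
    # Mapper: read the input and emit a (word, 1) pair for every word.
    for line in lines:
        for word in line.split():
            yield (word, 1)

def shuffle_and_sort(pairs):
    # Shuffle and sort: group the mapper output by key,
    # and hand the keys to the reducer in sorted order.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return sorted(groups.items())

def reduce_phase(grouped):
    # Reducer: apply the business logic (here: sum the counts per word).
    for word, counts in grouped:
        yield (word, sum(counts))

lines = ["pig runs on hadoop", "hadoop runs mapreduce"]
result = dict(reduce_phase(shuffle_and_sort(map_phase(lines))))
print(result)
```

In real Hadoop the map and reduce functions run distributed across many machines, and the shuffle and sort step is performed by the framework between the two phases.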
Head to Head Comparison Between PIG and MapReduce (Infographics)
Below are the Top 4 comparisons between PIG and MapReduce:
Key Differences Between PIG and MapReduce
Below are the most important Differences Between PIG and MapReduce:
Is PIG or MapReduce Faster?
All PIG jobs are rewritten as MapReduce jobs, so MapReduce can only be faster.
Things that can’t be done well in PIG
When something is hard to express in Pig, you are going to end up with a performance penalty, i.e., building the operation up out of several primitives.
Some examples:
- Complex groupings or joins
- Combining lots of data sets
- Complex usage of the distributed cache (replicated join)
- Complex cross products
- Doing crazy stuff in nested FOREACH
In these cases, Pig is going to run a whole chain of MapReduce jobs where the same work could have been done with fewer.
MapReduce Usage Scenarios
- When there is tricky stuff to achieve, use MapReduce.
Why is development much faster in PIG?
- Fewer lines of code, i.e., smaller code, saves developer time.
- Fewer Java-level bugs to work out, but those bugs are harder to track down.
In addition to the above differences, PIG supports the following:
- It allows developers to store data anywhere in the pipeline.
- Declares execution plans.
- It provides operators to perform ETL (Extract, Transform, and Load) functions.
Head to Head Comparison Between PIG and MapReduce
Below is a list of points that describe the comparison between PIG and MapReduce:
Conclusion
Example: we need to count the occurrences of the words present in a sentence.
Which is the better way to write the program?
PIG or MapReduce
Writing the program in pig
input_lines = LOAD '/tmp/word.txt' AS (line:chararray);
words = FOREACH input_lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
filtered_words = FILTER words BY word MATCHES '\\w+';
word_groups = GROUP filtered_words BY word;
word_count = FOREACH word_groups GENERATE COUNT(filtered_words) AS count, group AS word;
ordered_word_count = ORDER word_count BY count DESC;
STORE ordered_word_count INTO '/tmp/results.txt';
Writing the program in MapReduce.
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.Job;
public class WordCount {
public static void main(String[] args) throws Exception {
if (args.length != 2) {
System.out.printf(
"Usage: WordCount <input dir> <output dir>\n");
System.exit(-1);
}
@SuppressWarnings("deprecation")
Job job = new Job();
job.setJarByClass(WordCount.class);
job.setJobName("Word Count");
FileInputFormat.setInputPaths(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
job.setMapperClass(WordMapper.class);
job.setReducerClass(SumReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
boolean success = job.waitForCompletion(true);
System.exit(success ? 0 : 1);
}
}
If the functionality can be achieved with PIG, what is the use of writing the same functionality in MapReduce (lengthy code)?
Always use the right tool for the job, and get the job done faster and better.
Recommended Articles
This has been a useful guide to PIG vs MapReduce. Here we have discussed the head-to-head comparison and key differences, along with infographics and a comparison table. You may also look at the following articles to learn more –
Supplementary Information for “Detect tissue heterogeneity in gene expression data with BioQC” (Jitao David Zhang, Klas Hatje, Gregor Sturm, Clemens Broger, Martin Ebeling, Martine Burtin, Fabiola Terzi, Silvia Ines Pomposiello and Laura Badi)
In this vignette, we explain the underlying algorithmic details of our implementation of the Wilcoxon-Mann-Whitney test. The source code used to produce this document can be found in the GitHub repository BioQC.
BioQC is a R/Bioconductor package to detect tissue heterogeneity from high-throughput gene expression profiling data. It implements an efficient Wilcoxon-Mann-Whitney test, and offers tissue-specific gene signatures that are ready to use ‘out of the box’.
The Wilcoxon-Mann-Whitney (WMW) test is a non-parametric statistical test to test if median values of two population are equal or not. Unlike the t-test, it does not require the assumption of normal distributions, which makes it more robust against noise.
We improved the computational efficiency of the Wilcoxon-Mann-Whitney test in comparison to the native R implementation based on three modifications:
While (1) and (2) are straightforward, we elaborate (3) in the following.
Let \(W_{a,b}\) be the approximative WMW test of two gene vectors \(a,b\), where \(a\) is the gene set of interest, typically containing less than a few hundreds of genes, and \(b\) is the set of all genes outside the gene set (background genes) typically containing \(>10000\) genes. In the context of BioQC, the gene sets are referred to as tissue signatures.
Given an \(m \times n\) input matrix of gene expression data with \(m\) genes and \(n\) samples \(s_1, \dots, s_n\), and \(k\) gene sets \(d_1, \dots, d_k\), the WMW-test needs to be applied for each sample \(s_i, i \in 1..n\) and each gene set \(d_j, j \in 1..k\). The runtime of the WMW-test is essentially determined by the sorting operation on the two input vectors. Using native R
wilcox.test, the vectors \(a\) and \(b\) are sorted individually for each gene set. However, in the context of gene set analysis, this is futile, as the (large) background set changes insignificantly in relation to the (small) gene set, when testing different gene sets on the same sample.
Therefore, we approximate the WMW-test by extending \(b\) to all genes in the sample, keeping the background unchanged when testing multiple gene sets. Like this, \(b\) has to be sorted only once per sample. The individual gene sets still need to be sorted, which is not a major issue, as they are small in comparison to the set of background genes.
Figure 1: BioQC speeds up the Wilcoxon-Mann-Whitney test by avoiding futile sorting operations on the same sample.
To demonstrate BioQC’s superior performance, we apply both BioQC and the native R
wilcox.test to random expression matrices and measure the runtime.
We setup random expression matrices of 20186 human protein-coding genes of 1, 5, 10, 50, or 100 samples. Genes are \(i.i.d\) distributed following \(\mathcal{N}(0,1)\). The native R and the BioQC implementations of the Wilcoxon-Mann-Whitney test are applied to the matrices respectively.
The numeric results of both implementations,
bioqcNumRes (from BioQC) and
rNumRes (from R), are equivalent, as shown by the next command.
The BioQC implementation is more than 500 times faster: while it takes about one second for BioQC to calculate enrichment scores of all 155 signatures in 100 samples, the native R implementation takes about 20 minutes:
Figure 2: Time benchmark results of BioQC and R implementation of Wilcoxon-Mann-Whitney test. Left panel: elapsed time in seconds (logarithmic Y-axis). Right panel: ratio of elapsed time by two implementations. All results achieved by a single thread on in a RedHat Linux server.
We have shown that BioQC achieves identical results as the native implementation in two orders of magnitude less time. This renders BioQC a highly efficient tool for quality control of large-scale high-throughput gene expression data.
## [1] rbenchmark_1.0.0     gplots_3.0.1         gridExtra_2.3
## [4] latticeExtra_0.6-28  RColorBrewer_1.1-2   lattice_0.20-35
## [7] hgu133plus2.db_3.2.3 org.Hs.eg.db_3.7.0   AnnotationDbi_1.44.0
## [10] IRanges_2.16.0      S4Vectors_0.20.0     BioQC_1.10.0
## [13] Biobase_2.42.0      BiocGenerics_0.28.0  Rcpp_0.12.19
## [16] testthat_2.0.1      knitr_1.20
##
## loaded via a namespace (and not attached):
## [1] highr_0.7          compiler_3.5.1     bitops_1.0-6
## [4] tools_3.5.1        digest_0.6.18      bit_1.1-14
## [7] RSQLite_2.1.1      evaluate_0.12      memoise_1.1.0
## [10] gtable_0.2.0       pkgconfig_2.0.2    rlang_0.3.0.1
## [13] DBI_1.0.0          yaml_2.2.0         stringr_1.3.1
## [16] caTools_1.17.1.1   gtools_3.8.1       rprojroot_1.3-2
## [19] bit64_0.9-7        grid_3.5.1         R6_2.3.0
## [22] rmarkdown_1.10     gdata_2.18.0       blob_1.1.1
## [25] magrittr_1.5       backports_1.1.2    htmltools_0.3.6
## [28] KernSmooth_2.23-15 stringi_1.2.4
Each of the .NET languages can provide its own keywords for the types it supports. For example, a keyword for an integer in VB is Integer, whereas in C# or C++ it is int; a boolean is Boolean in VB, but bool in C# or C++. In any case, the integer is mapped to the class Int32, and the boolean is mapped to the class Boolean in the System namespace. Table C-1 lists all simple data types common to the .NET Framework. Non-CLS-compliant types are not guaranteed to interoperate with all CLS-compliant languages.
Table C-2 shows a number of useful container types that the .NET Framework provides. | http://etutorials.org/Programming/.NET+Framework+Essentials/Appendix+C.+Common+Data+Types/ | CC-MAIN-2018-13 | refinedweb | 112 | 74.79 |
I have the following data structure:
import pandas as pd

df = pd.DataFrame({"Date": ["2015-02-02 14:19:00", "2015-02-02 14:22:00", "2015-02-17 14:57:00", "2015-02-17 14:58:59"], "Occurrence": [1, 0, 1, 1]})
df["Date"] = pd.to_datetime(df["Date"])
I want to plot the following:
import seaborn as sns

sns.set_theme(style="darkgrid")
sns.lineplot(x="Date", y="Occurrence", data=df)
And I get this:
I only want the hours and minutes to be shown on the x axis (the date of the day is unnecessary). How can I do that?
Solution:

You can use matplotlib's DateFormatter. Updated code and plot below. I did notice that the Date column you posted had dates on the 2nd and the 17th. I changed those to show everything on the 2nd; otherwise, there would be too many entries. Hope this helps…
df = pd.DataFrame({"Date": ["2015-02-02 10:19:00", "2015-02-02 12:22:00", "2015-02-02 14:57:00", "2015-02-02 16:58:59"], "Occurrence": [1, 0, 1, 1]})
df["Date"] = pd.to_datetime(df["Date"])

import seaborn as sns
sns.set_theme(style="darkgrid")
ax = sns.lineplot(x="Date", y="Occurrence", data=df)

import matplotlib.dates as mdates
ax.xaxis.set_major_locator(mdates.HourLocator(interval=2))
# set formatter
ax.xaxis.set_major_formatter(mdates.DateFormatter('%H:%M'))
Output Plot | https://devsolus.com/2022/06/22/how-can-i-only-plot-hours-and-minutes-in-seaborn/ | CC-MAIN-2022-27 | refinedweb | 223 | 55.61 |
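For reference, the '%H:%M' pattern handed to DateFormatter is ordinary strftime syntax — the same one Python's own datetime uses:

```python
from datetime import datetime

# %H = zero-padded 24-hour clock, %M = zero-padded minutes
d = datetime(2015, 2, 2, 14, 19)
print(d.strftime("%H:%M"))  # 14:19
```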
Remove Jira Issue Attachments by MD5 Hash Redux
In my previous post Remove Jira Issue Attachments by MD5 Hash I showed how to remove attachments from JIRA based on the MD5 hash of the attachment.
I was feeling pretty good after writing that post and having eaten my doughnut. So, I went to tell a couple of my colleagues about it. This was their reaction …
So, you expect me to …
- know what an MD5 hash is?
- know how to get the MD5 hash of a file?
- know where to find this script to add the hash to?
- not mess the whole thing up in the process?
Um … uhh … yes? Ok, so maybe my approach isn’t super easy except to the programmer type. And now that I think about it I don’t want to have to be the one to always fix these. So, back to the drawing board. Let’s get this right.
So, I need to make it easy for others than myself to help maintain. Maybe if I made a way for my colleagues to take an attachment from an issue ticket and simply drop to a centralized storage location that could be scanned by the script … yeah that could work. It involves no knowledge of MD5 hashes or scripting and should be easy for pretty much anyone to do.
Now if I only had a location where we could place these attachments. A place that JIRA is able to scan. A place that all my colleagues have easy access to. If only such a place actually existed … hmm … oh, wait!! I could just have them attach the files to another JIRA ticket that will be used as a control ticket of sorts. Any attachments attached to this ticket would be compared against by the script and if a match is found then the issue attachment is deleted. (insert Handel’s Messiah playing in my head here)
The great thing is that most of my script doesn’t really need to be changed. All I need to do is specify a control ticket key in the script and have the script build the list of hashes based on that ticket. Here is my ticket …
And here is the new script. I’ve cleaned it up a little from the last version and removed a call to a method that is currently set as deprecated. It still worked even with the call, but best to get rid of that call before Atlassian removes the method altogether. Simply replace “{Project Key}-{Issue Number}” on line 12 with the issue key that holds your attachments to remove. So, if for instance the issue is in the FOO project and the issue number is 789 then that line would look like this …
def controlIssue = “FOO-789”;
import com.atlassian.jira.component.ComponentAccessor;
import com.atlassian.jira.issue.AttachmentManager;
import com.atlassian.jira.issue.attachment.FileSystemAttachmentDirectoryAccessor
import com.atlassian.jira.issue.Issue;
import com.atlassian.jira.issue.IssueManager;
import java.security.*;

/***********************************************************************************/
/* This is the ticket that has the attachments on it to compare MD5 hashes against */
/***********************************************************************************/
def controlIssue = "{Project Key}-{Issue Number}";
/***********************************************************************************/

/************************************************************/
/* Don't edit below this unless you know what you are doing */
/************************************************************/

// Get the attachment hashes for our control issue to compare against
def attachmentHashes = getAttachmentHashesFromIssue(controlIssue);

// Obviously we don't want to run this on the control issue ... only on other issues.
if(event.issue.key != controlIssue) {
    deleteMatchingAttachments(attachmentHashes);
}

public void deleteMatchingAttachments(List<String> deleteHashes){
    def issue = event.issue;
    def attachmentManager = ComponentAccessor.getComponent(AttachmentManager);
    def attachments = issue.getAttachments();
    def attachmentFile = null;
    def bytes = null;
    def md = MessageDigest.getInstance("MD5");
    def digest = null;
    def hash = "";

    // Loop through each attachment on the issue
    for(a in attachments) {
        attachmentFile = getAttatchmentFile(issue, a.getId());
        bytes = getBytesFromFile(attachmentFile);
        digest = md.digest(bytes);
        hash = String.format("%032x", new BigInteger(1, digest));

        // Compare hash to the list of hashes we don't want
        for(h in deleteHashes) {
            if(hash == h) {
                attachmentManager.deleteAttachment(a);
                break;
            }
        }
    }
}

public List<String> getAttachmentHashesFromIssue(String controlIssueKey) {
    def deleteHashes = [];
    def attachmentManager = ComponentAccessor.getComponent(AttachmentManager);
    def issueManager = ComponentAccessor.getComponent(IssueManager);
    def issue = issueManager.getIssueObject(controlIssueKey);
    def controlIssueAttachments = attachmentManager.getAttachments(issue);
    def attachmentFile = null;
    def bytes = null;
    def md = MessageDigest.getInstance("MD5");
    def digest = null;
    def hash = "";

    // Get hashes for all the attachments in the control issue
    for(a in controlIssueAttachments) {
        attachmentFile = getAttatchmentFile(issue, a.getId());
        bytes = getBytesFromFile(attachmentFile);
        digest = md.digest(bytes);
        hash = String.format("%032x", new BigInteger(1, digest));
        deleteHashes.add(hash);
    }
    return deleteHashes;
}

public byte[] getBytesFromFile(File file) throws IOException {
    def length = file.length();
    if (length > Integer.MAX_VALUE) {
        throw new IOException("File is too large!");
    }
    def bytes = new byte[(int)length];
    def offset = 0;
    def numRead = 0;
    def is = new FileInputStream(file);
    try {
        while (offset < bytes.length && (numRead=is.read(bytes, offset, bytes.length-offset)) >= 0) {
            offset += numRead;
        }
    } finally {
        is.close();
    }
    if (offset < bytes.length) {
        throw new IOException("Could not completely read file " + file.getName());
    }
    return bytes;
}

public File getAttatchmentFile(Issue issue, Long attatchmentId){
    return ComponentAccessor.getComponent(FileSystemAttachmentDirectoryAccessor.class).getAttachmentDirectory(issue).listFiles().find({ File it-> it.getName().equals(attatchmentId.toString()) });
}
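The core comparison in the script is just "MD5 the bytes, render as 32 zero-padded hex characters, compare strings". As a language-neutral sanity check (a Python sketch, not part of the Jira script; the helper name is mine), the same digest format is:

```python
import hashlib

def md5_hex(data: bytes) -> str:
    # hexdigest() already yields the zero-padded, 32-character lowercase
    # form that the Groovy script builds with String.format("%032x", ...)
    return hashlib.md5(data).hexdigest()

print(md5_hex(b""))  # d41d8cd98f00b204e9800998ecf8427e
```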
And now my colleagues sing my praises (in my dreams) instead of cursing my name (which maybe still happens when I make hard to update workflows). Oh well, you live and learn. | https://iamdav.in/tag/jira/ | CC-MAIN-2021-43 | refinedweb | 871 | 50.94 |
Update generator plugin with FieldList sub-objects being updated
The generator doesn't receive updates if objects attached to a FieldList are updated (moved in space, resized, or whatever).

Here's the code:
def GetDirty(self, op, doc):
    if op is None or doc is None:
        raise RuntimeError("Failed to retrieves op or doc.")

    self.DIRTY_SELF = op.GetDirty(c4d.DIRTYFLAGS_MATRIX | c4d.DIRTYFLAGS_DATA | c4d.DIRTYFLAGS_DESCRIPTION)

    if op[PLUGIN_FIELD] is not None:
        # PLUGIN_FIELD is a FIELDLIST defined in a res file
        self.DIRTY_FIELDS = op[PLUGIN_FIELD].GetDirty(doc)

    return self.DIRTY_SELF + self.DIRTY_FIELDS
op[PLUGIN_FIELD].GetDirty(doc) is only changed when properties changed inside of the FieldList (like opacity, blend-mode and so on).
But it does update if I disable and enable the generator in the Object Manager.
Should I iterate through FieldList object and through child GroupField sub-objects to check if they are also dirty?
Hi @baca, thanks for reaching out to us.

With regard to your question, as written in the documentation, FieldList::GetDirty() returns the dirty state of the GUI only. The dirty state of the objects included in the list, as well as of the object owning the list, should be checked independently, object by object.
Best R.
@r_gigante Thanks man.
Can you please highlight the proper way to check the dirtiness of the layers?
- get FieldList's root FieldList.GetLayersRoot()
- get root's direct children GeListNode.GetChildren()
- get each child dirtyness, and if child also has GeListNode.GetChildren() list, check their dirty recursively as well
right?
Hi,
you probably have to check the dirty count of the BaseObjects attached to the FieldLayers that support such a control object. The dirty method of the FieldList probably only considers the layer nodes as relevant data for its evaluation. I made a post on a similar topic here, which shows you how to iterate through a FieldList and print out the layers and their attached objects (if there are any).
Cheers,
zipit
Thanks @zipit, it works.
Just noticed it has to be adjusted with GetClone(), otherwise I'm getting "link is not alive" error
def get_field_layers(op):
    """ Returns all field layers that are referenced in a field list. """
    def flatten_tree(node):
        """ Listifies a GeListNode tree. """
        res = []
        while node:
            # visit this node, then its children, then its siblings
            res.append(node.GetClone())
            res += flatten_tree(node.GetDown())
            node = node.GetNext()
        return res
/*
 * Copyright (C) 2006 Mirko Stocker <me@misto.ch>
 *
 * Alternatively, the contents of this file may be used under the terms of
 * either of the GNU General Public License Version 2 or later (the "GPL"),
 * or the GNU Lesser General Public License Version 2.1 or later (the "LGPL"),
 * in which case the provisions of the GPL or the LGPL are applicable instead
 * of those above. If you wish to allow use of your version of this file only
 * under the terms of either the GPL or the LGPL, and not to allow others to
 * use your version of this file under the terms of the CPL, indicate your
 * decision by deleting the provisions above and replace them with the notice
 * and other provisions required by the GPL or the LGPL. If you do not delete
 * the provisions above, a recipient may use your version of this file under
 * the terms of any one of the CPL, the GPL or the LGPL.
 ***** END LICENSE BLOCK *****/

package org.jruby.ast.visitor.rewriter.utils;

import org.jruby.ast.visitor.rewriter.ReWriteVisitor;

public class DRegxReWriteVisitor extends ReWriteVisitor {

    public DRegxReWriteVisitor(ReWriterContext config) {
        super(config);
    }

    protected boolean inDRegxNode() {
        return true;
    }
}
In an attempt to understand how BatchNorm1d works in PyTorch, I tried to match the output of a BatchNorm1d operation on a 2D tensor with manually normalizing it. The manual output seems to be scaled down by a factor of 0.9747. Here's the code (note that affine is set to false):
import torch
import torch.nn as nn
from torch.autograd import Variable

X = torch.randn(20, 100) * 5 + 10
X = Variable(X)
B = nn.BatchNorm1d(100, affine=False)
y = B(X)

mu = torch.mean(X[:, 1])
var_ = torch.var(X[:, 1])
sigma = torch.sqrt(var_ + 1e-5)
x = (X[:, 1] - mu) / sigma

# the ratio below should be equal to one
print(x.data / y[:, 1].data)
Output is:
0.9747 0.9747 0.9747 ....
Doing the same thing for BatchNorm2d works without any issues. How does BatchNorm1d calculate its output?
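A likely explanation (my note, not part of the original question): 0.9747 is exactly sqrt(19/20). torch.var() applies Bessel's correction by default (divides by n−1), while BatchNorm normalizes with the biased variance (divides by n), so with a batch of 20 rows the manual result is off by sqrt((n−1)/n); passing unbiased=False to torch.var should make the two match. A stdlib-only check of that constant:

```python
import math

n = 20  # batch size in the snippet above
# torch.var uses the unbiased estimator (n - 1) by default; BatchNorm1d
# normalizes with the biased one (n), so the manual result is scaled by:
ratio = math.sqrt((n - 1) / n)
print(round(ratio, 4))  # 0.9747
```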
Learn how to get started with Umbraco on Pluralsight
My new Pluralsight course, Umbraco Jumpstart is out!
A new feature coming in Umbraco Juno (4.6) is something that is probably a bit surprising for most people that it has come in after so long, an abstracted macro engine.
What this means is that no longer is there just XSLT, .NET controls, IronRuby, IronPython and Razor, but you'll be able to write your own macro engine if you want.
In this article we'll look at how to create a new macro engine.
Like with a lot of extensibility points in Umbraco it's actually really quite simple to do what you need, and creating a custom macro engine is no exception: all you have to do is implement a single interface, IMacroEngine, from within the cms assembly.
On this interface there are only three members that you need to implement for most operations: its name, the extensions it supports, and its execution method.
Here's a really basic macro engine:
public class MyAwesomeMacroEngine : IMacroEngine
{
    public bool Validate(string code, INode currentPage, out string errorMessage)
    {
        throw new NotImplementedException();
    }

    public string Execute(MacroModel macro, INode currentPage)
    {
        return "Go go awesome macro engine!";
    }

    public string Name
    {
        get { return "This is my awesome Macro Engine"; }
    }

    public List<string> SupportedExtensions
    {
        get { return new List<string> { "awesome" }; }
    }

    public Dictionary<string, IMacroGuiRendering> SupportedProperties
    {
        get { throw new NotImplementedException(); }
    }
}
Now when you go to create a new Script File in the Umbraco admin you'll have a new option for your own macro engine.
I've created a supplementary post to this one which looks at how to create a NHaml based macro engine.
Seriously, it's just that easy to create your own macro engine, obviously you'll want to do more with the
Execute method so that it will interact with the script file that you've created, but this should give you a bit of a starting point :). | http://www.aaron-powell.com/posts/2010-12-27-custom-umbraco-macro-engines.html | CC-MAIN-2017-17 | refinedweb | 326 | 53.65 |
/** * \file uid.c -- UIDL handling for POP3 servers without LAST * * For license terms, see the file COPYING in this directory. */ #include "config.h" #include <sys/stat.h> #include <errno.h> #include <stdio.h> #include <limits.h> #if defined(STDC_HEADERS) #include <stdlib.h> #include <string.h> #endif #if defined(HAVE_UNISTD_H) #include <unistd.h> #endif #include "fetchmail.h" #include "i18n.h" #include "sdump.h" /* * Machinery for handling UID lists live here. This is mainly to support * RFC1725/RFC1939-conformant POP3 servers without a LAST command, but may also * be useful for making the IMAP4 querying logic UID-oriented, if a future * revision of IMAP forces me to. * * These functions are also used by the rest of the code to maintain * string lists. * * Here's the theory: * * At start of a query, we have a (possibly empty) list of UIDs to be * considered seen in `oldsaved'. These are messages that were left in * the mailbox and *not deleted* on previous queries (we don't need to * remember the UIDs of deleted messages because ... well, they're gone!) * This list is initially set up by initialize_saved_list() from the * .fetchids file. * * Early in the query, during the execution of the protocol-specific * getrange code, the driver expects that the host's `newsaved' member * will be filled with a list of UIDs and message numbers representing * the mailbox state. If this list is empty, the server did * not respond to the request for a UID listing. * * Each time a message is fetched, we can check its UID against the * `oldsaved' list to see if it is old. * * Each time a message-id is seen, we mark it with MARK_SEEN. * * Each time a message is deleted, we mark its id UID_DELETED in the * `newsaved' member. When we want to assert that an expunge has been * done on the server, we call expunge_uid() to register that all * deleted messages are gone by marking them UID_EXPUNGED. * * At the end of the query, the `newsaved' member becomes the * `oldsaved' list. 
The old `oldsaved' list is freed. * * At the end of the fetchmail run, seen and non-EXPUNGED members of all * current `oldsaved' lists are flushed out to the .fetchids file to * be picked up by the next run. If there are no un-expunged * messages, the file is deleted. * * One disadvantage of UIDL is that all the UIDs have to be downloaded * before a search for new messages can be done. Typically, new messages * are appended to mailboxes. Hence, downloading all UIDs just to download * a few new mails is a waste of bandwidth. If new messages are always at * the end of the mailbox, fast UIDL will decrease the time required to * download new mails. * * During fast UIDL, the UIDs of all messages are not downloaded! The first * unseen message is searched for by using a binary search on UIDs. UIDs * after the first unseen message are downloaded as and when needed. * * The advantages of fast UIDL are (this is noticeable only when the * mailbox has too many mails): * * - There is no need to download the UIDs of all mails right at the start. * - There is no need to save all the UIDs in memory separately in * `newsaved' list. * - There is no need to download the UIDs of seen mail (except for the * first binary search). * - The first new mail is downloaded considerably faster. * * The disadvantages are: * * - Since all UIDs are not downloaded, it is not possible to swap old and * new list. The current state of the mailbox is essentially a merged state * of old and new mails. * - If an intermediate mail has been temporarily refused (say, due to 4xx * code from the smtp server), this mail may not get downloaded. * - If 'flush' is used, such intermediate mails will also get deleted. * * The first two disadvantages can be overcome by doing a linear search * once in a while (say, every 10th poll). Also, with flush, fast UIDL * should be disabled. * * Note: some comparisons (those used for DNS address lists) are caseblind! 
*/ int dofastuidl = 0; #ifdef POP3_ENABLE /** UIDs associated with un-queried hosts */ static struct idlist *scratchlist; /** Read saved IDs from \a idfile and attach to each host in \a hostlist. */ void initialize_saved_lists(struct query *hostlist, const char *idfile) { struct stat statbuf; FILE *tmpfp; struct query *ctl; /* make sure lists are initially empty */ for (ctl = hostlist; ctl; ctl = ctl->next) { ctl->skipped = (struct idlist *)NULL; ctl->oldsaved = (struct idlist *)NULL; ctl->newsaved = (struct idlist *)NULL; ctl->oldsavedend = &ctl->oldsaved; } errno = 0; /* * Croak if the uidl directory does not exist. * This probably means an NFS mount failed and we can't * see a uidl file that ought to be there. * Question: is this a portable check? It's not clear * that all implementations of lstat() will return ENOTDIR * rather than plain ENOENT in this case... */ if (lstat(idfile, &statbuf) < 0) { if (errno == ENOTDIR) { report(stderr, "lstat: %s: %s\n", idfile, strerror(errno)); exit(PS_IOERR); } } /* let's get stored message UIDs from previous queries */ if ((tmpfp = fopen(idfile, "r")) != (FILE *)NULL) { char buf[POPBUFSIZE+1]; char *host = NULL; /* pacify -Wall */ char *user; char *id; char *atsign; /* temp pointer used in parsing user and host */ char *delimp1; char saveddelim1; char *delimp2; char saveddelim2 = '\0'; /* pacify -Wall */ while (fgets(buf, POPBUFSIZE, tmpfp) != (char *)NULL) { /* * At this point, we assume the bug has two fields -- a user@host * part, and an ID part. Either field may contain spurious @ signs. * The previous version of this code presumed one could split at * the rightmost '@'. This is not correct, as InterMail puts an * '@' in the UIDL. */ /* first, skip leading spaces */ user = buf + strspn(buf, " \t"); /* * First, we split the buf into a userhost part and an id * part ... 
but id doesn't necessarily start with a '<', * espescially if the POP server returns an X-UIDL header * instead of a Message-ID, as GMX's () POP3 * StreamProxy V1.0 does. * * this is one other trick. The userhost part * may contain ' ' in the user part, at least in * the lotus notes case. * So we start looking for the '@' after which the * host will follow with the ' ' separator with the id. * * XXX FIXME: There is a case this code cannot handle: * the user name cannot have blanks after a '@'. */ if ((delimp1 = strchr(user, '@')) != NULL && (id = strchr(delimp1,' ')) != NULL) { for (delimp1 = id; delimp1 >= user; delimp1--) if ((*delimp1 != ' ') && (*delimp1 != '\t')) break; /* * It should be safe to assume that id starts after * the " " - after all, we're writing the " " * ourselves in write_saved_lists() :-) */ id = id + strspn(id, " "); delimp1++; /* but what if there is only white space ?!? */ /* we have at least one @, else we are not in this branch */ saveddelim1 = *delimp1; /* save char after token */ *delimp1 = '\0'; /* delimit token with \0 */ /* now remove trailing white space chars from id */ if ((delimp2 = strpbrk(id, " \t\n")) != NULL ) { saveddelim2 = *delimp2; *delimp2 = '\0'; } atsign = strrchr(user, '@'); /* we have at least one @, else we are not in this branch */ *atsign = '\0'; host = atsign + 1; /* find proper list and save it */ for (ctl = hostlist; ctl; ctl = ctl->next) { if (strcasecmp(host, ctl->server.queryname) == 0 && strcasecmp(user, ctl->remotename) == 0) { save_str(&ctl->oldsaved, id, UID_SEEN); break; } } /* * If it's not in a host we're querying, * save it anyway. Otherwise we'd lose UIDL * information any time we queried an explicit * subset of hosts. 
*/ if (ctl == (struct query *)NULL) { /* restore string */ *delimp1 = saveddelim1; *atsign = '@'; if (delimp2 != NULL) { *delimp2 = saveddelim2; } save_str(&scratchlist, buf, UID_SEEN); } } } fclose(tmpfp); /* not checking should be safe, mode was "r" */ } if (outlevel >= O_DEBUG) { struct idlist *idp; for (ctl = hostlist; ctl; ctl = ctl->next) { report_build(stdout, GT_("Old UID list from %s:"), ctl->server.pollname); idp = ctl->oldsaved; if (!idp) report_build(stdout, GT_(" <empty>")); else for (idp = ctl->oldsaved; idp; idp = idp->next) { char *t = sdump(idp->id, strlen(idp->id)-1); report_build(stdout, " %s\n", t); free(t); } report_complete(stdout, "\n"); } report_build(stdout, GT_("Scratch list of UIDs:")); if (!scratchlist) report_build(stdout, GT_(" <empty>")); else for (idp = scratchlist; idp; idp = idp->next) { char *t = sdump(idp->id, strlen(idp->id)-1); report_build(stdout, " %s\n", t); free(t); } report_complete(stdout, "\n"); } } /** Assert that all UIDs marked deleted in query \a ctl have actually been expunged. */ void expunge_uids(struct query *ctl) { struct idlist *idl; for (idl = dofastuidl ? ctl->oldsaved : ctl->newsaved; idl; idl = idl->next) if (idl->val.status.mark == UID_DELETED) idl->val.status.mark = UID_EXPUNGED; } static const char *str_uidmark(int mark) { static char buf[20]; switch(mark) { case UID_UNSEEN: return "UNSEEN"; case UID_SEEN: return "SEEN"; case UID_EXPUNGED: return "EXPUNGED"; case UID_DELETED: return "DELETED"; default: if (snprintf(buf, sizeof(buf), "MARK=%d", mark) < 0) return "ERROR"; else return buf; } } static void dump_list(const struct idlist *idp) { if (!idp) { report_build(stdout, GT_(" <empty>")); } else while (idp) { char *t = sdump(idp->id, strlen(idp->id)); report_build(stdout, " %s = %s%s", t, str_uidmark(idp->val.status.mark), idp->next ? 
"," : ""); free(t); idp = idp->next; } } /* finish a query */ void uid_swap_lists(struct query *ctl) { /* debugging code */ if (outlevel >= O_DEBUG) { if (dofastuidl) { report_build(stdout, GT_("Merged UID list from %s:"), ctl->server.pollname); dump_list(ctl->oldsaved); } else { report_build(stdout, GT_("New UID list from %s:"), ctl->server.pollname); dump_list(ctl->newsaved); } report_complete(stdout, "\n"); } /* * Don't swap UID lists unless we've actually seen UIDLs. * This is necessary in order to keep UIDL information * from being heedlessly deleted later on. * * Older versions of fetchmail did * * free_str_list(&scratchlist); * * after swap. This was wrong; we need to preserve the UIDL information * from unqueried hosts. Unfortunately, not doing this means that * under some circumstances UIDLs can end up being stored forever -- * specifically, if a user description is removed from .fetchmailrc * with UIDLs from that account in .fetchids, there is no way for * them to ever get garbage-collected. */ if (ctl->newsaved) { /* old state of mailbox may now be irrelevant */ struct idlist *temp = ctl->oldsaved; if (outlevel >= O_DEBUG) report(stdout, GT_("swapping UID lists\n")); ctl->oldsaved = ctl->newsaved; ctl->newsaved = (struct idlist *) NULL; free_str_list(&temp); } /* in fast uidl, there is no need to swap lists: the old state of * mailbox cannot be discarded! */ else if (outlevel >= O_DEBUG && !dofastuidl) report(stdout, GT_("not swapping UID lists, no UIDs seen this query\n")); } /* finish a query which had errors */ void uid_discard_new_list(struct query *ctl) { /* debugging code */ if (outlevel >= O_DEBUG) { /* this is now a merged list! the mails which were seen in this * poll are marked here. 
*/ report_build(stdout, GT_("Merged UID list from %s:"), ctl->server.pollname); dump_list(ctl->oldsaved); report_complete(stdout, "\n"); } if (ctl->newsaved) { /* new state of mailbox is not reliable */ if (outlevel >= O_DEBUG) report(stdout, GT_("discarding new UID list\n")); free_str_list(&ctl->newsaved); ctl->newsaved = (struct idlist *) NULL; } } /** Reset the number associated with each id */ void uid_reset_num(struct query *ctl) { struct idlist *idp; for (idp = ctl->oldsaved; idp; idp = idp->next) idp->val.status.num = 0; } /** Write list of seen messages, at end of run. */ void write_saved_lists(struct query *hostlist, const char *idfile) { long idcount; FILE *tmpfp; struct query *ctl; struct idlist *idp; /* if all lists are empty, nuke the file */ idcount = 0; for (ctl = hostlist; ctl; ctl = ctl->next) { for (idp = ctl->oldsaved; idp; idp = idp->next) if (idp->val.status.mark == UID_SEEN || idp->val.status.mark == UID_DELETED) idcount++; } /* either nuke the file or write updated last-seen IDs */ if (!idcount && !scratchlist) { if (outlevel >= O_DEBUG) { if (access(idfile, F_OK) == 0) report(stdout, GT_("Deleting fetchids file.\n")); } if (unlink(idfile) && errno != ENOENT) report(stderr, GT_("Error deleting %s: %s\n"), idfile, strerror(errno)); } else { char *newnam = (char *)xmalloc(strlen(idfile) + 2); strcpy(newnam, idfile); strcat(newnam, "_"); if (outlevel >= O_DEBUG) report(stdout, GT_("Writing fetchids file.\n")); (void)unlink(newnam); /* remove file/link first */ if ((tmpfp = fopen(newnam, "w")) != (FILE *)NULL) { int errflg = 0; for (ctl = hostlist; ctl; ctl = ctl->next) { for (idp = ctl->oldsaved; idp; idp = idp->next) if (idp->val.status.mark == UID_SEEN || idp->val.status.mark == UID_DELETED) if (fprintf(tmpfp, "%s@%s %s\n", ctl->remotename, ctl->server.queryname, idp->id) < 0) { int e = errno; report(stderr, GT_("Write error on fetchids file %s: %s\n"), newnam, strerror(e)); errflg = 1; goto bailout; } } for (idp = scratchlist; idp; idp = 
idp->next) if (EOF == fputs(idp->id, tmpfp)) { int e = errno; report(stderr, GT_("Write error on fetchids file %s: %s\n"), newnam, strerror(e)); errflg = 1; goto bailout; } bailout: (void)fflush(tmpfp); /* return code ignored, we check ferror instead */ errflg |= ferror(tmpfp); fclose(tmpfp); /* if we could write successfully, move into place; * otherwise, drop */ if (errflg) { report(stderr, GT_("Error writing to fetchids file %s, old file left in place.\n"), newnam); unlink(newnam); } else { if (rename(newnam, idfile)) { report(stderr, GT_("Cannot rename fetchids file %s to %s: %s\n"), newnam, idfile, strerror(errno)); } } } else { report(stderr, GT_("Cannot open fetchids file %s for writing: %s\n"), newnam, strerror(errno)); } free(newnam); } } #endif /* POP3_ENABLE */ /* uid.c ends here */ | http://opensource.apple.com/source/fetchmail/fetchmail-33/fetchmail/uid.c | CC-MAIN-2016-26 | refinedweb | 2,117 | 62.98 |
*sigh* So, of course I had to go and demonstrate a bug in the tiny bit of C code I
typed, as if just to prove your point for you... ;-)
if (ref->cnt == 0) {
    close (ref->sock);
    free (ref);
}
return (ref->cnt);
Yeah, obviously you don't want to reference "ref" after freeing it... ;-/ Pretend I had a
"return (0);" inside the curly-braces, or something... ;-)
I don't have as much programming experience as either Rob or Jeremy, but
I started with C++ and ended up at C.
What I detest in both "solutions" is that they both make simple code more
complicated than needed.
Making code more complicated is never a good way of making code more
secure. And making things more complicated than needed is what C++ code
seems to do all the time.
In this example, a socket never comes alone, there's almost always some
other associated data. Then it's really easy to simplify resource handling
because the lifetime of both some state structure and the socket is the
same. Having a good code design with consistent rules is the right solution,
not some lame "let's wrap it and then forget about it". Bah.
Impressive discussion, guys.
However, this is a C++ thread.

Regarding OOP and not-OOP: after one of my early programs, a JavaScript chess game, I found that functional coding is a big mess in comparison to the OOP mindset.

The progress of languages which are easy to learn, fast to code in, and which produce many-times-cheaper applications that satisfy user needs proves that OOP is a really good idea.

But there still will be cars which are hand-made (exclusive, of the highest quality, and incredibly expensive).
int some_func (...)
{
    int ret = 0;

    /* ... */
    if (some_bad_thing)
        ret = ERR_SOME_BAD_THING;

    if (!ret) {
        /* ... do more real work ... */
        if (some_other_bad_thing)
            ret = ERR_SOME_OTHER_BAD_THING;
    }

    if (!ret) {
        /* ... do more real work ... */
    }

    /* ... */
    /* ... now do whatever cleanup is needed ... */

    return (ret);
}
OK where is the Objective-C section/discussion? Are there no Mac programmers out there (I know that is not the only use but that is the common usage today).
This is great discussion by the way...though I am not sure it should be attached to this ancient thread--but who cares?
Okay, although I don't care much about C++ forcing people to cast, I'll
bother to reply anyway...
The reason it's totally useless and hence only pollutes the code is because
the coder writing that code is very aware that it's a void*. And a void* isn't
typeless, it's still a pointer, so most of the type info is preserved.
The main issue is this: Dealing with void* has two sides, the one where putting
something of a certain type in it, and the other side which extracts it again. A
bug only happens when there is a miscommunication and both sides use two
incompatible types. Both code snippets are separate, and in both it's very clear
what type they expect. That is why forcing a cast is totally useless, because it
doesn't prevent the miscommunication from happening, looking at the code
separately it's totally clear what type it is and should be cast to. It results in
code like:
type_a x = (type_a)y;
Useless, isn't it? If y's type changes, there's still a bug. If x's type changes
the programmer also changed the type cast, or does so after the compiler
whined. Either way, a bug goes unnoticed.
Even funnier, casting things makes things worse. As seen above it doesn't
solve a damn thing, but what it does do is add a potential for missing a class
of bugs: When y's type is changed from void* to type_b, you won't hear the
compiler complaining anymore.
Forced casting from void* is like a railing besides a mountain track that
collapses when you try to actually lean on it.
fork - create a child process
#include <sys/types.h>
#include <unistd.h>
pid_t fork(void);
fork() creates a new process by duplicating the calling process. The new process is referred to as the child process; the calling process is referred to as the parent process.
Memory in address ranges that have been marked with the madvise(2) MADV_WIPEONFORK flag is zeroed in the child after a fork(). (The MADV_WIPEONFORK setting remains in place for those address ranges in the child.) After a fork() in a multithreaded program, the child can safely call only async-signal-safe functions (see signal-safety(7)) until such time as it calls execve(2).
On success, the PID of the child process is returned in the parent, and 0 is returned in the child. On failure, -1 is returned in the parent, no child process is created, and errno is set appropriately.
EAGAIN A system-imposed limit on the number of threads was encountered. Among the limits that may trigger this error:
the maximum number of PIDs, /proc/sys/kernel/pid_max, was reached (see proc(5)); or
the PID limit (pids.max) imposed by the cgroup "process number" (PIDs) controller was reached.
EAGAIN The caller is operating under the SCHED_DEADLINE scheduling policy and does not have the reset-on-fork flag set. See sched(7).
ENOMEM fork() failed to allocate the necessary kernel structures because memory is tight.
ENOMEM An attempt was made to create a child process in a PID namespace whose "init" process has terminated. See pid_namespaces(7).
ENOSYS fork() is not supported on this platform (for example, hardware without a Memory-Management Unit).
ERESTARTNOINTR System call was interrupted by a signal and will be restarted. (This can be seen only during a trace.)
POSIX.1-2001, POSIX.1-2008, SVr4, 4.3BSD.
clone(2), execve(2), exit(2), setrlimit(2), unshare(2), vfork(2), wait(2), daemon(3), pthread_atfork(3), capabilities(7), credentials(7)
This page is part of release 4.15 of the Linux
man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. | https://manual.cs50.io/2/fork | CC-MAIN-2021-04 | refinedweb | 291 | 66.94 |
ESP32 for IoT - Quick Guide.
Introduction to ESP32
ESP32 is the SoC (System on Chip) microcontroller which has gained massive popularity recently. Whether the popularity of ESP32 grew because of the growth of IoT or whether IoT grew because of the introduction of ESP32 is debatable. If you know 10 people who have been part of the firmware development for any IoT device, chances are that 7−8 of them would have worked on ESP32 at some point. So what is the hype all about? Why has ESP32 become so popular so quickly? Let's find out.
Before we delve into the actual reasons for the popularity of ESP32, let's take a look at some of its important specifications. The specs listed below belong to the ESP32 WROOM 32 variant.
Integrated Crystal− 40 MHz
Module Interfaces− UART, SPI, I2C, PWM, ADC, DAC, GPIO, pulse counter, capacitive touch sensor
Integrated SPI flash− 4 MB
ROM− 448 KB (for booting and core functions)
SRAM− 520 KB
Integrated Connectivity Protocols− WiFi, Bluetooth, BLE
On−chip sensor− Hall sensor
Operating temperature range− −40 − 85 degrees Celsius
Operating Voltage− 3.3V
Operating Current− 80 mA (average)
With the above specifications in front of you, it is very easy to decipher the reasons for ESP32's popularity. Consider the requirements an IoT device would have from its microcontroller (μC). If you've gone through the previous chapter, you'd have realized that the major operational blocks of any IoT device are sensing, processing, storage, and transmitting. Therefore, to begin with, the μC should be able to interface with a variety of sensors. It should support all the common communication protocols required for sensor interface: UART, I2C, SPI. It should have ADC and pulse counting capabilities. ESP32 fulfills all of these requirements. On top of that, it also can interface with capacitive touch sensors. Therefore, most common sensors can interface seamlessly with ESP32.
Secondly, the μC should be able to perform basic processing of the incoming sensor data, sometimes at high speeds, and have sufficient memory to store the data. ESP32 has a max operating frequency of 40 MHz, which is sufficiently high. It has two cores, allowing parallel processing, which is a further add-on. Finally, its 520 KB SRAM is sufficiently large for processing a large array of data onboard. Many popular processes and transforms, like FFT, peak detection, RMS calculation, etc. can be performed onboard ESP32. On the storage front, ESP32 goes a step ahead of the conventional microcontrollers and provides a file system within the flash. Out of the 4 MB of onboard flash, by default, 1.5 MB is reserved as SPIFFS (SPI Flash File System). Think of it as a mini−SD Card that lies within the chip itself. You can not only store data, but also text files, images, HTML and CSS files, and a lot more within SPIFFS. People have displayed beautiful Webpages on WiFi servers created using ESP32, by storing HTML files within SPIFFS.
Finally, for transmitting data, ESP32 has integrated WiFi and Bluetooth stacks, which have proven to be a game-changer. No need to connect a separate module (like a GSM module or an LTE module) for testing cloud communication. Just have the ESP32 board and a running WiFi, and you can get started. ESP32 allows you to use WiFi in Access Point as well as Station Mode. While it supports TCP/IP, HTTP, MQTT, and other traditional communication protocols, it also supports HTTPS. Yep, you heard that right. It has a crypto−core or a crypto-accelerator, a dedicated piece of hardware whose job is to accelerate the encryption process. So you cannot only communicate with your web server, you can do so securely. BLE support is also critical for several applications. Of course, you can interface LTE or GSM or LoRa modules with ESP32. Therefore, on the 'transmitting data' front as well, ESP32 exceeds expectations.
With so many features, ESP32 would be costing a fortune, right? That's the best part. ESP32 dev modules cost in the ballpark of ₹ 500. Not only that, the chip dimensions are quite small (25 mm x 18 mm, including the antenna area), allowing its use in devices requiring a very small form factor.
Finally, ESP32 can be programmed using the Arduino IDE, making the learning curve much less steep. Isn't that great? Are you excited to get your hands dirty with ESP32? Then let's start by installing the ESP32 board in the Arduino IDE in the next chapter. See you there.
Installing ESP32 Board in Arduino IDE
One very big advantage with ESP32, which has aided its quick adoption and massive popularity, is the provision for programming the ESP32 within the Arduino IDE.
Now, I should point out here that Arduino is not the only IDE that helps you compile code for ESP32 and flash it into the microcontroller. There is ESP−IDF which is the official development framework for ESP32, which provides much more flexibility in terms of configuration options. However, it is hardly as intuitive and user−friendly as the Arduino IDE, and if you are starting out with ESP32, Arduino IDE is ideal to get your hands dirty. Also, with the number of supporting libraries built for ESP32 in Arduino, courtesy of the huge developer community, there's hardly any functionality of ESP32 which can't be realized with the Arduino IDE. ESP-IDF is more suitable for the more advanced and experienced programmers, who need to stretch ESP32 to its limits. If you are one of those, you are looking for the ESP−IDF Getting Started Guide. Others can follow along.
Installation Steps
Now, to install the ESP32 board in the Arduino IDE, you need to follow the below steps −
Make sure you have Arduino IDE (preferably the latest version) installed on your machine
Open Arduino and go to File −> Preferences
In the Additional Boards Manager URLs field, enter https://dl.espressif.com/dl/package_esp32_index.json
In case you have an existing JSON file's URL in the preferences (this is likely if you've installed ESP8266, stm32duino, or any such additional board in the IDE), you can just append the above path to the existing path, using a comma. An example is shown below, for ESP8266 and ESP32 boards −,
Go to Tools −> Board −> Boards Manager. A pop−up would open up. Search for ESP32 and install the esp32 by Espressif Systems board. The image below shows the board already installed because I had installed the board before preparing this tutorial.
Verifying the Installation
Once your ESP32 board has been installed, you can verify the installation by going to Tools −> Boards. You can see a whole bunch of boards under the ESP32 Arduino section. Choose the board of your choice. If you are not sure which board best represents the one you have, you can choose ESP32 Dev Module.
Next, connect your board to your machine using the USB Cable. You should see an additional COM Port under Tools−> Port. Select that additional port. In case you see multiple ports, you can disconnect the USB and see which port disappeared. That port corresponds to ESP32.
Once the port is identified, pick any one example sketch from File −> Examples. We will choose the StartCounter example from File −> Examples −> Preferences −> StartCounter.
Open that sketch, compile it and flash it into the ESP32 by clicking on the Upload button (the right arrow button, besides the Compile button).
Then open the Serial Monitor using Tools −> Serial Monitor, or simply by pressing Ctrl + Shift + M on your keyboard. You should see the counter value getting incremented after every ESP32 restart.
Congratulations!! You've set up the environment for working with ESP32.
Setting up RTOS for dual-core & multi-threaded operation
A key feature of ESP32 that makes it so much more popular than its predecessor, ESP8266, is the presence of two cores on the chip. This means that we can have two processes executing in parallel on two different cores. Of course, you can argue that parallel operation can also be achieved on a single core using FreeRTOS or any other equivalent RTOS. However, there is a difference between two processes running in parallel on a single core, and them running in parallel on two different cores. On a single core, often, one thread has to wait for the other to pause before it can begin execution. On two cores, parallel execution is literally parallel, because the two tasks occupy different processors.
Sounds exciting? Let's get started with a real example, that demonstrates how to create two tasks and assign them to specific cores within ESP32.
Code Walkthrough
GitHub link:
To use FreeRTOS within the Arduino IDE, no additional imports are required. It comes inbuilt. What we need to do is define two functions that we wish to run on the two cores. They are defined first. One function evaluates the first 25 terms of the Fibonacci series and prints every 5th of them. It does so in a loop. The second function evaluates the sum of numbers from 1 to 100. It too does so in a loop. In other words, after calculating the sum from 1 to 100 once, it does so again, after printing the ID of the core it is executing on. We are not printing all the numbers, but only every 5th number in both the sequences, because both the cores will try to access the same Serial Monitor. Therefore, if we print every number, they will try to access the Serial Monitor at the same time frequently.
void print_fibonacci() {
    int n1 = 0;
    int n2 = 1;
    int term = 0;
    char print_buf[300];
    sprintf(print_buf, "Term %d: %d\n", term, n1);
    Serial.print(print_buf);
    term = term + 1;
    sprintf(print_buf, "Term %d: %d\n", term, n2);
    Serial.print(print_buf);
    for (;;) {
        term = term + 1;
        int n3 = n1 + n2;
        if (term % 5 == 0) {
            sprintf(print_buf, "Term %d: %d\n", term, n3);
            Serial.println(print_buf);
        }
        n1 = n2;
        n2 = n3;
        if (term >= 25) break;
    }
}

void sum_numbers() {
    int n1 = 1;
    int sum = 1;
    char print_buf[300];
    for (;;) {
        if (n1 % 5 == 0) {
            sprintf(print_buf, "   Term %d: %d\n", n1, sum);
            Serial.println(print_buf);
        }
        n1 = n1 + 1;
        sum = sum + n1;
        if (n1 >= 100) break;
    }
}

void codeForTask1(void *parameter) {
    for (;;) {
        Serial.print("Code is running on Core: "); Serial.println(xPortGetCoreID());
        print_fibonacci();
    }
}

void codeForTask2(void *parameter) {
    for (;;) {
        Serial.print("   Code is running on Core: "); Serial.println(xPortGetCoreID());
        sum_numbers();
    }
}
You can see above that we have shifted the print statement for Task 2 to the right. This will help us differentiate between the prints happening from Task 1 and Task 2.
Next we define task handles. Task handles serve the purpose of referencing that particular task in other parts of the code. Since we have two tasks, we will define two task handles.
TaskHandle_t Task1, Task2;
Now that the functions are ready, we can move to the setup part. Within setup(), we simply pin the two tasks to the respective cores. First, let me show you the code snippet.
void setup() {
    Serial.begin(115200);
    /* Syntax for assigning a task to a core:
    xTaskCreatePinnedToCore(
        coreTask,   // Function to implement the task
        "coreTask", // Name of the task
        10000,      // Stack size in words
        NULL,       // Task input parameter
        0,          // Priority of the task
        NULL,       // Task handle
        taskCore);  // Core where the task should run
    */
    xTaskCreatePinnedToCore(
        codeForTask1,
        "FibonacciTask",
        5000,
        NULL,
        2,
        &Task1,
        0);
    //delay(500); // needed to start-up task1
    xTaskCreatePinnedToCore(
        codeForTask2,
        "SumTask",
        5000,
        NULL,
        2,
        &Task2,
        1);
}
Now let's dive deeper into the xTaskCreatePinnedToCore function. As you can see, it takes a total of 7 arguments. Their description is as follows.
The first argument codeForTask1 is the function that will be executed by the task
The second argument "FibonacciTask" is the label or name of that task
The third argument 5000 is the stack size in bytes that is allotted to this task
The fourth argument NULL is the task input parameter. Basically, if you wish to input any parameter to the task, it goes here
The fifth argument 2 defines the priority of the task. The higher the value, the more is the priority of the task.
The sixth argument &Task1 is the Task Handle
The final argument 0 is the Core on which the task will run. If the value is 0, the task will run on Core 0. If it is 1, the task will run on Core 1.
Finally, the loop can be left empty, since the two tasks running on the two cores are of more importance here.
void loop() {}
You can see the output on the Serial Monitor. Note that there are no delays anywhere in the code. Therefore, both the series getting incremented shows that the computations are happening in parallel. The Core IDs printed on the Serial Monitor also confirm that.
Please note that Arduino sketches, by default, run on Core 1. This can be verified using Serial.print( xPortGetCoreID()); So if you add some code in loop(), it will run as another thread on Core 1. In that case, Core 0 will have a single task running, while Core 1 will have two tasks running.
Interfacing ESP32 with MPU6050
Accelerometers and Gyroscopes are widely used in Industrial IoT for measuring the health and operating parameters of various machines. MPU6050 is a popular six-axis accelerometer + gyroscope. It is a MEMS (Micro-Electro-Mechanical Systems) sensor, meaning it is very compact (as can be seen from the image below) and, for a wide range of frequencies, very accurate as well.
In this tutorial, we will see how to interface ESP32 with the MPU6050. In the process, you will learn about the usage of the I2C (Inter-Integrated Circuit) protocol, which will then enable you to interface the ESP32 with several sensors and peripherals which communicate using the I2C protocol. You will need your ESP32, an MPU6050, and a couple of jumper wires for this tutorial.
Connecting MPU6050 with ESP32
As shown in the image below, you need to connect the SDA line of MPU6050 to pin 21 on ESP32, SCL line to pin 22, GND to GND, and VCC to 3V3 pin. The other pins of MPU6050 need not be connected.
Code Walkthrough
GitHub Link −
ESP32, and Arduino in general refer to the I2C protocol as 'Wire'. Therefore the required library import is Wire.h
#include<Wire.h>
Next we define constants and global variables.
const int MPU_ADDR = 0x68; // I2C address of the MPU-6050
int16_t AcX, AcY, AcZ, Tmp, GyX, GyY, GyZ;
Every I2C device has a fixed address using which other devices identify it and communicate with it. For MPU6050, that address is 0x68. We will use it later when initializing the I2C communication with the MPU6050. We next move to the setup code.
void setup() {
    Serial.begin(115200);
    Wire.begin(21, 22, 100000); // sda, scl, clock speed
    Wire.beginTransmission(MPU_ADDR);
    Wire.write(0x6B); // PWR_MGMT_1 register
    Wire.write(0);    // set to zero (wakes up the MPU-6050)
    Wire.endTransmission(true);
    Serial.println("Setup complete");
}
The first line is trivial. We are initiating communication with the serial monitor at 115200 baud rate. Next, we begin the I2C communication. For that, we provide 3 arguments to the Wire.begin() function.
These are the SDA and SCL pins and the clock speed. Now, I2C communication requires two lines: the Data line (SDA) and the Clock line (SCL). On ESP32, pins 21 and 22 are generally reserved for I2C, with 21 being SDA and 22 being SCL. For communicating with MPU6050, we have two speed options: 100kbit/s and 400kbit/s. We have chosen 100kHz here. You can choose the higher speed option as well if your use-case requires it.
Next, we indicate to the ESP32 that we want to communicate with the chip which has the address equal to MPU_ADDR, using the Wire.beginTransmission() command. At this point, you would have guessed that one ESP32 chip can communicate with multiple I2C peripherals. In fact, there are 128 unique addresses possible (address field is 7 bits long), and so the ESP32 can communicate with 128 different peripherals using I2C, provided all of them have different addresses.
In the next couple of lines, we are setting the PWR_MGMT_1 register of MPU6050 to 0. This is used to wake up the MPU6050. The address 0x6B of the PWR_MGMT_1 register is the address within MPU6050's memory.
It has nothing to do with the I2C address of MPU6050. Once the MPU is woken up, we end this particular transmission over I2C and our setup is complete, and we indicate that on the Serial Monitor using a print statement. Now let's jump into the loop. You will notice that we pass a boolean true as an argument to Wire.endTransmission. This tells the ESP32 to send a stop command and release the I2C lines. If we replace true with false, the ESP32 will send a restart instead of stop, keeping the connection active.
void loop() {
    Wire.beginTransmission(MPU_ADDR);
    Wire.write(0x3B); // starting with register 0x3B (ACCEL_XOUT_H)
    Wire.endTransmission(true);
    Wire.beginTransmission(MPU_ADDR);
    Wire.requestFrom(MPU_ADDR, 14, true); // request a total of 14 registers
    AcX = Wire.read() << 8 | Wire.read(); // 0x3B (ACCEL_XOUT_H) & 0x3C (ACCEL_XOUT_L)
    AcY = Wire.read() << 8 | Wire.read(); // 0x3D (ACCEL_YOUT_H) & 0x3E (ACCEL_YOUT_L)
    AcZ = Wire.read() << 8 | Wire.read(); // 0x3F (ACCEL_ZOUT_H) & 0x40 (ACCEL_ZOUT_L)
    Tmp = Wire.read() << 8 | Wire.read(); // 0x41 (TEMP_OUT_H) & 0x42 (TEMP_OUT_L)
    GyX = Wire.read() << 8 | Wire.read(); // 0x43 (GYRO_XOUT_H) & 0x44 (GYRO_XOUT_L)
    GyY = Wire.read() << 8 | Wire.read(); // 0x45 (GYRO_YOUT_H) & 0x46 (GYRO_YOUT_L)
    GyZ = Wire.read() << 8 | Wire.read(); // 0x47 (GYRO_ZOUT_H) & 0x48 (GYRO_ZOUT_L)
    Serial.print(AcX); Serial.print(" , ");
    Serial.print(AcY); Serial.print(" , ");
    Serial.print(AcZ); Serial.print(" , ");
    Serial.print(GyX); Serial.print(" , ");
    Serial.print(GyY); Serial.print(" , ");
    Serial.print(GyZ); Serial.print("\n");
}
In the loop, if you scan through the above code snippet, you will see that we perform a total of two transmissions. In the first one, we indicate to the MPU6050 the address from which we would like to start reading the data, or rather set the MPU6050's internal pointer to this particular address. In the second transmission, we tell the MPU that we request 14 bytes starting from the address sent earlier. Then we read the bytes one by one. You may notice that we don't have a Wire.endTransmission(true) command at the end of the read. This is because the third argument of Wire.requestFrom(MPU_ADDR, 14, true) indicates to the ESP32 to send a stop command after reading the required number of bytes. Had we passed false instead of true, ESP32 would have sent a restart command instead of a stop command. Now, you might be wondering how it was determined which register corresponds to which reading. The answer is the MPU6050 register map. It, as the name suggests, provides information on which value can be obtained from which register. Based on this map, we understand that 0x3B and 0x3C correspond to the higher and lower bytes of the 16−bit X−direction acceleration value. The next two registers (0x3D and 0x3E) contain the higher and lower bytes of the 16−bit Y−direction acceleration value, and so on. In between the accelerometer and gyroscope readings, there are two bytes containing temperature readings, which we read and ignore, because we don't require them.
So with this, you can successfully acquire data from MPU6050 on an ESP32. Congratulations!! Move on to the next tutorial for learning how to acquire data from an analog sensor on ESP32.
References
Interfacing ESP32 with Analog Sensors
Another important category of sensors that you need to interface with ESP32 is analog sensors. There are many types of analog sensors, LDRs (Light Dependent Resistors), current and voltage sensors being popular examples. Now, if you are familiar with how analogRead works on any Arduino board, like Arduino Uno, then this chapter will be a cakewalk for you because ESP32 uses the same functions. There are only a few nuances you should be aware of, that will be covered in this chapter.
A brief about the Analog to Digital Conversion (ADC) process
Every microcontroller which supports ADC will have a defined resolution and a reference voltage. The reference voltage is generally the supply voltage. The analog voltage provided to the ADC pin should be less than or equal to the reference voltage. The resolution indicates the number of bits that will be used to represent the digital value. Thus, if the resolution is 8 bits, then the value will be represented by 8 bits, and the maximum value possible is 255. This maximum value corresponds to the value of the reference voltage. The values for other voltages are often derived by scaling.
Thus, if the reference voltage is 5V and an 8−bit ADC is used, then 5V corresponds to a reading of 255, 1V corresponds to a reading of (255/5*1) = 51, 2V corresponds to a reading (255/5*2) = 102 and so on. If we had a 12 bit ADC, then 5V would correspond to a reading of 4095, 1V would correspond to a reading of (4095/5*1) = 819, and so on.
The reverse calculations can be performed similarly. If you get a value of 1000 on a 12 bit ADC with a reference voltage of 3.3V, then it corresponds to a value of (1000/4095*3.3) = 0.8V approximately. If you get a reading of 825 on a 10 bit ADC with a reference voltage of 5V, it corresponds to a value of (825/1023*5) = 4.03V approximately.
With the above explanation, it will be clear that both the reference voltage and the number of bits used for ADC determine the minimum possible voltage change that can be detected. If the reference voltage is 5V and the resolution is 12-bit, you have 4095 values to represent a voltage range of 0−5V. Thus, the minimum change that can be detected is 5V/4095 = 1.2mV. Similarly, for a 5V reference voltage and 8-bit resolution, you have only 255 values to represent a range of 0-5V. Thus, the minimum change that can be detected is 5V/255 = 19.6mV, or about 16 times larger than the minimum change detectable with 12-bit resolution.
Connecting the ADC Sensor with ESP32
Considering the popularity and availability of the sensor, we will use an LDR for the demonstration. We will essentially connect LDR in series with a regular resistor, and feed the voltage at the point connecting the two resistors to the ADC pin of ESP32. Which pin? Well, there are lots of them. ESP32 boasts of 18 ADC pins (8 in channel 1 and 10 in channel 2). However, channel 2 pins cannot be used along with WiFi. And some pins of channel 1 are not exposed on some boards. Therefore, I generally stick to the following 6 pins for ADC− 32, 33, 34, 35, 36, 39. In the image shown below, an LDR with a resistance of 90K is connected to a resistor of resistance 150K. The free end of the LDR is connected to the 3.3V pin of ESP32 and the free end of the resistor is connected to GND. The common end of the LDR and the resistor is fed to the ADC pin 36 (VN) of ESP32.
Code Walkthrough
GitHub link −
The code here is straightforward. No libraries need to be included. We just define the LDR pin as a constant, initialize serial in the setup(), and set the resolution of the ADC. Here we have set a resolution of 10-bits (meaning the maximum value is 1023). By default the resolution is 12-bits and for ESP32, the minimum possible resolution is 9 bits.
const int LDR_PIN = 36;

void setup() {
    // put your setup code here, to run once:
    Serial.begin(115200);
    analogReadResolution(10); // default is 12. Can be set between 9-12.
}
In the loop, we just read the value from the LDR pin and print it to the serial monitor. Also, we convert it to voltage and print the corresponding voltage as well.
void loop() {
    // put your main code here, to run repeatedly:
    // LDR Resistance: 90k ohms
    // Resistance in series: 150k ohms
    // Pinouts:
    // Vcc −> 3.3 (CONNECTED TO LDR FREE END)
    // Gnd −> Gnd (CONNECTED TO RESISTOR FREE END)
    // Analog Read −> Vp (36) − Intermediate between LDR and resistance.
    int LDR_Reading = analogRead(LDR_PIN);
    float LDR_Voltage = ((float)LDR_Reading * 3.3 / 1023);
    Serial.print("Reading: "); Serial.print(LDR_Reading);
    Serial.print("\t"); Serial.print("Voltage: "); Serial.println(LDR_Voltage);
}

Cover the LDR with your palm and see the effect on the voltage, and then flash a torch on the LDR and see the voltage swing to the opposite extreme on the Serial Monitor. That's it. You've successfully captured data from an analog sensor on ESP32.
References
Preferences in ESP32
Non−volatile storage is an important requirement for embedded systems. Often, we want the chip to remember a couple of things, like setup variables, WiFi credentials, etc. even between power cycles. It would be so inconvenient if we had to perform setup or config every time the device undergoes a power reset. ESP32 has two popular non-volatile storage methods: preferences and SPIFFS. While preferences are generally used for storing key-value pairs, SPIFFS (SPI Flash File System), as the name suggests, is used for storing files and documents. In this chapter, let's focus on preferences.
Preferences are stored in a section of the main flash memory with type as data and subtype as nvs. nvs stands for non−volatile storage. By default, 20 KB of space is reserved for preferences, so don't try to store a lot of bulky data in preferences. Use SPIFFS for bulky data (SPIFFS has 1.5 MB of reserved space by default). What kind of key−value pairs can be stored within preferences? Let's understand through the example code.
Code Walkthrough
We will use the example code provided. Go to File −> Examples −> Preferences −> StartCounter. It can also be found on GitHub.
This code keeps a count of how many times the ESP32 was reset. Therefore, every time it wakes up, it fetches the existing count from preferences, increments it by one, and saves the updated count back to preferences. It then resets the ESP32. You can see using the printed statements on the ESP32 that the value of the count is not lost between resets, that it is indeed non−volatile.
This code is very heavily commented, and therefore, largely, self-explanatory. Nevertheless, let's walk through the code.
We begin by including the Preferences library.
#include <Preferences.h>
Next, we create an object of Class Preferences.
Preferences preferences;
Now let's look at the setup line by line. We begin by initializing Serial.
void setup() { Serial.begin(115200); Serial.println();
Next, we open preferences with a namespace. Now, think of the preference storage like a bank locker−room. There are several lockers, and you open one at a time. The namespace is like the name of the locker. Within each locker, there are key−value pairs that you can access. If the locker whose name you mentioned does not exist, then it will be created, and then you can add key−value pairs to that locker. Why are there different lockers? To avoid clashes in the name. Say you have a WiFi library that uses preferences to store credentials and a BlueTooth library that also uses preferences to store credentials. Say both of these are being developed by different developers. What if both use the same key name credentials? This will obviously create a lot of confusion. However, if both of them have their keys in different lockers, there will be no confusion at all.
// Open Preferences with my-app namespace. Each application module, library, etc.
// has to use a namespace name to prevent key name collisions. We will open storage in
// RW-mode (second parameter has to be false).
// Note: Namespace name is limited to 15 chars.
preferences.begin("my-app", false);
The second argument false of preferences.begin() indicates that we want to both read from and write to this locker. If it was true, we could only read from the locker, not write to it. Also, the namespace, as mentioned in the comments, shouldn't be more than 15 characters in length.
Next, the code has a couple of commented statements, which you can make use of depending on the requirement. One enables you to clear the locker, and the other helps you delete a particular key−value pair from the locker (having the key as "counter")
// Remove all preferences under the opened namespace
//preferences.clear();

// Or remove the counter key only
//preferences.remove("counter");
As a next step, we get the value associated with the key "counter". Now, for the first time when you run this program, there may be no such key existing. Therefore, we also provide a default value of 0 as an argument to the preferences.getUInt() function. What this tells ESP32 is that if the key "counter" doesn't exist, create a new key-value pair, with key as "counter" and value as 0. Also, note that we are using getUInt because the value is of type unsigned int. Other functions like getFloat, getString, etc. need to be called depending on the type of the value. The full list of options can be found here.
unsigned int counter = preferences.getUInt("counter", 0);
Next, we increment this count by one and print it on the Serial Monitor.
// Increase counter by 1
counter++;

// Print the counter to Serial Monitor
Serial.printf("Current counter value: %u\n", counter);
We then store this updated value back to non-volatile storage. We are basically updating the value for the key "counter". Next time the ESP32 reads the value of the key "counter", it will get the incremented value.
// Store the counter to the Preferences
preferences.putUInt("counter", counter);
Finally, we close the preferences locker and restart the ESP32 in 10 seconds.
// Close the Preferences
preferences.end();

// Wait 10 seconds
Serial.println("Restarting in 10 seconds...");
delay(10000);

// Restart ESP
ESP.restart();
Because we restart ESP32 before diving into the loop, the loop is never executed. Therefore, it is kept blank
void loop() {}
This example demonstrates quite well how ESP32 preferences storage is indeed non−volatile. When you check the printed statements on the Serial Monitor, you can see the count getting incremented between successive resets. This would not have happened with a local variable. It was only possible by using non−volatile storage through preferences.
References
Note − If in case you get "SPIFFS Mount Failed" on running the sketch, set the value of FORMAT_SPIFFS_IF_FAILED to false and try again.
References
Interfacing OLED Display with ESP32
The combination of OLED with ESP32 is so popular that there are some boards of ESP32 with the OLED integrated. We'll, however, assume that you will be using a separate OLED module with your ESP32 board. If you have an OLED module, it perhaps looks like the image below.
Connecting the OLED Display Module to ESP32
Like the MPU6050 module that we discussed in a previous chapter, the OLED module also generally uses I2C for communication. Therefore, the connection will be similar to the MPU6050 module. You need to connect the SDA line to pin 21 on ESP32, the SCL line to pin 22, GND to GND, and VCC to the 3V3 pin.
Library for OLED Display
There are a number of libraries available for interfacing the OLED display with ESP32. You are free to use any one you are comfortable with. For this example, we will use the 'ESP8266 and ESP32 OLED driver for SSD1306 displays' by ThingPulse, Fabrice Weinberg. You can install this library from Tools −> Manage Libraries. It can also be found on GitHub.
Code Walkthrough
The code becomes very simple thanks to the library we just installed. We will run a counter code, which will just count the seconds since the last reset and print them on the OLED module. The code can be found on GitHub.
We begin with the inclusion of the SSD1306 library.
#include "SSD1306Wire.h"
Next, we define the OLED pins and its I2C address. Note that some OLED modules contain an additional Reset pin. A good example is the ESP32 TTGO board, which comes with an inbuilt OLED display. For that board, pin 16 is the reset pin. If you are connecting an external OLED module to your ESP32, you will most likely not use the Reset pin. The I2C address of 0x3c is generally common for all OLED modules.
//OLED related variables #define OLED_ADDR 0x3c #define OLED_SDA 21//4 //TTGO board without SD Card has OLED SDA connected to pin 4 of ESP32 #define OLED_SCL 22//15 //TTGO board without SD Card has OLED SCL connected to pin 15 of ESP32 #define OLED_RST 16 //Optional, TTGO board contains OLED_RST connected to pin 16 of ESP32
Next, we create the OLED display object and the counter variable.
SSD1306Wire display(OLED_ADDR, OLED_SDA, OLED_SCL); int counter = 0;
After that, we define two functions. One for initializing the OLED display (this function is redundant if your OLED module doesn't contain a reset pin), and the other for printing text messages on the OLED Display. The showOLEDMessage() function breaks down the OLED display area into 3 lines and asks for 3 strings, one for each line.
void initOLED() { pinMode(OLED_RST, OUTPUT); //Give a low to high pulse to the OLED display to reset it //This is optional and not required for OLED modules not containing a reset pin digitalWrite(OLED_RST, LOW); delay(20); digitalWrite(OLED_RST, HIGH); } void showOLEDMessage(String line1, String line2, String line3) { display.init(); // clears screen display.setFont(ArialMT_Plain_16); display.drawString(0, 0, line1); // adds to buffer display.drawString(0, 20, line2); display.drawString(0, 40, line3); display.display(); // displays content in buffer }
Finally, in the setup, we just initialize the OLED display, and in the loop, we just utilize the first two lines of the display to show the counter.
void setup() { // put your setup code here, to run once: initOLED(); } void loop() { // put your main code here, to run repeatedly showOLEDMessage("Num seconds is: ", String(counter), ""); delay(1000); counter = counter+1; }
That's it. Congratulations on displaying your first text statements on the OLED display.
WiFi on ESP32
The availability of a WiFi stack is one of the main differentiators between ESP32 and other microcontrollers. This chapter will give you a brief overview of the various WiFi modes available on ESP32. Subsequent chapters cover the transmission of data of WiFi using HTTP, HTTPS, and MQTT. There are 3 primary modes in which the WiFi can be configured on ESP32:
Station Mode − This is like the WiFi client mode. The ESP32 connects to an available WiFi field which in turn is connected to your internet. This is exactly similar to connecting your mobile phone to an available WiFi network.
Access Point Mode − This is equivalent to turning on the hotspot on your mobile phone so that other devices can connect to it. Similarly, ESP32 creates a WiFi field around itself that other devices can connect to. ESP32, however, does not have internet access by itself. Therefore, with this mode, you can generally display only a couple of webpages hardcoded into ESP32's memory. This mode is generally used to perform device setup during installation. Say you are taking your ESP32 to an unknown client site whose WiFi credentials you don't know beforehand. You will program the ESP32 to start operation in the Access Point mode. As soon as your mobile phone connects to the WiFi field created by ESP32, a page can open up (Captive Portal) and it will prompt you to enter WiFi credentials. Once you enter those credentials, the ESP32, will switch to station mode and try to connect to the available WiFi network using the credentials provided.
Combined AP-STA mode − As you might have guessed, in this mode, ESP32 is connected to an existing WiFi network and at the same time it is creating its own field, which other devices can connect to.
Most of the time, you will be using the ESP32 in the station mode. In all the 3 subsequent chapters as well, we will be using the ESP32 in the station mode. However, you should know about the AP mode as well and you are encouraged to explore examples of the AP mode yourself.
Transmitting data over WiFi using HTTP
HTTP (HyperText Transfer Protocol) is one of the most common forms of communications and with ESP32 we can interact with any web server using HTTP requests. Let's understand how in this chapter.
A brief about HTTP requests.
Code Walkthrough
POST Request
GET Request
As you can see, the parameters sent to the server are now returned in the args field, because they were sent as arguments in the URL itself.
Congratulations!! You've successfully sent your HTTP requests using ESP32.
References
Transmitting data over WiFi using HTTPS
Converting any HTTP request to HTTPS
Code Walkthrough
Notice the URL field in the server response. It contains https instead of http, confirming that our transmission was secure. In fact, if you edit the CA certificate slightly, say you just delete one character, and then try to run the sketch, you will see the connection getting failed.
References
Transmitting data over WiFi using MQTT
MQTT (Message Queuing Telemetry Transport) has gained a lot of prominence in the context of IoT devices. It is a protocol that runs generally over TCP/IP. Instead of the server−client model that we saw for HTTP, MQTT uses the broker−client model. Wikipedia defines MQTT brokers and clients as −
An MQTT broker is a server that receives all messages from the clients and then routes the messages to the appropriate destination clients. An MQTT client is any device (from a micro controller up to a full−fledged server) that runs an MQTT library and connects to an MQTT broker over a network.
Think of the broker as a service like Medium. The topics would be the Medium publications, and the clients would be the Medium users. A user (client) can post to a publication, and another user (client) who has subscribed to that publication (topic) would be told that a new post is available for reading. By now, you would have understood a major difference between HTTP and MQTT. In HTTP, your messages are directly sent to the intended server and you even get an acknowledgment in the form of status codes. In MQTT, you just send messages to the broker in the hope that your intended server(s) will take it from there. Several features of MQTT turn out to be a boon if you are resource−constrained. They are listed below −
With MQTT, header overheads are very short and throughput is high. This helps save time and also battery.
MQTT sends information as a byte array instead of the text format. This makes the message lightweight.
Because MQTT isn't dependent on the response from the server, the client is independent and can go to sleep (conserve battery) as soon as it has transmitted the message.
These are just some of the points which have resulted in the popularity of MQTT. You can get a more detailed comparison between MQTT and HTTP here.
Code Walkthrough
In general, testing MQTT requires you to sign up for a free/paid account with a broker. AWS IoT and Azure IoT are very popular platforms providing MQTT broker services, but they come with a lengthy signup and configuration process. Luckily, there is a free broker service from HiveMQ which can be used for testing MQTT without any signup or configuration. It is ideal for those of you who are new to MQTT and just want to get your hands dirty, and it also lets you focus more on the firmware of ESP32. Therefore, that is the broker we will be using for this chapter. Of course, because it is a free service, there will be limitations. You can't share sensitive information, because all your messages are public; anyone can subscribe to your topics. For testing purposes, of course, these limitations won't matter.
The code can be found on GitHub
We will be using the PubSubClient library. You can install it from Tools −> Manage Libraries.
Once the library is installed, we include WiFi and PubSubClient libraries.
#include <WiFi.h> #include <PubSubClient.h>
Next, we will define some constants. Remember to replace the WiFi credentials. The mqttServer and mqttPort are the ones mandated by http://www.mqtt-dashboard.com/. The mqtt_client_name, mqtt_pub_topic, and mqtt_sub_topic can be any strings of your choice. Just make sure that you do change their values; if multiple users copy the same code from this tutorial, you will receive a lot of messages from unknown clients when testing.
We also define the WiFiClient and PubSubClient objects. The PubSubClient object requires the network client as an argument. If you are using Ethernet, you would provide the Ethernet client as an argument. Since we are using WiFi, we have provided the WiFi client as an argument.
const char* ssid = "YOUR_SSID"; const char* password = "YOUR_PASSWORD"; //The broker and port are provided by http://www.mqtt-dashboard.com/ char *mqttServer = "broker.hivemq.com"; int mqttPort = 1883; //Replace these 3 with the strings of your choice const char* mqtt_client_name = "ESPYS2111"; const char* mqtt_pub_topic = "/ys/testpub"; //The topic to which our client will publish const char* mqtt_sub_topic = "/ys/testsub"; //The topic to which our client will subscribe WiFiClient client; PubSubClient mqttClient(client);
Next, we define the callback function. A callback function is an interrupt function. Every time a new message is received from a subscribed topic, this function will be triggered. It has three arguments− the topic from which the message was received, the message as a byte array, and the length of the message. You can do whatever you want to do with that message (store it in SPIFFS, send it to another topic, and so on). Here, we are just printing the topic and the message.
void callback(char* topic, byte* payload, unsigned int length) { Serial.print("Message received from: "); Serial.println(topic); for (int i = 0; i < length; i++) { Serial.print((char)payload[i]); } Serial.println(); Serial.println(); }
In the setup, we connect to the WiFi like in every other sketch. The last two lines concern MQTT. We set the server and port for MQTT and also the callback function.
void setup() { // put your setup code here, to run once: Serial.begin(115200); WiFi.mode(WIFI_STA); //The WiFi is in station mode WiFi.begin(ssid, password); while (WiFi.status() != WL_CONNECTED) { delay(500); Serial.print("."); } Serial.println(""); Serial.print("WiFi connected to: "); Serial.println(ssid); Serial.println("IP address: "); Serial.println(WiFi.localIP()); delay(2000); mqttClient.setServer(mqttServer, mqttPort); mqttClient.setCallback(callback); }
Within the loop, we do the following:
If the client is not connected to the broker, we connect it using our client name.
Once connected, we also subscribe our client to the mqtt_sub_topic.
We then publish a message to mqtt_pub_topic.
We then run the mqttClient.loop(). This loop() function should be called regularly. It maintains the connection of the client with the broker and also helps the client process incoming messages. If you don't have this mqttClient.loop() line, you will be able to publish to mqtt_pub_topic, but won't get messages from mqtt_sub_topic, because the incoming messages are processed only when this line is called.
Finally, we wait for 5 seconds, before starting this cycle again.
void loop() { // put your main code here, to run repeatedly: if (!mqttClient.connected()){ while (!mqttClient.connected()){ if(mqttClient.connect(mqtt_client_name)){ Serial.println("MQTT Connected!"); mqttClient.subscribe(mqtt_sub_topic); } else{ Serial.print("."); } } } mqttClient.publish(mqtt_pub_topic, "TestMsg"); Serial.println("Message published"); mqttClient.loop(); delay(5000); }
Testing the Code
In order to test the above code, you need to go to
Follow these steps once you are on that webpage −
Click Connect
Click on Add New Topic Subscription and enter the name of the topic to which your ESP32 will publish (/ys/testpub in this case)
Once you flash your ESP32, you will start receiving messages on that topic every 5 seconds.
Next, to test reception of messages on ESP32, enter the name of the topic your ESP32 is subscribed to (/ys/testsub in this case), then type a message in the message box and click Publish. You should see the message on the Serial Monitor.
Congratulations!! You've tested both publish and subscribe using MQTT on ESP32.
References
Pairing and Communication
Getting current time from NTP Servers
In IoT devices, the timestamp becomes an important attribute of the packet exchanged between the device and the server. Therefore, it is necessary to have the correct time on your device at all times. One way is to use an RTC (Real Time Clock) interfaced with your ESP32. You can even use ESP32's internal RTC. Once given a reference time, it can correctly output future timestamps. But how will you get the reference time? One way is to hardcode the current time while programming the ESP32. But that is not a neat method. Secondly, the RTC is prone to drift and it is a good idea to keep providing it with reference timestamps regularly. In this chapter, we will see how to get the current time from NTP Servers, feed it to ESP32's internal RTC once, and print future timestamps.
A brief about NTP
NTP stands for Network Time Protocol. It is a protocol for clock synchronization between computer systems. In layperson terms, there is a server sitting somewhere which maintains time accurately. Whenever a client requests the current time from the NTP server, it sends back time accurate up to 100s of milliseconds. You can read more about NTP here.

Code Walkthrough

We begin by including the WiFi and the time libraries.
#include <WiFi.h> #include "time.h"
Next, we define some global variables. Replace the WiFi SSID and password with the corresponding values for your WiFi. Next, we have defined the URL for the NTP Server. The gmtOffset_sec refers to the offset in seconds of your timezone from the GMT or the closely related UTC. For instance, in India, where the timezone is 5 hours and 30 mins ahead of the UTC, the gmtOffset_sec will be (5+0.5)*3600 = 19800.
The daylightOffset_sec is relevant for countries that have daylight savings. It can simply be set to 0 in other countries.
const char* ssid = "YOUR_SSID"; const char* password = "YOUR_PASS"; const char* ntpServer = "pool.ntp.org"; const long gmtOffset_sec = 3600; const int daylightOffset_sec = 3600;
Next, you can see a function printLocalTime(). It simply fetches the local time from the internal RTC and prints it to serial.
void printLocalTime() { struct tm timeinfo; if(!getLocalTime(&timeinfo)){ Serial.println("Failed to obtain time"); return; } Serial.println(&timeinfo, "%A, %B %d %Y %H:%M:%S"); }
You might be having three questions here −
- Where is the struct tm defined?
- Where is the getLocalTime() function defined?
- What are the %A, %B, etc. formatters?
The struct tm is defined in the time.h file that we have included at the top. In fact, the time library is not an ESP32-specific library. It is an AVR library that is compatible with ESP32. You can find the source code here. If you look at the time.h file, you will see the struct tm.
struct tm { int8_t tm_sec; /**< seconds after the minute - [ 0 to 59 ] */ int8_t tm_min; /**< minutes after the hour - [ 0 to 59 ] */ int8_t tm_hour; /**< hours since midnight - [ 0 to 23 ] */ int8_t tm_mday; /**< day of the month - [ 1 to 31 ] */ int8_t tm_wday; /**< days since Sunday - [ 0 to 6 ] */ int8_t tm_mon; /**< months since January - [ 0 to 11 ] */ int16_t tm_year; /**< years since 1900 */ int16_t tm_yday; /**< days since January 1 - [ 0 to 365 ] */ int16_t tm_isdst; /**< Daylight Saving Time flag */ };
Now, the getLocalTime function is ESP32-specific. It is defined in the esp32-hal-time.c file. It is a part of the Arduino core for ESP32 and doesn't need a separate include in Arduino. You can see the source code here.
Now, the meaning of the formatters is given below −
/* %a Abbreviated weekday name %A Full weekday name %b Abbreviated month name %B Full month name %c Date and time representation for your locale %d Day of month as a decimal number (01−31) %H Hour in 24-hour format (00−23) %I Hour in 12-hour format (01−12) %j Day of year as decimal number (001−366) %m Month as decimal number (01−12) %M Minute as decimal number (00−59) %p Current locale's A.M./P.M. indicator for 12−hour clock %S Second as decimal number (00−59) %U Week of year as decimal number, Sunday as first day of week (00−51) %w Weekday as decimal number (0−6; Sunday is 0) %W Week of year as decimal number, Monday as first day of week (00−51) %x Date representation for current locale %X Time representation for current locale %y Year without century, as decimal number (00−99) %Y Year with century, as decimal number %z %Z Time-zone name or abbreviation, (no characters if time zone is unknown) %% Percent sign You can include text literals (such as spaces and colons) to make a neater display or for padding between adjoining columns. You can suppress the display of leading zeroes by using the "#" character (%#d, %#H, %#I, %#j, %#m, %#M, %#S, %#U, %#w, %#W, %#y, %#Y) */
Thus, with our formatting scheme of %A, %B %d %Y %H:%M:%S, we can expect the output to be similar to the following: Sunday, November 15 2020 14:51:30.
Now, coming to the setup and the loop. In the setup, we initialize Serial, connect to the internet using our WiFi, and configure the internal RTC of ESP32 using the configTime() function. As you can see, that function takes in three arguments, the gmtOffset, the daylightOffset and the ntpServer. It will fetch the time from ntpServer in UTC, apply the gmtOffset and the daylightOffset locally, and return the output time. This function, like getLocalTime, is defined in the esp32-hal-time.c file. As you can see from the file, TCP/IP protocol is used for fetching time from the NTP server.
Once we've obtained the time from the NTP server and fed it to the internal RTC of the ESP32, we no longer need WiFi. Thus, we disconnect the WiFi and keep printing the time in the loop every second. You can see on the Serial Monitor that the time gets incremented by one second in every print. This is because the internal RTC of ESP32 maintains the time once it is given the reference.
The Serial Monitor output will look like −
That's it. You've learned how to get the correct time from the NTP servers and configure your ESP32's internal RTC. Now, in whatever packets you send to the server, you can add the timestamp.
References
Performing Over-The-Air Update of ESP32 firmware
Say you have a thousand IoT devices out in the field. Now, if one fine day, you find a bug in the production code, and wish to fix it, will you recall all the thousand devices and flash the new firmware in them? Probably not! What you'll prefer to have is a way to update all the devices remotely, over-the-air. OTA updates are very common these days. Every now and then, you keep receiving software updates to your Android or iOS smartphones. Just like software updates can happen remotely, so can firmware updates. In this chapter, we will look at how to update the firmware of ESP32 remotely.
OTA Update Process
The process is quite simple. The device first downloads the new firmware in chunks and stores it in a separate area of the memory. Let's call this area the 'OTA space'. Let's call the area of the memory where the current code or the application code is stored as the 'Application space'. Once the entire firmware has been downloaded and verified, the device bootloader swings into action. Consider the bootloader as a code written in a separate area of the memory (let's call it the 'Bootloader space'), whose sole purpose is to load the correct code in the Application space every time the device restarts.
Thus, every time the device restarts, the code in the Bootloader space gets executed first. Most of the time, it simply passes control to the code in the Application space. However, after downloading the newer firmware, when the device restarts, the bootloader will notice that a newer application code is available. So it will flash that newer code from the OTA space into the Application space and then give control to the code in the Application space. The result will be that the device firmware will be upgraded.
Now, digressing a bit, the bootloader can also flash the factory reset code from the 'Factory Reset space' to the Application space, if the Application code is corrupted, or a factory reset command is sent. Also, often, the OTA code and the factory reset codes are stored on external storage devices like an SD Card or an external EEPROM or FLASH chip, if the microcontroller doesn't have enough space. However, in the case of ESP32, the OTA code can be stored in the microcontroller's memory itself.
Code Walkthrough
We will be using an example code for this chapter. You can find it in File −> Examples −> Update −> AWS_S3_OTA_Update. It can also be found on GitHub.
This is one of the very detailed examples available for ESP32 on Arduino. The author of this sketch has even provided the expected Serial Monitor output of the sketch in comments. So while much of the code will be self-explanatory through the comments, we'll walk over the broad idea and also cover the important details. This code makes use of the Update library which, like many other libraries, makes working with ESP32 very easy, while keeping the rigorous work under the hood.
In this specific example, the author has kept the binary file of the new firmware in an AWS S3 bucket. Providing a detailed overview of AWS S3 is beyond the scope of this chapter, but very broadly, S3 (Simple Storage Service) is a cloud storage service provided by Amazon Web Services (AWS). Think of it like Google Drive. You upload files to your drive and share a link with people to share it. Similarly, you can upload a file to S3 and access it via a link. S3 is much more popular because a lot of other AWS services can interface seamlessly with it. Getting started with AWS S3 will be easy. You can get help from several resources available through a quick Google search. In the comments at the beginning of the sketch as well, a few steps to get started are mentioned.
An important recommendation to note is that you should use your own binary file for this code. The comments at the top of the sketch suggest that you can use the same binary file that the author has used. However, downloading a binary compiled on another machine or another version of the Arduino IDE has been known to sometimes cause errors in the OTA process. Also, using your own binary will make your learning more 'complete'. You can export the binary of any ESP32 sketch by going to Sketch −> Export Compiled Binary. The binary (.bin) file gets saved in the same folder in which your Arduino (.ino) file is saved.
Once your binary is saved, you just need to upload it to S3 and add the link to the bucket and the address of the binary file in your code. The binary you save should have some print statement to indicate that it is different from the code you flash in the ESP32. A statement like "Hello from S3" maybe. Also, don't keep the S3 bucket link and bin address in the code as they are; replace them with your own bucket's host and your own binary's path.
Alright! Enough talk! Let's begin the walkthrough now. We will begin by including the WiFi and Update libraries.
#include <WiFi.h> #include <Update.h>
Next, we define a few variables, constants, and also the WiFiClient object. Remember to add your own WiFi credentials and S3 credentials.
WiFiClient client; // Variables to validate // response from S3 long contentLength = 0; bool isValidContentType = false; // Your SSID and PSWD that the chip needs // to connect to const char* SSID = "YOUR-SSID"; const char* PSWD = "YOUR-SSID-PSWD"; // S3 Bucket Config String host = "bucket-name.s3.ap-south-1.amazonaws.com"; // Host => bucket-name.s3.region.amazonaws.com int port = 80; // Non https. For HTTPS 443. As of today, HTTPS doesn't work. String bin = "/sketch-name.ino.bin"; // bin file name with a slash in front.
Next, a helper function getHeaderValue() has been defined, which basically is used to check the value of a particular header. For example, if we get the header "Content-Length: 40" and it is stored in a String called headers, getHeaderValue(headers, "Content-Length: ") will return 40.
// Utility to extract header value from headers String getHeaderValue(String header, String headerName) { return header.substring(strlen(headerName.c_str())); }
Next, the main function execOTA(), which performs the OTA. This function has the entire logic related to the OTA. If you look at the Setup, we simply connect to the WiFi and call the execOTA() function.
void setup() { //Begin Serial Serial.begin(115200); delay(10); Serial.println("Connecting to " + String(SSID)); // Connect to provided SSID and PSWD WiFi.begin(SSID, PSWD); // Wait for connection to establish while (WiFi.status() != WL_CONNECTED) { Serial.print("."); // Keep the serial monitor lit! delay(500); } // Connection Succeed Serial.println(""); Serial.println("Connected to " + String(SSID)); // Execute OTA Update execOTA(); }
So you would have understood that understanding the execOTA function means understanding this entire code. Therefore, let's begin the walkthrough of that function.
We begin by connecting to our host, which is the S3 bucket in this case. Once connected, we fetch the bin file from the bucket, using a GET request (refer to the HTTP tutorial for more information on GET requests)
void execOTA() { Serial.println("Connecting to: " + String(host)); // Connect to S3 if (client.connect(host.c_str(), port)) { // Connection Succeed. // Fetching the bin Serial.println("Fetching Bin: " + String(bin)); // Get the contents of the bin file client.print(String("GET ") + bin + " HTTP/1.1\r\n" + "Host: " + host + "\r\n" + "Cache-Control: no-cache\r\n" + "Connection: close\r\n\r\n");
Next, we wait for the client to get connected. We give a maximum of 5 seconds for the connection to get established, otherwise we say that the connection has timed out and return.
unsigned long timeout = millis(); while (client.available() == 0) { if (millis() - timeout > 5000) { Serial.println("Client Timeout !"); client.stop(); return; } }
Assuming that the code has not returned in the previous step, we have a successful connection established. The expected response from the server is provided in the comments. We begin by parsing that response. The response is read line by line, and each new line is stored in a variable called line. We specifically check for the following 3 things −
If the response status code is 200 (OK)
What is the Content-Length
Whether the content type is application/octet-stream (this is the type expected for a binary file)
The first and third are required, and the second is just for information.
while (client.available()) { // read line till /n String line = client.readStringUntil('\n'); // remove space, to check if the line is end of headers line.trim(); // if the line is empty, // this is end of headers // break the while and feed the // remaining `client` to the // Update.writeStream(); if (!line.length()) { //headers ended break; // and get the OTA started } // Check if the HTTP Response is 200 // else break and Exit Update if (line.startsWith("HTTP/1.1")) { if (line.indexOf("200") < 0) { Serial.println("Got a non 200 status code from server. Exiting OTA Update."); break; } } // extract headers here // Start with content length if (line.startsWith("Content-Length: ")) { contentLength = atol((getHeaderValue(line, "Content-Length: ")).c_str()); Serial.println("Got " + String(contentLength) + " bytes from server"); } // Next, the content type if (line.startsWith("Content-Type: ")) { String contentType = getHeaderValue(line, "Content-Type: "); Serial.println("Got " + contentType + " payload."); if (contentType == "application/octet-stream") { isValidContentType = true; } } }
With this, the if block that checks if the connection with the server was successful ends. It is followed by the else block, which just prints that we were unable to establish connection to the server.
} else { // Connect to S3 failed // May be try? // Probably a choppy network? Serial.println("Connection to " + String(host) + " failed. Please check your setup"); // retry?? // execOTA(); }
Next, if we have hopefully received the correct response from the server, we will have a positive contentLength (remember, we had initialized it with 0 at the top, and so it will still be 0 if we somehow did not reach the line where we parse the Content-Length header). Also, we will have isValidContentType as true (remember, we had initialized it with false). So we check if both of these conditions are true and, if yes, proceed with the actual OTA. Note that so far, we have only made use of the WiFi library for interacting with the server. Now if the server interaction turns out to be alright, we will begin using the Update library; otherwise, we simply print that there was no content in the server response and flush the client. If the response was indeed correct, we first check if there is enough space in the memory to store the OTA file. By default, about 1.2 MB of space is reserved for the OTA file. So if the contentLength exceeds that, Update.begin() will return false. This 1.2 MB number can change depending on the partitions of your ESP32.
// check contentLength and content type if (contentLength && isValidContentType) { // Check if there is enough to OTA Update bool canBegin = Update.begin(contentLength);
Now, if we indeed have space to store the OTA file in memory, we begin writing the bytes to the memory area reserved for OTA (the OTA space), using the Update.writeStream() function. If we don't, we simply print that message and flush the client, and exit the OTA process. The Update.writeStream() function returns the number of bytes that were written to the OTA space. We then check if the number of bytes written is equal to the contentLength. If the Update is completed, in which case the Update.end() function will return true, we check if it has finished properly, i.e. all bytes are written, using the Update.isFinished() function. If it returns true, meaning that all bytes have been written, we restart the ESP32, so that the bootloader can flash the new code from the OTA space to the Application space, and our firmware gets upgraded. If it returns false, we print the error received.
// If yes, begin if (canBegin) { Serial.println("Begin OTA. This may take 2 - 5 mins to complete. Things might be quiet for a while.. Patience!"); // No activity would appear on the Serial monitor // So be patient. This may take 2 - 5mins to complete size_t written = Update.writeStream(client); if (written == contentLength) { Serial.println("Written : " + String(written) + " successfully"); } else { Serial.println("Written only : " + String(written) + "/" + String(contentLength) + ". Retry?" ); // retry?? // execOTA(); } if (Update.end()) { Serial.println("OTA done!"); if (Update.isFinished()) { Serial.println("Update successfully completed. Rebooting."); ESP.restart(); } else { Serial.println("Update not finished? Something went wrong!"); } } else { Serial.println("Error Occurred. Error #: " + String(Update.getError())); } } else { // not enough space to begin OTA // Understand the partitions and // space availability Serial.println("Not enough space to begin OTA"); client.flush(); } }
Of course, you would have realized by now that we need not do anything in the loop here.
That's it. You've successfully upgraded the firmware of your ESP32 chip remotely. If you are more curious about what each function of the Update library does, you can refer to the comments in the Update.h file.
References . | http://www.tutorialspoint.com/esp32_for_iot/esp32_for_iot_quick_guide.htm | CC-MAIN-2022-27 | refinedweb | 10,656 | 64.91 |
strtok, strtok_r - split string into tokens
#include <string.h>
char *strtok(char *restrict s, const char *restrict sep);
For strtok(): .
A sequence of calls to strtok() breaks the string pointed to by s into a sequence of tokens, each of which is delimited by a byte from the string pointed to by sep. The first call in the sequence has s as its first argument, and is followed by calls with a null pointer as their first argument. The separator string pointed to by sep may be different from call to call.
The first call in the sequence searches the string pointed to by s for the first byte that is not contained in the current separator string pointed to by sep. If no such byte is found, then there are no tokens in the string pointed to by s-2017 calls strtok().
[CX]
The strtok() function need not be thread-safe.The strtok() function need not be thread-safe.., state,.
POSIX.1-2008, Technical Corrigendum 2, XSH/TC2-2008/0350 [878] is applied.
return to top of pagereturn to top of page | https://pubs.opengroup.org/onlinepubs/9699919799/functions/strtok.html | CC-MAIN-2021-39 | refinedweb | 183 | 81.02 |
A Django 1.7+ model field for use with Python 3 enums.
Project description
A Django 1.7+ model field for use with Python 3 enums.
Works with any enum whose values are integers. Subclasses the IntegerField to store the enum as integers in the database.
When creating/loading fixtures, values are serialized to dotted names, like “AnimalType.Cat” for the example below.
A decorator is needed on Python enums in order to make them work with Django migrations, which require a deconstruct() method on the enum members.
Installation:
pip install enum3field
Example:
import enum from enum3field import EnumField, django_enum @django_enum class AnimalType(enum.Enum): Cat = 1 Dog = 2 Turtle = 3 class Animal(models.Model): animalType = EnumField(AnimalType)
Requires Python 3. Not tested with Django versions prior to 1.7 but might work.
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/enum3field/ | CC-MAIN-2018-39 | refinedweb | 160 | 62.04 |
Getting Started with LINQ in C#
This section contains basic background information that will help you understand the rest of the LINQ documentation and samples.
In This Section
Introduction to LINQ Queries
Describes the three parts of the basic LINQ query operation that are common across all languages and data sources.
LINQ and Generic Types
Provides a brief introduction to generic types as they are used in LINQ.
Basic LINQ Query Operations (C#)
Describes the most common types of query operations and how they are expressed in Visual Basic and C#.
Data Transformations with LINQ (C#)
Describes the various ways that you can transform data retrieved in queries.
Type Relationships in LINQ Query Operations (C#)
Describes how types are preserved and/or transformed in the three parts of a LINQ query operation
LINQ Query Syntax versus Method Syntax (C#)
Compares method syntax and query syntax as two ways to express a LINQ query.
C# 3.0 Features That Support LINQ
Describes the language constructs added in C# 3.0 that support LINQ.
Walkthrough: Writing Queries in C# (LINQ)
Step-by-step instructions for creating a C# LINQ project, adding a simple data source, and performing some basic query operations.
Related Sections
Language-Integrated Query (LINQ)
Provides links to topics that explain the LINQ technologies.
LINQ Query Expressions (C# Programming Guide)
Includes an overview of queries in LINQ and provides links to additional resources.
Visual Studio IDE and Tools Support for LINQ
Describes tools available in the Visual Studio environment for designing, coding, and debugging LINQ-enabled application.
How to: Create a LINQ Project
Describes the .NET Framework version, namespaces and references required to create LINQ projects.
LINQ General Programming Guide
Includes links to topics that provide details about specific LINQ features.
LINQ Samples
Provides links to topics that explain the LINQ samples.
Getting Started with LINQ in Visual Basic
Provides links to topics about using LINQ with Visual Basic.
See Also
Other Resources
Language-Integrated Query (LINQ) | https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2008/bb397933%28v%3Dvs.90%29 | CC-MAIN-2019-26 | refinedweb | 327 | 53.51 |
Improving String Handling Performance in .NET Framework Applications
James Musson
Developer Services, Microsoft UK
April 2003
Applies to:
Microsoft® .NET Framework®
Microsoft Visual Basic .NET®
Microsoft Visual C#®
Summary: Many .NET Framework applications use string concatenation to build representations of data, be it in XML, HTML or just some proprietary format. This article contains a comparison of using standard string concatenation and one of the classes provided by the .NET Framework specifically for this task, System.Text.StringBuilder, to create this data stream. A reasonable knowledge of .NET Framework programming is assumed. (11 printed pages)
Contents
Introduction
String Concatenation
What is a StringBuilder?
Creating the Test Harness
Testing
Results
Conclusion
Introduction
When writing .NET Framework applications, there is invariably some point at which the developer needs to create some string representation of data by concatenating other pieces of string data together. This is traditionally achieved by using one of the concatenation operators (either '&' or '+') repeatedly. When examining the performance and scalability characteristics of a wide range of applications in the past, it has been found that this is often an area where substantial gains in both performance and scalability can be made for very little extra development effort.
String Concatenation
Consider the following code fragment taken from a Visual Basic .NET class. The BuildXml1 function simply takes a number of iterations (Reps) and uses standard string concatenation to create an XML string with the required number of Order elements.
' build an Xml string using standard concatenation Public Shared Function BuildXml1(ByVal Reps As Int32) As String Dim nRep As Int32 Dim sXml As String For nRep = 1 To Reps sXml &= "<Order orderId=""" _ & nRep _ & """ orderDate=""" _ & DateTime.Now.ToString() _ & """ customerId=""" _ & nRep _ & """ productId=""" _ & nRep _ & """ productDescription=""" _ & "This is the product with the Id: " _ & nRep _ & """ quantity=""" _ & nRep _ & """/>" Next nRep" & sXml & "</Orders>" Return sXml End Function
This equivalent Visual C# code is shown below.
// build an Xml string using standard concatenation public static String BuildXml1(Int32 Reps) { String"; }" + sXml + "</Orders>"; return sXml; }
It is quite common to see this method used to build large pieces of string data in both .NET Framework applications and applications written in other environments. Obviously, XML data is used here simply as an example, and there are other, better methods for building XML strings provided by the.NET Framework, such as System.Xml.XmlTextWriter. The problem with the BuildXml1 code lies in the fact that the System.String data type exposed by the .NET Framework represents an immutable string. This means that every time the string data is changed, the original representation of the string in memory is destroyed and a new one is created containing the new string data, resulting in a memory allocation operation and a memory de-allocation operation. Of course, this is all taken care of behind the scenes, so the true cost is not immediately apparent. Allocating and de-allocating memory causes increased activity related to memory management and garbage collection within the Common Language Runtime (CLR) and thus can be expensive. This is especially apparent when strings get big and large blocks of memory are being and allocated and de-allocated in quick succession, as happens during heavy string concatenation. While this may present no major problems in a single user environment, it can cause serious performance and scalability issues when used in a server environment such as in an ASP.NET® application running on a Web server.
So, back to the code fragment above: how many string allocations are being performed here? In fact the answer is 14. In this situation every application of the '&' (or '+') operator causes the string pointed to by the variable sXml to be destroyed and recreated. As I have already mentioned, string allocation is expensive, becoming increasingly more so as the string grows, and this is the motivation for providing the StringBuilder class in the .NET Framework.
What is a StringBuilder?
The concept behind the StringBuilder class has been around for some time and my previous article, Improving String Handling Performance in ASP Applications, demonstrates how to write a StringBuilder using Visual Basic 6. The basic principle is that the StringBuilder maintains its own string buffer. Whenever an operation is performed on the StringBuilder that might change the length of the string data, the StringBuilder first checks that the buffer is large enough to hold the new string data, and if not, the buffer size is increased by a predetermined amount. The StringBuilder class provided by the .NET Framework also offers an efficient Replace method that can be used instead of String.Replace.
Figure 1 shows a comparison of what the memory usage pattern looks like for the standard concatenation method and the StringBuilder concatenation method. Notice that the standard concatenation method causes a new string to be created for every concatenation operation, whereas the StringBuilder uses the same string buffer each time.
Figure 1 Comparison of memory usage pattern between standard and StringBuilder concatenation
The code to build XML string data using the StringBuilder class is shown below in BuildXml2.
' build an Xml string using the StringBuilder Public Shared Function BuildXml2(ByVal Reps As Int32) As String Dim nRep As Int32 Dim oSB As StringBuilder ' make sure that the StringBuilder capacity is ' large enough for the resulting text oSB = New StringBuilder(Reps * 165) oSB.Append("<Orders method=""2"">") For nRep = 1 To Reps("""/>") Next nRep oSB.Append("</Orders>") Return oSB.ToString() End Function
The equivalent Visual C# code is shown below.
// build an Xml string using the StringBuilder public static String BuildXml2(Int32 Reps) { // make sure that the StringBuilder capacity is // large enough for the resulting text StringBuilder oSB = new StringBuilder(Reps * 165); oSB.Append("<Orders method=\"2\">"); for( Int32 nRep = 1; nRep<=Reps; nRep++ ) {("\"/>"); } oSB.Append("</Orders>"); return oSB.ToString(); }
How the StringBuilder method performs against the standard concatenation method depends on a number of factors, including the number of concatenations, the size of the string being built, and how well the initialization parameters for the StringBuilder buffer are chosen. Note that in most cases it is going to be far better to overestimate the amount of space needed in the buffer than to have it grow often.
Creating the Test Harness
I decided that I wanted to test the two string concatenation methods using Application Center Test® (ACT) and this implies that the methods should be exposed by an ASP.NET Web application. Because I didn't want the processing involved in creating an ASP.NET page for each request to show up in my results, I created and registered an HttpHandler that accepted requests for my logical URL, StringBuilderTest.jemx, and called the relevant BuildXml function. Although a detailed discussion of HttpHandlers is outside the scope of this article, I have included the code for my test below.
Public Class StringBuilderTestHandler Implements IHttpHandler Public Sub ProcessRequest(ByVal context As HttpContext) _ Implements IHttpHandler.ProcessRequest Dim nMethod As Int32 Dim nReps As Int32 ' retrieve test params from the querystring If Not context.Request.QueryString("method") Is Nothing Then nMethod = Int32.Parse( _ context.Request.QueryString("method").ToString()) Else nMethod = 0 End If If Not context.Request.QueryString("reps") Is Nothing Then nReps = Int32.Parse( _ context.Request.QueryString("reps").ToString()) Else nReps = 0 End If context.Response.ContentType = "text/xml" context.Response.Write( _ "<?xml version=""1.0"" encoding=""utf-8"" ?>") ' write the Xml to the response stream Select Case nMethod Case 1 context.Response.Write( _ StringBuilderTest.BuildXml1(nReps)) Case 2 context.Response.Write( _ StringBuilderTest.BuildXml2(nReps)) End Select End Sub Public ReadOnly Property IsReusable() As Boolean _ Implements IHttpHandler.IsReusable Get Return True End Get End Property End Class
The equivalent Visual C# code is shown below.
public class StringBuilderTestHandler : IHttpHandler { public void ProcessRequest(HttpContext context) { Int32 nMethod = 0; Int32 nReps = 0; // retrieve test params from the querystring if( context.Request.QueryString["method"]!=null ) nMethod = Int32.Parse( context.Request.QueryString["method"].ToString()); if( context.Request.QueryString["reps"]!=null ) nReps = Int32.Parse( context.Request.QueryString["reps"].ToString()); // write the Xml to the response stream context.Response.ContentType = "text/xml"; context.Response.Write( "<?xml version=\"1.0\" encoding=\"utf-8\" ?>"); switch( nMethod ) { case 1 : context.Response.Write( StringBuilderTest.BuildXml1(nReps)); break; case 2 : context.Response.Write( StringBuilderTest.BuildXml2(nReps)); break; } } public Boolean IsReusable { get{ return true; } } }
The ASP.NET HttpPipeline creates an instance of StringBuilderTestHandler and invokes the ProcessRequest method for each HTTP request to StringBuilderTest.jemx. ProcessRequest simply extracts a couple of parameters from the query string and chooses the correct BuildXml function to invoke. The return value from the BuildXml function is passed back into the Response stream after creating some header information.
For more information about HttpHandlers please see the IHttpHandler documentation.
Testing
The tests were performed using ACT from a single client (Windows® XP Professional, PIII-850MHz, 512MB RAM) against a single server (Windows Server 2003 Enterprise Edition, dual PIII-1000MHz, 512MB RAM) over a 100mbit/sec network. ACT was configured to use 5 threads so as to simulate a load of 5 users connecting to the web site. Each test consisted of a 10-second warm-up period followed by a 50-second load period in which as many requests as possible were made.
The test runs were repeated for various numbers of concatenation operations by varying the number of iterations in the main loop as shown in the code fragments for the BuildXml functions.
Results
Below is a series of charts showing the effect of each method on the throughput of the application and also the response time for the XML data stream to be served back to the client. This gives some idea of how many requests the application could process and also how long the users, or client applications, would be waiting to receive the data.
Table 1 Key to concatenation method abbreviations used
While this test is far from realistic in terms of simulating the workload for a typical application, it is evident from Table 2 that even at 425 repetitions the XML data string is not particularly large; there are many applications where the average size of data transmissions fall in the higher ranges of these figures and above.
Table 2 XML string sizes and number of concatenations for test samples
Figure 2 Chart showing throughput results
Figure 3 Chart showing response time results
As we can clearly see from Figures 2 and 3, the StringBuilder method (BLDR) outperforms the standard concatenation (CAT) method both in terms of how many requests can be processed and the elapsed time required to start generating a response back to the client (represented by Time To First Byte, or TTFB, on the graph). At 425 iterations the StringBuilder method is processing 17 times more requests and taking just 3% of the elapsed time for each request as compared with the standard concatenation method.
Figure 4 Chart giving an indication of system health during the tests
Figure 4 gives some indication of the load the server was under during the testing. It is interesting to note that as well as outperforming the standard concatenation method (CAT) at every stage, the StringBuilder method (BLDR) also caused considerably less CPU usage and time to be spent in Garbage Collection. While this does not actually prove that the resources on the server were used more effectively during StringBuilder operations, it certainly does strongly suggest it.
Conclusion. | https://docs.microsoft.com/en-us/previous-versions/dotnet/articles/aa302329(v=msdn.10)?redirectedfrom=MSDN | CC-MAIN-2020-16 | refinedweb | 1,896 | 53.41 |
Subject: Re: [boost] [stopwatches] About reducing the scope of the library
From: Vicente J. Botet Escriba (vicente.botet_at_[hidden])
Date: 2011-09-11 13:26:45
Le 11/09/11 11:17, John Maddock a écrit :
>>> My first proposal of Boost.Chrono included Stopwatches, but some on
>>> this list sugested that i would be better to split the library :(
>>
>> That design still has merit, but -
>>
>>> If no body is agains I will move Boost.Stopwatches to the namespace
>>> boost::chrono, remove the reporting facilities, and find a date for
>>> a review.
>>
>> My point was that since you own Chrono you can add whatever you want
>> to it. So you could get something distributed now and see about a
>> review later.
>>
>> But perhaps I'm the only one thinking the need for a Boost.Timer
>> replacement is urgent.
>
> Nod. Sounds like an important addition to me.
>
> IMO a small addition of a stopwatch could be done without a formal
> review - maybe just post the design and get feedback?
>
I would prefer a review.
> But something like:
>
> template <class Clock>
> struct stopwatch
> {
> void reset();
> double elapsed();
> };
>
> would seem hard to go wrong with? OK arguably the result of elapsed()
> should be a duration, but that makes it harder to use....
>
Why do you think it is harder to use? Could you give an example when
returning double will be clearer? Which will be the units of this double?
> BTW I spotted a typo in your docs:
>
> "The standard defines tree system-wide clocks"
> ^^
>
Thanks,
Vicente
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2011/09/185782.php | CC-MAIN-2020-29 | refinedweb | 275 | 76.11 |
#include <pwd.h>
The
stream.
The passwd structure is
defined in
<
pwd.h
> as
follows:
For more information about the fields of this structure, see passwd(5)..
On success, these functions return 0 and *
pwbufp is a pointer to the
struct passwd. On
error, these functions return an error value and *
pwbufp is NULL.
No more entries.
Insufficient buffer space supplied. Try again with larger buffer.
For.
These);
The function
getpwent_r() is
not really reentrant since it shares the reading position in
the stream with all other threads.
#define _GNU_SOURCE #include <pwd.h> #include <stdio.h> #include <stdint.h> #define BUFLEN 4096 int main(void) { struct passwd pw; struct passwd *pwp; char buf[BUFLEN]; int i; setpwent(); while (1) { i = getpwent_r(&pw, buf, sizeof(buf), &pwp); if (i) break; printf("%s (%jd)\tHOME %s\tSHELL %s\n", pwp−>pw_name, (intmax_t) pwp−>pw_uid, pwp−>pw_dir, pwp−>pw_shell); } endpwent(); exit(EXIT_SUCCESS); }
fgetpwent(3), getpw(3), getpwent(3), getpwnam(3), getpwuid(3), putpwent(3), passwd(5) | https://manpages.courier-mta.org/htmlman3/getpwent_r.3.html | CC-MAIN-2021-21 | refinedweb | 163 | 52.36 |
I just gone through the error you had mentioned in my sample.
I had made some changes in my codes and created a new sample, which can solve that error.
SpinnerItem.axml
<?xml version="1.0" encoding="utf-8"?> <TextView xmlns:
CustomAdapter.cs
using System; using System.Collections.Generic; using Android.Content; using Android.Graphics; using Android.Views; using Android.Widget; namespace SpinnerTest { public class CustomAdapter : ArrayAdapter<String> { public CustomAdapter(Context context, int textViewResourceId, List<String> objects) : base(context, textViewResourceId, objects) { } public override bool IsEnabled (int position) { // Disable the first item from Spinner // First item will be use for hint return position == 0 ? false : true; } public override View GetDropDownView (int position, View convertView, ViewGroup parent) { View view = base.GetDropDownView (position, convertView, parent); TextView tv = (TextView) view; if(position == 0){ // Set the hint text color gray tv.SetTextColor(Color.Gray); } else { tv.SetTextColor(Color.Black); } } } }
MainActivity.cs
using System; using System.Collections.Generic; using Android.App; using Android.OS; using Android.Widget; namespace SpinnerTest { [Activity (Label = "SpinnerTest", MainLauncher = true, Icon = "@mipmap/icon")] public class MainActivity : Activity { protected override void OnCreate (Bundle savedInstanceState) { base.OnCreate (savedInstanceState); SetContentView (Resource.Layout.Main); var mySpinner = FindViewById<Spinner> (Resource.Id.spinner); var list = new List<String>(); list.Add ("[Select one]"); list.Add("string1"); list.Add("string2"); list.Add("string3"); var dataAdapter = new CustomAdapter(this, Resource.Layout.SpinnerItem, list); dataAdapter.SetDropDownViewResource (Resource.Layout.SpinnerItem); mySpinner.Adapter = dataAdapter; mySpinner.ItemSelected += (s, e) => { String selectedItemText= mySpinner.SelectedItem.ToString(); int position = mySpinner.SelectedItemPosition; // If user change the default selection // First item is disable and it is used for hint if (position > 0) // Notify the selected item text Toast.MakeText (ApplicationContext, "Selected : " + selectedItemText, ToastLength.Short).Show (); }; mySpinner.NothingSelected += (s, e) => { }; } } }
Check this, and give a reply
Answers
@Rishi Naithani,
refer this document.....................
@Rish,
I suggest you links of similar post :
Hi @YkshLeo . I have gone through the links , but cudn't get the solution. Could you provide a sample code for it in Xamarin. ?
@Rish,
Try this attached sample.
Sample Code
MainActivity.cs
CustomAdapter.cs
Thanks @YkshLeo for the sample code . It works wonderfully well when i run it in a new project.
But In my case , the spinner is invisible when I launch the app .
Then , On rotation The spinner becomes visible ,
but spinner text shows to be "string 3" instead of "[Select one]" .
Could be because of the theme I am using i.e. Theme.Appcompat.? Or is it something else.
@Rish,
Can you create a sample project and attach.
Hi @YkshLeo . Everything seems to be working fine now . except on rotation , instead of "select one", it shows the second last item of the list . note: all this code comes inside a fragment.
@YkshLeo The SpinnerSample example you provided works great at startup.
But ,on rotation (or whenever the activity is recreated ) it doesn't show up "Select One" .
Can you pls check it and tell the reason behind it ?
Thanks .
@Rish, Let me check it
Hi @YkshLeo were you able to find the reason behind the rotation issue .
@Rish, When you navigate to back from page2 to page1, the spinner in the page1 will show the previously selected data. It's because that page is not refreshed when it is navigated back. If you want the
[Select one]in the spinner then I suggest you to give some code to refresh page1 when navigated from back from page2.
I m sorry @ykshleo .. but i cudn't understand.
The detailed problem is like this:
The list is like this:
var list = new List();
list.Add("string1");
list.Add("string2");
list.Add("string3");
list.Add("[Select one]");
At first, page1 shows the correct value "[Select One]" as the spinner title .
then On rotating page 1, spinner title changes from "[Select One]" to "string 3" by itself.
It should show "[Select One ]" because the spinner is (not clicked) or untouched .
@Rish, I just noted that, Let check that
@Rish,
I just gone through the error you had mentioned in my sample.
I had made some changes in my codes and created a new sample, which can solve that error.
SpinnerItem.axml
CustomAdapter.cs
MainActivity.cs
Check this, and give a reply
Thanks @YkshLeo . I ll check this and reply back
The above code worked wonderfully well.
Thank You so much @YkshLeo for the effort .
Thank You @YkshLeo I was looking so many hours for this! | https://forums.xamarin.com/discussion/60417/how-to-hide-first-item-in-an-android-spinner | CC-MAIN-2020-10 | refinedweb | 724 | 52.66 |
Python, the TV show called Monty Python’s Flying Circus. During execution the Python source code is translated into bytecode which is then interpreted by the Python interpreter. Python source code can also run on the Java Virtual Machine, in this case you are using Jython.
Key features of Python are:
high-level data types, as for example extensible lists
statement grouping is done by indentation instead of brackets
variable or argument declaration is not necessary
supports for object-orientated, procedural and functional programming style
1.2. Block concept in Python via indentation
Python identify blocks of code by indentation. If you have an if statement and the next line is indented then it means that this indented block belongs to the if. The Python interpreter supports either spaces or tabs, e.g. you can not mix both. The most "pythonic" way is to use 4 spaces per indentation level.
1.3. About this tutorial
This tutorial will first explain how to install Python and the Python plugins for Eclipse. It will then create a small Python project to show the usage of the plugin. Afterwards the general constructs of Python are explained.
2. Installation
2.1. Python
Download Python from. Download the version 3.3.1 or higher of Python. If you are using Windows you can use the native installer for Python.
2.2. Eclipse Python plugin
The following assume that you have already Eclipse installed. For an installation description of Eclipse please see Eclipse IDE for Java.
For Python development under Eclipse you can use the PyDev Plugin which is an open source project. Install PyDev via the Eclipse update manager via the following update site:.
2.3. Configuration of Eclipse
You also have to maintain in Eclipse the location of your Python installation. Open in themenu.
Press the
New
button
and enter the path to
python.exe
in your
Python installation directory. For Linux and Mac OS X users
this is
normally /usr/bin/python.
The result should look like the following.
3. Your first Python program in Eclipse
Select. Select Pydev → Pydev Project.
Create a new project with the name "de.vogella.python.first". Select Python version 2.6 and your interpreter.
Press finish.
Select. Select the PyDev perspective.
Select the "src" folder of your project, right-click it and select New → PyDev Modul. Create a module "FirstModule".
Create the following source code.
''' Created on 18.06.2009 @author: Lars Vogel ''' def add(a,b): return a+b def addFixedValue(a): y = 5 return y +a print add(1,2) print addFixedValue(1)
Right-click your model and select Run As → Python run.
Congratulations! You created your first (little) Python modul and ran it!
4. Debugging
Just right-click in the source code and add a breakpoint.
Then select Debug as → Python Run
You can now inspect and modify the variables in the variables view.
Via the debug buttons (or shortcuts F5, F6, F7, F8) you can move in your program.
You can use F5 / F6, F7 and F8 to step through your coding.
You can of course use the ui to debug. The following displays the key bindings for the debug buttons.
5. Programming in Python
5.1. Comments
The following create a single line comment.
# This is a comment
5.2. Variables
Python provides dynamic typing of its variables, e.g. you do not have to define a type of the variable Python will take care of this for you.
# This is a text s= "Lars" # This is an integer x = 1 y=4 z=x+y
5.3. Assertions
Python provides assertions. These assertions are always called.
assert(1==2)
5.4. Methods / Functions in Python
Python allows to define methods via the keyword def. As the language is interpreted the methods need to be defined before using it.
def add(a,b): return a+b print add(1,2)
5.5. Loops and if clauses
The following demonstrates a loop the usage of an if-clause.
i = 1 for i in range(1, 10): if i <= 5 : print 'Smaller or equal than 5.\n', else: print 'Larger than 5.\n',
5.6. String manipulation
Python allows the following String operations.
For example:
s = "abcdefg" assert (s[0:4]=="abcd") assert ( s[4:]=="efg") assert ("abcdefg"[4:0]=="") assert ("abcdefg"[0:2]=="ab")
5.7. Concatenate strings and numbers
Python does not allow to concatenate a string directly with a number. It requires you to turn the number first into a string with the str() function.
If you do not use str() you will get "TypeError: cannot concatenate 'str' and 'int' objects".
For example:
print 'this is a text plus a number ' + str(10)
5.8. Lists
Python has good support for lists. See the following example how to create a list, how to access individual elements or sublists and how to add elements to a list.
''' Created on 14.09.2010 @author: Lars Vogel ''' mylist = ["Linux", "Mac OS" , "Windows"] # Print the first list element print(mylist[0]) # Print the last element # Negativ values starts the list from the end print(mylist[-1]) # Sublist - first and second element print(mylist[0:2]) # Add elements to the list mylist.append("Android") # Print the content of the list for element in mylist: print(element)
If you want to remove the duplicates from a list you can use:
mylist = ["Linux", "Linux" , "Windows"] # remove duplicates from the list mylist = list(set(mylist))
5.9. Processing files in Python
The following example is contained in the project "de.vogella.python.files". The following reads a file, strips out the end of line sign and prints each line to the console.
''' Created on 07.10.2009 @author: Lars Vogel ''' f = open('c:\\temp\\wave1_new.csv', 'r') print f for line in f: print line.rstrip() f.close()
The following reads the same file but write the output to another file.
''' @author: Lars Vogel ''' f = open('c:\\temp\\wave1_new.csv', 'r') output = open('c:\\temp\\sql_script.text', 'w') for line in f: output.write(line.rstrip() + '\n') f.close()
5.10. Splitting strings and comparing lists.
The following example is contained in the project "de.vogella.python.files". It reads to files which contain one long comma separated string. This string is splitted into lists and the lists are compared.
f1 = open('c:\\temp\\launchconfig1.txt', 'r') s= "" for line in f1: s+=line f1.close() f2 = open('c:\\temp\\launchconfig2.txt', 'r') s2= "" for line in f2: s2+=line f2.close() list1 = s.split(",") list2 = s2.split(","); print(len(list1)) print(len(list2)) difference = list(set(list1).difference(set(list2))) print (difference)
5.11. Writing Python Scripts in Unicode
If you read special non ASCII sign, e.g. ö, ä. ü or ß, you have to tell Python which character set to use. Include the following in the first or second line of your script.
# -*- coding: UTF-8 -*-
5.12. Classes in Python
The following defines a class in Python. Python uses the naming convention __name__ for internal methods. Python allows operator overloading, e.g. you can define what the + operator does for a specific class.

The empty object (null in other languages) is called None in Python.
class Point:
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

    def __str__(self):
        return "x-value" + str(self.x) + " y-value" + str(self.y)

    def __add__(self, other):
        p = Point()
        p.x = self.x + other.x
        p.y = self.y + other.y
        return p

p1 = Point(3, 4)
p2 = Point(2, 3)
print(p1)
print(p1.y)
print(p1 + p2)
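The same mechanism works for comparison operators. For example, defining __eq__ makes == compare two points by value instead of by object identity:

```python
class Point:
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

    # overload == so that two points with the same coordinates are equal
    def __eq__(self, other):
        return self.x == other.x and self.y == other.y

p1 = Point(3, 4)
p2 = Point(3, 4)
print(p1 == p2)  # True - compared by value
print(p1 is p2)  # False - still two distinct objects
```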
6. Google App Engine
Google offers free hosting of small Python-based web applications. Please see Google App Engine development with Python.
Hey, Scripting Guy! How can I tell if any of my contacts have a birthday this month?
-- VG
Hey, VG. You sly fox, you: how did you know that the Scripting Guy who writes this column is celebrating his birthday?!? And no, you don’t need to send him a present; after all, he writes this column because he loves his work, not because he expects people to shower him with gifts on his birthday. Heaven forbid.
But, if you insist, send your gifts here:
The Scripting Guy Who Writes That Column
Microsoft Corporation
Building 42/4039
One Microsoft Way
Redmond, WA 98052
And yes, cash will be fine. Thanks!
Editor’s Note: Just kidding everyone. It’s not even his birthday (read on). He tries to get free desserts from restaurants like this too.
Of course, now we feel bad that we didn’t get you anything for your birthday, VG. Tell you what; how about a script that can tell you if any of your contacts have a birthday this month? Sound good? Hang on a second; we have to zip over to Scripts ‘R Us. We’ll be right back.
Whew; we got the last one they had in stock. Hope this fits, VG:

On Error Resume Next

Const olFolderContacts = 10

Set objOutlook = CreateObject("Outlook.Application")
Set objNamespace = objOutlook.GetNamespace("MAPI")

Set colContacts = objNamespace.GetDefaultFolder(olFolderContacts).Items

For Each objContact In colContacts
    If Month(objContact.Birthday) = Month(Date) Then
        Wscript.Echo objContact.FullName, objContact.Birthday
    End If
Next
Before we explain how this script works we should note that the assumption here is that all your contacts are in the same folder. If that’s not the case, that is, if you have subfolders in your main contacts folder, well, then this script will be far from foolproof: it will return information about the contacts in the main folder, but not for those contacts in any subfolders. If you have subfolders then you’ll need to write a recursive function that can access the information found in those subfolders. And how do you do that? Beats the heck out of us. But you can find an example of a recursive function that works with Microsoft Outlook in the Hey, Scripting Guy! archive.
As for the script itself, we start things off with the On Error Resume Next statement. That’s something we typically don’t use in our scripts, but we occasionally encountered an error when dealing with contact birthdays. We’re not totally sure why we got the error, but, seeing as how On Error Resume Next took care of things, we decided to just accept things as they were and not worry too much about it. If we ever go back and look into this a little closer we’ll let you know.
But don’t hold your breath waiting.
After enabling error handling we define a constant named olFolderContacts; we’ll use this constant to tell the script which Outlook folder we want to work with. We then use these two lines of code to create an instance of the Outlook.Application object and to bind to the MAPI namespace:
Set objOutlook = CreateObject("Outlook.Application")
Set objNamespace = objOutlook.GetNamespace("MAPI")
We should also point out that this script assumes that Outlook is already up and running. What are you supposed to do if Outlook isn’t already up and running? Beats the heck out of us. But you can find an example of a script that determines whether or not Outlook is already running (and, if not, start it) in the Office Space archive.
Now, in theory, we could probably write a complicated filter that would automatically weed out contacts that don’t have a birthday this month. To tell you the truth, however, that seemed like more trouble than it was worth; as near as we could tell it was just as fast (and way easier) to individually check each contact’s birthday. Therefore, our next step is to use this line of code to retrieve a collection of all the contacts in the Contacts folder:
Set colContacts = objNamespace.GetDefaultFolder(olFolderContacts).Items
What are we going to do with that collection? Funny you should ask. To begin with, we’re going to set up a For Each loop to walk through the entire collection. For each contact in that collection we’ll use this line of code to determine whether the contact has a birthday in the current month:
If Month(objContact.Birthday) = Month(Date) Then
All we’re doing here is using the command Month(objContact.Birthday) to determine the numeric value of the month in which the contact was born (1 for January, 2 for February, 3 for March, etc.). We then compare that with the current month; that’s what Month(Date) tells us. If the values match then the contact has a birthday this month. In turn, we echo back the contact name and birthday:
Wscript.Echo objContact.FullName, objContact.Birthday
If the two months don’t match then we simply loop around and check the next contact in the collection.
When we’re all done we should get back a report similar to this (assuming that the script is run sometime in March):
Jonathan Haas 3/28/1977
Ken Myer 3/21/1938
Pilar Ackerman 3/14/1968
One thing to watch out for here. If you haven't specified a birthday for a contact Outlook automatically assigns that person a birthday of 1/1/4501. (Interestingly, that's the same year that we expect Scripting Guy Jean Ross to finally stop complaining about our good-natured jab about her age. [Editor's Note: Actually, Jean thinks it wouldn't be all bad for people to believe she was born in 1938; she'd get senior discounts, and a lot of comments on how great she looks for her age – not that she doesn't get those comments anyway.]) What that means is that, as far as this script is concerned, anyone without a designated birthday will appear to have a birthday in January. Unless, of course, you run this modified script, which simply ignores anyone with a birthday of 1/1/4501:

On Error Resume Next

Const olFolderContacts = 10

Set objOutlook = CreateObject("Outlook.Application")
Set objNamespace = objOutlook.GetNamespace("MAPI")

Set colContacts = objNamespace.GetDefaultFolder(olFolderContacts).Items

For Each objContact In colContacts
    If objContact.Birthday <> #1/1/4501# Then
        If Month(objContact.Birthday) = Month(Date) Then
            Wscript.Echo objContact.FullName, objContact.Birthday
        End If
    End If
Next
OK, we need to come clean with you here: the Scripting Guy who writes this column had his birthday on December 18th, a birthday he shares with baseball immortal Ty Cobb and actor Brad Pitt. (The Scripting Guy who writes this column likes to think he combines the traits of both of these legends: he looks exactly like Ty Cobb and plays baseball exactly like Brad Pitt.) Does that mean that you shouldn’t send him a birthday present after all? Well, you know, it’s never too early to get a jump on his next birthday ….