User Tag List
Results 1 to 10 of 10
Thread: XML DTDs Vs XML Schema
Article Discussion
This is an article discussion thread for discussing the SitePoint article, "XML DTDs Vs XML Schema"
- SXCHALUSitePoint Community Guest
It's good!
- Steven McNenaSitePoint Community Guest
It hits the point, easy to read. Many other sites relating to DTDs and XML Schemas are simply pants.
- tatviSitePoint Community Guest
It's good. Simple, precise, keeping to the point.
- Mike GibbonsSitePoint Community Guest
Very readable, clear presentation.
- Steve WhalenSitePoint Community Guest
Excellent summary. One point in favor of Schema which I did not glean from this article is that I think they support namespace prefixes more flexibly than DTDs do. If you want to prefix your elements with namespace prefixes, I believe a DTD has to include those prefixes, whereas a schema does not. In other words, schemas allow you to add or change namespace prefixes "after the fact". Does that sound correct? Any comments?
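That's essentially right: a DTD matches qualified names literally, prefix included, while a schema validates local names against a target namespace, so the instance document can bind any prefix it likes. A rough illustration (the namespace URI here is made up):

```xml
<!-- DTD: the prefix is part of the element name and must match exactly -->
<!ELEMENT myns:book (myns:title)>
<!ELEMENT myns:title (#PCDATA)>

<!-- XSD: declares local names against a target namespace; instance
     documents may bind any prefix (or none) to that namespace -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://example.org/books"
           elementFormDefault="qualified">
  <xs:element name="book">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="title" type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```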
- Bhupendra S. BramheSitePoint Community Guest
Please send the correct procedure and study material (tutorial) for XML: DTD, XSLT, CDATA, parsers, Schema.
- 131SitePoint Community Guest
Thanks a lot
- BenSitePoint Community Guest
Good article. I think the benefits of DTD over Schema are a little overstated, though.
I'd like to see this article updated. This is pretty useful stuff, but with the latest LINQ developments, REST and other technologies it would make sense to revise it.
A 32-bit pointer is 4 bytes in size, whereas a 64-bit pointer is 8 bytes. That means a 64-bit JVM takes more memory than a 32-bit one.
Most of the time, if your code is compiled with a 32-bit JDK, the class files will run on either a 32-bit or a 64-bit machine. They compile and execute fine, but if anything machine-specific is involved you can run into issues, so be careful when installing JDK versions.
A 32-bit JVM is not always compatible with 64-bit hosts/frameworks; I have encountered issues running 64-bit Tomcat on 32-bit Java.
I am listing the ways to find out whether a JVM is 32-bit or 64-bit.
As per the Sun specification, "there is no public API to find out the 32 bit or 64 bit".
There are nevertheless a number of ways to find out.
One way: "sun.arch.data.model" is a system property of the JVM whose value is 32, 64, or unknown, so we can write a simple program that reads the property value.
So I have written sample code to find out the bit version:
public class BitVersionDemo {
    public static void main(String[] args) {
        System.out.println("version = " + System.getProperty("sun.arch.data.model"));
    }
}
and the output is:
version = 32
That means your installed JDK is 32-bit.
On Linux:
If your system has 64-bit Java installed, running the java -d32 command gives the following message:
bash-3.2$ java -d32 -version
Running a 32-bit JVM is not supported on this platform.
By giving the -d64 option, the following information is displayed:
-bash-3.2$ java -d64 -version
java version "1.6.0_27"
Java(TM) SE Runtime Environment (build 1.6.0_27-b07)
Java HotSpot(TM) 64-Bit Server VM (build 20.2-b06, mixed mode)
On Windows, "java -d32 -version" gives an error, so you can use the sample above to find out the version.
Hope this helps for your debugging.
Good tip. If you want to know what the reference size is, you can use Unsafe.addressSize(). Note: this is usually 4, as most 64-bit JVMs use 32-bit references for up to 32 GB of heap.
Thanks, Peter, for your valuable suggestion.
Thanks for your comment on my post "10 Points on JVM Heap Space". I see you have also shared a pretty useful tip. Though in production we always run with the JDK that was used for compilation, since Java is write once, run anywhere, this kind of issue is quite possible. You may also like my post "10 HotSpot JVM Options Java Developers Should Know".
@darkbasic
Fortunately, I haven't come across any problems with this version of the Blink.
There's no Arch package named 'googleapiclient', but you can install it via pip:
[sudo pacman -S python2-pip; sudo pip2 install google-api-python-client]
!! However I don't advise you to touch these things manually. !!
It would probably do the trick, but there's also a GitHub version of Blink, provided by our fellow ogarcia. Try installing his PKGBUILD first before you irrevocably mess up your system :D
Package Details: blink-darcs 20160602-2
Dependencies (5)
- libvncserver (libvncserver-git)
- python2-gmpy2
- python2-pyqt (python2-pyqt4)
- python2-sipsimple
- darcs (make)
Required by (0)
Sources (0)
Latest Comments
promike commented on 2016-08-10 17:44
@darkbasic
darkbasic commented on 2016-08-10 15:43
$ blink
.
Traceback (most recent call last):
File "/usr/bin/blink", line 33, in <module>
from blink import Blink
File "/usr/lib/python2.7/site-packages/blink/__init__.py", line 41, in <module>
from blink.chatwindow import ChatWindow
File "/usr/lib/python2.7/site-packages/blink/chatwindow.py", line 37, in <module>
from blink.contacts import URIUtils
File "/usr/lib/python2.7/site-packages/blink/contacts.py", line 27, in <module>
from googleapiclient.discovery import build
ImportError: No module named googleapiclient.discovery
promike commented on 2016-06-02 10:18
@glitsj16
Good point!
Thank you for your help. I updated the PKGBUILD.
glitsj16 commented on 2016-06-01 22:31
Hi, thanks for offering this package. The PKGBUILD can drop some dependencies though.
python2-sipsimple depends on python2-application (which installs python2), python-otr, python2-eventlib and python2-msrplib (which takes care of installing python2-gnutls and twisted).
So to summarize: depends=('cython2' 'libvncserver' 'python2-cjson' 'python2-gmpy2' 'python2-pyqt' 'python2-sipsimple') | https://aur.archlinux.org/packages/blink-darcs/?comments=all | CC-MAIN-2016-36 | refinedweb | 296 | 59.9 |
Why We Chose Next.js
This might sound funny, but at Stackbit, we're big fans of Squarespace and friends. Of course, we're also all in on the cost, security, and speed benefits of the Jamstack, so we think you should use Stackbit, not other site builders! But hear us out: site builders are the future of web site development.
Components are the hidden magic behind site builders today, allowing content creators to experiment without worrying that they'll "break" a web site or make it "ugly."
After evaluating all our options, we felt like building on top of Next.js would strengthen our library, while improving the developer experience of using our components with our themes, and we'd love to give you a peek behind the curtain to share why.
Focus facilitates quality
At the time of researching this article, Jamstack.org listed 333 site generators (it's probably more by the time you're reading).
In our early days, we developed our own theme transpiler, Unibit, that let us write components once and transform them into themes for Jekyll, Hugo, Gatsby, and Next.js.
By removing this extra step and picking one templating technology, we knew we could focus on making rich components (like forms) that:
Might not have worked in every site generator
Are easy to customize without asking developers to learn a templating language we made up
Modern frameworks for modern functionality
Jekyll and 11ty will always have a special place in our hearts, thanks to their support of beginner-friendly templating languages like Liquid and Nunjucks. Don't worry -- you'll still see plenty of Eleventy on our blog, because it's an excellent teaching framework!
Although simpler site generators do components perfectly well, trying to develop a rich component library with one can be "death by a thousand papercuts" to productivity.
1. We like bundling JavaScript and CSS
Site generators that piggyback off of modern front-end JavaScript frameworks like React, Vue, or Svelte make it easy to ship front-end JavaScript responsible for enhancing a given component (like a form) to the web browser only when someone actually visits a web page that includes that component.
With "classic" static site generators, you often write (or use tooling to bundle) something along the lines of a massive main.js file that every visitor to every web page in your site has to download (and that a theme developer has to wade through to customize!).
We've also found that modern site generators like Next.js typically facilitate breaking up CSS into modules, too.
Styling can be extremely finicky to debug, so we're keen on keeping things clean.
2. JavaScript frameworks are handy
Speaking of writing client-side JavaScript, the more "interactive" you want a given component to be, the more of a nuisance it is to hand-compose vanilla JavaScript.
With React hooks, you can do in a few lines of code what might require pages of straight JavaScript code.
3. Frameworks include JavaScript itself
Liquid isn't nearly as complex of a programming language as JavaScript. You can't simply throw a little Ruby into a Jekyll Liquid template or a little JavaScript into an 11ty Nunjucks template any time you'd like.
On the other hand, vanilla JavaScript is always valid syntax in React's JSX templating language. You'd be amazed how many lines of code a little
.map().reduce() or destructuring can save when rendering content into components.
import becomes equally useful if you'd like to develop components that piggyback off each other (the same "card gallery" look and feel is often a great choice both for a staff directory and for a teaser about your latest 6 blog posts).
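For instance, here is a rough, made-up sketch (not Stackbit code) of how a couple of array methods and destructuring compress the "render content into components" step:

```javascript
// Hypothetical content entries, e.g. pulled from a headless CMS.
const posts = [
  { title: 'Hello Jamstack', tags: ['jamstack', 'intro'] },
  { title: 'Why Next.js', tags: ['nextjs'] },
];

// Destructure each entry and render one "card" per post...
const cards = posts.map(({ title, tags }) => `<li>${title} (${tags.join(', ')})</li>`);

// ...then reduce the cards into a single gallery fragment.
const gallery = `<ul>${cards.reduce((html, card) => html + card, '')}</ul>`;

console.log(gallery);
```

The same pattern reads almost identically inside a JSX template, since JSX expressions are plain JavaScript.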
The Case for Next.js
Sorry Vue and Svelte, you're great, but there are a lot of React developers in the world, including us! So that narrowed our choice down to writing components for Gatsby or writing components for Next.js, the two leading React site generators.
How did we pick Next.js between the two?
1. Fetching data seamlessly
Writing code that fetches data from a source and passes it into the properties available to a given page-rendering React template is a bit more straightforward in Next.js than in Gatsby.
As James Bedford points out, GraphQL is almost like handing your roommate a specific shopping list for dinner ... and sending them over to the dining room instead of the store, where you've already set out all of the ingredients on a serving tray.
GraphQL's original intent was to let client-side JavaScript fetch data sparingly, but it's often overkill when your data lives on the same filesystem that your build process runs on -- or when your headless CMS API has an efficient syntax of its own.
To us, adding GraphQL code between content fetching and component templating felt a little cluttered.
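As a rough sketch of what this looks like in Next.js (the file name and data below are invented; in a real project the data would come from the filesystem or a CMS SDK), the page exports an async getStaticProps and the component receives plain props, with no query layer in between:

```javascript
// pages/about.js (sketch). Data inlined so the example is runnable on its own.
const team = [
  { name: 'Ada', role: 'Engineering' },
  { name: 'Grace', role: 'Developer Experience' },
];

// Next.js calls this at build time and hands the result to the page component.
async function getStaticProps() {
  return { props: { team } };
}

// Stand-in for the page component: it just receives plain props.
function AboutPage({ team }) {
  return team.map(({ name, role }) => `${name}: ${role}`).join('\n');
}

// Simulate what Next.js does at build time: fetch props, then render.
getStaticProps().then(({ props }) => console.log(AboutPage(props)));
```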
2. Dynamic and catch-all routing
Although both support it now, Next.js released Dynamic Routing earlier than Gatsby. It lets you wrap part of the filename of a page-generating React template in brackets to mark it as responsible for building many URLs, each named according to a property that is passed to the template when it is called.
(Hint: this is the same idea as programmatically setting 11ty permalink values with pagination or directory data files, if you're feeling a little out of the loop!)
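A sketch of the contract (slugs and titles invented): a file named pages/posts/[slug].js declares the bracketed segment, getStaticPaths enumerates the URLs that single template builds, and getStaticProps receives each slug as a param:

```javascript
// pages/posts/[slug].js (sketch). Content inlined to keep it runnable.
const posts = {
  'hello-world': { title: 'Hello World' },
  'why-nextjs': { title: 'Why Next.js' },
};

// Tells Next.js which URLs this single template is responsible for.
async function getStaticPaths() {
  return {
    paths: Object.keys(posts).map((slug) => ({ params: { slug } })),
    fallback: false,
  };
}

// Called once per path, with the matching params.
async function getStaticProps({ params }) {
  return { props: { post: posts[params.slug] } };
}

// Simulate the build: one page per slug.
getStaticPaths().then(async ({ paths }) => {
  for (const { params } of paths) {
    const { props } = await getStaticProps({ params });
    console.log(`/posts/${params.slug}: ${props.post.title}`);
  }
});
```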
3. Server-side rendering
Although Gatsby also jumped aboard, Next.js let you choose Wordpress-style server-side rendering vs. Jekyll-style static site generation on a per-page basis earlier than Gatsby. (And Gatsby didn't offer server-side rendering at all when we had to make up our minds.)
When you have a site with thousands or millions of web pages, publishing new content or codebase updates into production can get slow, so letting some pages be served via SSR is nice.
4. Speed
Next.js only loads the CSS and JavaScript required for a specific page and its components. Consequently, we’ve observed faster page generation in development environments with Next.js over Gatsby.
Takeaways
After evaluating our options, we felt like we could deliver killer components in Next.js, and we're super excited to share them with you a little later this year. Stay tuned!
Fast conversion of numpy array to list of c4d.Vect
On 11/11/2015 at 05:50, xxxxxxxx wrote:
Hello all,
I am using numpy for array calculations on a c4d.pointobject. The manipulations within numpy are super fast, but it turns out that converting to and from c4d.Vectors is a real bottleneck. Since this is an essential step when using numpy, I was hoping that someone else has already found a good solution for it?
The fastest code I could come up with is attached below. Having a look at the execution times is interesting, because it shows that the internal getting and setting of vectors is much faster than the conversion to and from numpy of those vector objects.
c4d.GetAllPoints: 0.073354
list2np : 0.455179
np manipulation: 0.067916
np2list : 0.439967
c4d.SetAllPoints : 0.030023
Does anybody have suggestions as to further speed up this code? Especially the np2list function is critical, since it has to be called every frame (when used in a generator). I am open to exotic solutions such as cython, numba or whatever, as long as I can get it working on my machine.
Your help is really appreciated!
regards,
Hermen
code to be run from a script with a point object selected
import c4d
from c4d import Vector
import numpy as np
#import c4d2numpy as npc
import time

now = time.clock
Vec = Vector


def ipts(pts):
    # y & z are reversed because of the left-handed system in Cinema 4D
    for p in pts:
        yield p.x
        yield p.z
        yield p.y


def tprint(tl):
    t0 = [t[0] for t in tl]
    s0 = [t[1] for t in tl]
    print ' '
    for i, t in enumerate(t0):
        if i == 0:
            continue
        print s0[i], t - t0[i - 1]


def list2np(lst):
    A = np.fromiter(ipts(lst), dtype=np.float, count=3 * len(lst))
    return A.reshape((len(lst), 3))


def np2list(A):
    return map(Vec, A[:, 0], A[:, 2], A[:, 1])


def main():
    op.ResizeObject(1000000)
    t = []
    t.append((now(), 'start'))
    pts = op.GetAllPoints()
    t.append((now(), 'c4d.GetAllPoints:'))
    A = list2np(pts)
    t.append((now(), 'list2np:'))
    #print A.shape
    B = A + 10. * np.random.random(A.shape)
    t.append((now(), 'np manipulation:'))
    pts = np2list(B)
    #print len(pts)
    t.append((now(), 'np2list:'))
    op.SetAllPoints(pts)
    op.Message(c4d.MSG_UPDATE)
    t.append((now(), 'c4d.SetAllPoints:'))
    tprint(t)


if __name__ == '__main__':
    main()
On 11/11/2015 at 14:09, xxxxxxxx wrote:
mmm, quiet here…
After some further investigation I found the numpy function "np.frompyfunc", and it does what it says pretty fast. For those of you interested, see an example of the code below. It executes in about half the time of the previous np2list, at 0.288461 sec.
Meanwhile, I've played around with multiprocessing a bit, but the overhead for a function with a return value seems just too big; it only takes longer with more processes. So I am fairly happy with this latest result; it's still about three times faster than an ordinary for loop.
But if anybody has a better suggestion, I would love to hear it!
regards,
Hermen
code to replace np2list
vec_array = np.frompyfunc(Vector, 3, 1)


def np2list_2(A):
    """
    y & z are reversed because of the left-handed
    system in Cinema 4D.
    """
    pts = (vec_array(A[:, 0], A[:, 2], A[:, 1])).tolist()
    return pts
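For anyone wanting to try the frompyfunc pattern without Cinema 4D installed, the same trick can be exercised with a stand-in class (Vec below is made up; it only mimics the c4d.Vector constructor):

```python
import numpy as np

class Vec(object):
    """Stand-in for c4d.Vector, just to exercise the pattern outside C4D."""
    __slots__ = ('x', 'y', 'z')

    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

# frompyfunc turns the 3-argument constructor into an elementwise ufunc.
vec_array = np.frompyfunc(Vec, 3, 1)

def np2list_2(A):
    # y & z swapped to mimic Cinema 4D's left-handed coordinate system
    return vec_array(A[:, 0], A[:, 2], A[:, 1]).tolist()

A = np.random.random((1000, 3))
pts = np2list_2(A)
print(len(pts), type(pts[0]).__name__)  # -> 1000 Vec
```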
On 12/11/2015 at 02:08, xxxxxxxx wrote:
Hello
not exactly about conversion (as the topic name says), but I made several tests wrapping the script body (including the numpy code) in PyCUDA or cythonizing it, and found it was faster. I tested with pyVoro (a Python extension for Voro++) and pyOpenVDB.
On 12/11/2015 at 04:56, xxxxxxxx wrote:
Hello Ilya,
What do you mean by 'not exactly for conversion'? It is meant to be working from within c4d.
PyVoro, PyCuda, PyOpenVDB sound a little too exciting for me. As far as I can see, they require building extra packages, and I am on Mac OS X, so it'd be different than on Windows.
Cython might be an option though, but I am having trouble typing stuff right. For example, do you know how to cdef/ctypedef the Vector object from c4d? Without it, I only got a marginal increase in performance. How much (%) did you get, btw?
regards,
On 12/11/2015 at 06:18, xxxxxxxx wrote:
Several colleagues and I had tasks in the scripting environment to speed up conversion of mesh/procedural data to numpy arrays and then to pyVoro/OpenVDB. Yes, I compiled these Python extensions. A few processing steps were also boosted in a CUDA/GPU compute environment or with slightly modified Cython.
I forgot to point out one thing: Hermen, try creating a topic somewhere else, on a forum or social network in your region. From my experience, and from searching the support forum, the team does not support third-party plugins and extensions.
For example, french4d has a topic about fast algorithms in Python.
I use a network between several math/programming Russian universities.
On 12/11/2015 at 15:16, xxxxxxxx wrote:
OK, I've got cython running as well, but the performance increase is very marginal compared to the function I posted previously. That's good enough for me, I guess, and if I need more speed, I'll check elsewhere. Thanks for your suggestions! | https://plugincafe.maxon.net/topic/9200/12223_fast-conversion-of-numpy-array-to-list-of-c4dvect | CC-MAIN-2020-40 | refinedweb | 888 | 66.44 |
Opened 5 years ago
Closed 5 years ago
Last modified 5 years ago
#14991 closed (invalid)
SQL injection in quote_name()
Description
def quote_name(self, name):
    if name.startswith("`") and name.endswith("`"):
        return name  # Quoting once is enough.
    return "`%s`" % name
name = '`column_name!`; DROP database `dbname!`'  # taken from a request, for sorting a table; inserted into extra() or order_by()
sql = 'SELECT * FROM ... ORDER BY `column_name!`; DROP database `dbname!`'
Change History (2)
comment:1 Changed 5 years ago by russellm
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Resolution set to invalid
- Status changed from new to closed
comment:2 Changed 5 years ago by EvoTech
Ok, Thanks.
But I think, the better way is:
def quote_name(self, name):
    if name.startswith("`") and name.endswith("`"):
        name = name.strip('`')
    return "`%s`" % name.replace('`', '``')
This code does not depend on other checks.
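A quick sanity check of the doubled-backtick idea, runnable as plain Python outside Django (the function below is a simplified stand-alone copy, not the actual backend method):

```python
def quote_name(name):
    # Simplified version of the proposed fix: strip one layer of
    # surrounding backticks, then escape any backticks left inside.
    if name.startswith("`") and name.endswith("`"):
        name = name.strip("`")
    return "`%s`" % name.replace("`", "``")

# A hostile "column name" can no longer break out of the quoted identifier:
hostile = "`column_name`; DROP database `dbname`"
print(quote_name(hostile))  # -> `column_name``; DROP database ``dbname`
```

Inside MySQL, a doubled backtick within a backtick-quoted identifier is a literal backtick, so the whole string stays a single (nonexistent) column name instead of terminating the statement.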
Note: See TracTickets for help on using tickets.
First off, for future reference: if you want to report a security issue, PLEASE use our security issue reporting procedure. Reporting potential security issues into the wild is very poor form.
Secondly, as far as I can make out from the information you've provided, this isn't a plausible injection attack.
Yes, quote_name is weak and easily exploited. Which would be a problem if it were used anywhere to sanitize user-provided input. Which it isn't. At least, not anywhere that I can find, provided you follow the advice of the documentation.
order_by() only accepts existing columns (annotated with a very small number of allowed extra bits like '-' and '?') for sorting. You can't insert arbitrary content, even if you *were* using user-provided data to drive that clause -- which it itself a pretty unlikely set of circumstances. The error is hidden under some circumstances by #14766, but if you inspect the SQL log, or you attempt to print the underlying query, you'll see the error that is generated:
extra() is a slightly different beast, but as long as you use it right (that is to say, as documented), you're safe. If, for example, you allow user-provided input to be used in the "where=" argument, you can construct an injection attack:
...but if you use params, the user-provided data is correctly escaped by the database cursor. Our docs tell you to do this, too. They could say it a little more emphatically, perhaps, but it is there in black and white, with an example. Given that extra() is one step away from raw SQL, there really isn't anything else we can do here. Raw SQL is, by definition, exposing the bare metal, so we rely on people using it the right way. If you hold a sword by the blade, you're going to cut your hand, no matter how many warnings and safety catches are on the scabbard.
In summary: I (and several other members of the core team) can't find an in-principle attack here, or in any code related to what you describe. The examples you have provided are either incomplete or incorrect. Closing this ticket as invalid.
If you think we have missed something, and you can present an actual in-principle or in-practice attack (including a complete example, not just vague handwaving at the order_by clause), we're happy to reopen this. But, repeating the first point -- if you even *think* you have found a security issue, *PLEASE* report it to security@…, not on Trac. | https://code.djangoproject.com/ticket/14991 | CC-MAIN-2015-35 | refinedweb | 580 | 65.22 |
[Date Index]
[Thread Index]
[Author Index]
Re: Evaluation of a whole notebook from another one
Am 04.08.2012 12:26, schrieb Dr. Robert Kragler:
>
>
Which version are you using? Since Version 8 there is exactly that function:
?NotebookEvaluate
NotebookEvaluate[notebook] evaluates all the evaluatable cells in
notebook. >>
On the other hand, I'm almost sure that it's not the best possible way
to have your "library" in the form of a notebook to be evaluated. Maybe
you want to have a look at how to organize your code into package-files
with proper use of Contexts (or namespaces), see e.g. the documentation
of BeginPackage and EndPackage...
hth,
albert | http://forums.wolfram.com/mathgroup/archive/2012/Aug/msg00074.html | CC-MAIN-2014-35 | refinedweb | 111 | 58.38 |
Ask Slashdot: What Can You Do About SOPA and PIPA?
Soulskill posted more than 2 years ago | from the grab-some-tea-and-head-to-boston dept.
.
Note: This will be the last story we post today until 6pm EST in protest of SOPA.Why is it bad? (5, Insightful)
betterunixthanunix (980855) | more than 2 years ago | (#38737336)
Re:Spread the word (5, Insightful)
Tsingi (870990) | more than 2 years ago | (#38737386) (-1, Troll)
TechGZ (2555776) | more than 2 years ago | (#38737408)
Re:Spread the word (1, Insightful)
Anonymous Coward | more than 2 years ago | (#38737544)
New shill account?
You are becoming pretty transparent, maybe coming up with an original name would help.
If Google didn't care, they wouldn't put the link there. I suppose it could have been bigger, but it's not like there is much else on a Google page.
Re:Spread the word (5, Insightful)
Anonymous Coward | more than 2 years ago | (#38737558)
???? (5, Insightful)
TechGZ (2555776) | more than 2 years ago | (#38737690)
Re:Spread the word (4, Interesting)
PT_1 (2425848) | more than 2 years ago | (#38737964)
Re:Spread the word (0)
Anonymous Coward | more than 2 years ago | (#38737622)
The blacked-out Google logo wasn't enough?
Re:Spread the word (5, Interesting)
The Moof (859402) | more than 2 years ago | (#38737628)
Re:Spread the word (1)
TechGZ (2555776) | more than 2 years ago | (#38737770)
Re:Spread the word (2)
modernzombie (1496981) | more than 2 years ago | (#38737912)
Re:Spread the word (1)
Denogh (2024280) | more than 2 years ago | (#38737914)
Re:Spread the word (-1)
Anonymous Coward | more than 2 years ago | (#38737700)
But not here on /.
My ears are already starting to bleed from all the anti-SOPA/PIPA blowhorns.
Make a campaign contribution (3, Insightful)
elrous0 (869638) | more than 2 years ago | (#38737338)
Include a big campaign contribution with your letter if you want to make sure it's not just thrown in the trash or just added to the pile.
Re:Make a campaign contribution (4, Insightful)
JoeMerchant (803320) | more than 2 years ago | (#38737396)
Re:Make a campaign contribution (4, Insightful)
shentino (1139071) | more than 2 years ago | (#38737870)
I bet most of the reps taking point on cramming this down our throats already have their campaign contributions safely tucked away in their bank accounts, along with cushy jobs waiting for them in the private sector.
Re:Make a campaign contribution (1)
berashith (222128) | more than 2 years ago | (#38737596)
That wouldnt even help me with my senator. I live in one of the few zip codes in Georgia that has both black people and gay people in it. I am sure that just postmarking a letter gets it binned instantly.
Not Blacked Out? (3, Interesting)
Anonymous Coward | more than 2 years ago | (#38737342)
Why is slashdot ignoring the blackout?
With so many links to questionable content, this illegal news source seems like a hive of crime.
Re:Not Blacked Out? (4, Informative)
JoeMerchant (803320) | more than 2 years ago | (#38737412)
Re:Not Blacked Out? (5, Funny)
Tsingi (870990) | more than 2 years ago | (#38737638)
Re:Not Blacked Out? (5, Informative)
Thiez (1281866) | more than 2 years ago | (#38737456)
Re:Not Blacked Out? (0)
0racle (667029) | more than 2 years ago | (#38737546)
Re:Not Blacked Out? (4, Insightful)
JWSmythe (446288) | more than 2 years ago | (#38737746)
Re:Not Blacked Out? (4, Informative)
geminidomino (614729) | more than 2 years ago | (#38737754)
Not care (-1)
Anonymous Coward | more than 2 years ago | (#38737358)
Let the get rid of ICANN all they want.
When they have made the internet unusable for Joe Average I will finally be able to charge for warez again.
One other thing... (5, Insightful)
jholyhead (2505574) | more than 2 years ago | (#38737360)... (2, Insightful)
WankersRevenge (452399) | more than 2 years ago | (#38737810)
Re:One other thing... (1)
jholyhead (2505574) | more than 2 years ago | (#38737848)
Oblig XKCD (5, Insightful)
Anonymous Coward | more than 2 years ago | (#38737378) [xkcd.com]
Stop SOPA and PIPA now!!!
Re:Oblig XKCD (-1)
roman_mir (125474) | more than 2 years ago | (#38737648)
Here is a policy statement from XKCD [xkcd.com]
Can we print xkcd in our magazine/newspaper/other publication? I am not sure if this is reliance on copyright itself or just assuming moral rights over the original work, but I don't think one can be pro-copyright and anti-SOPA, anti-PIPA and not be a hypocrite at the same time [slashdot.org].
Re:Oblig XKCD (5, Insightful)
jholyhead (2505574) | more than 2 years ago | (#38737798)
Re:Oblig XKCD (4, Insightful)
roman_mir (125474) | more than 2 years ago | (#38737944)
I totally believe that if you produce something you should be paid for your efforts
- and you should follow the link in my comment and then leave your comments there, where I explained why this is an untenable position.
Generating content is not different from any other business, and since other businesses that do not necessarily generate content do not get this preferential treatment by government (nor should they), neither should content generating businesses get this preferential treatment.
Saying that you must have government standing on your side for some reason and protecting your business model is ridiculous on its face, when no other businesses (except those who own the government, so big banks, big insurance, bigt pharma, big energy, big food, military and such) get the same treatment.
So a restaurant owner does not get bailed out, nor does car mechanic, nor should they. Nobody should be in a position to use government to subsidise their business model.
As to getting paid - you only get paid for your businesses by willing participants, and just as people may not go to your new restaurant, no matter how much of your life's savings or other people's savings you put into that business, same people may not buy your stuff from you.
As to others using your material freely (as in beer) and putting it on torrent or even selling it at lower price - set the right price. I have an example there, Louis C.K., who is not going after torrents and other sites sharing his show, but he priced it properly and the revenue is over 1 million USD and counting.
Nobody should be in a position to subsidise their businesses and risks that they take when they choose a business model with government money and power.
Why not slashdot? (4, Insightful)
xtracto (837672) | more than 2 years ago | (#38737382)
There was a time when Slashdot was at the forefront of such kind of fights against "the man" (e.g., Sony Rootkit fiasco).
Re:Why not slashdot? (5, Insightful)
jholyhead (2505574) | more than 2 years ago | (#38737398)
Re:Why not slashdot? (0)
Anonymous Coward | more than 2 years ago | (#38737430)
Couldn't they have blocked idle then?
I thought this too (1)
Mateo_LeFou (859634) | more than 2 years ago | (#38737448)
But I think a dark /. would be a good solidarity statement anyway. Geeks who weren't planning to do anything special in protest today might put some extra effort in.
Re:I thought this too (0, Flamebait)
Ash-Fox (726320) | more than 2 years ago | (#38737922)
Indeed. I for one would put extra effort into dispelling the myths people keep coming up with. Like how the proposed DNS filtering system breaks DNSSEC, despite the fact DNS resolvers would use the response code REFUSED (see RFC 1035) for A/AAAA/CNAME related queries which would tell the DNSSEC client that the resolver refused to resolve it's request, not fake it. This doesn't break the DNSSEC zone chains and doesn't prevent DNSSEC validation regardless.
Or how people completely misrepresent the purpose of the DNS filter, which is to stop copyright infringing websites from posing as legitimate sites and charging customers for advertising time or trick them into paying for a product that isn't actually genuine.
It is not intended to be a magic stop all for all piracy like people who are trying to stop PIPA and SOPA are claiming. It's meant to make the line between genuine and non-genuine content much easier to see.
Not to mention these anti PIPA and SOPA advocates conveniently forget to note that a lot of the take down issues are more of a problem when it comes to the already existing DMCA because there is ZERO validation by a judge.
The only additional area (talking about the scope of take downs) that the DMCA does not particularly cover, and which SOPA and PIPA are intended to deal with, is a loophole that sites like The Pirate Bay exploit: they do not handle copyright-infringing content directly, and by doing so they sit in a loophole of US law where the domain cannot be closed despite the fact that there is 100%, absolutely clear intent in their assistance of copyright infringement.
Now, there are definitely issues with SOPA and PIPA; mainly, the lack of an evidence requirement before a judge should be changed (although I expect that many judges will want to see some evidence regardless; they didn't get into their position by screwing people, despite what people think). Yes, there will be abuses; all laws get abused at some point or another. But when you compare the abuses to current existing laws, there isn't actually that much more it could do.
And before someone makes the argument that they can make a website poof, if you actually read the legislation, that is a last measure when there has been no cooperation with the people involved in the matter. The decisions can be challenged in court just fine, there is nothing that says you cannot do that, just like with the DMCA.
It pisses me off that so many people get their information from third-party sources and don't even bother verifying the information. You're on the Internet; you can get access to the original legislation as well as many related documents. Why are people advocating something that is blatantly lying about many things? Didn't anyone learn in school to verify facts at all?
People are lying worse than the politicians right now. I am appalled by so many people who represent themselves as someone knowledgeable in the tech industry.
FYI: I am against SOPA and PIPA as I feel that the legislation should require more evidence on the copyright holder before they can get a judge to issue a take down request, but a lot of the other crap people are talking about is just complete utter bullshit to me.
I don't want to associate with the anti SOPA and PIPA crowd.
What you can do (1)
Anonymous Coward | more than 2 years ago | (#38737392)
Disabling Javascript on en.wikipedia.org is a good start.
Re:What you can do (2, Interesting)
crymeph0 (682581) | more than 2 years ago | (#38737822)
bypassing SOPA blockades: piracy? (5, Interesting)
Speare (84249) | more than 2 years ago | (#38737400).
Re:bypassing SOPA blockades: piracy? (0)
Anonymous Coward | more than 2 years ago | (#38737674)
Why would you use the Google cache when you can just follow Wikimedia's own instructions [wikimedia.org] on how to get past the blockade?
Re:bypassing SOPA blockades: piracy? (1)
cornicefire (610241) | more than 2 years ago | (#38737814)
Re:bypassing SOPA blockades: piracy? (1)
xrtvxrt (2555796) | more than 2 years ago | (#38737950)
I'm not in America! (4, Informative)
duguk (589689) | more than 2 years ago | (#38737410)
I'd really like to help, since if this passes it's only a matter of time before it's in the UK too.
What can we non-US citizens do to help?
Re:I'm not in America! (5, Funny)
Anonymous Coward | more than 2 years ago | (#38737608)
Liberate us. The US does have oil, after all.
Re:I'm not in America! (2, Informative)
Anonymous Coward | more than 2 years ago | (#38737624)
Try petitioning the State Department: [americancensorship.org]
Re:I'm not in America! (1)
Anonymous Coward | more than 2 years ago | (#38737634)
Re:I'm not in America! (5, Informative)
Sharkus (677553) | more than 2 years ago | (#38737640).
Re:I'm not in America! (1)
coogan (850562) | more than 2 years ago | (#38737974)
Why is slashdot not participating? (5, Insightful)
sl4shd0rk (755837) | more than 2 years ago | (#38737442)
I would have expected the tech-savvy slashdot to do something similar to what google and reddit have done in protest. Why not?
Re:Why is slashdot not participating? (-1)
Anonymous Coward | more than 2 years ago | (#38737532)
It would cut into their ad revenue.
Seriously, if you think Slashdot even gives a fuck you're seriously mistaken. Only reason it's even being covered here is because Reddit was the site where the blackout idea originated and the BoingBoing reject editors don't want to be left in the dust (you'll note, though, that almost every SOPA blackout post on Slashdot is very careful not to mention Reddit in any way).
Re:Why is slashdot not participating? (-1)
Anonymous Coward | more than 2 years ago | (#38737618)
Nobody visits Slashdot so it wouldn't matter.
Re:Why is slashdot not participating? (3, Insightful)
airfoobar (1853132) | more than 2 years ago | (#38737776)
Re:Why is slashdot not participating? (4, Insightful)
Anonymous Coward | more than 2 years ago | (#38737852)
Protesting to the informed would serve no purpose whatsoever.
It's the general sheeple that need to be informed.
I get the concerns (4, Interesting)
thepainguy (1436453) | more than 2 years ago | (#38737452) (4, Interesting)
Kidbro (80868) | more than 2 years ago | (#38737560)
To be honest, I don't know how many sales this is costing me, but not knowing isn't a particularly comfortable feeling.
Do you know how many sales it is giving you [youtube.com] ?
Re:I get the concerns (4, Interesting)
thepainguy (1436453) | more than 2 years ago | (#38737760):I get the concerns (5, Insightful)
Spad (470073) | more than 2 years ago | (#38737796) (2)
CrimsonAvenger (580665) | more than 2 years ago | (#38737872)
Alas, for all the hype about SOPA/PIPA, it won't be that easy.
Any action under SOPA/PIPA requires a Court Order. Which you won't get by pressing a button and filling out a form.
Plus the Court Order has to be properly delivered to whoever it is. Not by you, mind, but getting an officer of the Court to go to East Bumfuckistan to deliver a court order and get a signed receipt for same is going to be interesting.
And, of course, the Court Order can be challenged (yes, there's a provision for that in both bills), which would pretty much hold it in abeyance until the Court considered the case.
After the wife came home last night bitching about the Bills in question, I went to the trouble of reading the actual texts of the SOPA and PIPA. They're remarkably alike, really, and neither is the bogeyman they''re being made out to be. Requirements for Court actions for pretty much everything means that they're less of a nuisance than the DMCA, when all is said and sifted.
Oh, and they have a clause about prior restraint of the First Amendment - so no, you won't have to worry that you might be linking to a site that does that nasty ol' piracy thing.
Actually, you won't have to worry about it in any case, unless your site is based outside the USA, and you're not a US resident. In either of those cases, current law allows legal relief, and SOPA/PIPA don't deal with you at all....
Just tried to sign the form on the Reddit page... (3, Interesting)
Tomsk70 (984457) | more than 2 years ago | (#38737462)
...and got a response saying that the link did not complete because the site was down in protest over SOPA.
Isn't that shooting yourself in the foot a bit?
Re:Just tried to sign the form on the Reddit page. (-1, Offtopic)
Tomsk70 (984457) | more than 2 years ago | (#38737774)
Aand the usual score awarded by Slashdot.
I got a score of one years ago for warning that there would be a browser war v2.0 with the rise of FF (and here we are, with three).
I got a score of one for pointing out that 'Internet Standards' wouldn't mean much while the above was true. And here we are, with a comment-editor on this very site that's slower than ever (under the more-compatible-than-all-the-others-IE9).
I got a score of one when I pointed out that Apple were repeating their lock-down methodology with their devices (so don't bitch about Win8/ ARM now)
I got a score of one for pointing out to Linux-desktop fanboys that if their penetration of the desktop was really that massive, then why weren't there more contract jobs on offer supporting just that.
I could give more examples, but you get the idea - want a fanboy response? Comment on slashdot. Want a real discussion? Go on Reddit
:-)
Congressional Dead Enders (5, Interesting)
Sponge Bath (413667) | more than 2 years ago | (#38737476) (5, Insightful)
SirGarlon (845873) | more than 2 years ago | (#38737780)
He's assuming his colleagues will read it before voting on it. He should know better.
It hit me this morning (5, Insightful)
hackstraw (262471) | more than 2 years ago | (#38737486) (4, Insightful)
SirGarlon (845873) | more than 2 years ago | (#38737900)
In a democratic country.... (0)
Anonymous Coward | more than 2 years ago | (#38737498)
We can do nothing. The US is a wholly owned and operated subsidiary of Rich People Inc (tm). If not SOPA and PIPA then some other stupid bill later on or, more likely, a whole bunch of little amendments slipped in here and there that do the same thing. We are screwed. I would suggest you invest heavily in Bros. Jack & Jim, their buddy Weiser and great big baggie of Mexico's finest, crawl into a dark, dark hole and weep for what could have been. It is all over but the shouting.
Re:In a democratic country.... (3, Insightful)
SirGarlon (845873) | more than 2 years ago | (#38737918)
I know this is more or less a troll... (5, Insightful)
rsilvergun (571051) | more than 2 years ago | (#38737952)?
Craigslist? (0)
Anonymous Coward | more than 2 years ago | (#38737502)
Is Craigslist the Ron Paul of anti-SOPA websites participating today?
Top 100 website. Top 50 even maybe.
Head in the sand (0)
Anonymous Coward | more than 2 years ago | (#38737506)
By defining it as "domestic," Wikileaks would then fall under the jurisdiction of U.S. laws.
That's not a matter of definition. The domain wikileaks.org is, de facto, under the jurisdiction of the United States of America, now. The
.org top level domain is assigned to and operated by an American company under the jurisdiction of American law. Country code top level domains exist so that every country can establish its own rules regarding their part of the domain namespace.
Abolish copyrights and patents. (4, Interesting)
roman_mir (125474) | more than 2 years ago | (#38737516) Constitution for speech and property rights and other liberties and NOT have government take over all these rights and eventually destroy your way of life, like Patriot Act and NDAA do.
There are no half truths here, only one truth - you can't use government force to diminish liberties and freedoms of individuals, otherwise all liberties and freedoms will eventually be diminished.
can we go back (0)
Anonymous Coward | more than 2 years ago | (#38737518)
Can we go back to the good old days of using IP numbers instead of text string that goes thru a DNS server.
I think it might be harder for governments to censor that.
There are two big things: contact your representat (2)
mapkinase (958129) | more than 2 years ago | (#38737548)
No. Instead of being reactive, we need to be active.
Reactive is to fight laws. Active is to change laws and constitution, so SOPAs won't be possible in the future.
Businesses should not be taken down, harmed, punished, etc other than by court decision.
One does not have a right to go to police and shut down the business without court order. It started long time ago when health inspectors were given a right to shut down businesses (remember Friends episode?) and people let it be in the same name of security and safety.
Re:There are two big things: contact your represen (0)
Anonymous Coward | more than 2 years ago | (#38737726)
I'm trying....Seems my Senator can't even keep his own website running....
Re:There are two big things: contact your represen (1)
CrimsonAvenger (580665) | more than 2 years ago | (#38737920)
Interesting that you should say that, since SOPA/PIPA require Court Orders to do anything.
Get People to Panic (5, Interesting)
Anonymous Coward | more than 2 years ago | (#38737584)]
Re:Get People to Panic (0)
Anonymous Coward | more than 2 years ago | (#38737962)
ISP's need to stop resolving DNS for all MPAA member sites. How ironic would that be?
What can you do? Simple. (4, Insightful)
Anonymous Coward | more than 2 years ago | (#38737588) (5, Interesting)
Anonymous Coward | more than 2 years ago | (#38737592)? (4, Interesting)
dasunt (249686) | more than 2 years ago | (#38737598)
I'm being serious. Make a super-PAC and use it in the next election season against people who introduce or push bills like SOPA and PIPA. Attack politicians where it hurts: Election year.
the copyright industry facilitates piracy (0)
Anonymous Coward | more than 2 years ago | (#38737632)
If the entertainment industry didn't make music, movies, games and so on, and put them under copyright, it would not be possible to infringe the copyright on them. Therefore since piracy is ONLY possible because of their actions, they are facilitating that piracy.
Bang 'em in jail!
Ars story on how to protest (1)
jbrodkin (1054964) | more than 2 years ago | (#38737662)
WRITE your Congressman (5, Informative)
Port1080 (515567) | more than 2 years ago | (#38737668).
Eternal vigilance. (1)
niktemadur (793971) | more than 2 years ago | (#38737680)
The MPAA and SOPA-sponsor Lamar Smith (R-TX) are trying to brush off the protests as a stunt, and Smith has announced markup for the bill will resume in February.
Cynical corporate sluts in positions of political power are tenacious, to say the least.
They don't care (2)
trolman (648780) | more than 2 years ago | (#38737692)
hYUO FAIL IT (-1)
Anonymous Coward | more than 2 years ago | (#38737724)
Thank You Wikipedia! (1)
na1led (1030470) | more than 2 years ago | (#38737736)
Moratorium and Research, or War (4, Insightful)
Bob9113 (14996) | more than 2 years ago | (#38737742).
Win the War on Language (3, Insightful)
MxTxL (307166) | more than 2 years ago | (#38737758) (4, Insightful)
russotto (537200) | more than 2 years ago | (#38737866).
Re:Win the War on Language (1)
ch-chuck (9622) | more than 2 years ago | (#38737968)
Yea, it's kinda like in the abortion debate where the sides changed the language from the negatives 'anti-abortion' and 'baby killers' to the affirmatives 'pro-life' and 'pro-choice'.
Too bad the pirate sites aren't going along (1)
cornicefire (610241) | more than 2 years ago | (#38737782)
Anyone else feel like this is the end? (1)
Anonymous Coward | more than 2 years ago | (#38737804)
Or at least the beginning of the end?
Even if SOPA/PIPA are stopped, the next few bills will do essentially the same thing without being so obvious about it; a freedom lost here, a restriction applied there. If you try to boil a crab alive, it will protest and attempt to escape. But if you turn the heat up on the crab gradually, it will boil without ever realizing its peril.
I don't have any faith in the ability of the little guy anymore; if the corporations want it, they will get it, and it's only a matter of time. We have seen this time and time again.
Please tell me I'm wrong, and this nightmare will not come to be. I don't want my Internet broken, but I don't have billions of dollars to give to politicians to make them listen to me. What can I do against the likes of multinational companies that have more rights and power than I can ever hope to have?
Nothing you can do (3, Interesting)
alexo (9335) | more than 2 years ago | (#38737838):Nothing you can do (2)
ErikZ (55491) | more than 2 years ago | (#38737896)
Nonsense. The best solution is to limit government power so they can't do this.
Otherwise you'll be kicking out Representatives every year.
Cost them an election (5, Insightful)
nbauman (624611) | more than 2 years ago | (#38737874).
Go And Vote (0)
Anonymous Coward | more than 2 years ago | (#38737886)
hahahaha... you will all die of cancer because you are trolled so easily.. even by the government
Dissapointed with Google (0)
Anonymous Coward | more than 2 years ago | (#38737940)
I would have liked to have seen Google take a bigger blackout step. C'mon, Changing their logo? That's the same thing they do every third day for so-and-so's birthday. Doesn't web censorship merit a larger display of opposition?
Implementing it in stages (1)
gmuslera (3436) | more than 2 years ago | (#38737946)
As it basically means blocking all the search engines and anything with user interactions, implement it in 2 stages: first block for the families, known people, IP ranges, etc to anything related to the companies/corporations/politicians behind SOPA/PIPA (or that supports them) and later (anything between 10 years and 10 millenium later) to the rest of the internet. If you support it, well you can taste what it really mean before everyone else, and your family/employees/etc could give some helpful input giving some perspective to them
It could be done from the top or from the bottom, just removing from search engines results, and any kind social sites any reference to the politicians, political parties, companies, musical records and so on (hey, could eventually violate some copyright, better be safe than sorry) could make them have a hint on what would be the world with those laws they are pursuing.
A beautiful mind (0)
Anonymous Coward | more than 2 years ago | (#38737954)
I'm sure that most of the supporters in the Congress made their decision based on the fact that M. Bachmann is against SOPA. | http://beta.slashdot.org/story/163402 | CC-MAIN-2014-35 | refinedweb | 4,402 | 68.7 |
Core Data is a framework I really enjoy working with. Even though Core Data isn't perfect, it's great to see that Apple continues to invest in it. This year, for example, Apple added the ability to batch delete records. In the previous article, we discussed batch updates. The idea underlying batch deletes is very similar, as you'll learn in this tutorial.
1. The Problem
If a Core Data application needs to remove a large number of records, it's faced with a problem. There's no conceptual need to load a record into memory just to delete it, but that's simply how Core Data works: records are fetched as managed objects and deleted one at a time. As we discussed in the previous article, this has a number of downsides. Before the introduction of batch updates, there was no proper solution for updating a large number of records. Before iOS 9 and OS X El Capitan, the same applied to batch deletes.
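To make the problem concrete, this is roughly what a mass delete looked like before batch deletes — every matching record is fetched into memory as a managed object and deleted one by one. This is a sketch using the Item entity from this series, not code from the sample project:

```swift
// Fetch every record of the Item entity that is marked as done
let fetchRequest = NSFetchRequest(entityName: "Item")
fetchRequest.predicate = NSPredicate(format: "done == 1")

do {
    // Every matching record is loaded into memory as a managed object
    if let items = try managedObjectContext.executeFetchRequest(fetchRequest) as? [NSManagedObject] {
        for item in items {
            // Mark each managed object as deleted in the context
            managedObjectContext.deleteObject(item)
        }
    }

    // Push the deletions to the persistent store
    try managedObjectContext.save()
} catch {
    let deleteError = error as NSError
    print("\(deleteError), \(deleteError.userInfo)")
}
```

For thousands of records, the fetch alone can be slow and memory-hungry, which is exactly the cost a batch delete request avoids.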
2. The Solution
While the NSBatchUpdateRequest class was introduced in iOS 8 and OS X Yosemite, the NSBatchDeleteRequest class was added only recently, alongside the release of iOS 9 and OS X El Capitan. Like its cousin, NSBatchUpdateRequest, an NSBatchDeleteRequest instance operates directly on one or more persistent stores.
Unfortunately, this means that batch deletes suffer from the same limitations batch updates do. Because a batch delete request directly affects a persistent store, the managed object context is ignorant of the consequences of a batch delete request. This also means that no validations are performed and no notifications are posted when the underlying data of a managed object changes as a result of a batch delete request. Despite these limitations, the delete rules for relationships are applied by Core Data.
3. How Does It Work?
In the previous tutorial, we added a feature to mark every to-do item as done. Let's revisit that application and add the ability to delete every to-do item that is marked as done.
Step 1: Project Setup
Download or clone the project from GitHub and open it in Xcode 7. Make sure the deployment target of the project is set to iOS 9 or higher so that the NSBatchDeleteRequest class is available.
Step 2: Create Bar Button Item
Open ViewController.swift and declare a property deleteAllButton of type UIBarButtonItem. You can delete the checkAllButton property since we won't be needing it in this tutorial.
import UIKit
import CoreData

class ViewController: UIViewController, UITableViewDataSource, UITableViewDelegate, NSFetchedResultsControllerDelegate {

    let ReuseIdentifierToDoCell = "ToDoCell"

    @IBOutlet weak var tableView: UITableView!

    var managedObjectContext: NSManagedObjectContext!

    var deleteAllButton: UIBarButtonItem!

    ...

}
Initialize the bar button item in the viewDidLoad() method of the ViewController class and set it as the left bar button item of the navigation item.
// Initialize Delete All Button
deleteAllButton = UIBarButtonItem(title: "Delete All", style: .Plain, target: self, action: "deleteAll:")

// Configure Navigation Item
navigationItem.leftBarButtonItem = deleteAllButton
Step 3: Implement deleteAll(_:) Method
Using the NSBatchDeleteRequest class isn't difficult, but we need to take care of a few issues that are inherent to directly operating on a persistent store.
func deleteAll(sender: UIBarButtonItem) {
    // Create Fetch Request
    let fetchRequest = NSFetchRequest(entityName: "Item")

    // Configure Fetch Request
    fetchRequest.predicate = NSPredicate(format: "done == 1")

    // Initialize Batch Delete Request
    let batchDeleteRequest = NSBatchDeleteRequest(fetchRequest: fetchRequest)

    // Configure Batch Delete Request
    batchDeleteRequest.resultType = .ResultTypeCount

    do {
        // Execute Batch Request
        let batchDeleteResult = try managedObjectContext.executeRequest(batchDeleteRequest) as! NSBatchDeleteResult

        print("The batch delete request has deleted \(batchDeleteResult.result!) records.")
    } catch {
        let updateError = error as NSError

        print("\(updateError), \(updateError.userInfo)")
    }
}
Create Fetch Request
An NSBatchDeleteRequest object is initialized with an NSFetchRequest object. It's this fetch request that determines which records will be deleted from the persistent store(s). In deleteAll(_:), we create a fetch request for the Item entity. We set the fetch request's predicate property to make sure we only delete Item records that are marked as done.
// Create Fetch Request
let fetchRequest = NSFetchRequest(entityName: "Item")

// Configure Fetch Request
fetchRequest.predicate = NSPredicate(format: "done == 1")
Because the fetch request determines which records will be deleted, we have all the power of the NSFetchRequest class at our disposal, including setting a limit on the number of records, using sort descriptors, and specifying an offset for the fetch request.
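For example, nothing stops us from narrowing the batch down further with the same fetch request. The values below are illustrative, but the createdAt attribute does exist in this project's model:

```swift
// Only delete the 100 oldest completed items
fetchRequest.predicate = NSPredicate(format: "done == 1")
fetchRequest.sortDescriptors = [NSSortDescriptor(key: "createdAt", ascending: true)]
fetchRequest.fetchLimit = 100
```

This can be a handy way to trim old records in chunks instead of deleting everything in one go.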
Create Batch Request
As I mentioned earlier, the batch delete request is initialized with an NSFetchRequest instance. Because the NSBatchDeleteRequest class is an NSPersistentStoreRequest subclass, we can set the request's resultType property to specify what type of result we're interested in.
// Initialize Batch Delete Request
let batchDeleteRequest = NSBatchDeleteRequest(fetchRequest: fetchRequest)

// Configure Batch Delete Request
batchDeleteRequest.resultType = .ResultTypeCount
The resultType property of an NSBatchDeleteRequest instance is of type NSBatchDeleteRequestResultType. The NSBatchDeleteRequestResultType enum defines three member variables:
ResultTypeStatusOnly: This tells us whether the batch delete request was successful or unsuccessful.
ResultTypeObjectIDs: This gives us an array of the NSManagedObjectID instances that correspond with the records that were deleted by the batch delete request.
ResultTypeCount: By setting the request's resultType property to ResultTypeCount, we are given the number of records that were affected (deleted) by the batch delete request.
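ResultTypeObjectIDs deserves a special mention. Instead of resetting the managed object context, as we do later in this tutorial, the returned object IDs can be merged into any contexts that need to know about the deletions. Here is a sketch of that pattern, using the mergeChangesFromRemoteContextSave(_:intoContexts:) class method that was also introduced in iOS 9 and OS X El Capitan:

```swift
// Ask the batch delete request for the object IDs of the deleted records
batchDeleteRequest.resultType = .ResultTypeObjectIDs

do {
    // Execute Batch Request
    let batchDeleteResult = try managedObjectContext.executeRequest(batchDeleteRequest) as! NSBatchDeleteResult

    if let objectIDs = batchDeleteResult.result as? [NSManagedObjectID] {
        // Merge the deletions into the managed object context
        let changes = [NSDeletedObjectsKey: objectIDs]
        NSManagedObjectContext.mergeChangesFromRemoteContextSave(changes, intoContexts: [managedObjectContext])
    }
} catch {
    let updateError = error as NSError
    print("\(updateError), \(updateError.userInfo)")
}
```

This keeps the rest of the context intact, which is gentler than a full reset. For this tutorial we stick with ResultTypeCount to keep things simple.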
Execute Batch Delete Request
You may recall from the previous tutorial that executeRequest(_:) is a throwing method. This means that we need to wrap the method call in a do-catch statement. The executeRequest(_:) method returns an NSPersistentStoreResult object. Because we're dealing with a batch delete request, we cast the result to an NSBatchDeleteResult object. The result is printed to the console.
do {
    // Execute Batch Request
    let batchDeleteResult = try managedObjectContext.executeRequest(batchDeleteRequest) as! NSBatchDeleteResult

    print("The batch delete request has deleted \(batchDeleteResult.result!) records.")
} catch {
    let updateError = error as NSError

    print("\(updateError), \(updateError.userInfo)")
}
If you were to run the application, populate it with a few items, and tap the Delete All button, the user interface wouldn't be updated. I can assure you that the batch delete request did its work though. Remember that the managed object context is not notified in any way of the consequences of the batch delete request. Obviously, that's something we need to fix.
Updating the Managed Object Context
In the previous tutorial, we worked with the NSBatchUpdateRequest class. We updated the managed object context by refreshing the objects in the managed object context that were affected by the batch update request.
We can't use the same technique for the batch delete request, because some objects are no longer represented by a record in the persistent store. We need to take drastic measures as you can see below. We call reset() on the managed object context, which means that the managed object context starts with a clean slate.
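For reference, the updated do-catch block in deleteAll(_:) ends up looking something like this. This is a sketch: fetchedResultsController refers to the fetched results controller the project already uses, and the exact property name may differ in your copy:

```swift
do {
    // Execute Batch Request
    let batchDeleteResult = try managedObjectContext.executeRequest(batchDeleteRequest) as! NSBatchDeleteResult

    print("The batch delete request has deleted \(batchDeleteResult.result!) records.")

    // Reset Managed Object Context
    managedObjectContext.reset()

    // Perform Fetch
    try fetchedResultsController.performFetch()

    // Reload Table View
    tableView.reloadData()
} catch {
    let updateError = error as NSError
    print("\(updateError), \(updateError.userInfo)")
}
```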
This also means that the fetched results controller needs to perform a fetch to update the records it manages for us. To update the user interface, we invoke reloadData() on the table view.
4. Saving State Before Deleting
It's important to be careful whenever you directly interact with a persistent store. Earlier in this series, I wrote that it isn't necessary to save the changes of a managed object context whenever you add, update, or delete a record. That statement still holds true, but it also has consequences when working with NSPersistentStoreRequest subclasses.
Before we continue, I'd like to seed the persistent store with dummy data so we have something to work with. This makes it easier to visualize what I'm about to explain. Add the following helper method to ViewController.swift and invoke it in viewDidLoad().
// MARK: -
// MARK: Helper Methods
private func seedPersistentStore() {
    // Create Entity Description
    let entityDescription = NSEntityDescription.entityForName("Item", inManagedObjectContext: managedObjectContext)

    for i in 0...15 {
        // Initialize Record
        let record = NSManagedObject(entity: entityDescription!, insertIntoManagedObjectContext: self.managedObjectContext)

        // Populate Record
        record.setValue((i % 3) == 0, forKey: "done")
        record.setValue(NSDate(), forKey: "createdAt")
        record.setValue("Item \(i + 1)", forKey: "name")
    }

    do {
        // Save Record
        try managedObjectContext?.save()
    } catch {
        let saveError = error as NSError

        print("\(saveError), \(saveError.userInfo)")
    }
}
In seedPersistentStore(), we create a few records and mark every third item as done. Note that we call save() on the managed object context at the end of this method to make sure the changes are pushed to the persistent store. In viewDidLoad(), we seed the persistent store.
override func viewDidLoad() {
    super.viewDidLoad()

    ...

    // Seed Persistent Store
    seedPersistentStore()
}
Run the application and tap the Delete All button. The records that are marked as done should be deleted. What happens if you mark a few of the remaining items as done and tap the Delete All button again? Are these items also removed? Can you guess why that is?
The batch delete request directly interacts with the persistent store. When an item is marked as done, however, the change isn't immediately pushed to the persistent store. We don't call save() on the managed object context every time the user marks an item as done. We only do this when the application is pushed to the background and when it is terminated (see AppDelegate.swift).
The solution is simple. To fix the issue, we need to save the changes of the managed object context before executing the batch delete request. Add the following lines to the deleteAll(_:) method and run the application again to test the solution.
func deleteAll(sender: UIBarButtonItem) {
    if managedObjectContext.hasChanges {
        do {
            try managedObjectContext.save()
        } catch {
            let saveError = error as NSError

            print("\(saveError), \(saveError.userInfo)")
        }
    }

    ...
}
Conclusion
The NSPersistentStoreRequest subclasses are a very useful addition to the Core Data framework, but I hope it's clear that they should only be used when absolutely necessary. Apple only added the ability to directly operate on persistent stores to patch the weaknesses of the framework, but the advice is to use them sparingly.
| https://code.tutsplus.com/tutorials/core-data-and-swift-batch-deletes--cms-25380 | CC-MAIN-2019-43 | refinedweb | 1,580 | 56.66 |
I've just released a new version of GML4J available at. Check the
release notes to see what's new.
Cheers,
Alex
Hi All,
I've been trying to improve the spaghetti schema parser part of GML4J. It
seems like I am on the right track, as first tests have shown promising
results. There are also some other changes. The most noticeable will be the
change in the package names because people have complained about
com.galdosinc. I changed it to something less Galdosian com.gmlcentral,
although this domain is also owned by Galdos. Ideally, we'd have the domain
org.gml4j, but we're not committed to only one open-source project, so we'd
like to avoid having to register a domain for each one of them. Let me know
if you strongly dislike this domain name. If nobody complains with good
reason, I'll commit the changes. The old package names will not be used.
I've made the schema parser completely independent of prefixes. Earlier the
XML Schema namespace had to be prefixed with xsd. Now, it can be any prefix,
or no prefix at all for the default namespace. The new parser should also
treat references and global attributes better. There are still issues with
things like enumerations. Not all schema types are correctly supported.
If you have problems with GML4J, let me know. I am thinking about
overhauling the entire object model, and to introduce things like geometry
property, feature property, to provide a more fine-grained distinction
between GML constructs. The DOM-dependent classes will go away. Instead,
there will be new classes that dependend on no other object model. Object
factories will be used for their creation.
In addition, I've upgraded GML4J to the latest libraries (xerces 2.0.1, jdom
beta 8), and replaced werken.xpath with jaxen beta 8. This should provide
better compatibility with other newer programs that might use GML4J.
I hope to wrap up the schema parser changes this weekend. For other changes,
you'll have to be more patient.
P.S. I am contemplating publishing the schema parser as a separate API. It
could be useful in non-GML XML applications too.
later
Alex
----------------------------------------------------------------------------
---------
Aleksandar Milanovic | Privileged or confidential information may be contained
Software Engineer    | in this message. If this message was not intended for you,
Galdos Systems Inc.  | destroy it and notify us immediately.
Tel: (604) 484-2750  | Opinions, conclusions, recommendations, and other
Fax: (604) 484-2755  | information presented in this message are not given or
amilanovic@...       | necessarily endorsed by my employer or firm.
Building Frogger with Flixel: Movement, Collision and Deployment
This is the second of two tutorials detailing how to use Flixel to build Frogger for the web and AIR on Android.
Introduction

There is a lot to cover so let's get started!
Final Result Preview
Here is a preview of what we will build in this tutorial:
Step 1: Creating The Player
Our player represents the last actor class we need to build. Create a Frog class and configure it like this:
You will also need to add the following code to our new class:
package com.flashartofwar.frogger.sprites
{
    import com.flashartofwar.frogger.enum.GameStates;
    import com.flashartofwar.frogger.states.PlayState;

    import flash.geom.Point;

    import org.flixel.FlxG;
    import org.flixel.FlxSprite;

    public class Frog extends FlxSprite
    {
        private var startPosition:Point;
        private var moveX:int;
        private var maxMoveX:int;
        private var maxMoveY:int;
        private var targetX:Number;
        private var targetY:Number;
        private var animationFrames:int = 8;
        private var moveY:Number;
        private var state:PlayState;

        public var isMoving:Boolean;

        /**
         * The Frog represents the main player's character. This class contains all of the move, animation,
         * and some special collision logic for the Frog.
         *
         * @param X start X
         * @param Y start Y
         */
        public function Frog(X:Number, Y:Number)
        {
            super(X, Y);

            // Save the starting position to be used later when restarting
            startPosition = new Point(X, Y);

            // Calculate amount of pixels to move each turn
            moveX = 5;
            moveY = 5;
            maxMoveX = moveX * animationFrames;
            maxMoveY = moveY * animationFrames;

            // Set frog's target x,y to start position so he can move
            targetX = X;
            targetY = Y;

            // Set up sprite graphics and animations
            loadGraphic(GameAssets.FrogSpriteImage, true, false, 40, 40);

            addAnimation("idle" + UP, [0], 0, false);
            addAnimation("idle" + RIGHT, [2], 0, false);
            addAnimation("idle" + DOWN, [4], 0, false);
            addAnimation("idle" + LEFT, [6], 0, false);
            addAnimation("walk" + UP, [0,1], 15, true);
            addAnimation("walk" + RIGHT, [2,3], 15, true);
            addAnimation("walk" + DOWN, [4,5], 15, true);
            addAnimation("walk" + LEFT, [6,7], 15, true);
            addAnimation("die", [8, 9, 10, 11], 2, false);

            // Set facing direction
            facing = FlxSprite.UP;

            // Save an instance of the PlayState to help with collision detection and movement
            state = FlxG.state as PlayState;
        }

        /**
         * This manages what direction the frog is facing. It also alters the bounding box around the sprite.
         *
         * @param value
         */
        override public function set facing(value:uint):void
        {
            super.facing = value;

            if (value == UP || value == DOWN)
            {
                width = 32;
                height = 25;
                offset.x = 4;
                offset.y = 6;
            }
            else
            {
                width = 25;
                height = 32;
                offset.x = 6;
                offset.y = 4;
            }
        }

        /**
         * The main Frog update loop. This handles keyboard movement, collision and flagging if moving.
         */
        override public function update():void
        {
            //Default object physics update
            super.update();
        }

        /**
         * Simply plays the death animation
         */
        public function death():void
        {
        }

        /**
         * This resets values of the Frog instance.
         */
        public function restart():void
        {
        }

        /**
         * This handles moving the Frog in the same direction as any instance it is resting on.
         *
         * @param speed the speed in pixels the Frog should move
         * @param facing the direction the frog will float in
         */
        public function float(speed:int, facing:uint):void
        {
        }
    }
}
I am not going to go through all the code since it is commented, but here are a few key points to check out. First we set up all the values for the frog's movement in the constructor, along with its animations. Next we create a setter to handle changing the frog's orientation when we change directions. Finally we have a few helper methods to manage the update loop, death, restart, and floating.
Now we can add our player to the game level. Open up the
PlayState class and put the following code in between the logs and cars we setup in part one.
// Create Player
player = add(new Frog(calculateColumn(6), calculateRow(14) + 6)) as Frog;
You will also need to import the Frog class and add the following property:
private var player:Frog;
It is important to place the player at the right depth level in the game. He needs to be above the logs and turtles, yet below the cars, so that when he gets run over he doesn't show up on top of a car. We can now test that our player is on the level by recompiling the game. Here is what you should see:
There isn't much we can do with the player until we add some keyboard controls, so let's move onto the next step.
Step 2: Keyboard Controls
In Frogger the player can move left, right, up and down. Each time the player moves, the frog jumps to the next position. In normal tile-based games this is easy to set up: we simply figure out the next tile to move to and add to the x or y value until it reaches the new destination. Frogger, however, adds some complexity to the movement when it comes to the logs and turtles you can float on. Since these objects don't move according to the grid, we need to pay extra attention to how the borders of our level work.
Let's get started with basic controls. Open up the
Frog class and add the following block of code to our
update() method before
super.update():
if (state.gameState == GameStates.PLAYING)
{
    // Test to see if targetX and targetY are equal. If so, the Frog is free to move.
    if (x == targetX && y == targetY)
    {
        // Checks to see what key was just pressed and sets the target X or Y to the new position
        // along with what direction to face
        if ((FlxG.keys.justPressed("LEFT")) && x > 0)
        {
            targetX = x - maxMoveX;
            facing = LEFT;
        }
        else if ((FlxG.keys.justPressed("RIGHT")) && x < FlxG.width - frameWidth)
        {
            targetX = x + maxMoveX;
            facing = RIGHT;
        }
        else if ((FlxG.keys.justPressed("UP")) && y > frameHeight)
        {
            targetY = y - maxMoveY;
            facing = UP;
        }
        else if ((FlxG.keys.justPressed("DOWN")) && y < 560)
        {
            targetY = y + maxMoveY;
            facing = DOWN;
        }

        // See if we are moving
        if (x != targetX || y != targetY)
        {
            // Looks like we are moving so play sound, flag isMoving and add to score.
            FlxG.play(GameAssets.FroggerHopSound);

            // Once this flag is set, the frog will not take keyboard input until it has reached its target
            isMoving = true;
        }
        else
        {
            // Nope, we are not moving so flag isMoving and show idle.
            isMoving = false;
        }
    }

    // If isMoving is true we are going to update the actual position.
    if (isMoving == true)
    {
        if (facing == LEFT)
        {
            x -= moveX;
        }
        else if (facing == RIGHT)
        {
            x += moveX;
        }
        else if (facing == UP)
        {
            y -= moveY;
        }
        else if (facing == DOWN)
        {
            y += moveY;
        }

        // Play the walking animation
        play("walk" + facing);
    }
    else
    {
        // Nothing is happening so go back to the idle animation
        play("idle" + facing);
    }
}
There is a lot of code here, so let's go through it block by block, starting with our first conditional. We need a way to know if the game state is set to playing. If you look at the end of the Frog class's constructor you will see that we save a reference to the current play state in a local variable. This is a neat trick we can use to help our game objects read game state from the active FlxState. Since we know the frog is always going to be used in the PlayState, it is safe to assume that the correct state of the game is available there. Now that we have this FlxState, we can check the game's actual state before doing anything in the Frog class.
I just want to take a second to clear up some terminology. The word state has several connotations in our FlxFrogger game. There are FlxStates, which represent the current screen or display of the game, and then there is game state, which represents the actual activity the game is currently in. Each of the game states can be found in the GameStates class inside of the com.flashartofwar.frogger.enum package. I will do my best to keep the two uses of state as clear as possible.
So back to our frog. If the game state is set to "playing" then it is safe to detect any movement request for the frog and also update its animation. You will see later on how we handle death animations and freezing the player based on collisions. Our first block of code determines if the Frog has reached a targetX and targetY position. The second block of code actually handles increasing the frog's x and y values.
Now we can talk about how to actually move the frog. This is the next conditional within our targetX/Y block. Once a keystroke has been detected and the new targetX/Y values have been set, we can immediately validate that we are moving. If so, we need to play the frog hop sound and set the frog's isMoving flag to true. If these values have not changed, we are not moving. After this is set we can handle movement logic in the last conditional.
Finally we test if
isMoving is true and see what direction we need to move based on which way the frog is facing. The frog can only move in one direction at a time so this makes it very easy to set up. We also call the
play() method to animate the frog. The last thing you should know is how we calculate the
moveX and
moveY values. If you look in the constructor you will see that we determine that the frog will move X number of pixels over Y number of frames. In this case we want our animation to last 8 frames, so in order to move the 40 pixels we need each time, we move by 5px each frame for 8 frames. This is how we get the smooth hop animation and keep repeated key presses from interrupting it.
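Written out, the constructor's movement math is simply:

```actionscript
// One grid cell is 40 px (the frog's 40x40 frame size) and a hop lasts 8 frames,
// so each frame of the hop covers 40 / 8 = 5 px:
var animationFrames:int = 8;
var moveX:int = 5;                          // px moved per frame
var maxMoveX:int = moveX * animationFrames; // 40 px -- one full cell per hop
```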
Step 3: Car Collision Detection
Flixel handles all of the collision logic we will need. This is a huge time saver; all you have to do is ask the
FlxU class to detect any overlapping sprites. What is great about this method is that you can test FlxSprites or FlxSpriteGroups. If you remember we set up our cars in their own
carGroup which will make it incredibly easy to test if they have collided with the frog. In order to get this started we need to open up the
PlayState and add the following code:
/**
 * This is the main game loop. It goes through, analyzes the game state and performs collision detection.
 */
override public function update():void
{
    if (gameState == GameStates.GAME_OVER)
    {
    }
    else if (gameState == GameStates.LEVEL_OVER)
    {
    }
    else if (gameState == GameStates.PLAYING)
    {
        // Do collision detections
        FlxU.overlap(carGroup, player, carCollision);
    }
    else if (gameState == GameStates.DEATH_OVER)
    {
    }

    // Update the entire game
    super.update();
}
This includes a little more of the core game state logic, which we will fill out in the next few steps, but let's take a look at the code under the "Do collision detections" comment. As you can see, we use a new class called FlxU, which helps manage collision detection in the game as I mentioned above. Its overlap() method accepts targetA, targetB and a callback method; it tests all of the children of targetA against the children of targetB, then passes the results to your supplied callback method. Now we need to add the carCollision callback to our class:
/**
 * This handles collision with a car.
 *
 * @param target the instance that has collided with the player
 * @param player a reference to the player
 */
private function carCollision(target:FlxSprite, player:Frog):void
{
    if (gameState != GameStates.COLLISION)
    {
        FlxG.play(GameAssets.FroggerSquashSound);
        killPlayer();
    }
}
We are taking a few liberties with this callback, since we know that the first param will be a FlxSprite and that the second will be a Frog; normally the callback's parameters are untyped when it is called after a collision is detected. One thing you will notice is that we test whether gameState is set to collision. We do this because when a collision happens the entire game freezes to allow a death animation to play, but technically the frog is still colliding with the carGroup. If we didn't add this conditional we would be stuck in an infinite loop while the animation tried to play.
Now we just need to add the logic for killing the player. This will go in a
killPlayer() method and here is the code:
/**
 * This kills the player. Game state is set to collision so everything knows to pause and a life is removed.
 */
private function killPlayer():void
{
    gameState = GameStates.COLLISION;

    player.death();
}
Finally we need to fill in the
death() logic in our
Frog class. Add the following code to the death method:
play("die");
Here we are telling the frog sprite to play the die animation we set up in the constructor. Now compile the game and test that the player can collide with any of the cars or trucks.
Step 4: Restarting After A Death
Now that we have our first set of collision detection in place along with the death animation, we need a way to restart the game level so the player can continue to try again. In order to do this we need to add a way to notify the game when the death animation is over. Let's add the following to our
Frog update() method just above where we test if the
gameState is equal to playing:
// Test to see if the frog is dead and at the last death frame
if (state.gameState == GameStates.COLLISION && (frame == 11))
{
    // Flag game state that death animation is over and game can perform a restart
    state.gameState = GameStates.DEATH_OVER;
}
else
Notice the trailing else; it should run into the if (state.gameState == GameStates.PLAYING) conditional, so that we test for the collision state first and the playing state second.
Now we need to go back into our
PlayState class and add the following method call to
else if (gameState == GameStates.DEATH_OVER):
restart();
Now we can add a restart method:
/**
 * This handles resetting game values when a frog dies, or a level is completed.
 */
private function restart():void
{
    // Change game state to Playing so animation can continue.
    gameState = GameStates.PLAYING;

    player.restart();
}
Last thing we need to do is add this code to the
Frog Class's
restart method.
isMoving = false;

x = startPosition.x;
y = startPosition.y;

targetX = startPosition.x;
targetY = startPosition.y;

facing = UP;

play("idle" + facing);

if (!visible) visible = true;
Now we should be ready to compile and test that all of this works. Here is what you should see: when a car hits the player, everything freezes while the death animation plays. When the animation is over, everything restarts and the player shows up at the bottom of the screen again. Once you have that set up, we are ready to work out the water collision detection.
Step 5: Water Collision Detection
With a solid system in place to handle collision, death animation and restarting, it should be really easy to add the next few steps of collision detection. The water detection is a little different, however. Since we always know where the water is, we can test against the player's y position. If the player's y value is less than where the land ends (remember, y values get smaller toward the top of the screen, where the water is), we can assume the player is in the water zone. We do this to help cut down on the amount of collision detection we would need to do if we tested each open space of water when the frog lands. Let's add the following to our
PlayState under where we test for a car collision:
// If nothing has collided with the player, test to see if they are out of bounds when in the water zone
if (player.y < waterY)
{
    if (!player.isMoving && !playerIsFloating) waterCollision();

    if ((player.x > FlxG.width) || (player.x < -TILE_SIZE))
    {
        waterCollision();
    }
}
You will also need to add two properties, the first lets us know where the water begins on the Y axis and the other is a boolean to let us know if the player is floating. We'll be using this floating boolean in the next step.
private var waterY:int = TILE_SIZE * 8;
private var playerIsFloating:Boolean;
Determining the start coordinate of the water is easy considering everything is part of the grid. Next we need to add our
waterCollision() method:
/**
 * This is called when the player dies in water.
 */
private function waterCollision():void
{
    if (gameState != GameStates.COLLISION)
    {
        FlxG.play(GameAssets.FroggerPlunkSound);
        killPlayer();
    }
}
Compile and test that if we go into the water the player dies.
Next we will look into how to allow the frog to float on the logs and turtles when they are within the water area of the level.
Step 6: Floating Collision Detection
In order to figure out if the Frog can float we need to test for a collision on the
logGroup or the
turtleGroup. Let's add the following code in our
PlayState class, just under where we test for the car collision. Make sure it is above the water collision conditional; this is important because if we tested for the player in the water before the logs or turtles, we could never handle the floating correctly.
FlxU.overlap(logGroup, player, float);
FlxU.overlap(turtleGroup, player, turtleFloat);
Here are the two methods we need to handle a collision:
/**
 * This handles floating the player sprite with the target it is on.
 *
 * @param target the instance that collided with the player
 * @param player an instance of the player
 */
private function float(target:WrappingSprite, player:Frog):void
{
    playerIsFloating = true;

    if (!(FlxG.keys.LEFT || FlxG.keys.RIGHT))
    {
        player.float(target.speed, target.facing);
    }
}

/**
 * This is called when a player is on a turtle to indicate the frog needs to float.
 *
 * @param target the instance that collided with the player
 * @param player an instance of the player
 */
private function turtleFloat(target:TimerSprite, player:Frog):void
{
    // Test to see if the target is active. If it is active the player can float.
    // If not, the player is in the water.
    if (target.isActive)
    {
        float(target, player);
    }
    else if (!player.isMoving)
    {
        waterCollision();
    }
}
You will also need to import
WrappingSprite and
TimerSprite. Once you have that in place we need to go back into our
Frog class and add the following to our
float() method:
if (isMoving != true)
{
    x += (facing == RIGHT) ? speed : -speed;
    targetX = x;
    isMoving = true;
}
We just added a lot of code, and I commented most of it, but I wanted to talk about this last part right here. This code actually handles moving the frog in the same direction, and at the same speed, as the log or turtle the player is on. We use a few tricks to make sure that when the player attempts to move off the floating object they are not overridden by what they are floating on. A big part of game development is state management, and using flags such as isMoving to let other parts of the code know what can and can't be done is a huge help.
Let's compile the game and check out if the player is able to float on logs.
One thing you may have noticed is that once you land on a log or turtle you will no longer drown. That is because we need to reset the
playerIsFloating flag before we do all of our collision detection. Go back into the
PlayState and add the following just before we start testing for the car collision.
// Reset floating flag for the player.
playerIsFloating = false;
So your test block should look like this:
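Putting the snippets from the last few steps together, the collision-detection section of the update() method now reads:

```actionscript
// Reset floating flag for the player.
playerIsFloating = false;

// Do collision detections
FlxU.overlap(carGroup, player, carCollision);
FlxU.overlap(logGroup, player, float);
FlxU.overlap(turtleGroup, player, turtleFloat);

// If nothing has collided with the player, test to see if they are out of bounds when in the water zone
if (player.y < waterY)
{
    if (!player.isMoving && !playerIsFloating) waterCollision();

    if ((player.x > FlxG.width) || (player.x < -TILE_SIZE))
    {
        waterCollision();
    }
}
```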
As I have already mentioned, there is a delicate balance to maintaining state, and making sure you set and unset these state flags at the right time will save you a lot of frustration when building your own games. Now you can do a new compile to make sure everything is working correctly, and we can move on to the next step.
Step 7: Home Collision Detection
Adding collision detection for the home bases should be very straightforward. Add the following collision test to the end of the collision detection code in our
PlayState class and above where we test if the player has jumped into the water:
FlxU.overlap(homeBaseGroup, player, baseCollision);
Now we need to create our
baseCollision() method to handle what happens when you land on a home:
/**
 * This handles collision with a home base.
 *
 * @param target the instance that has collided with the player
 * @param player a reference to the player
 */
private function baseCollision(target:Home, player:Frog):void
{
    // Check to make sure that we have not landed in an occupied base
    if (target.mode != Home.SUCCESS)
    {
        // Increment number of frogs saved
        safeFrogs++;

        // Flag the target as success to show it is occupied now
        target.success();
    }

    // Test to see if we have all the frogs. If so, the level has been completed. If not, restart.
    if (safeFrogs == bases.length)
    {
        levelComplete();
    }
    else
    {
        restart();
    }
}
We will also need to add the following property to our class:
private var safeFrogs:int = 0;
Here you can see we are testing whether the home has already been landed in; next we test if all of the homes have frogs in them; and finally we trigger a restart on a successful landing at the home base. It is important to note that in this tutorial we are not testing for the state of the home. Remember, there is a bonus fly and an alligator you could land on. Later, if you want to add that in, you can do it here.
Now we need to add some logic for when a level has been completed:
/**
 * This is called when a level is completed.
 */
private function levelComplete():void
{
    // Change game state to let system know a level has been completed
    gameState = GameStates.LEVEL_OVER;

    // Hide the player since the level is over and wait for the game to restart itself
    player.visible = false;
}
We need to add the following code to the
restart() method above where we reset the game state:
// Test to see if Level is over, if so reset all the bases.
if (gameState == GameStates.LEVEL_OVER) resetBases();
Finally we need to add a
resetBases() method to our
PlayState class:
/**
 * This loops through the bases and makes sure they are set to empty.
 */
private function resetBases():void
{
    // Loop through bases and empty them
    for each (var base:Home in bases)
    {
        base.empty();
    }

    // Reset safe frogs
    safeFrogs = 0;
}
When all of the bases have been landed on we loop through them and call the
empty() method which will reset the graphic and landed value. Finally, we need to set the frogs that have been saved to zero. Let's compile the game and test what happens when you land in all the bases. After you land in a home base it should change to a frog icon indicating you have landed there already.
As you can see, when we have saved all the frogs the level freezes because there is no logic to restart the level again. We also need to add some game messaging to let the player know what is going on and to use as a notification system in the game so we can manage when to restart all the animations again. This is what we will add in the next step.
Step 8: Adding Game Messages
A lot happens in the game, and one of the best ways to communicate to the player what is going on is by adding in-game messaging. This will help inform the player when the game pauses to activate a game state such as a level complete or a death. In our
PlayState class, add the following code above where we create our home bases in the
create() method:
// Create game message. This handles game over, time, and start messages for the player
gameMessageGroup = new FlxGroup();
gameMessageGroup.x = (480 * .5) - (150 * .5);
gameMessageGroup.y = calculateRow(8) + 5;
add(gameMessageGroup);

// Black background for message
var messageBG:FlxSprite = new FlxSprite(0, 0);
messageBG.createGraphic(150, 30, 0xff000000);
gameMessageGroup.add(messageBG);

// Message text
messageText = new FlxText(0, 4, 150, "TIME 99").setFormat(null, 18, 0xffff0000, "center");
gameMessageGroup.visible = false;
gameMessageGroup.add(messageText);
You will also need to add the following properties:
private var messageText:FlxText;
private var gameMessageGroup:FlxGroup;
private var hideGameMessageDelay:int = -1;
You will also need to import the
FlxText class. Now let's go into our
update() method and add the following code into the
gameState == GameStates.LEVEL_OVER conditional:
if (hideGameMessageDelay == 0)
{
    restart();
}
else
{
    hideGameMessageDelay -= FlxG.elapsed;
}
The basic idea here is that we use a timer to count down how long a game message should be displayed. When the timer reaches zero we can restart the level. This gives the player some time to rest in between levels. We will also need to add the next block of code just below where we test the water collision, around line 203:
// Manage hiding gameMessage based on timer
if (hideGameMessageDelay > 0)
{
    hideGameMessageDelay -= FlxG.elapsed;
    if (hideGameMessageDelay < 0) hideGameMessageDelay = 0;
}
else if (hideGameMessageDelay == 0)
{
    hideGameMessageDelay = -1;
    gameMessageGroup.visible = false;
}
Here we are able to manage the visibility of the game message. In our baseCollision() method we need to add the following code below the target.mode != Home.SUCCESS conditional, around line 317:
// Regardless of whether the base was empty or occupied, we still display the time it took to get there
messageText.text = "TIME " + String(gameTime / FlxG.framerate - timeLeftOver);

gameMessageGroup.visible = true;
hideGameMessageDelay = 200;
Add the following properties which we will actually use in the next step:
private var gameTime:int = 0;
private var timer:int = 0;
Then add this line of code around line 328 inside the conditional just under where we call
success() on the target:
var timeLeftOver:int = Math.round(timer / FlxG.framerate);
This allows us to calculate the total time it has taken to complete a level, which we will connect in the next step. Now in
resetBases() add the following code at the end just under where we set
safeFrogs to 0:
// Set message to tell player they can restart
messageText.text = "START";

gameMessageGroup.visible = true;
hideGameMessageDelay = 200;
Compile the game and you should now see game status messages when you land in a home or restart the level.
Step 9: Game Timer
Now we are ready to add the logic for the game timer. In the
PlayState class's
create() method add the following code under where we create the bg image:
// Set up main variable properties
gameTime = 60 * FlxG.framerate;
timer = gameTime;
You will need the following property and constant:
private var timeAlmostOverWarning:int;

private const TIMER_BAR_WIDTH:int = 300;
And below the last car added to the
carGroup add the following:
// Create Time text
timeTxt = new FlxText(bg.width - 70, LIFE_Y + 18, 60, "TIME").setFormat(null, 14, 0xffff00, "right");
add(timeTxt);

// Create timer graphic
timerBarBackground = new FlxSprite(timeTxt.x - TIMER_BAR_WIDTH + 5, LIFE_Y + 20);
timerBarBackground.createGraphic(TIMER_BAR_WIDTH, 16, 0xff21de00);
add(timerBarBackground);

timerBar = new FlxSprite(timerBarBackground.x, timerBarBackground.y);
timerBar.createGraphic(1, 16, 0xFF000000);
timerBar.scrollFactor.x = timerBar.scrollFactor.y = 0;
timerBar.origin.x = timerBar.origin.y = 0;
timerBar.scale.x = 0;
add(timerBar);
You will also need the following properties:
private const LIFE_X:int = 20;
private const LIFE_Y:int = 600;

private var timerBarBackground:FlxSprite;
private var timeTxt:FlxText;
private var timerBar:FlxSprite;
private var timeAlmostOverFlag:Boolean = false;
Now let's add the following code in our update function above where we manage hiding the gameMessage on line 234:
// This checks to see if time has run out. If not, we decrease time based on what has elapsed
// since the last update.
if (timer == 0 && gameState == GameStates.PLAYING)
{
    timeUp();
}
else
{
    timer -= FlxG.elapsed;

    timerBar.scale.x = TIMER_BAR_WIDTH - Math.round((timer / gameTime * TIMER_BAR_WIDTH));

    if (timerBar.scale.x == timeAlmostOverWarning && !timeAlmostOverFlag)
    {
        FlxG.play(GameAssets.FroggerTimeSound);
        timeAlmostOverFlag = true;
    }
}
And this is the method that gets called when time is up:
/**
 * This is called when time runs out.
 */
private function timeUp():void
{
    if (gameState != GameStates.COLLISION)
    {
        FlxG.play(GameAssets.FroggerSquashSound);
        killPlayer();
    }
}
Finally we need to reset the timer when time runs out, the player lands on a home base, or the level restarts. We can do this in our
restart() method just before we call
player.restart():
timer = gameTime;
timeAlmostOverFlag = false;
You can compile the game now to validate that all of this works. We just added a lot of code, but hopefully the code and comments are straightforward enough that we don't need to explain them too much.
Step 10: Lives
What game would be complete without lives? In Frogger you get 3 lives. Let's add the following function call in our
create() method under where we set up the
timeAlmostOverWarning = TIMER_BAR_WIDTH * .7 on line 69:
createLives(3);
And here are the methods that will manage lives for us:
/**
 * This loop creates X number of lives.
 *
 * @param value number of lives to create
 */
private function createLives(value:int):void
{
    var i:int;
    for (i = 0; i < value; i++)
    {
        addLife();
    }
}

/**
 * This adds a life sprite to the display and pushes it to the lifeSprites array.
 */
private function addLife():void
{
    var flxLife:FlxSprite = new FlxSprite(LIFE_X * totalLives, LIFE_Y, GameAssets.LivesSprite);
    add(flxLife);

    lifeSprites.push(flxLife);
}

/**
 * This removes the life sprite from the display and from the lifeSprites array as well.
 */
private function removeLife():void
{
    var id:int = totalLives - 1;

    var sprite:FlxSprite = lifeSprites[id];
    sprite.kill();

    lifeSprites.splice(id, 1);
}

/**
 * A simple getter for total lives based on life sprite instances in the lifeSprites array.
 *
 * @return the number of lives remaining
 */
private function get totalLives():int
{
    return lifeSprites.length;
}
Also make sure you add the following property:
private var lifeSprites:Array = [];
Again, these methods are well commented, but the basic idea is that we keep all of our lives in an array. We start off by adding one life sprite to the array for each life requested. To remove a life we simply splice one out of the array. This is a quick way to handle lives, taking advantage of native classes such as Array. Now we just need to set up some logic to remove a life when you die.
Add the following call to our
killPlayer() method, just above where we call player.death():
removeLife();
Now you can test that when you die a life should be removed from the display.
Make sure you don't go past the total number of lives or you will get an error. In the next step we will add some game over logic to prevent this error from happening.
Step 11: Game Over Screen
We can quickly test to see if the player is out of lives and the game is over in our
restart() method. Let's replace the entire method with the following:
/**
 * This handles resetting game values when a frog dies, or a level is completed.
 */
private function restart():void
{
    // Make sure the player still has lives to restart
    if (totalLives == 0 && gameState != GameStates.GAME_OVER)
    {
        gameOver();
    }
    else
    {
        // Test to see if Level is over, if so reset all the bases.
        if (gameState == GameStates.LEVEL_OVER) resetBases();

        // Change game state to Playing so animation can continue.
        gameState = GameStates.PLAYING;

        timer = gameTime;

        player.restart();

        timeAlmostOverFlag = false;
    }
}
Here you see we are testing whether the total lives equal zero and the game state is not already set to game over. If so, we call the gameOver() method. If not, it is business as usual and the level gets restarted. Now we need to add a
gameOver() method:
/**
 * This is called when a game is over. A message is shown and the game locks down until it is ready to go
 * back to the start screen.
 */
private function gameOver():void
{
    gameState = GameStates.GAME_OVER;

    gameMessageGroup.visible = true;
    messageText.text = "GAME OVER";
    hideGameMessageDelay = 100;
}
Now, before we can see this in action, we just need to add a few more lines of code to our
update() method to handle removing the game over message and returning the player to the
StartState. Look for where we test for
gameState == GameStates.GAME_OVER and add the following code into the conditional:
if (hideGameMessageDelay == 0)
{
    FlxG.state = new StartState();
}
else
{
    hideGameMessageDelay -= FlxG.elapsed;
}
Now you can test the game over message by killing the player 3 times. You should see this before getting thrown back to the
StartState.
Now that all of the pieces are in place we can easily add in scoring.
Step 12: Score
We need to add up the score when the player jumps, lands in a base and finishes a level. Flixel makes it easy to remember a score and you can access it at any time by using the
FlxG.score property on the
FlxG singleton. First we need to create a class to store some score values for us. Create a class called
ScoreValues. Here is the code for the class:
package com.flashartofwar.frogger.enum
{
    /**
     * These represent the values used when scoring happens.
     */
    public class ScoreValues
    {
        public static const STEP:uint = 10;
        public static const REACH_HOME:uint = 50;
        public static const FINISH_LEVEL:uint = 1000;
        public static const TIME_BONUS:uint = 10;
    }
}
Now go back into the
PlayState class and add the following above our gameMessageGroup code in the
create() method around line 77:
FlxG.score = 0;

var scoreLabel:FlxText = add(new FlxText(0, 30, 100, "Score").setFormat(null, 10, 0xffffff, "right")) as FlxText;

scoreTxt = add(new FlxText(0, scoreLabel.height, 100, "").setFormat(null, 14, 0xffe00000, "right")) as FlxText;
scoreTxt.text = FlxG.score.toString();
You will need the following property:
private var scoreTxt:FlxText;
Let's add some scoring to our game in the following places.
After line 407 where we calculate the time left over in the
target.mode != Home.SUCCESS conditional:
// Increment the score based on the time left
FlxG.score += timeLeftOver * ScoreValues.TIME_BONUS;
Make sure you import the
ScoreValues class. Next we will add the following to our
levelComplete() method:
// Increment the score for finishing the level
FlxG.score += ScoreValues.FINISH_LEVEL;
Now we need to go into our
Frog class and add the following to our
update() method where we set
isMoving to true on line 141:
// Add to score for moving
FlxG.score += ScoreValues.STEP;
Before we can test this we need to update the score in our game loop. Let's go back into the
PlayState class and add the following to our
update() method on line 281 just before where we test for
gameState == GameStates.DEATH_OVER:
// Update the score text
scoreTxt.text = FlxG.score.toString();
Compile the game and make sure the score is working.
Step 13: Building for Mobile
Now that everything is in place, we can easily compile and deploy FlxFrogger to an Android device which has AIR installed. Instead of going through how to set up the pre-release of AIR and the Android SDK, I just want to focus on how to get this project ready to compile and deploy. Before we jump in, you should check out this excellent post by Karl Freeman on how to get FDT configured to build AIR for Android.
You will need to have AIR 2.5 set up in your Flex SDK directory and a copy of the Android SDK on your computer. If you remember, in part one our Ant script pointed to a Flex SDK so we could compile. Now we need to set up where the Android SDK is so we can deploy our AIR APK to the phone. Open up the
build.properties file and look for the
android.sdk property.
You will need to point this to where you downloaded your Android SDK. Here is what my path looks like:
android.sdk= /Users/jfreeman/Documents/AndroidDev/android-sdk-mac_86-2.2
As you can see I have renamed it to let me know it is the 2.2 build. I like to keep back ups of my SDKs based on the version number so this is a good habit to pick up. Let's open up the Run External Tools Configuration where we set up our original build target in part 1.
Right-click on our build and select "duplicate". Rename the build copy to
FlxFrogger Android Build. Now click on it and and let's change the default target to
deploy-to-phone.
Now you have a new run for compiling and deploying to an Android phone. If your phone is connected and you would like to test that it works, simply run the Ant task and wait for it to transfer over. Remember you need to have Air installed on your phone for it to run.
One last thing. You may have noticed that there is also a
deploy-to-emulator target. You can use this if you want to test with the Android Emulator but be warned that the emulator is incredibly slow. I found it almost unbearable for getting any real sense of how Flash ran on the phone so don't be surprised if you get under 3-4 fps.
Step 14: Touch Controls
By default Flixel is setup to work with the keyboard but on mobile devices you may only have a touchscreen to work with. Setting up touch controls is very easy, we will create our own touch buttons out of
FlxSprites in the next step. Create a new class called
TouchControls and configure it like this:
package com.flashartofwar.frogger.controls { import flash.events.KeyboardEvent; import org.flixel.FlxG; import org.flixel.FlxGroup; import org.flixel.FlxState; import org.flixel.FlxText; import org.flixel.FlxSprite; public class TouchControls extends FlxGroup { /*private var spriteButtons[0]:FlxSprite; private var spriteButtons[1]:FlxSprite; private var spriteButtons[2]:FlxSprite; private var spriteButtons[3]:FlxSprite;*/ private var spriteButtons:Array; /** * Touch controls are special buttons that allow virtual input for the game on devices without a keyboard. * * @param target Where should the controls be added onto * @param x x position to display the controls * @param y y position to display the controls * @param padding space between each button */ public function TouchControls(target:FlxState, x:int, y:int, padding:int) { this.x = x; this.y = y; var txt:FlxText; spriteButtons = new Array(4); //spriteButtons[0] = new FlxSprite(x, y) spriteButtons[0] = new FlxSprite(0, 0) spriteButtons[0].color =0x999999; spriteButtons[0].createGraphic(100, 100); add(spriteButtons[0]); txt = new FlxText(0, 30, 100, "UP").setFormat(null, 20, 0xffffff, "center"); add(txt); spriteButtons[1] = new FlxSprite(spriteButtons[0].right + padding, 0) spriteButtons[1].color =0x999999; spriteButtons[1].createGraphic(100, 100); add(spriteButtons[1]); txt = new FlxText(spriteButtons[1].x, 30, 100, "DOWN").setFormat(null, 20, 0xffffff, "center"); add(txt); spriteButtons[2] = new FlxSprite(spriteButtons[1].right + padding, 0) spriteButtons[2].color =0x999999; spriteButtons[2].createGraphic(100, 100); add(spriteButtons[2]); txt = new FlxText(spriteButtons[2].x, 30, 100, "LEFT").setFormat(null, 20, 0xffffff, "center"); add(txt); spriteButtons[3] = new FlxSprite(spriteButtons[2].right + padding, 0) spriteButtons[3].color =0x999999; spriteButtons[3].createGraphic(100, 100); add(spriteButtons[3]); txt = new FlxText(spriteButtons[3].x, 30, 100, "RIGHT").setFormat(null, 20, 0xffffff, "center"); 
add(txt); } public function justPressed(button:Number):Boolean { return FlxG.mouse.justPressed() && spriteButtons[button].overlapsPoint(FlxG.mouse.x, FlxG.mouse.y); } public function justReleased(button:Number):Boolean { return FlxG.mouse.justReleased() && spriteButtons[button].overlapsPoint(FlxG.mouse.x, FlxG.mouse.y); } override public function update():void { if (FlxG.mouse.justPressed()) { if (spriteButtons[0].overlapsPoint(FlxG.mouse.x, FlxG.mouse.y)) { spriteButtons[0].color = 0xff0000; } else if (spriteButtons[1].overlapsPoint(FlxG.mouse.x, FlxG.mouse.y)) { spriteButtons[1].color = 0xff0000; } else if (spriteButtons[2].overlapsPoint(FlxG.mouse.x, FlxG.mouse.y)) { spriteButtons[2].color = 0xff0000; } else if (spriteButtons[3].overlapsPoint(FlxG.mouse.x, FlxG.mouse.y)) { spriteButtons[3].color = 0xff0000; } } else if (FlxG.mouse.justReleased()) { spriteButtons[0].color = 0x999999; spriteButtons[1].color = 0x999999; spriteButtons[2].color = 0x999999; spriteButtons[3].color = 0x999999; } super.update(); //Passing update up to super } } }
Go into the
Frog class and replace the block of code where we test for key presses with the following code:
if ((FlxG.keys.justPressed("LEFT") || (touchControls != null && touchControls.justPressed(2))) && x > 0) { targetX = x - maxMoveX; facing = LEFT; } else if ((FlxG.keys.justPressed("RIGHT") || (touchControls != null && touchControls.justPressed(3))) && x < FlxG.width - frameWidth) { targetX = x + maxMoveX; facing = RIGHT; } else if ((FlxG.keys.justPressed("UP") || (touchControls != null && touchControls.justPressed(0))) && y > frameHeight) { targetY = y - maxMoveY; facing = UP; } else if ((FlxG.keys.justPressed("DOWN") || (touchControls != null && touchControls.justPressed(1))) && y < 560) { targetY = y + maxMoveY; facing = DOWN; }
As you can see, in addition to testing for key presses we will directly check our
TouchControls to see if they have been pressed. You will also need to import the
TouchControls class and add the following property:
public var touchControls:TouchControls;
Now, in order to show touch controls when you are compiling for a mobile device, we are going to use a compiler conditional. Add the following code to the
PlayState class in the
create() method just before where we set the
gameState to playing:
// Mobile specific code goes here /*FDT_IGNORE*/ CONFIG::mobile { /*FDT_IGNORE*/ touchControls = new TouchControls(this, 10, calculateRow(16) + 20, 16); player.touchControls = touchControls; add(touchControls); /*FDT_IGNORE*/ } /*FDT_IGNORE*/
I have added some special comments in here to help keep FDT from throwing an error since it doesn't understand compiler conditionals yet. Here is what the code looks like without these special FDT ignore comments:
CONFIG::mobile { touchControls = new TouchControls(this, 10, calculateRow(16) + 20, 16); player.touchControls = touchControls; add(touchControls); }
Also make sure you add the following property and import
TouchControls:
private var touchControls : TouchControls;
As you can see a compiler conditional is just like a normal
if statement. Here we are just testing that if the value of
CONFIG::mobile is set to true then we are building for mobile and, if so, it should show the controls. Telling the compiler about this config variable is all handled in our build script so you don't have to worry about anything. Depending on what type of build target you call, the value is changed when compiling. It couldn't be easier. This is a great technique to use when you have mobile specific code you need to execute that you wouldn't want to run in your web based version.
You can test these controls by deploying to the phone, here is what you should see:
If you test in the browser you will not see the controls. Hopefully you can see the power of compiler conditionals and how much of an advantage they will be when deploying content over multiple devices and platforms.
Step 15: Optimizations
Optimizing for mobile is a very time consuming process. Luckily this game runs very well on any Android phone with a 1ghz processor or faster. It also helps that we chose to build a game with a very low frame rate. A few things I have noticed which would help give the impression that the game was actually running faster is to speed up the time it takes to move the frog from tile to tile and also speed up the game time and game over wait delay.
Something else you may want to try is to group objects that need to have collision detection together by row instead of type. So right now all of the cars and trucks are being tested as one large group. Since the frog moves vertically along the grid we could speed up the collision detection by only testing a row at a time. Even though this is a much better approach to handling lots of collision detection, I would be surprised if you gain any extra frames per second. Air on Android executes code surprisingly well, the real bottleneck is in the renderer.
To address the slow Flash player renderer you could downscale the images. We built this game at the full 480 x 800 resolution. If we tweaked this to run at half of the pixel size it would be less overhead for Flash to render and may give us a few extra frames per second. Doing something like this would be incredibly time consuming and may not be worth the extra work. With any type of optimization you should have a set list of devices or computer specs to test against and try to build for the lowest common denominator. When building Flash games for web, desktop and mobile it is always best to start with a mobile version since you will hardly see much change in the desktop and web playback.
Conclusion What's Next?
Right now you have a full Frogger game engine. You can create a new level, move within the level, score and detect when a level is complete. The next thing you should add on your own are additional levels. The game is set up in a clean way that creating new levels based on this template shouldn't require much work. You may want to break out the level creation code into it's own class so you can load up specific levels when you need them.
Also you could add a multiplier to the default speed each object gets when they are created inside of the
PlayState so that each new level tells the game actors to move faster and faster.
Finally you could completely re-skin the game and make it your own. There are a lot of Frogger clones out there and since all of the heavy lifting has been done for you, the sky is the limit. If you want to add onto this project and fill in some of the missing features I invite you to fork the source code on GitHub and let me know what additions you make. I'll be happy to add them to the base project and give you credit.
If you run into any problems or have questions, leave a comment below and I will do my best to answer them. Thanks for reading :)
| http://code.tutsplus.com/tutorials/building-frogger-with-flixel-movement-collision-and-deployment--active-5701 | CC-MAIN-2015-11 | refinedweb | 7,615 | 63.09 |
HTTP Requests with `maxon` API
- dskeithbuck last edited by dskeithbuck
Hi,
As mentioned in this thread I'm looking to bring JSON data into Cinema 4D via HTTP GET request. I noticed in the C++ docs that there's a NetworkHttpHandlerInterface Class. I also noticed in the python docs that the
maxonAPI implements a
urltype.
What's the state of these implementations and can they be used to make a connection to an HTTP server and request data? I think I've managed to successfully connect to a remote server, but I can't figure out how to read the data or make more specific requests.
"""Name-en-US: Simple HTTP Requests Description-en-US: Prints the result of connecting to wikipedia.org """ import c4d from c4d import gui import maxon def main(): url = maxon.Url("") connection = url.OpenConnection() print connection if __name__=='__main__': main()
If it's possible, could someone share some example code for how to use the
maxonAPI to print the results of an HTTP GET request in
python. If it's not yet possible, may I request that it be added to the SDK?
Thank you,
Donovan
Hi,
I am getting a lot of
NotImplementedErrorwhen I start poking around, which suggests that things are heavily on the NYI side regarding URLs and StreamInterfaces. I also don't think that your code does establish a connection as you said. You instantiate a connection interface but I could not see any traffic from or to the given URL.
May I also ask why you are not using pythons own
urllib? While its reputation is (rightfully) somewhat bad, it certainly will be enough to fetch some JSON.
Cheers
zipit
Hi @dskeithbuck thanks a lot for trying to use the new Maxon API.
Unfortunately, the Python Maxon API is not completely ready and still, a lot of things need to be done.
While it's somehow possible to read data, due to Python Maxon API specific bugs: HTTPS does not support InputStreamRef.GetStreamLength, InputStreamRef.Read/Write doesn't work.
So even reading currently is not the most intuitive way so I share it, but consider there are actually some bugs/Not yet implemented features that prevents to have the same code than the one used in the C++ URL manual.
import maxon def main(): # Defines the URL url = maxon.Url("") # Creates an Input Stream (you are going to read) inputStream = url.OpenInputStream(maxon.OPENSTREAMFLAGS.NONE) # Creates a BaseArray or char that will received all the needed content data = maxon.BaseArray(maxon.Char) data.Resize(100) # Write the data into data inputStream.ReadEOS(data) # Converts the maxon.BaseArray(maxon.Char) to a list to then convert it to a string s = "".join(list(data)) print s if __name__=='__main__': main()
So yes for the moment the best way is to keep using the python standard module. As you want to use some webstuff I also want to mention you do not forget about SSL Error in Mac.
If you have any question, let me know.
And thanks again for poking around with the maxon API in python.
Cheers,
Maxime.
- dskeithbuck last edited by dskeithbuck
@zipit said in HTTP Requests with `maxon` API:
I also don't think that your code does establish a connection as you said.
Perhaps you're right, I tried a bunch of different things before landing here. With one of my tests I was getting a 404 error which implied that network requests were being made.
May I also ask why you are not using pythons own
urllib? While its reputation is (rightfully) somewhat bad, it certainly will be enough to fetch some JSON.
I'm connecting to a localhost server being run by another application. As I understand it
urllibcreates a brand new HTTP connection every time I pass data back and forth which is leading to some unfortunate latency. The
requestslibrary can make a connection that stays open and all future transactions are much faster.
I'm hoping to install this plugin cross-platform on a large number of machines and I want to avoid trying to package
requestsand all of its dependencies if possible which is why I was excited to see that the
maxonAPI seemed to have some of the same abilities as
requests.
Thanks for taking a look!
- dskeithbuck last edited by
@m_adam said in HTTP Requests with `maxon` API:
Hi @dskeithbuck thanks a lot for trying to use the new Maxon API.
Thank you so much for the working sample code, that answers my question. I look forward to switching over to the Maxon API for URL requests once these kinks get worked out. | https://plugincafe.maxon.net/topic/11778/http-requests-with-maxon-api/5 | CC-MAIN-2020-24 | refinedweb | 773 | 62.78 |
Function basics in C
A function is a collection of C statements to do something specific. A C program consists of one or more functions. Every program must have a function called
main().
Advantages of functions #
- A large problem can be divided into subproblems and then solved by using functions.
- The functions are reusable. Once you have created a function you can call it anywhere in the program without copying and pasting entire logic.
- The program becomes more maintainable because if you want to modify the program sometimes later, you need to update your code only at one place.
Types of function #
- Library function
- User defined function
Library function #
C has many built-in library functions to perform various operations, for example:
sqrt() function is used to find the square root of a number. Similarly,
scanf() and
printf() are also library functions, we have been using them since chapter 1.
To use a library function we must first include corresponding header file using
#include preprocessor directive. For
scanf() and
printf() corresponding header file is
stdio.h, for
sqrt() and other mathematical related functions, it is
math.h.
// Program to find the square root of a number #include<stdio.h> #include<math.h> int main() { float a; printf("Enter number: "); scanf("%f", &a); printf("Square root of %.2f is %.2f", a, sqrt(a)); // signal to operating system program ran fine return 0; }
Expected Output:
1st run:
Enter number: 441 Square root of 441.00 is 21.0
2nd run:
Enter number: 889 Square root of 889.00 is 29.82
Common mathematical functions #
To use these functions you must first include header file math.h.
User defined function #
User created function is known as user-defined functions. To create your own functions you need to know about three things.
- Function definition.
- Function call.
- Function declaration.
Function definition #
A function definition consists of the code that makes the function. A function consists of two parts function header and function body. Here is the general syntax of the function.
return_type function_name(type1 argument1, type2 argument2, ...) { local variables; statement1; statement2; return (expression); }
The first line of the function is known as function header. It consists of
return_type,
function_ name and function arguments.
The
return_type denotes the type of the value function returns for e.g
int,
float etc. The
return_type is optional, if omitted then it is assumed to be
int by default. A function can either return one value or no value at all, if a function doesn't return any value, then the
void is used in place of
return_type.
function_name is the name of the function. It can be any valid C identifier. After the name of the function, we have arguments declaration inside parentheses. It consists of type and name of the argument. Arguments are also known as formal arguments. A function can have any number of arguments or even no arguments at all. If the function does not have any arguments then the parentheses are left empty or sometimes void is used to represent a function which accepts no arguments.
The body of the function is the meat of the function, this is where you will write your business logic. The body of the function is a compound statement (or a block), which consists of any valid C statements followed by an optional
return statement. The variables declared inside function are called local variables because they are local to the function, means you can’t access the variables declared inside one function from another function. The return statement is used when a function needs to
return something to its caller. The
return statement is optional. If a function doesn't return any value then it's
return_type must be
void, similarly if a function returns an
int value its
return_type must be
int.
You can write function definition anywhere in the program, but usually, it is placed after the
main() function.
Let's create a small function.
void my_func() { printf("Hello i am my_func()"); }
my_func() function doesn’t return any value so it's
return_type is
void. Also, it doesn’t accept any argument that’s why parentheses are empty.
You can also write
void inside parentheses to indicate clearly that this function doesn't accept any arguments.
void my_func(void) { printf("Hello i am my_func()"); }
Throughout the tutorial, we will use this approach.
The body of
my_func() function consists of only one line which prints
"Hello i am my_func()" everytime function is called.
Let's create another small function.
int product(int num1, int num2) { int result; result = num1 * num2; return result; }
This function accepts two arguments and returns an integer value. The variable
result is declared inside a function, so it’s a local variable and only available inside the function. The
return statement in line 5 returns the product of
num1 and
num2 to its caller. Another important point to note is that, just like the variable
result,
num1 and
num2 are local variables, which means we can't access them outside the function
product().
Function call #
After the function is defined the next step is to use the function, to use the function you must call it. To call a function you must write its name followed by arguments separated by a comma (
,) inside the parentheses
().
For example, here is how we can call the
product() function we created above.
sum(12, 10);
Here we are passing two arguments
12 and
10 to the function
product(). The values
12 and
10 will be assigned to variables
num1 and
num2 respectively.
If we would have called the
product() function like this:
product(12);
We would have gotten the syntax error as follows:
As you can see the compiler is complaining "too few arguments to function product" which simply means that function is called with the lesser number of arguments than required.
If a function accepts no arguments then it must be called using empty parentheses.
my_func();
The following figure describes what happens when you call a function.
When
my_func() function is called from main() the control passes to the my_func(). At this point the activity of the
main() function is temporarily suspended; it falls asleep while my_func() function goes to work. When
my_func() function finishes its task or when there are no more statements to execute, the control returns back to
main() function. The
main() wakes up and
statement2 is executed. Then in the next line
sum() function is called and control passes to the
sum(). Again activity of
main() function is temporarily suspended, until
sum() is being executed. When
sum() runs out of statement to execute, control passes back to
main(). The function
main() wakes up again and
statement3 is executed. The important point to note is that
main() function is calling
my_func() and
sum(), so
main() is calling function whereas
my_func() and
sum() are called functions.
If a function returns a value then it can be used inside any expression like an operand. For example:
a = product(34, 89) + 100; printf( "product is = %d", product(a, b) );
You are under no obligation to use the return value of a function.
product();
Here the return value from
product() is discarded.
If a function doesn't return a value then we can't use it in the expression as follows:
s = myfunc();
One more thing to note is that statement inside a function will execute only when a function is called. If you have defined a function but never called it then the statements inside it will never be executed.
Function declaration #
The calling function needs some information about the called function. When function definition comes before the calling function then function declaration is not needed. For example:
#include<stdio.h> // function definition int sum(int x, int y) { int s; s = x + y; return s; } int main() { // function call printf("sum = %d", sum(10, 10)); // signal to operating system everything works fine return 0; }
Notice that the definition of function
sum() comes before the calling function i.e
main(), that’s why function declaration is not needed.
Generally function definition comes after
main() function. In this case, the function declaration is needed.
Function declaration consists of function header with a semicolon (
;) at the end.
here are function declarations of function
my_func() and
sum().
void my_func(void); int product(int x, int y);
Names of arguments in a function declaration is optional so,
int product(int x, int y)
can be written as:
int product(int , int ).
Note that return type and argument types must be same as defined while creating the function. So you can't write the following:
float product(int a, int b) – wrong because
product() function return type is
int.
int product(float a, int b) – wrong because
product() function first argument is of
int type.
Another important point I want to mention is that the name of the arguments defined in the function declaration needs not to be the same as defined in the function definition.
int sum(int abc, int xyx) // Function declaration int sum(int x, int y) // Function definition { int s; s = x + y; return s; }
This code is perfectly valid.
A function declaration is generally placed below preprocessor directives.
The following program demonstrates everything we have learned so far in this chapter.
#include<stdio.h> // function declaration int sum(int x, int y); int main() { // function call printf("sum = %d", sum(10, 10)); // signal to operating system everything works fine return 0; } // function definition int sum(int x, int y) { int s; s = x + y; return s; }
Expected Output:
sum = 20
The following program prints the largest number using a function.
#include<stdio.h> // function declaration int max(int x, int y); int main() { // function call max(100, 12); max(10, 120); max(20, 20); // signal to operating system program ran fine return 0; } // function definition int max(int x, int y) { if(x > y) { printf("%d > %d\n", x, y ); } else if(x < y) { printf("%d < %d\n", x, y ); } else { printf("%d == %d\n", x, y ); } }
Expected Output:
100 > 12 10 < 120 20 == 20 | https://overiq.com/c-programming/101/function-basics-in-c/ | CC-MAIN-2018-09 | refinedweb | 1,679 | 63.19 |
Spring is a popular web framework that provides easy integration with lots of common web tasks. So the question is, why do we need Spring when we have Struts2?.
First of all, you need to add the following files to the project's build path from Spring installation. You can download and install latest version of Spring Framework from
Finally add struts2-spring-plugin-x.y.z.jar in your WEB-INF/lib from your struts lib directory. If you are using Eclipse then you may face an exception java.lang.ClassNotFoundException: org.springframework.web.context.ContextLoaderListener.
To fix this problem, you should have to go in Marker tab and righ click on the class dependencies one by one and do Quick fix to publish/export all the dependences. Finally make sure there is no dependency conflict available under the marker tab.
Now let us setup the web.xml for the Struts-Spring integration> <listener> <listener-class> org.springframework.web.context.ContextLoaderListener </listener-class> </listener> <filter> <filter-name>struts2</filter-name> <filter-class> org.apache.struts2.dispatcher.FilterDispatcher </filter-class> </filter> <filter-mapping> <filter-name>struts2</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> </web-app>
The important thing to note here is the listener that we have configured. The ContextLoaderListener is required to load the spring context file. Spring's configuration file is called applicationContext.xml file and it must be placed at the same level as the web.xml file
Let us create a simple action class called User.java with two properties - firstName and lastName.
package com.tutorialspoint.struts2; public class User { private String firstName; private String lastName; public String execute() { return "success"; } public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } }
Now let us create the applicationContext.xml spring configuration file and instantiate the User.java class. As mentioned earlier, this file should be under the WEB-INF folder −
<?xml version = "1.0" Encoding = "UTF-8"?> <!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN//EN" ""> <beans> <bean id = "userClass" class = "com.tutorialspoint.struts2.User"> <property name = "firstName" value = "Michael" /> <property name = "lastName" value = "Jackson" /> </bean> </beans>
As seen above, we have configured the user bean and we have injected the values Michael and Jackson into the bean. We have also given this bean a name "userClass", so that we can reuse this elsewhere. Next let us create the User.jsp in the WebContent folder −
<%@ - Spring integration</h1> <s:form> <s:textfield<br/> <s:textfield<br/> </s:form> </body> </html>
The User.jsp file is pretty straight forward. It serves only one purpose - to display the values of the firstname and lastname of the user object. Finally, let us put all entities together using the struts.xml file.
<?xml version = "1.0" Encoding = "UTF-8"?> <!DOCTYPE struts PUBLIC "-//Apache Software Foundation//DTD Struts Configuration 2.0//EN" ""> <struts> <constant name = "struts.devMode" value = "true" /> <package name = "helloworld" extends = "struts-default"> <action name = "user" class="userClass" method = "execute"> <result name = "success">/User.jsp</result> </action> </package> </struts>
The important thing to note is that we are using the id userClass to refer to the class. This means that we are using spring to do the dependency injection for the User class.
Now right click on the project name and click Export > WAR File to create a War file. Then deploy this WAR in the Tomcat's webapps directory. Finally, start Tomcat server and try to access URL. This will produce the following screen −
We have now seen how to bring two great frameworks together. This concludes the Struts - Spring integration chapter. | https://www.tutorialspoint.com/struts_2/struts_spring.htm | CC-MAIN-2021-21 | refinedweb | 608 | 50.94 |
#include <definition.h>
We are using these enums to identify type type of and instance or definition during traversal.
default constructor
Returns the associated ast node of the definition.
Reimplemented in risc::Class, risc::Function, and risc::Object.
Returns the type of the ast node.
Returns the name of the type of the ast node.
Reimplemented in risc::Class, risc::Event, risc::EventAndList, risc::EventOrList, risc::Function, risc::Instance, risc::Interface, risc::Port, risc::PrimitiveChannelInstance, and risc::Variable.
Get the name of the file where the declaration is located.
Get the line number of the instance.
Returns the name of the defintion.
Reimplemented in risc::Class, risc::Function, and risc::Object.
Get the position of the in declaration in the corresponding line In other words the column in the line.
Determines if a ast node has an associated ast node. | http://www.cecs.uci.edu/~doemer/risc/v030/html_risc/classrisc_1_1Definition.html | CC-MAIN-2018-05 | refinedweb | 140 | 52.56 |
Creating Firedrake-compatible meshes in Gmsh
The purpose of this demo is to summarize the key structure of a gmsh.geo file that creates a Firedrake-compatible mesh. For more details about Gmsh, please refer to the Gmsh documentation.
The Gmsh syntax used in this document is for Gmsh version 4.4.1.
As an example, we will construct and mesh the following geometry: a rectangle with a disc in the middle. In the picture, numbers in black refer to Gmsh point tags, whereas numbers in red refer to Gmsh curve tags (see below).
The first things we define are the four corners of a rectangle. We specify the x, y, and z (= 0) coordinates, as well as the target element size at these corners (which we set to 0.5).
Point(1) = {-6, 2, 0, 0.5};
Point(2) = {-6, -2, 0, 0.5};
Point(3) = { 6, -2, 0, 0.5};
Point(4) = { 6, 2, 0, 0.5};
Then, we define 5 points to describe a circle.
Point(5) = { 0, 0, 0, 0.1};
Point(6) = { 1, 0, 0, 0.1};
Point(7) = {-1, 0, 0, 0.1};
Point(8) = { 0, 1, 0, 0.1};
Point(9) = { 0, -1, 0, 0.1};
Then, we create 8 edges: 4 for the rectangle and 4 for the circle.
Note that the Gmsh command Circle requires the arc to be strictly smaller than \(\pi\).
Line(1) = {1, 4};
Line(2) = {4, 3};
Line(3) = {3, 2};
Line(4) = {2, 1};
Circle(5) = {8, 5, 6};
Circle(6) = {6, 5, 9};
Circle(7) = {9, 5, 7};
Circle(8) = {7, 5, 8};
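Because each Circle command must span strictly less than \(\pi\), a full circle always needs at least two arcs (the file above uses four quarter arcs). If you generate .geo files programmatically, a small helper can emit the centre point, the four endpoints, and the four arcs in one go. The helper below is our own stdlib-only sketch, not part of Gmsh, and its tag layout is an arbitrary choice:

```python
import math

def circle_arcs(cx, cy, r, lc, first_tag):
    """Emit .geo commands for a full circle built from four quarter
    arcs, each strictly smaller than pi, as Gmsh requires.

    Point tags: first_tag (centre) and first_tag+1 .. first_tag+4
    (arc endpoints). Curve tags: first_tag .. first_tag+3 (points and
    curves live in separate tag namespaces in Gmsh, so reuse is fine).
    Returns (lines, next_free_point_tag).
    """
    centre = first_tag
    lines = [f"Point({centre}) = {{{cx}, {cy}, 0, {lc}}};"]
    pts = []
    for k in range(4):  # endpoints at angles 0, pi/2, pi, 3*pi/2
        a = k * math.pi / 2
        tag = first_tag + 1 + k
        pts.append(tag)
        x = round(cx + r * math.cos(a), 12)
        y = round(cy + r * math.sin(a), 12)
        lines.append(f"Point({tag}) = {{{x}, {y}, 0, {lc}}};")
    for k in range(4):  # each arc is {start point, centre, end point}
        start, end = pts[k], pts[(k + 1) % 4]
        lines.append(f"Circle({first_tag + k}) = {{{start}, {centre}, {end}}};")
    return lines, first_tag + 5

# the unit disc of this demo, starting from point tag 5
lines, _ = circle_arcs(0.0, 0.0, 1.0, 0.1, 5)
print("\n".join(lines))
```

The emitted arcs walk counter-clockwise; the hand-written file above lists the same four arcs in a different order and orientation, which is equally valid as long as the curve loop that uses them is consistent.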
Then, we glue together the rectangle edges and, separately, the circle edges.
Note that Line, Circle, and Curve Loop (as well as Physical Curve below) are all curves in Gmsh, and each must possess a unique tag.
Curve Loop( 9) = {1, 2, 3, 4};
Curve Loop(10) = {8, 5, 6, 7};
Then, we define two plane surfaces: first the rectangle without the disc, and then the disc itself. In the definition of Plane Surface(1), the first curve loop (9) describes the exterior boundary and the second one (10) describes a hole.
Plane Surface(1) = {9, 10};
Plane Surface(2) = {10};
Finally, we group together some edges and define Physical entities. Firedrake uses the tags of these physical entities to distinguish between parts of the mesh (see the concrete example at the end of this page).
Physical Curve("HorEdges", 11) = {1, 3};
Physical Curve("VerEdges", 12) = {2, 4};
Physical Curve("Circle", 13) = {8, 7, 6, 5};
Physical Surface("PunchedDom", 3) = {1};
Physical Surface("Disc", 4) = {2};
For simplicity, we have gathered all these commands in the file immersed_domain.geo. To generate a mesh using this file, you can type the following command in the terminal
gmsh -2 immersed_domain.geo -format msh2
Note
Depending on your version of gmsh and DMPlex, the
gmsh option
-format msh2 may be omitted.
To illustrate how to access all these features within Firedrake, we consider the following interface problem. Denoting by \(\Omega\) the filled rectangle and by \(D\) the disc, we seek a function \(u\in H^1_0(\Omega)\) such that

\[-\mathrm{div}(\sigma \nabla u) + u = 5 \quad \textrm{in } \Omega\,,\]

where \(\sigma = 1\) in \(\Omega \setminus D\) and \(\sigma = 2\) in \(D\). Since \(\sigma\) attains different values across \(\partial D\), we need to prescribe the behavior of \(u\) across this interface. This is implicitly done by imposing \(u\in H^1_0(\Omega)\): the function \(u\) must be continuous across \(\partial D\). This allows us to employ Lagrangian finite elements to approximate \(u\). However, we also need to specify the jump of \(\sigma \nabla u \cdot \vec{n}\) on \(\partial D\). This term arises naturally in the weak formulation of the problem under consideration. In this demo we simply set

\[[\![\sigma \nabla u \cdot \vec{n}]\!] = 3 \quad \textrm{on } \partial D\,.\]

The resulting weak formulation reads as follows: find \(u \in H^1_0(\Omega)\) such that

\[\int_{\Omega} \sigma \nabla u \cdot \nabla v + uv \,\mathrm{d}\mathbf{x} = \int_{\Omega} 5v \,\mathrm{d}\mathbf{x} + \int_{\partial D} 3v \,\mathrm{d}S \quad \textrm{for all } v \in H^1_0(\Omega)\,.\]
The following Firedrake code shows how to solve this variational problem using linear Lagrangian finite elements.
from firedrake import *

# load the mesh generated with Gmsh
mesh = Mesh('immersed_domain.msh')

# define the space of linear Lagrangian finite elements
V = FunctionSpace(mesh, "CG", 1)

# define the trial function u and the test function v
u = TrialFunction(V)
v = TestFunction(V)

# define the bilinear form of the problem under consideration
# to specify the domain of integration, the surface tag is specified in brackets after dx
# in this example, 3 is the tag of the rectangle without the disc, and 4 is the disc tag
a = 2*dot(grad(v), grad(u))*dx(4) + dot(grad(v), grad(u))*dx(3) + v*u*dx

# define the linear form of the problem under consideration
# to specify the boundary of the boundary integral, the boundary tag is specified after dS
# note the use of dS due to 13 not being an external boundary
# Since the dS integral is an interior one, we must restrict the
# test function: since the space is continuous, we arbitrarily pick
# the '+' side.
L = Constant(5.) * v * dx + Constant(3.)*v('+')*dS(13)

# set homogeneous Dirichlet boundary conditions on the rectangle boundaries
# the tag 11 refers to the horizontal edges, the tag 12 refers to the vertical edges
DirBC = DirichletBC(V, 0, [11, 12])

# define u to contain the solution to the problem under consideration
u = Function(V)

# solve the variational problem
solve(a == L, u, bcs=DirBC, solver_parameters={'ksp_type': 'cg'})
A python script version of this demo can be found here. | https://www.firedrakeproject.org/demos/immersed_fem.py.html | CC-MAIN-2022-40 | refinedweb | 864 | 60.04 |
Although Play is known as a microframework, really it's a total web development stack, including a build tool that closely integrates with application provisions like view and persistence support. You installed Play in Part 1, and got a quick tour of its programming basics. Now we'll extend that basic application to explore some exciting capabilities of Play.
As with the other deep dives, we'll begin by adding persistence to the basic app from Part 1. If you don't already have it, use the link below to download the source code for the Play demo app.
Connecting Play to a database
Listing 1 has the schema for the Play demo app. Note that we'll be using a MariaDB instance running on localhost.
Listing 1. DDL for the Play example app
create table groups (name varchar (200), id int not null auto_increment primary key);
Our first step is to tell Play how to connect to your database using MySQL. Open up
application.conf, and insert the connection properties as I have done in Listing 2. You'll notice I left some comments in there. You'll also see the comments in your configuration file, because Play includes them as an example of connecting to the H2 database. I've used root and password for this example, but you shouldn't do that in a real app. You can use a different RDBMS if you prefer; just check the documentation for the specific connection parameters you'll need.
Listing 2. Database settings for application.conf
#db.default.driver=org.h2.Driver
#db.default.url="jdbc:h2:mem:play"
#db.default.username=sa
#db.default.password=""

db.default.driver=com.mysql.jdbc.Driver
db.default.url="mysql://root:password@localhost/play_app"
So far so good -- Play knows how to communicate with the MariaDB instance. Now let's consider our options for getting the data in and out from the app. Ninja had JPA/Hibernate bundled in, so we used that. For Spark we mixed a custom cocktail of DBUtils and Boon JSON. For Play we'll use EBean, a lightweight ORM that is similar in spirit to Hibernate but with lighter configuration. (Note that Play is capable of running a full JPA provider if you need that.)
It's your choice whether to start with configuration or code. Personally, I decide using some combination of celestial alignment and barometric pressure. For this project we'll start with the code, by creating a model class. In Part 1 we created a
Group class in order to model a musical group. Now we'll extend the
Group class into an Entity bean. Listing 3 shows the modified class.
Listing 3. An EBean-enabled model class
package models;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

import com.avaje.ebean.Model;

@Entity
@Table(name="groups")
public class Group extends Model {

    @Id
    @GeneratedValue
    String id;

    String name;

    public Group(String name) {
        super();
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
The
Group class in Listing 3 is in the
models package, which is in the
/app/models path. Note that this path lives at the same level as the
/controllers and
/views directories. Figure 1 shows the file's placement in Eclipse.
Figure 1. The Group class in models
Using standard JPA annotations,
@Entity denotes that the
Group class is persistent, while
@Table(name="groups") alters the name of the table when mapping the class to SQL. We need to do this because group is a reserved name in SQL -- MariaDB won't like it if we try to
select * from group.
We also annotate the
id field using
@Id and
@GeneratedValue, because we made that an auto-incremented field in our DB table. In keeping with the JPA convention, the field
name will be persisted without any additional annotations.
Configure the ORM
Next, we want to enable the EBean plugin that came with Play, and we do that in the
project/plugins.sbt file. Listing 4 shows the line that you want to ensure is uncommented in
plugins.sbt.
Listing 4. Adding the EBean plugin dependency
addSbtPlugin("com.typesafe.sbt" % "sbt-play-ebean" % "1.0.0")
We've enabled EBean as a plugin but we still have to tell Play to use it in our project. We do that by adding it to the build definition file,
build.sbt. You can see how this is done in Listing 5. Notice the project name is
microPlay (comment 1) and that the name is referenced in the line commented as 2.
Listing 5. Modifications to build.sbt, including enabling PlayEbean
addSbtPlugin("com.typesafe.sbt" % "sbt-play-ebean" % "1.0.0")

name := "microPlay" // 1

version := "1.0-SNAPSHOT"

lazy val microPlay = (project in file(".")).enablePlugins(PlayJava, PlayEbean) // 2

scalaVersion := "2.11.6"

libraryDependencies ++= Seq(
  javaJdbc,
  cache,
  javaWs,
  "mysql" % "mysql-connector-java" % "5.1.18"
) // 3

// Play provides two styles of routers, one expects its actions to be injected, the
// other, legacy style, accesses its actions statically.
routesGenerator := InjectedRoutesGenerator

//fork in run := true //4
In line 2 we just add
PlayEbean to our
enablePlugins call -- simple enough. Also in
build.sbt (comment 3), we add the MySQL driver as a dependency ("mysql" % "mysql-connector-java" % "5.1.18"). This is done by adding the dependency to the libraryDependencies Sequence, which is just a Scala collection. The format uses the same semantics but different syntax than a Maven dependency; notice how the group ID, artifact ID, and version are separated by percentage signs.
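For comparison, here is roughly how the same coordinates would be declared in a Maven POM (the XML form below is inferred from the sbt coordinates, not taken from the article):

```xml
<!-- Maven form of the sbt dependency "mysql" % "mysql-connector-java" % "5.1.18" -->
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.18</version>
</dependency>
```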
Finally, in Listing 5, notice the line commented as 4. This line isn't normally commented when Play builds your app, but I observed very poor compile and load times when it was enabled, along with numerous timeouts, so your mileage may vary. The issue is addressed in this StackOverflow question.
One more thing ...
We're done with
build.sbt, but we're not quite done with configuration. We need to tell EBean where our persistent classes are. To do this, you'll open
application.conf and add the line from Listing 6 to the end of the file. Here we're telling the Play EBean plugin that our mapped entities are in the
/models directory.
Listing 6. Setting the EBean model location
ebean.default = ["models.*"]
Configure the app URL and interface
We now have a persistent model class and the infrastructure to use it, but we have no way to get at it. We can address that by mapping a URL in the
routes file, as seen in Listing 7.
Listing 7. Adding a group create URL
POST /group controllers.Application.createGroup()
Next, we'll create a form on the app's index page,
index.html.scala, which you might recall from Part 1. Add the code in Listing 8 to the index page.
Listing 8. HTML form to create a group
<form action="@routes.Application.createGroup()" method="post">
    <input type="text" name="name"></input>
    <button>Create Group</button>
</form>
By setting
@routes.Application.createGroup() as the action for the above form, we tell Play to route submitted requests to that endpoint. This is a reverse route -- essentially, we're telling Play to find whatever URL will get us the
Application.createGroup() method, and supply that. If you inspect the source of the page you'll see that the action is set to
/groups. You can also see this in Figure 2.
Figure 2. Routing a request
Before we can handle the request, we need a
createGroup() method on our controller. So, open up
app/controllers/Application.java and add the method in Listing 9.
Listing 9. Application.java: createGroup() handler
public Result createGroup() {
    Group group = Form.form(Group.class).bindFromRequest().get();
    group.save();
    return redirect(routes.Application.index());
}
Listing 9 shows the tools we can use in Play to extract the request into an object. The method is a three-liner: from getting the entity, to saving it, to returning a redirect. The
Form.form() utility function takes a class (in this case, the
Group class), allowing us to bind the request into a new instance. We can then simply call
group.save() on the instance, because we've done the work of mapping it via EBean. Note that we're dealing less with the ORM solution in this case than we would with a full JPA lineup; we don't ever directly handle or need to inject an EntityManager. Finally, the
redirect allows us to again use a reverse route to forward on to the
routes.Application.index() handler.
Make it RESTful
So far we've been developing a pretty traditional web application. The difference between that and a single-page AJAX-style REST-submitted app is just a few lines of JavaScript. Play projects ship with a /public/javascripts folder containing a hello.js starter file. In Figure 3 you can see for yourself where this folder and file are in the Eclipse navigator.
Figure 3. JavaScripts folder in /public

Next, we'll pull jQuery into the project as a WebJar dependency. Figure 4 has a screenshot of the process.
Figure 4. Loading a WebJar
Now jQuery is available to our front-end. We also want to include a small jQuery plugin, serializeObject, which will help us with the form submit. Copy the source of that GitHub project into a
/public/javascripts/serializeObject.js file, placing it right next to
hello.js in your
javascripts folder. Also include it in the
main.html.scala file (along with
hello.js), as shown in Listing 13.
Listing 13. Referencing serializeObject.js
<script src="@routes.Assets.versioned("javascripts/serializeObject.js")" type="text/javascript"></script>
Now we'll add our own JavaScript, using
hello.js as a base. Start by adding the JavaScript in Listing 14 to
hello.js.
Listing 14. hello.js: Adding JavaScript support for addGroup
App = {
    startup: function(){
        $("#addGroupButton").click(App.addGroup);
    },
    addGroup: function(){
        var group = $('#groupForm').serializeObject();
        $.ajax({
            url: '/group',
            type: 'POST',
            contentType: 'application/json',
            data: JSON.stringify(group),
            success: function(){
                //App.loadPeople();
            },
            error: function(xhr,err){
                alert("encountered a problem: ", err);
            }
        });
        return false;
    }
}

$( document ).ready(function() {
    App.startup();
});
Classifying spam and ham messages is one of the most common natural language processing tasks for emails and chat engines. With the advancements in machine learning and natural language processing techniques, it is now possible to separate spam messages from ham messages with a high degree of accuracy.
In this article, you will see how to use machine learning algorithms in Python for ham and spam message classification. In the process, you will also see how to import CSV files and how to apply text cleaning to text datasets.
Importing Libraries
The first step is to import libraries that you will need to execute various codes in this article. Execute the following script in a Python editor of your choice.
import numpy as np
import pandas as pd
import nltk
import re
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
Importing the Dataset
We will treat ham and spam message classification as a supervised machine learning problem. In a supervised machine learning problem, the inputs and the corresponding outputs are available during the algorithm training phase. During the training phase, the machine learning algorithm statistically learns to find the relationship between input texts and output labels. While testing, inputs are fed to the trained machine learning algorithm which then predicts the expected outputs without knowing the actual outputs.
For supervised ham and spam message classification, we need a dataset that contains both ham and spam messages along with the labels that specify whether a message is a ham or spam. One such dataset exists at this link:
To import the above dataset into your application, you can use the read_csv() method of the Pandas library. The following script imports the dataset and displays its first five rows on the console:
dataset_url = ""
dataset = pd.read_csv(dataset_url, sep='\t')
dataset.head()
Output:
Data Visualization
Before you apply machine learning algorithms to a dataset, it is always a good practice to visualize data to identify important data trends. Let’s first plot the distribution of ham and spam messages in our dataset using a pie plot.
plt.rcParams["figure.figsize"] = [8,10]
dataset.Type.value_counts().plot(kind='pie', autopct='%1.0f%%')
Output:
The result shows that 12% of all the messages are spam while 88% of the messages are ham.
Let’s plot the histogram of messages with respect to the number of words for both ham and spam messages.
The following script creates a list that contains the number of words in ham messages and their counts of occurrence in the dataset:
dataset_ham = dataset[dataset['Type'] == "ham"]
dataset_ham_count = dataset_ham['Message'].str.split().str.len()
dataset_ham_count.index = dataset_ham_count.index.astype(str) + ' words:'
dataset_ham_count.sort_index(inplace=True)
Similarly, the following script creates a list that contains the number of words in spam messages, and their counts of occurrence in the dataset:
dataset_spam = dataset[dataset['Type'] == "spam"]
dataset_spam_count = dataset_spam['Message'].str.split().str.len()
dataset_spam_count.index = dataset_spam_count.index.astype(str) + ' words:'
dataset_spam_count.sort_index(inplace=True)
Finally, the following script plots the histogram using the spam and ham message list that you just created:
bins = np.linspace(0, 50, 10)

plt.hist([dataset_ham_count, dataset_spam_count], bins, label=['ham', 'spam'])
plt.legend(loc='upper right')
plt.show()
Output:
The output shows that most of the ham messages contain 0 to 10 words, while the majority of spam messages are longer and contain between 20 and 30 words.
Data Preprocessing
Text data may contain special characters and digits. Most of the time these characters do not really play any role in classification. Depending upon the domain knowledge, sometimes it is good to clean your text by removing special characters and digits. The following script creates a method that accepts a text string and removes everything from the text except the alphabets. The single and double spaces that are created as a result of removing numbers and special characters are also removed subsequently. Execute the following script:
def text_preprocess(sen):
    sen = re.sub('[^a-zA-Z]', ' ', sen)
    sen = re.sub(r"\s+[a-zA-Z]\s+", ' ', sen)
    sen = re.sub(r'\s+', ' ', sen)
    return sen
Next, we will divide the data into features and labels i.e. messages and their types:
X = dataset["Message"]
y = dataset["Type"]
Finally, to clean all the messages, execute a foreach loop that passes each message one by one to the text_preprocess() method which cleans the text. The following script does that:
X_messages = []
messages = list(X)

for mes in messages:
    X_messages.append(text_preprocess(mes))
Converting Text to Numbers
Machine learning algorithms are statistical algorithms that work with numbers. Messages are in the form of text, so you need to convert them to numeric form. There are various ways to convert text to numbers. However, for the sake of this article, you will use the TFIDF Vectorizer. The explanation of TFIDF is beyond the scope of this article. For now, just consider that this is an approach that converts text to numbers. You do not need to define your own TFIDF vectorizer. Rather, you can use the TfidfVectorizer class from the sklearn.feature_extraction.text module. To convert text to numbers, you have to pass the text messages to the fit_transform() method of the TfidfVectorizer class as shown in the following script:
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import TfidfVectorizer

tfidf_vec = TfidfVectorizer(max_features=2500, min_df=7, max_df=0.8, stop_words=stopwords.words('english'))
X = tfidf_vec.fit_transform(X_messages).toarray()
In the above script, we specify that the 2500 most frequently occurring words should be included in the feature set, where a word should occur in a minimum of 7 messages and a maximum of 80% of the messages. Words that occur very few times, or in a very large number of documents, are not very good for classification, hence they are removed. Also, English stop words such as a, to, I, am, is should be removed as they do not help much in classification.
Dividing Data into Training and Test Sets
As I explained earlier, machine learning algorithms learn from the training set, and to evaluate how well the trained machine learning algorithms perform, predictions are made on the test set. Therefore we need to divide our data into training and test sets. A common way to do so is scikit-learn's train_test_split() function.
We have converted text to numbers. Now we can use any machine learning classification algorithm to train our machine learning model. We will use the Random Forest classifier because it usually gives the best performance. To use the Random Forest classifier in your application, you can use the RandomForestClassifier class from the sklearn.ensemble module as shown below:
from sklearn.ensemble import RandomForestClassifier

rf_clf = RandomForestClassifier(n_estimators=250, random_state=0)
rf_clf.fit(X_train, y_train)

y_pred = rf_clf.predict(X_test)
To train the RandomForestClassifier class on the training set, you need to pass the training features (X_train) and training labels (y_train) to the fit() method of the RandomForestClassifier class. To make predictions on the test feature, pass the test features (X_test) to the predict() method of the RandomForestClassifier class.
Evaluating the Algorithms
Once predictions are made, you are ready to evaluate the algorithm. Algorithm evaluation involves comparing actual outputs in the test set with the outputs predicted by the algorithm. To evaluate the performance of a classification algorithm you can use accuracy, F1, recall, and the confusion matrix as performance metrics. Again, scikit-learn ships ready-made helpers for all of these:
[[141   2]
 [  8  13]]

              precision    recall  f1-score   support

         ham       0.95      0.99      0.97       143
        spam       0.87      0.62      0.72        21

    accuracy                           0.94       164
   macro avg       0.91      0.80      0.84       164
weighted avg       0.94      0.94      0.93       164

0.9390243902439024
The output shows that our algorithm achieves an accuracy of 93.90% for spam message detection which is impressive. | https://pdf.co/blog/ham-and-spam-message-classification-using-machine-learning-in-python | CC-MAIN-2020-50 | refinedweb | 1,277 | 56.55 |
Hi
Using Microsoft.Ink... I have to draw an image in the form and save that image in .jpeg format..... So I have to install the Microsoft Tablet PC SDK.....

So finally I installed it... and I wrote a program with 2 buttons, namely Save and Exit......

So now when I run this program.. I am able to draw an image of my own in the form.. now when I click the Save button... I should be able to save the image in .jpeg format.. BUT I couldn't save.. when I click Save.. and give a name for the image .... an error is shown...
ERROR MESSAGE:
System.NullReferenceException: Object reference not set to an instance of an object.
at InkImage.Form1.button2_Click(Object sender, EventArgs e) in d:\inkimage\form1.cs:line 144..
ie, error in the last line of this program [ DrawArea.Save( sfd.FileName, format );]
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Collections;
using System.ComponentModel;
using System.Windows.Forms;
using System.Data;
using Microsoft.Ink;
using System.IO;
namespace InkImage
{
public class Form1 : System.Windows.Forms.Form
{
// Declare the Ink Collector object
private InkCollector myInkCollector;
private const float ThinInkWidth = 50;
private System.Windows.Forms.Button button1;
private System.Windows.Forms.Button button2;
private Bitmap DrawArea; // make a persistent drawing area

//EXIT BUTTON
private void button1_Click(object sender, System.EventArgs e)
{
myInkCollector.Enabled = false;
this.Dispose();
}
//SAVE BUTTON
private void button2_Click(object sender, System.EventArgs e)
{
ImageFormat format = ImageFormat.Jpeg;
SaveFileDialog sfd = new SaveFileDialog();
sfd.Filter = "JPEG Files(*.jpg)|*.jpg";
if (sfd.ShowDialog() == DialogResult.OK)
{
// now save the image in the DrawArea
DrawArea.Save( sfd.FileName, format );
}
}
}
}
please help me to get the solution......
Dhol
I'll make a wild guess at it. I'm at work and don't have access right now to the C# IDE, plus I'm a newbie, but maybe I can get you on the right track.
It was my impression that the standard way of drawing on a form is to make use of the GDI graphics object which paints every form. This graphics object is part of the event object "e" passed by dot net into the form1_paint event, something like this (if I can recall)
Form1_Paint(Object sender, PaintEventArgs e)
{
Here you can make use of the graphics object to draw on the form. For example
Font f = new Font("Times New Roman", 24);
e.Graphics.DrawString("This is my form", f, Brushes.Black, 0, 0); // 0,0 is the x,y location to start drawing
}
The graphics object is a drawing surface. Your commands such as DrawString draw on the surface, and then dotnet transfers the image from the surface onto the form at the end of the paint event. However, you don't have to put the commands in the block above. You can send the Graphics object (the drawing surface) to a separate sub (call it DrawingSub) to do the drawing in that sub using code something like this
Dim DrawArea as Graphics = e.Graphics
DrawingSub(DrawArea) 'passes drawing surface to a drawing sub
and then
Private Sub DrawingSub(ByVal DrawingArea As Graphics)
    DrawingArea.DrawString("This is my form", New Font("Times New Roman", 24), Brushes.Black, 0, 0)
End Sub
The advantage of doing it this way is that it saves code if you want to save this image. Elsewhere you can create a second drawing surface bound to a bitmap, and pass it into the DrawingSub above
Dim Bitmap1 As New Bitmap(850, 650, Imaging.PixelFormat.Format32bppArgb)
Dim DrawArea2 As Graphics = Graphics.FromImage(Bitmap1)
Call DrawingSub(DrawArea2) 'draws the same stuff on the bitmap
Bitmap1.Save("C:\myImage.tif", ImageFormat.Tiff)
But since dotNet probably does some default drawing on a Form, I can't guarantee (haven't tried it) that your saved image will have EVERYTHING that the form has. Well, hope this helped anyway.
Hi
I have done the program.. i.e., I am able to draw an image and save it as a .gif image now... Now I want one more thing to be done, i.e., ..... I use Crystal Reports XI... here in Crystal Reports we can load an image at runtime using C#.....

Now what I have to do is add a Crystal Report viewer to the form... of the previous program and add a button named Browse... so that now when the program is run.. as usual we have to draw an image and save it as a .gif image.. then now by clicking the Browse button... we should load that saved image and display it in the Crystal Report....

For this we have to design an XML schema and add a field... i.e.. for the gif file..... and we have to take that .xml schema as the database for the Crystal Report.... I have done this program... but I am able to show any image in the Crystal Report except the saved image.....

I will paste my coding part along with the program. Please have a look at it and solve my problem..
using System;
using Microsoft.Ink;
using System.IO;
// Declare the Ink Collector object
private InkCollector myInkCollector;
private const float ThinInkWidth = 50;
//exit button
private System.Windows.Forms.Button button1;
//save button
private System.Windows.Forms.Button button2;
private CrystalDecisions.Windows.Forms.CrystalReportViewer crystalReportViewer1;
//browse button
private System.Windows.Forms.Button button3;
// Prcocedure: AddImageRow
// reads an image file and adds this image to a dataset table
//
// [in] tbl DataTable
void AddImageRow(DataTable tbl, string name, string filename)
{
FileStream fs = new FileStream(filename, FileMode.Open);// create a file stream
BinaryReader br = new BinaryReader(fs); // create binary reader
DataRow row;
// create a new datarow
row = tbl.NewRow();
// set name field and image field
row[0] = name;
row[1] = br.ReadBytes((int)br.BaseStream.Length);
// add this row to the table
tbl.Rows.Add(row);
// clean up
br = null;
fs = null;
}

//Exit Button
private void button1_Click(object sender, System.EventArgs e)
{
// Turn the ink collector Off
myInkCollector.Enabled = false;
this.Dispose();
}
//Save Button
private void button2_Click(object sender, System.EventArgs e)
{
FileStream gifFile;
byte[] fortifiedGif = null;
// Create a directory to store the fortified GIF which also contains ISF
// and open the file for writing
Directory.CreateDirectory("c:\\Images");
gifFile = File.OpenWrite("c:\\Images\\display.gif");
// Generate the fortified GIF represenation of the ink
fortifiedGif = myInkCollector.Ink.Save(PersistenceFormat.Gif);
// Write and close the gif file
gifFile.Write(fortifiedGif,0,fortifiedGif.Length);
gifFile.Close();
}
//Browse Button
private void button3_Click(object sender, System.EventArgs e)
{
OpenFileDialog openFileDialog1 = new OpenFileDialog();
openFileDialog1.Filter = "Image Files(*.gif) |*.gif | All Files(*.*) | *.*";
openFileDialog1.ShowDialog(this);
// the variable mypic contains the full file name, including the full path
string mypic = openFileDialog1.FileName;
DataSet data = new DataSet();
// add a table 'Images' to the dataset
data.Tables.Add("Images");
// add two fields
data.Tables[0].Columns.Add("Country", System.Type.GetType("System.String"));
data.Tables[0].Columns.Add("img", System.Type.GetType("System.Byte[]"));
AddImageRow(data.Tables[0],mypic,mypic);
// create a report
showingimage cr = new showingimage();
cr.SetDataSource(data);
// pass a reportdocument to the viewer
crystalReportViewer1.ReportSource = cr;
}
this is my coding part..please solve it ..
I am doing my project on image compression... we are working on the JPEG compression method.... it involves everything right from subsampling to the Huffman coding.... my question is: after I do Huffman coding, how do I save an image in JPEG format..... as the image consists of three matrices Y, Cb, Cr..... how should I save an image making use of these three matrices...... please, anyone, help us
vishal | http://www.nullskull.com/q/48954/save-a-image-in--jpeg-format.aspx | CC-MAIN-2014-15 | refinedweb | 1,205 | 62.04 |
Package Details: rhythmbox-plugin-alternative-toolbar-git 0.r296.3fe2958-1
Dependencies (6)
- python-gobject (python-gobject-git)
- python-lxml
- rhythmbox (rhythmbox-git)
- gettext (gettext-git) (make)
- git (git-git) (make)
- intltool (make)
Latest Comments
zed123 commented on 2019-10-07 19:39
I can't build the package:
checking for module Keybinder (3.0) in gi.repository... Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.7/site-packages/gi/__init__.py", line 129, in require_version
    raise ValueError('Namespace %s not available' % namespace)
ValueError: Namespace Keybinder not available
not found
configure: error: You need the gobject-introspection binding Keybinder
libkeybinder3 needs to be installed before building the package
electricprism commented on 2019-01-21 18:39
@sleepforlife - Looks like a fix may be out
@mirandir - Assuming the fix works, maybe it's time to bump the package minor version to trigger an update for users of this plugin.
sleepforlife commented on 2019-01-08 21:00
I've just updated rhythmbox in my arch linux installation to 3.4.3-1 and the plugin doesn't work as expected,
mirandir commented on 2016-10-03 21:08
@BlkChockr: you should report the problem upstream.
c4tz commented on 2016-10-03 20:20
Hey,
this plugin makes my Rhythmbox (3.4.1) crash sometimes.
It says:
(rhythmbox:2607): Gtk-CRITICAL **: gtk_tree_view_update_button_position: assertion 'column_el != NULL' failed
It seems to choke on my Playlists or something...
mirandir commented on 2016-08-22 07:26
I have added a temporary patch for Rhythmbox 3.4.
sleepforlife commented on 2016-08-17 12:49
not working with rhythmbox 3.4
gavsiu commented on 2016-06-18 00:25
I tried it again on a different Arch install and it works. base-devel was installed both times.
My issue now is I tried enabling the modern look while using Bspwm. I'm guessing my WM is hiding the menu button somewhere and I have no way of reverting the change. Reinstalling the plugin does nothing. Where is the config?
I've tried uninstalling both the plugin and Rhythmbox, sudo find-ing everything named *rhythmbox* and deleting it, yet it comes back when I reinstall.
Finally found it:
gsettings reset-recursively org.gnome.rhythmbox.plugins
electricprism commented on 2016-05-21 09:38
This should be the default UI, it's a significant improvement over the standard UX
mirandir commented on 2016-04-29 21:06
@gavsiu: check if base-devel is installed. | https://aur.tuna.tsinghua.edu.cn/packages/rhythmbox-plugin-alternative-toolbar-git/ | CC-MAIN-2020-24 | refinedweb | 417 | 56.55 |
Most programs that have a user interface of some kind need to handle user input. In the programs that you have been writing, you have been using std::cin to ask the user to enter text input. Because text input is so free-form (the user can enter anything), it’s very easy for the user to enter input that is not expected.
As you write programs, you should always consider how users will (unintentionally or otherwise) misuse your programs. A well-written program will anticipate how users will misuse it, and either handle those cases gracefully or prevent them from happening in the first place (if possible). A program that handles error cases well is said to be robust.
In this lesson, we’ll take a look specifically at ways the user can enter invalid text input via std::cin, and show you some different ways to handle those cases.
std::cin, buffers, and extraction
In order to discuss how std::cin and operator>> can fail, it first helps to know a little bit about how they work.
When we use operator>> to get user input and put it into a variable, this is called an “extraction”. The >> operator is accordingly called the extraction operator when used in this context.
When the user enters input in response to an extraction operation, that data is placed in a buffer inside of std::cin. A buffer (also called a data buffer) is simply a piece of memory set aside for storing data temporarily while it’s moved from one place to another. In this case, the buffer is used to hold user input while it’s waiting to be extracted to variables.
When the extraction operator is used, the following procedure happens:
Extraction succeeds if at least one character is extracted from the input buffer. Any unextracted input is left in the input buffer for future extractions. For example:
If the user enters “5a”, 5 will be extracted, converted to an integer, and assigned to variable x. “a\n” will be left in the input stream for the next extraction.
Extraction fails if the input data does not match the type of the variable being extracted to. For example:
If the user were to enter ‘b’, extraction would fail because ‘b’ can not be extracted to an integer variable.
Validating input
The process of checking whether user input conforms to what the program is expecting is called input validation.
There are three basic ways to do input validation:
Some graphical user interfaces and advanced text interfaces will let you validate input as the user enters it (character by character). Generally speaking, the programmer provides a validation function that accepts the input the user has entered so far, and returns true if the input is valid, and false otherwise. This function is called every time the user presses a key. If the validation function returns true, the key the user just pressed is accepted. If the validation function returns false, the character the user just input is discarded (and not shown on the screen). Using this method, you can ensure that any input the user enters is guaranteed to be valid, because any invalid keystrokes are discovered and discarded immediately. Unfortunately, std::cin does not support this style of validation.
Since strings do not have any restrictions on what characters can be entered, extraction is guaranteed to succeed (though remember that std::cin stops extracting at the first non-leading whitespace character). Once a string is entered, the program can then parse the string to see if it is valid or not. However, parsing strings and converting string input to other types (e.g. numbers) can be challenging, so this is only done in rare cases.
Most often, we let std::cin and the extraction operator do the hard work. Under this method, we let the user enter whatever they want, have std::cin and operator>> try to extract it, and deal with the fallout if it fails. This is the easiest method, and the one we’ll talk more about below.
A sample program
Consider the following calculator program that has no error handling:
This simple program asks the user to enter two numbers and a mathematical operator.
Enter a double value: 5
Enter one of the following: +, -, *, or /: *
Enter a double value: 7
5 * 7 is 35
Now, consider where invalid user input might break this program.
First, we ask the user to enter some numbers. What if they enter something other than a number (e.g. ‘q’)? In this case, extraction will fail.
Second, we ask the user to enter one of four possible symbols. What if they enter a character other than one of the symbols we’re expecting? We’ll be able to extract the input, but we don’t currently handle what happens afterward.
Third, what if we ask the user to enter a symbol and they enter a string like “*q hello”. Although we can extract the ‘*’ character we need, there’s additional input left in the buffer that could cause problems down the road.
Types of invalid text input
We can generally separate input text errors into four types:
Thus, to make our programs robust, whenever we ask the user for input, we ideally should determine whether each of the above can possibly occur, and if so, write code to handle those cases.
Let’s dig into each of these cases, and how to handle them using std::cin.
Error case 1: Extraction succeeds but input is meaningless
This is the simplest case. Consider the following execution of the above program:
Enter a double value: 5
Enter one of the following: +, -, *, or /: k
Enter a double value: 7
In this case, we asked the user to enter one of four symbols, but they entered ‘k’ instead. ‘k’ is a valid character, so std::cin happily extracts it to variable op, and this gets returned to main. But our program wasn’t expecting this to happen, so it doesn’t properly deal with this case (and thus never outputs anything).
The solution here is simple: do input validation. This usually consists of 3 steps:
1) Check whether the user’s input was what you were expecting.
2) If so, return the value to the caller.
3) If not, tell the user something went wrong and have them try again.
Here’s an updated getOperator() function that does input validation.
As you can see, we’re using a while loop to continuously loop until the user provides valid input. If they don’t, we ask them to try again until they either give us valid input, shutdown the program, or destroy their computer.
Error case 2: Extraction succeeds but with extraneous input
Consider the following execution of the above program:
Enter a double value: 5*7
What do you think happens next?
Enter a double value: 5*7
Enter one of the following: +, -, *, or /: Enter a double value: 5 * 7 is 35
The program prints the right answer, but the formatting is all messed up. Let’s take a closer look at why.
When the user enters “5*7” as input, that input goes into the buffer. Then operator>> extracts the 5 to variable x, leaving “*7\n” in the buffer. Next, the program prints “Enter one of the following: +, -, *, or /:”. However, when the extraction operator was called, it sees “*7\n” waiting in the buffer to be extracted, so it uses that instead of asking the user for more input. Consequently, it extracts the ‘*’ character, leaving “7\n” in the buffer.
After asking the user to enter another double value, the “7” in the buffer gets extracted without asking the user. Since the user never had an opportunity to enter additional data and hit enter (causing a newline), the output prompts all get run together on the same line, even though the output is correct.
Although the above problem works, the execution is messy. It would be better if any extraneous characters entered were simply ignored. Fortunately, that’s easy to do:
Since the last character the user entered must be a ‘\n’, we can tell std::cin to ignore buffered characters until it finds a newline character (which is removed as well).
Let’s update our getDouble() function to ignore any extraneous input:
Now our program will work as expected, even if we enter “5*7” for the first input -- the 5 will be extracted, and the rest of the characters will be removed from the input buffer. Since the input buffer is now empty, the user will be properly asked for input the next time an extraction operation is performed!
Error case 3: Extraction fails
Now consider the following execution of the calculator program:
Enter a double value: a
You shouldn’t be surprised that the program doesn’t perform as expected, but how it fails is interesting:
Enter a double value: a
Enter one of the following: +, -, *, or /: Enter a double value:
and the program suddenly ends.
This looks pretty similar to the extraneous input case, but it’s a little different. Let’s take a closer look.
When the user enters ‘a’, that character is placed in the buffer. Then operator>> tries to extract ‘a’ to variable x, which is of type double. Since ‘a’ can’t be converted to a double, operator>> can’t do the extraction. Two things happen at this point: ‘a’ is left in the buffer, and std::cin goes into “failure mode”.
Once in ‘failure mode’, future requests for input extraction will silently fail. Thus in our calculator program, the output prompts still print, but any requests for further extraction are ignored. The program simply runs to the end and then terminates (without printing a result, because we never read in a valid mathematical operation).
Fortunately, we can detect whether an extraction has failed and fix it:
That’s it!
Let’s integrate that into our getDouble() function: type.
Error case 4: Extraction succeeds but the user overflows a numeric value
Consider the following simple example:
What happens if the user enters a number that is too large (e.g. 40000)?
Enter a number between -32768 and 32767: 40000
Enter another number between -32768 and 32767: The sum.
Putting it all together
Here’s our example calculator with full error checking:
Conclusion
As you write your programs, consider how users will misuse your program, especially around text input. For each point of text input, consider:
You can use if statements and boolean logic to test whether input is expected and meaningful.
The following code will.
Ah! I see. I guess there isn't a neater way :(
Thanks :)
Hi,
Love the site. Anyway, I'm just trying to understand std::cin a bit better. Could you take a look at the following version of the getDouble() function?
#include <limits>
#include <iostream>
double getDouble() {
double a;
do {
std::cin.clear();
std::cout << "Enter a number: ";
std::cin >> a;
std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
} while (std::cin.fail());
return a;
}
This works if you enter an appropriate double value, but goes into an infinite loop if you don't. The question is why?
extraction fails
ignore fails, because the stream is in a bad state
fail() returns true, the loop loops
clear() clears the error, the stream is in a good state
cin tries to extract the same character that made it fail before
Awesome! I feel more confident now! I built a calculator on my own during the "operators" lessons, but was having trouble figuring out error checking on the double types inputs. Before this lesson, I did figure out how to check the operator on my own, this is what I did to check the operator before this lesson! Thanks a ton:
here is my calculator modified again from what I learn in this lesson. I am curious if the for loop I used is generally accepted as an alternative to a while loop?
I also am curious if the example given for the calculator with full error correction is missing a divide by zero error ? it seems to me that
4 / 0 input would error out?
I am wondering about std::cout << " \n \r " type commands for taking control of console output but am unsure what that type of command it is called, I thought I saw a list but cant seem to find it in the tutorial.
Hi Arthur!
* Line 23: Should be
It does the same, but is easier to understand.
* @main
* Line 127: Looks like a null statement, merge it with line 126.
*
awesome thanks for taking the time that is way more efficient. with what you did @ main here I am wondering if the same should be applied to initializing other variables , merging lines 9-10, lines 32-33, lines 70-73? as well as a lot of changes in the ball drop ch5 quiz. I see the URL is on the topic of formatting in code::blocks, I find that formatting questions tend to be shut down as 'opinion' so when I try to find suggestion it's either a closed thread or a google bible, which translates as there is no right way or there is only one way lol. I still need to learn code::blocks, will bookmark that thread for reference and tinker with the suggestion there and looking at your example try a smaller tab setting. when I can actually read more complicated programs I'll spend some time looking at opensource examples, that will give me a better understanding of what reads well and why.
I guess 9-10, 32-33, 70-73 cant be merged the variables cant be initialized with a call for input.
> I guess 9-10, 32-33, 70-73 cant be merged
Correct
I don't care how you format your code for the most part. As you said, it's personal preference.
What I'm asking for is consistent formatting. Every formatting convention requires consistent formatting. There's no point in having a convention when you don't follow it half of the time. The auto-formatter will take care of this for you. Set it up to your liking and add a keyboard shortcut.
I'm talking about things like line 94-116, 71-72, 171.
I see what you mean, especially for the two do while loops , with misplaced brackets and a while floating off into the distance :-D , I will look in to how to setup/use auto formatting.
Hello! Just a quick note, I think in Error Case 3 in the code section under where you say "Let’s integrate that into our getDouble() function:", it's missing another std::cin.ignore(32767,'\n'); statement in between lines 15 and 16. Looks like it's back in there in the final example.
Fixed. Thanks for pointing out the omission!
Hi your tutorials have been very helpful.Thanks for the good c++ tutorials.I would like if someone could look at my code for the calculator and say what they think and feedback is much appreciated.
calculator.cpp
calculator.hpp
Hi Faruk!
* Initialize your variables with uniform initialization
* Don't pass 32767 to @std::cin.ignore. Pass @std::numeric_limits<std::streamsize>::max()
* Misleading indention. Line 40ff is unreachable. Use curly brackets and the auto-formatting feature of your editor.
* Use ++prefix unless you need postfix++
Thanks for the advice.
Hi Alex,
i believe that there is a linguistic mistake in this chapter as following the explanation of the extraction operator ">>" you have put
"Extraction succeeds if at least one character can be extracted from the input buffer. Any unextracted input is left in the input buffer for future extractions. For example:"
now this is a bit misleading as the term at least would imply that if i had declared an integer variable then the user input "abc1" while the characters abc cannot be extracted and assigned to an integer variable the "1" can.
I have just quickly tested this with std::cin on my compiler and it doesnt seem to pick up the 1 but silently fail, so i presume that it is the linguistics here that are incorrect (or do not make it clear at least to myself a native english speaker with a degree in english language so im sure im not alone!)
loving the tutorials so far :)
I changed "can be" to "is", since in the case of "abc1" as input for an integer, no characters are extracted. Hopefully that's enough to clarify.
I'm on c++ 11 or above.
When running the program Alex has mentioned in Error Case 4,
I get the following output (which is dissimilar to Alex's):
OUTPUT1
Enter a number between -32768 and 32767: 32769
Enter another number between -32768 and 32767: The sum is: 32767
Program ended with exit code: 0
Please help to understand this behavior.
When I edit the program to:
[code]
int main()
{
std::int8_t x { 0 }; // x is 16 bits, holds from -32768 to 32767
std::cout << "Enter a number between -128 and 127: ";
std::cin >> x;
std::int16_t y { 0 }; // y is 16 bits, holds from -32768 to 32767
std::cout << "Enter another number between -32768 and 32767: ";
std::cin >> y;
std::cout << "The sum is: " << x + y << '\n';
return 0;
}
I get the following output:
OUTPUT2
Enter a number between -128 and 127: 130
Enter another number between -32768 and 32767: The sum is: 79
Program ended with exit code: 0
OUTPUT1 and OUTPUT2 seem to be having different kind of logic.
In OUTPUT1, variable x was assigned the maximum value that could be assigned to an int16_t variable when cin extracted an "overflow" value.
But the same is not seen in OUTPUT2.
Shouldn't this be consistent irrespective of the variable type?
1) It looks like the way std::cin and operator<< handle invalid inputs changed in C++11. Now instead of leaving the variable alone, it always initializes it with some value. I've updated the lesson text.
2) int8_t is typically a typedef for char, and chars tend to have different handling than int. Avoid int8_t.
For the above program, if the input given after "Enter a number" is anything between "5a" to "5f" the output is as shown below :
Enter a number
5f
Enter a character
0
a
Program ended with exit code: 0
But if the input given after "Enter a number" is anything more than "5f" like "5g" or "5h" the output is correct as shown below:
Enter a number
5g
Enter a character
5
g
Program ended with exit code: 0
I'm not sure why this is so. NOTE: the issue is seen for "5i" too.
I have this code :
I get this output:
Enter a number
d5
Enter an alphabet
0
a
Program ended with exit code: 0
Question:
Even if I have initialized variable "integer" to "1". When I enter "d5" as the input for the first cin, how does "0" get assigned to variable "integer"?
I now understand that a failed extraction leads to "zero-initialization" which is zero assignment in this case, due to which I get the output posted in my previous comment.
When using either while(1) or while(true), I get a warning that the condition is constant. Obviously because by design the loop is supposed to be infinite for this sort of thing.. but my compiler(VC2013) isn't having any of that. How to get around this?
Hi Rasikko!
is valid and commonly used. Warnings can be disabled in VS, see
Thanks for the link. I learned of a pragma function that allows me to disable specific warnings.
Hi Alex,
In section 'Error case 1: Extraction succeeds but input is meaningless':
"2) If so, return the value to the user."
Did you mean to say:
"2) If so, return the value to the caller."?
Yup. Fixed. Thanks!
Do these statements change the range of values the variable can store.
int16_t a;
int8_t b;
int32_t c;
It is good if they do. I have not tried these declarations before
Yes they do. The number after "int" is the number of bits the type can store.
@std::int8_t can store 2^8 values
@std::int16_t can store 2^16 values
@std::int32_t can store 2^32 values
Note however, that these types aren't guaranteed to be implemented by the compiler. See the "Types" list over here for alternatives
Thanks a lot
Selam Dear Mr Alex/Nas
Depending on the quiz at section 6.9a I was trying to write a full code that fulfills all the lows and dealing with invalid text input. As you see the below code i expected to have an out put that loops the question again and again if the input is invalid or fails. and print the how much names i am gonna register if not fails, however what i have seen is different from my expectation.To save your time i don't write the out put c/s you already know how it outputs. So Please tell where my mistake was started? It takes me a lot of time to fix and i cant so far.
God bless you both!!!!!!!!
Hi!
* Don't use "using namespace"
* Initialize your variables with uniform initialization
* Line 13 should be moved behind into an else block, because it won't work if extraction failed.
* Line 21: @std::cin.fail() cannot return true here, because you cleared all errors beforehand
Dear Mr. Nas!
Thank you so much!
I have learnt more from your answers. It works with do{} while(!length);
Stay blessed Sir.
This reads weird:
When we use operator>> to get user input and put it into a variable, this is called an “extraction”. operator>> is accordingly called the extraction operator when used in this context.
Consider:
When we use the >> operator to get user input and put it into a variable, this is called an “extraction”. The >> operator is accordingly called the extraction operator when used in this context.
Fixed. Thanks!
Hi Alex,
Just a minor fix:
The indent of the curly bracket in line 14 of the function under "Types of invalid text input - Error case 1: Extraction succeeds but input is meaningless" and line 46 of the code under "Putting it all together" should be one tab less. :)
Fixed! Thanks!
Dear Mr Alex/Nas
My code below is working to me very nicely since i entered '+' operator (NOTE:-I did that for demo purpose only)
But please i need you to revise my usage for while statement and waiting for your regretful comment if there is a batter or shortest way than mine. Please focus only on a while statement. (to save your time)
stay bless more and more!!!!!!!!!!!!
There's nothing wrong with your loop.
* Line 7, 26, 30: Initialize your variables with uniform initialization
* Line 13, 18, 28, 32: Use @std::numeric_limits<std::streamsize>::max(). See the documentation of @std::basic_istream::ignore
* Line 19: Double line feed
* Line 38: Don't use @system. If you want to pause the program for whatever reason, use @std::cin.get
Dear Mr.Nas!
I have no words for you both! I really blessed to have you!
I have taken all your great comments. I am enjoying programming b/s of you!!!!
Always my prayer great God to be with you!!!!
Dear Mr Alex/NAS!
Kindly and very thank fully ,This is my last question for this section.
In the above example ("putting it all together") in the 1st function double get Double() What was the reason for declaring std::cin.ignore(32767,'\n'); after "else" (line 21). B/s i think it increases the number of errors and "try again" statements by one.
For example in the current function definition above,if the user enters double number 3.0 but if there was extraneous input at the buffer (Eg. '\n') the input fails and the function asks the user to input again.
But if we declare std::cin.ignore(32767,'\n');as an input validation after the cin>>x; (line ten), it deletes extraneous input ('\n') and pass the user's double value to be extracted that enables the if(std::cin.fail) statement to be false and execute what we want at the 1st time the user inputs.
Would you explain more on what was preferable cin.ignore() usage from using it after else(line 21) or after Cin>> extraction(line ten).
Thank you so much in advance for your usual and constant kindness my dears!!!!!!
@std::basic_istream::ignore cannot clear the stream when it's in a bad state.
If you called it on a bad stream before calling @std::basic_istream::clear it wouldn't do anything. But you don't want to call @std::basic_istream::clear on a stream that is in a good state. Alex' code is correct.
DEar Mr Nas!
I becoming to love questioning you b/s i have gotten so beloved answers whenever i asked you.
Always i am thank full for your perpetual kindness, Sir!
Great tutorial. Thank you so much for your time and effort! Excellent information.
In this particular article, I'm not sure I agree with the statement, "However, parsing strings and converting ... is only done in rare cases." It's probably true, but shouldn't be (necessarily). Playing the "semantics police" here... there are some of us who believe that if you ask for numeric input and the user enters non-digit characters, that should be flagged as a format error and cause a reprompt. For example:
This is "probably" a typo... should the value be 1245, 12345 (# = <shift>3), or some other value? Rather than allow 12 to be successfully extracted and "#45" discarded, the entire input should be rejected.
Checking for this was a simple task in straight 'C'... the 2nd parameter of strtol(3) (i.e. **endp) allows the caller to examine the character that caused the numeric conversion to terminate. If it isn't an ASCII NUL ('\0'), then it is a format error (assuming you've already stripped off trailing whitespace). Additionally, the 3rd parameter of strtol(3) (i.e. the radix) allows you to restrict the notion of a valid digit. For example, if I ask for a decimal value, I don't want to read something like 0x5D6. I merely set the radix to 10 and go from there.
[Note: Of course, when I say that it's a "simple" task, I'm ignoring the fact that you have to use fgets(3) to read in a line of input, strip off trailing whitespace, etc. Which is why I don't disagree when you say parsing strings can be challenging. :-)]
So... is there a simple way to accomplish these sorts of format checks in C++ "on the fly", without using getline() first and then parsing the string directly? I don't see how (consider the input string "123 456" for example, which has intermediate whitespace)... but then, I wouldn't be here if I was a C++ guru, right? :-)) Any insights appreciated.
Hi Bullwinkle!
@std::strtol and it's variants are still present in cpp. But they aren't required when dealing with streams, because those have built-in functionality for parsing.
> "#45" discarded
It's not discarded, it remains in the stream, allowing you to check for what you described using @std::istream::peek.
References
std::basic_istream::peek -
I just wanted to ask, I usually see people use
for cin.ignore. Is that better, or is it the same as this?
Hi Arush!
What you mean is
Yes, that's not only better, but it's the only correct way. Passing this value to @std::cin.ignore causes it to skip all input until it hits the delimiter (Usually a line feed). Passing any other value will stop skipping characters after that value has been reached and there could potentially be more input in the stream, causing your program to malfunction.
Alex uses 32767, because it's easier, you should use
References
std::basic_istream::ignore -
When asking the user for input, the following code results in having to make an input at least twice.
Using the debugger it seems that the line
is causing the problem. After it is executed i have to make an input. Then i have to make an input again for the actual cin >> number.
Hi Donlod!
@std::cin.clear clears the error flag. There's no reason to call it when no error occurred.
@std::cin.ignore clears the input buffer. There's no reason to call it when there's no trailing data in the buffer.
Change these things and your problem is gone.
I can't find it in the documentation, but it seems as if calling functions that read from the input buffer when the input buffer is empty wait for input to be made.
Thanks for the reply.
So is it possible to do this the "clean way"?
I came up with two possible solutions:
But if i remember correctly, we should avoid using break, continue in loops, right? Also i think the code is more self-explanatory when actually having a clearer loop condition.
And this:
Here i do not like the fact to check twice for cin.fail().
> we should avoid using break, continue in loops, right?
I can't find a rule for this and I don't see anything wrong with using break/continue.
Your first solution is the better one, because, as you already said, you're checking twice in the second solution. There are three issues with it though
* You're using an int for the while-loop's condition
* You're using a magic number
* You have duplicate code
I don't think there's a nicer solution to this, but since this code is independent of the input extraction itself we can write a wrapper function for it so we don't need to repeat this all over the place.
Would seem better to rename @cleanStream to something like notCleanStream, considering you are returning true if extraction fails. If extraction fails you do not have a cleanStream. Either that or change the function to return false if the extraction failed so the while loop reads as
Thoughts?
It's the verb "clean", not the adjective.
Hi guys,
I am having an issue with extraction handling.
My issue is that it is not working.
this is a snip of what I am trying and it does not handle my failures. Anything I type, gets passed.
Hi Joe!
* Initialize your variables.
* Don't use goto.
Anything you can type can be converted to a char.
Letters? Sure
Numbers? Sure, a char is a number.
Special characters? Yep, those fit in a char
Strings? The first character can be extracted.
If you only want to allow a specific subset of characters you'll need to filter them manually. eg. for only allowing lower case:
Have a look at the ascii table to see the order of characters
Thank you again nascardriver. I really appreciate the help and input.
Tell me, what is the item with which you initialized char c with? \x00
\x interprets the next two characters as a hexadecimal ascii code. So '\x00' is just 0, you could use 0, I use '\x00' when I use a char to hold a character and 0 when it holds a number, do what you like.
Nevermind, I figured it out. Thank you again.
How about this?
I know its clunky, but I have more to learn then I can cut it to better size.
actually, this works
Keep in mind that 32767 doesn't have any special effect to @std::cin.ignore. In a real program std::numeric_limits<std::streamsize>::max() should be used, this causes @std::cin.ignore to ignore everything up to the next newline.
Groovy,
again thank you nascardriver.
"Error Case 3", code line 15:
it wasn't clear to me why x was "good" after this failure.
Hi Peter!
Hi nascardriver! (Are you really a nascardriver?)
I think the problem for me was the comment wording and the way it was laid out. Perhaps the following will be clearer to some people:.
A great question. It would be nice to include some of this information about documentation, as well as a little summary table of these functions/methods within the lesson. | https://www.learncpp.com/cpp-tutorial/5-10-stdcin-extraction-and-dealing-with-invalid-text-input/comment-page-2/ | CC-MAIN-2019-13 | refinedweb | 5,415 | 63.19 |
In this tutorial with Basemap, you are shown how to actually plot a plot, as well as choose a zoom level on your projection. As you can see, plotting lat and long coordinates is fairly simple if you envision them as X and Y on a plane. The only confusing part is that X, Y translates to Lon, Lat... which is the reverse of how they are normally reported.
From the video, here is the sample code:
from mpl_toolkits.basemap import Basemap import matplotlib.pyplot as plt def mapTut(): m = Basemap(projection='mill',llcrnrlat=20,urcrnrlat=50,\ llcrnrlon=-130,urcrnrlon=-60,resolution='c') m.drawcoastlines() m.drawcountries() m.drawstates() m.fillcontinents(color='#04BAE3',lake_color='#FFFFFF') m.drawmapboundary(fill_color='#FFFFFF') # Houston, Texas lat,lon = 29.7630556,-95.3630556 x,y = m(lon,lat) m.plot(x,y, 'ro') lon, lat = -104.237, 40.125 # Location of Boulder xpt,ypt = m(lon,lat) m.plot(xpt,ypt, 'go') plt.title("Geo Plotting") plt.show() mapTut()The above code will generate a map in Matplotlib and Basemap with coordinates plotted for Houston TX and Boulder CO. | https://pythonprogramming.net/plotting-maps-python-basemap/ | CC-MAIN-2019-26 | refinedweb | 183 | 53.07 |
Hi, On Mon, Mar 7, 2011 at 1:31 PM, Stefano Sabatini <stefano.sabatini-lala at poste.it> wrote: > On date Monday 2011-03-07 13:17:19 -0500, Ronald S. Bultje encoded: > [...] >> >> +#if FF_API_OLD_AVIO >> >> ?/** >> >> - * Return the maximum packet size associated to packetized buffered file >> >> - * handle. If the file is not packetized (stream like http or file on >> >> - * disk), then 0 is returned. >> >> - * >> >> - * @param s buffered file handle >> >> - * @return maximum packet size in bytes >> >> + * @deprecated use AVIOContext.max_packet_size directly. >> >> ? */ >> >> -int url_fget_max_packet_size(AVIOContext *s); >> >> +attribute_deprecated int url_fget_max_packet_size(AVIOContext *s); >> >> +#endif >> > >> > Removing docs for deprecated functions is not a good idea (especially >> > for people updating their code to the new API). >> >> I'd say it's OK, as long as the newly recommended API is documented in >> the same or a better way. Is the max_packet_size variable in >> AVIOContext documented adequately? > > It's not documented at all. Even in case it was, deprecated function > docs should keep a pointer to the replacement (when it isn't obvious). It does point to its replacement. I fully agree the replacement should be documented. A patch for that would be nice. Ronald | http://ffmpeg.org/pipermail/ffmpeg-devel/2011-March/108678.html | CC-MAIN-2016-26 | refinedweb | 187 | 50.53 |
Reading XML Files
In this article I will explain how to read XML files with a simple example.
Introduction
The main reason for writing this simple article is that many people ask how to read and write XML in the DotnetSpider Questions section, and I thought an article on the topic would be helpful for beginners. You can also find a companion article on this site about how to write an XML document.
System.Xml namespace contains the XmlReader and XmlTextReader.
The XmlTextReader class is derived from XmlReader class. The XmlTextReader class can be used to read the XML documents. The read function of this document reads the document until end of its nodes.
Using the XmlTextReader class you get a forward only stream of XML data j It is then possible to handle each element as you read it without holding the entire DOM in memory.
XmlTextReader provides direct parsing and tokenizing of XML and implements the XML 1.0 specifications
This article explains how to read an Xml file.
Adding NameSpace as Reference
The first step in the process of reading Xml file is to add System.Xml namespace as reference to our project ,since System.Xml namespace contains the XmlReader and XmlTextReader.
using System.Xml;
let us assume there is an Xml file in c:\Dir\XmlExample.Xml
Open an Xml Document
Create an instance of an XmlTextReader object, and populate it with the XML file. Typically, the XmlTextReader class is used if you need to access the XML as raw data without the overhead of a DOM; thus, the XmlTextReader class provides a faster mechanism for reading XML.
XmlTextReader MyReader = new XmlTextReader("c:\\dir\\XmlExample.Xml");
the above code opens Xml file
Reading Data
The Read method of the XmlTextReader class read the data. See the code
while (MyReader.Read())
{
Response.Write(MyReader.Name);
}
that's all now we are ready to read our Xml file from the above specified directory
Read the XML File into DataSet
You can use the ReadXml method to read XML schema and data into a DataSet. XML data can be read directly from a file, a Stream object, an XmlWriter object, or a TextWriter object.
The code simply looks like the following
string MyXmlFile = @"c:\\Dir\\XmlExample2.xml";
DataSet ds = new DataSet();
System.IO.FileStream MyReadXml = new System.IO.FileStream(MyXmlFile,System.IO.FileMode.Open);
ds.ReadXml(MyReadXml);
DataGrid1.DataSource = ds;
DataGrid1.DataBind();
Thus we can read Xml Data into Dataset .
Summary
This article explained
1. How to read Xml File with example
2. How to read Xml file into DataSet with example.
Hi! Its Nice yar...
Prabu.T | https://www.dotnetspider.com/resources/1646-Reading-XML-Files.aspx | CC-MAIN-2021-31 | refinedweb | 434 | 66.84 |
We need to knock out many more libc functions before we can start with our C++ runtime bringup. Today we'll tackle the
mem* funtions:
memcmp
memcpy
memmove
memset
These functions are vital for our system and are used throughout the standard libaries (e.g.
calloc,
realloc). These functions are also used during the startup process, when we move any relocatable sections and initialize variables in the
bss section to 0.
The implementations I have chosen can be optimized further depending on your platform and time commitment. I have chosen portable implementations that will generally work well.
Many of these functions already have multiple implementations out in the wild. The strategy I have highlighted here, and one I recommend, is to find a good implementation and use it on your projects. There truly is no need to invent the wheel.
memcmp
This
memcmp implementation is my own (probably hacked together from various implementations I've seen before). This version short circuits if the memory is the same (it totally happens), sparing you the need to compare the buffers.
If the buffers are different, we simply iterate through the memory until we check each byte or we find a difference.
int memcmp(const void *p1, const void *p2, size_t n) { size_t i; /** * p1 and p2 are the same memory? easy peasy! bail out */ if (p1 == p2) { return 0; } // This for loop does the comparing and pointer moving... for (i = 0; (i < n) && (*(uint8_t *)p1 == *(uint8_t *)p2); i++, p1 = 1 + (uint8_t *)p1, p2 = 1 + (uint8_t *)p2); //if i == length, then we have passed the test return (i == n) ? 0 : (*(uint8_t *)p1 - *(uint8_t *)p2); }
You could always go with a simple implementation, like musl libc. Note that this version does not short-circuit.
int memcmp(const void *vl, const void *vr, size_t n) { const unsigned char *l=vl, *r=vr; for (; n && *l == *r; n--, l++, r++); return n ? *l-*r : 0; }
memcpy
memcpy is certainly a function with many optimized versions. I have chosen this open source
memcpy implementation and directly imported it into my repository. I like this version because it can be made portable very easily: change the
unsigned long types to
uintptr_t.
Another important aspect of this version is highlighted in a code comment:
/* * Copy a block of memory, handling overlap. * This is the routine that actually implements * (the portable versions of) bcopy, memcpy, and memmove. */
Because this version of
memcpy handles overlap, we can actually use this implementation for
memmove as well.
memcpy is an example of a function which can be optimized particularly well for specific platforms. If performance is a problem, some time searching for a platform-specific implementation that may better suit your needs.
memmove
As I mentioned above, our
memcpy implementation handles overlapping memory regions. This means easiest way to implement
memmove is to simply call
memcpy. I have seen this implementation in many places, most notably with BSD and Apple's open source libraries.
void * memmove(void *s1, const void *s2, size_t n) { return memcpy(s1, s2, n); }
If your chosen
memcpy implementation does NOT handle overlapping regions, you will need to actually implement
memmove. Here is an example standalone
memmove function from musl libc.
memset
For
memset, I have chosen to use a modified musl
memset implementation. I stripped out the
#ifdef __GNUC__ portion of the musl
memset and kept the "pure C fallback" portion of the function.
I like this version because of the head/tail filling. Once the unaligned portions are filled, we can use more efficient aligned access functions.
void * __attribute__((weak)) memset(void * dest, int c, size_t n) { unsigned char *s = dest; size_t k; /* Fill head and tail with minimal branching. Each * conditional ensures that all the subsequently used * offsets are well-defined and in the dest region. */ if (!n) return dest; s[0] = s[n-1] = c; if (n <= 2) return dest; s[1] = s[n-2] = c; s[2] = s[n-3] = c; if (n <= 6) return dest; s[3] = s[n-4] = c; if (n <= 8) return dest; /* Advance pointer to align it at a 4-byte boundary, * and truncate n to a multiple of 4. The previous code * already took care of any head/tail that get cut off * by the alignment. */ k = -(uintptr_t)s & 3; s += k; n -= k; n &= -4; n /= 4; uint32_t *ws = (uint32_t *)s; uint32_t wc = c & 0xFF; wc |= ((wc << 8) | (wc << 16) | (wc << 24)); /* Pure C fallback with no aliasing violations. */ for (; n; n--, ws++) *ws = wc; return dest; }
Putting it All Together
I have started a
libc example implementation in the embedded-resources repository. You can run
make or
make libc at the top level, or simply run
make in
examples/libc.
You can find my example
memcmp,
memcpy,
memmove, and
memset implementations in the
string directory.
libc currently compiles as a library (libc.a).
Weak Symbols
If you check the source on github, you will notice that these functions are marked with
__attribute__((weak)):
void * __attribute__((weak)) memmove(void *s1, const void *s2, size_t n) { return memcpy(s1, s2, n); }
A weak symbol can be overridden by another function definition.
Weak symbols allow you to keep a generic implementation that is portable across platforms. If you have an optimized version valid only for specific platforms, you can override the default implementation and get the optimization benefits for that platform. | https://embeddedartistry.com/blog/2017/3/7/implementing-memset-memcpy-and-memmove | CC-MAIN-2017-43 | refinedweb | 892 | 61.16 |
ReactJS is no doubt one of the trendiest JavaScript libraries released recently and as such is seeing wide adoption.
React support was introduced in WebStorm 10 and has undergone continuous improvement since then. This post has been updated to cover some of the features introduced in WebStorm 2016.2 and later updates. In this blog post we'd like to show how WebStorm can help you write code with React.
More on using React in WebStorm:
- Working with ReactJS in WebStorm: Linting, refactoring and compiling
- Debugging React apps created with Create React App
- Developing mobile apps with React Native
React introduces JSX, an XML-like syntax that you can use inside your JavaScript code, but you can also use React in pure JavaScript.
If you're using JSX, WebStorm will suggest switching the language version to React JSX so that it can understand JSX syntax in .js files. That's it: now you can write JSX code and enjoy code completion for JSX tags, navigation, and code analysis.
You can also switch language version to React JSX manually in Preferences | Languages & Frameworks | JavaScript.
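For instance, once the language version is React JSX, a plain .js file like this (a minimal sketch; the component and prop names are made up for the example) is parsed without errors and gets tag completion inside the render method:

```jsx
import React from 'react';

var HelloMessage = React.createClass({
  render: function () {
    // JSX in a regular .js file, understood once the language version is React JSX
    return <h1>Hello, {this.props.name}</h1>;
  }
});
```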
NB: Once you have the react.js library file somewhere in your project, WebStorm will provide code completion for React methods and React-specific attributes. By default, the code completion popup appears automatically as you type. For example:
From your code you can jump to the method definition in the library with Cmd-click (Ctrl+click).
To enhance code completion we recommend that you add a TypeScript definition file for React with
npm install --save @types/react
Component names
WebStorm can also provide code completion for HTML tags and component names that you have defined inside methods in JavaScript or inside other components.
Completion also works for imported components with ES6 style syntax:
From there you can also jump to the component definition with Cmd-click (Ctrl+click on Windows and Linux) on the component name, or see its definition in a popup with Cmd-Y (Ctrl+Shift+I).
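A minimal two-file sketch (file and component names are illustrative): WebStorm completes the imported component's name in JSX, and Cmd-click on the tag navigates to its definition.

```jsx
// MyComponent.js
import React from 'react';

export default class MyComponent extends React.Component {
  render() {
    return <div>Hello from MyComponent</div>;
  }
}

// App.js
import React from 'react';
import MyComponent from './MyComponent';

// <MyCo...> is completed to <MyComponent/>; Cmd-click on the tag jumps to MyComponent.js
export default () => <MyComponent />;
```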
Attributes and events
In JSX tags, the IDE provides coding assistance for React-specific attributes such as className or classID and non-DOM attributes like key or ref. Moreover, for class names you can autocomplete classes defined in the project’s CSS files.
All React events like onClick or onChange can also be autocompleted, together with ={}.
Of course there is also code completion for JavaScript expressions inside the curly braces. That includes all methods and functions that you have defined:
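Putting both features together, here is a sketch of a component (the names are invented for the example) where the onClick attribute is completed with ={} and the expression inside the braces is completed from the component's own methods:

```jsx
import React from 'react';

class Counter extends React.Component {
  constructor(props) {
    super(props);
    this.state = { count: 0 };
    this.increment = this.increment.bind(this);
  }

  // a method that completion can offer inside the {} of an event attribute
  increment() {
    this.setState({ count: this.state.count + 1 });
  }

  render() {
    // onClick completes as onClick={}; this.increment is suggested inside the braces
    return <button onClick={this.increment}>{this.state.count}</button>;
  }
}
```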
Component properties
WebStorm 2016.2 can provide code completion and resolution for component properties defined using propTypes.
When you autocomplete a component name, all its required properties will be added automatically. If the component usage is missing some of the required properties, WebStorm will warn you about that.
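As a sketch (component and prop names are made up; at the time, validators lived on React.PropTypes, later moved to the separate prop-types package), completing a tag for a component like this inserts the required title prop automatically, and omitting it triggers the warning:

```jsx
import React from 'react';

var Article = React.createClass({
  propTypes: {
    title: React.PropTypes.string.isRequired, // inserted automatically on tag completion
    author: React.PropTypes.string            // optional, offered in attribute completion
  },
  render: function () {
    return <h2>{this.props.title}</h2>;
  }
});
```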
Emmet in JSX
With Emmet support in WebStorm, you can generate HTML markup really fast: you type an abbreviation and it expands to HTML code when you press Tab. You can also use Emmet in JSX code, and that brings us to a special React twist. For example, the abbreviation MyComponent.my-class would expand in JSX into a tag with className="my-class", and not class="my-class" as it would in HTML.
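For example (the component name here is arbitrary), pressing Tab after the abbreviation in a JSX file produces:

```jsx
// MyComponent.my-class + Tab expands to:
<MyComponent className="my-class"></MyComponent>

// in a plain HTML file the same abbreviation would use class="my-class" instead
```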
Live templates
Live templates work very similar to Emmet – type a special abbreviation and it will expand into a code snippet. WebStorm has a predefined set of templates for JavaScript and HTML, and you can also create your custom templates for React in Preferences | Editor | Live templates.
As an example let’s create a live template for creating a new React component:
Let's set the abbreviation to rC. With the $variable_name$ syntax, we can set edit points for variable and function names (a template can have multiple edit points), and with $END$ we specify the final position of the cursor.
We also need to specify the kind of files in which this template can be invoked; in our case it will be JSX.
Now when you type rC and press Tab, the code snippet will expand. Type the component name and press Tab again to jump to the end edit location:
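As a sketch, the template text for this rC abbreviation might look like the following (React.createClass was the idiomatic style at the time; $COMPONENT_NAME$ and $END$ are the template variables mentioned above):

```jsx
var $COMPONENT_NAME$ = React.createClass({
    render: function () {
        return (
            $END$
        );
    }
});
```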
Another way to go is to import a set of templates created by community members for development with React in WebStorm. See GitHub for details on the installation process.
In a follow-up blog post we’ll talk more about the available refactoring options, code quality analysis, and compiling code. Stay tuned!
Develop with pleasure!
– JetBrains WebStorm Team
This is great news! I’ve recently started working with React and I love all the extra support I’m getting with it.
I have one question (they are really both the same)
I'm using Browserify to allow me to use React in the browser. Thus, React.js is sitting in my node_modules folder.
At work, we use a build step which concatenates React onto the top of our JS file (and then minifies it). As such, I don't ever have React.js actually in the project.
Will WebStorm support code completion in these cases? I’ve tried using the above guide, and it doesn’t look like it does currently.
Settings -> Languages&Frameworks -> Javascript -> Libraries maybe?
Like I’ve mentioned in the blog post, you can add TypeScript definition file for React as JavaScript library in Preferences | Languages and Frameworks | JavaScript and WebStorm will provide you with code completion for React APIs.
I would also recommend to add react.js library file itself as JavaScript library via the same Preferences | Languages and Frameworks | JavaScript | Libraries configuration. You can read about that here:
Hi Ekaterina – The thing is – as I mentioned in the comment I’ve done the steps in this blog post and I’m not getting code completion. I’ve noticed that if I follow the
let MyComponent = React.createClass({...});
module.exports = MyComponent
es6 pattern it works, but I’m not using that pattern. I’m using
export default React.createClass({...});
pattern, and then including each component into parent components with
import MyComponent from './components/MyComponent'
export default React.createClass({
  ...
  render: function() {
    ...
    return (
      ...
    )
  }
});
Sorry about the terrible formatting – I tried to format it well but the formatting got removed after posting.
Thanks for letting us know. I will look at that closer and will let you know what can be done.
Thanks!
Same issue, i’m importing react from node modules but get no completion. It’s a pretty common workflow so i’m surprised webstorm doesn’t support it
What WebStorm version do you use? node_modules folder is not excluded from the project, right?
same issue, the code assistance can’t work, the version of WebStorm is 2016.3.4 on macOS sierra 10.12.3
What exactly doesn’t work as you expect? Do you have react in your project’s package.json file?
Sorry, I have not found the 'react' library in the TypeScript community stubs…
Please try typing “react” in the list of stubs to search for it, it should be there.
Hi,
I tried.. Could find react under TS community stubs.
Correction. Couldn’t find react.
Which WebStorm version do you use?
As a note, WebStorm still cannot handle idiomatic React code:
import React, {Component} from ‘react’;
…
class Boxy extends Component
Instead, use:
…
class Boxy extends React.Component
until this is someday fixed
Can you please provide more details on what exactly should be fixed. Thank you!
I can confirm this is still not supported as of today:
import React, { Component } from ‘react’;
WebStorm gives the following warning:
Cannot resolve symbol ‘Component’
I’ve replied to you on the issue tracker. Please reply to me there what WebStorm and React versions you use.
Great, thanks.
Emmet in JSX doesn’t always work right, at least in PhpStorm 9.0.2. ‘label’ expands to ” – notice
forinstead of
htmlFor. ‘img’ expands to ” – notice lack of
/>at the end. All these is html5-isms, they aren’t valid in JSX.
ah, shoot. comment engine discarded
and
img src="" alt=""because of wrong quotes.
are you kidding me? :)
– will it post like this?
ok, there’s also a bug in your comment posing, backticks create inline code blocks but html tags in their content evaporate after posting. no preview either to see what’s gonna happen. can you change it to markdown please?
Thanks for your feedback.
I’ve created 2 issues, please follow the updates on them: and
I get nice autocompletion when manually adding TypeScript stubs and/or the distributed version of react.js to Preferences | Languages and Frameworks | JavaScript | Libraries.
BUT: Once I import React via "import React from 'react'", autocompletion is broken, as WebStorm tries to do completion using the npm import, which does not work well…
Is there a way to change this behavior? I’d like to have better react autocompletion when using es6 imports.
WebStorm doesn't resolve Webpack imports now, please see the corresponding issue: import React from 'react' would now be resolved either to node_modules or to a react file on the same level in the project structure. However, I believe, if you still have a react.js in your project, coding assistance for React methods should work anyway.
ES6 imports also breaks autocompletion for me regardless of whether I have react and/or react.d.ts in my libraries.
CORRECTION:
It looks like this works if I create a new file but not with existing files.
Please disregard my last reply. The only valid case is the first comment. Sorry for the confusion
I’m using simple es6 style imports. They are not webpack specific.
Webstorm *does* resolve them correctly (CTRL-Click navigates to corresponding file in node_modules folder).
But: Webstorm tries to use resolved file in node_modules for autocompletion instead of configured typescript stubs or react.js.
And this doesn’t work well.
That might usually be the desired behavior but in this case I get much better completions when I don’t import React from node_modules but rely on typescript stubs/react.js library configured in settings.
Thank you for your comment. That’s a valid point, we’ll think what we can do. My suggestion would be to exclude node_modules from the project. That way WebStorm would fully rely on react.d.ts file for completion.
That didn't help either.
I’ve disabled the “node_modules” library and excluded the node_modules directory.
Nevertheless, TypeScript still does not use the TypeScript React stub library if React is imported with an ES6 import statement.
This even happens if "React" is imported from a non-existent module. Like:
Hi.
Unfortunately WebStorm cannot resolve js es6 imports to typescript files so there is no way to get the typescript autocompletion for import * as React from ‘react’. I guess we will fix the problem soon.
There appears to be a bug in PHPStorm that prevents the react library from being visible in the downloads. In the TypeScript community stubs, it only shows libraries from A to P. In other words, you can scroll down through the ABC’s down to libraries beginning with P, but no React is visible. This is sort of the classic complexity and confusion that seems to be standard in JetBrains stuff unfortunately….
That issue has been addressed several months ago (), please use the latest version of PhpStorm.
Do you know if CJSX (CoffeeScript JSX) support is on the roadmap? I am currently forced to use Atom for writing CJSX since it seems to be the only editor with full support for it. I still use WebStorm for all of my backend, tests, etc. because WebStorm has great CoffeeScript support. I tried a WebStorm plugin for CJSX but it was not well done.
Hello Larry, at the moment we don’t have any precise plans to support JSX in CoffeeScript, sorry. You can vote for the corresponding issue and follow the updates on it:
Great article! Thanks. The LiveTemplates I’m finding especially useful. Using IntelliJ here, but there is little difference.
One thing is annoying me a bit though, wonder if you have a similar issue:
IntelliJ is complaining about unused properties on my React classes (render, componentDidUpdate, etc). I’d like to squelch these warnings, and if it’s a React class, have IntelliJ give a pass to the standard property list of React classes. Any ideas?
Thanks! Yeah, Live templates are really useful.
We have a similar request, hope we’ll address it soon, will ping my colleagues:
Hi,
I don’t use typescript.
The simple "import React from 'react';" doesn't work even when I add react.js to the libraries.
I have “Default export is not declared in imported module”
Thank you
Hello, we don’t use TypeScript and TSX in this blog post.
This warning shows that React is not exported using ES6 module syntax. That doesn’t mean that the import is not working, it’s just a warning.
Unfortunately, you can’t disable this warning in WebStorm 11 (fixed in WebStorm 12 EAP). Sorry for the inconvenience.
I can not find ‘react’ library in the TypeScript community stubs…
What WebStorm version do you use?
Please make sure that the library is not downloaded and listed in the library list yet.
Is there a way to make WebStorm more JSX-aware and suppress warnings like:
'this.props' and 'this.state' flagged as unresolved variables inside React.createClass method definitions,
‘key’ flagged as unallowed attribute key inside a JSX component tag
Thank you for your feedback! Those are known issues that we hope to fix some time soon.
Not yet :(
Please try WebStorm 2016.2 Beta:
Seems like this is not solved in 2016.2?
Not yet in 2016.3
Please send a sample code to reproduce the problem to
Thank you!
I would like to see this feature as well.
These issues have been addressed in WebStorm 2016.2:
Please give it a try!
I am running WebStorm 2016.2.3 Build #WS-162.1812.21.
Upon further investigation, I think what I’m seeing is that it correctly handles “props” and “state”, but not “params”.
Thanks.
Do you mean this.props.params from the react-router?
Yeah, there is. You can define all props in a propTypes object, see. At least it works for me.
jscs -fix does not work in jsx files, due to .jsx files not matching the wildcard *.js that you use for checking if jscs rightclick menu should be shown
Thanks for your comment. I’ve reported an issue:
Please follow it for the updates.
Hi,
I also want to ‘this.props’ and ‘this.state’ ‘unresolved variable’ issue to be solved. Is there an issue we can vote? Thanks for the great work by the way.
Hi, thank you! Here’s a related issue that you can vote for and follow for updates:
Is it possible to autocomplete React component methods, like: componentDidMount?
What component style are you using? ES5
React.createClassor ES6
extends React.Component?
I'm seeing this with the ES6 extends style.
import React, { Component, PropTypes } from 'react';

class MyClass extends Component {
  componentDidMount() {}
}
componentDidMount is listed as unused
Sorry, it’s a known issue, we hope to get it fixed soon:
It’s been over a year, and I don’t see this fixed in version 2017.2. Any update and when this will be supported for the “ES6 extends React.Component” style?
Hello Michael! The issue has been fixed over a year ago. I have double-checked it in WebStorm 2017.2 and all seems fine – the lifecycle properties are not marked as unused. What JS language level do you have in your project (Preferences | Languages & Frameworks | JavaScript) – ES6 or React JSX?
Hey,
We try to create a large react project, using es6 style ‘extends’ components,
Unfortunately it seems like all coding assistance only works for React.CreateClass() style, but no for es6 style:
import React from "react";

class MyClass extends React.Component {
  render() {
    return <div>hi</div>;
  }
}

export default MyClass;
Inside the class scope declaration I don’t get any code assistance (I do get only when usinging ‘this.’ prefix or inside methods), I cant use command+P for class methods etc..
Is this a known issue or not supported yet?
Completion for lifecycle methods is not yet supported when using
extends React.Component, here’s a related issue: We are planning to work on that soon.
The React Component methods that you can override in your class are available via Generate… – Override methods (when react.d.ts is added as a JavaScript library).
I already switched the JavaScript language version to JSX Harmony, and in the JSX file there are no errors. But still no React code assistance appears. What's the problem?
Do you have React installed in node_modules or anywhere else in the project?
One peeve I have with writing Jsx with Webstorm, is (I know it can be changed) when typing JSX attributes, it defaults to double quotes. Would be awesome if webstorm could automatically recognize that it’s JSX and use curly braces instead.
That’s been fixed in WebStorm 2016.2: for React events curly braces are now added instead of quotes.
This doesn’t appear to be true, or it was a regression in 2016.2.1. Curly braces are not populated for className, double-quotes are.
JavaScript language version: React JSX
In React, className by default is a string attribute and className="" is valid code, that's why we add quotes for it. Here's an example from the official React documentation:
Is it possible to switch to single quotes in settings? We have convention in team to use single quotes for string attributes in JSX, but from the quick scan I didn’t find relevant setting.
Yes, you can do that in WebStorm 2016.2 in Preferences | Editor | Code style | HTML – Generated quote marks.
Thank you!
Is this also possible for auto imports inside a JSX file? Changed HTML & JavaScript Generated quote marks to Single, but still get double for imports.
Well, I’ve found it directly after posting.. :D
Had to check the ‘Enforce on format’ option under Code Style -> HTML
Good that it’s solved now.
Is it possible to use curly braces for className autocomplete instead? I use a React CSS library that requires className to be a JS object instead, so I never use the default string format.
(i.e. )
WebStorm automatically adds curly braces only for event attributes like onChange – React requires them to be JavaScript objects. className can be a string. You can disable auto adding quotes in Preferences | Editor | General | Smart keys – Add quotes for attribute value on typing “=” or attribute completion. Please comment on this issue if you think that default behaviour should be changed:
Hi, everyone
I've one question about the support of ReactJS in WebStorm (in the current version as of the date of this message). How can I debug a ReactJS file/project the same way as JavaScript? Sometimes I have to attach a breakpoint somewhere, but I can't do this. Is there support for it?
Hi,
Unfortunately, WebStorm doesn’t provide any specific support for debugging JSX. Please vote for and follow this feature request:
Hi,
Is there a way to disable backtick auto-formatting in phpstorm? Phpstorm adds backticks randomly which comments parts of the file. This makes it hard to use ES6 template strings.
Thanks!
Hi! Sorry, what do you mean by backtick auto-formatting? If you’d like to disable adding a pair quote, uncheck this option in Preferences | Editor | General | Smart keys.
all u need:
Hi my team and I are about to start a new large react project for which we are going to use flow as well.
As I started prototyping I first had the language level to “React JSX” but then since I wanted to introduce flow to the game, I changed it to flow. The problem now is that react autocomplete does not work with my components.
Is there a workaround to handle this, or should I open an issue?
Hi George,
it would be better to file an issue with a small code sample, because it seems to be working for me
When I use React JSX and set the JavaScript language version to React JSX, it does not work: running /usr/local/bin/babel src/index.js --out-dir dist --source-maps --presets es2015 gives
SyntaxError: src/index.js: Unexpected token (20:16). So what's wrong?
You need babel-preset-react when using Babel with JSX:
I’ve FINALLY upgraded my WebStorm and I have to say I’m enjoying the React features a lot.
Great to hear that :)
> When you autocomplete component name, all its required properties will be added automatically. If the component usage misses some of the required properties, WebStorm will warn you about that.
I am using Redux with React, some of my required props will be provided by the connect HOC (). Webstorm is not smart enough to understand this use case, so I have to entirely disable this warning via Preferences > Editor > Inspections > HTML > Missing Required Attributes
Sorry, WebStorm doesn’t support properties passed to the component using Redux. Please follow this issue for updates:
Can I enable ESLint-compliant code style for JSX? E.g. I currently use the ESLint rule "jsx-curly-spacing": [2, "always"] to have an extra space inside curly braces, yet WebStorm doesn't reformat my JSX code this way.
This code style option is not currently supported in WebStorm, but we plan to fix that. Here’s an issue you can follow:
As a workaround, since this ESLint rule is fixable, you can use ESLint: Fix current file action (hit Alt-Enter on the highlighted error from ESLint in the editor and select ESLint: Fix current file).
Hi! I have just reinstalled my webstorm and now it will not let me use expressions enclosed in curly braces {} as attribute values.
It works for some projects, but does not work with new projects.
I am writing React/JSX in all of the projects and they all appear to have the same settings.
Any hints as to where to look?
I have an HTML file and a script tag inside it. The problem arises when, in the script, I have something like:
<element attr={…
it doesn't even matter what is next. It will not allow me to write anything, nor use the "backspace" key.
Sorry, I’m not sure I understand correctly: do you have this issue in your React code in JS files? What WebStorm version do you use?
Thank you for replying.
I use the latest Webstorm (2017.1.1).
I have a project with one HTML file and a script tag inside it. In it I write JSX.
At some point I need to write this:
Inside the curly braces I cannot write a thing. It would just ignore my typing.
I checked my Settings (File>Settings>Language&Frameworks>Javascript) and I use ECMAScript6. I also tried with JSX, but I get the same.
The issue is that in some projects it works properly and in others it doesn't. I tried comparing settings. I even exported settings from one project and imported them into another, and nothing. I don't know where/what to look for.
Update:
I thought about what I changed and that is: I updated Webstorm to 2017.1.1
I just uninstalled that and reinstalled 2017.1 and it works as expected. I am not 100% sure that this was the problem, but this was my solution. Later today I will try to update again and see if I get the issue back.
Please send us the content of your IDE log folder (menu Help – Show log) to as well as an example of the HTML file with JSX in it. Thank you!
Update: it seems that your issue is similar to this one:
The fix will be available in WebStorm 2017.1.3 bug-fix update.
Thank you!
Hi,
I’m using Webstorm (version 2017.1.3) on a React project. I currently get a warning from the IDE to convert React component render function to a static function. Here is my code:
class App extends React.Component {
  render() { // warning: can be static
    return <h1>React & Application</h1>;
  }
}
Is there a way to configure WebStorm to prevent this warning message? Thanks.
Known issue, please follow WEB-19028 for updates. I can suggest to either disable this inspection or suppress it for the current method: hit Alt+Enter on render(), then hit Right and choose Disable Inspection or Suppress for statement.
Can we debug a whole React.js application, including route.js and all, in WebStorm? I am currently working with WebStorm 2016.2.4. Please help me out.
By route.js do you mean react-router? If possible provide a sample project. The WebStorm JavaScript debugger should be able to debug the whole app running in the browser. Note that we highly recommend using the latest WebStorm version, it's 2017.1.3.
Hi, good post, but there is a broken link:
(Developing mobile apps with React Native)
Thank you.
Brice
Thanks for noticing. Fixed that.
Hey!
Thanks for this great feature!
I have a question about inheritance of propTypes. To be clear about that I give you an example.
I have a component Text and LabeledText.
LabeledText is supposed to have all propTypes of Text. I write this like:
class LabeledText extends React.Component {
  static propTypes = { ...Text.propTypes, label: PropTypes.string };
}
Unfortunately, autocompletion doesnt work for this. Is anything planned in this direction or is there a better way to achieve this behavior?
Thanks in advance and best regards,
David
Hello,
unfortunately, WebStorm doesn’t support this case now. Here’s a related issue on our tracker: Please vote for it and follow the updates. We’ll try to add the support in the future, but there is not ETA.
The feature is pretty cool
In 2017.2.4 and now 2017.3
All react attributes are defaulting to {} braces and it is driving me crazy. The above animated example with className now ends up as className={}. Working with a an input field is doing the same that makes no sense at all.
Is there a way to turn this off? Or how far back to I have to revert the installation to get it to work like it did before? (meaning just adding quotes)
In the latest WebStorm 2017.3 EAP for
classNamewe don’t add either {} or “”, so that you can type what you need in this particular case. We think that it’s the best solution for everyone: for those who often use {} for className, those who use quotes and those who use both. What do you think?
that only address className. why is every other JSX/HTML attribute now using {} now?
again with my input example.. If I type input then hit tab it creates the input tag and type attribute with quotes, but when I add the next attribute, label in this case, if I tab to select it from the pop up list, it adds label={}. How can I turn off braces for ALL ATTRIBUTES. I need quotes more often than I need braces and replacing them is a pain. Or as previously asked what is the last release before this “feature” was added. I will just stick with that version.
2017.1.4 seems to be the most stable for me. Fixes the “” vs {} issue and another problem another problem where individual imports were not recognized. Does jetbrains have a QA team????
Can you please provide more details on the problems with the individual imports. We are not aware of it. Thank you!
Java Script setting: JSX in ECMAScript 6
import {Link} from ‘react-router’;
Works in 2017.1.x but not in 2017.2+
My problem with “” vs {} well documented above.
2017.1.4 works like I expect and I will not be upgrading from there.
What react-router version do you have? Is it listed as a project dependency in package.json? I’ve tested it in WebStorm 2017.1 and 2017.2 and for me it’s resolved correctly with react-router 4.2. Thanks!
This is driving mental. How do I turn this feature off?
In WebStorm 2017.3 (now available as a Release candidate) you can disable adding {} and “” in Preferences | Editor | Smart keys.
React components like: , don’t work with Webstorm autocomplete. Infact, classes import and complent properties having autocomplete feature. Any suggestion?
Sorry, but I can’t see the code sample in your comment. Can you please wrap the code with the code tag? Or make a screenshot with the code sample and share a link. Please also let us know what WebStorm version do you use. Thanks!
I am using two packages, which both export two completely different Link components.
One is a router, the other is a draft.js wysiwyg.
For some reason, WebStorm always assumes the component used is the draft.js one, and will autocomplete with its props, and warn when its required props are not used.
This happens even when I import the correct Link component via autocompletion, both via import and the render function. When navigating to the component via ctrl-click, it opens the correct component, so it knows what component I am using.
Invalidating caches and restarting does not fix the issue.
Is there a way to specify which component to look at?
Hi!
Can you please provide a bit more details on the modules you’re using and the import statements you have in your file.
So far I tried to install “draft-js”: “^0.10.4”, but I haven’t found a Link component there.
And for react-router, the Link component is only available in the react-router-dom package. Is that what you have?
react-router v3 exports it directly. [It’s explained here in the docs]()
The other package is called react-draft-wysiwyg. [Link]()
I’m not sure what its Link component is used for, but it doesn’t seem to be exported through the main package.
Nonetheless, I get autocompletion for importing it, both from the package itself, and directly from from its folder, like so:
[code]import Link from ‘react-draft-wysiwyg/src/controls/Link’;[/code]
Whoops, thought this was markdown :)
But you get the idea
Thanks for the info.
For me, if I have
import {Link} from 'react-router-dom'on the top of the file the completion for Link’s props works as expected (doesn’t show any props from the react-draft-wysiwyg module).
If there’s no import statements, the Link is correctly highlighted as unimported and in my opinion the import suggestions both from react-router-dom and react-draft-wysiwyg are valid.
I wasn’t able to reproduce the case when wrong props are shown.
Can you please share the whole file? Or at least the list of import statements you have in it. Thank you!
If you want you can create an issue on our tracker or contact our tech support directly.
> If there’s no import statements, the Link is correctly highlighted as unimported and in my opinion the import suggestions both from react-router-dom and react-draft-wysiwyg are valid.
Oh absolutely, I agree.
This error doesn’t seem to be reproducible. I have created a new project for the example, but it doesn’t happen in it.
I’ll open an issue if I can figure out how to reproduce the issue.
Thank you for your help, by the way. It’s cool of you to follow the comments on this blog.
You’re welcome! Let us know if you stumble upon this issue again.
Hello! First off, thanks for these great features that help developers be more productive with React. :)
The article mentions that “WebStorm 2016.2 can provide code completion and resolve for component properties defined using propTypes.”
Do you know if it would be possible to add support for code completion for TypeScript declaration files added via “JavaScript libraries”?
Here are some examples that show only getting completion when the component has propTypes.
Would be great if there could also be completion for the props defined in the TypeScript interface.
Thank you,
Brie
IntelliJ IDEA 2018.1.2 (Ultimate Edition)
Build #IU-181.4668.68, built on April 24, 2018
JRE: 1.8.0_152-release-1136-b29 x86_64
JVM: OpenJDK 64-Bit Server VM by JetBrains s.r.o
macOS 10.13.4
Doesn’t currently work, please vote for to be notified on any progress with it | https://blog.jetbrains.com/webstorm/2015/10/working-with-reactjs-in-webstorm-coding-assistance/?replytocom=247410 | CC-MAIN-2020-24 | refinedweb | 5,400 | 66.64 |
We.
Platform libraries
To enable access to underlying operating system interfaces, Kotlin/Native provides a set of platform-specific libraries available to any program targeting a particular platform. Previously, you needed to use the cinterop tool to generate the libraries yourself, and now they’re available out of the box.
The following program demonstrates the use of the new platform libraries in the v0.4 release.
It will read the contents of a file into a Kotlin
ByteArray and then will print it to standard output. So now the full power of operating system interfaces is at your hands without the need of explicit interop stubs generation.
Note that we can use Kotlin objects for storing a result of platform call
fread, see below in ‘Object Pinning’ section.
iOS and macOS framework interoperability
On Apple platforms, unlike most other platforms, access to system frameworks is provided in the form of Objective-C APIs. To support that, the Kotlin/Native team has implemented an Objective-C interoperability layer. For example, the following code, written in pure Kotlin/Native, will read an application’s resource on iOS:
And the following complete program will render a top-level window on macOS. Note that the window title is in Cyrillic, to show that we perform correct character set conversion in the interoperability layer.
See Kotlin/Native fullstack application iOS client for an example of a complete application for iOS. If you own an Apple or Android device, feel free to see this application in action on App Store and on Google Play.
Object pinning
To simplify using Kotlin objects with C APIs we provide new APIs for typed arrays (
ByteArray,
IntArray,
FloatArray etc.), namely
refTo(),
pin() and
unpin(). See Pinning.kt. They allow to ensure an object is locked in memory, and its data has a stable address, thus allowing to use Kotlin object data directly from C APIs and vice versa. For example, in
readFileData() function above we pass pointer to data in
ByteBuffer to
fread() call.
Improved debugging
Debugging with release v0.4 adds improved inspections, so most variables could be inspected in runtime. For example, if we take the program to read a file from the first section, and compile it with the debugging support (note
-g switch), we can perform symbolic debugging and variables inspections.
WebAssembly
Kotlin/Native v0.4 have an experimental support of WebAssembly (
-target wasm32). WebAssembly target is mostly intended to showcase the technology, as it is still not fully production ready due to browser support limitations (mainly around more seamless DOM/HTML5 APIs access and performance of virtual machines executing WASM). However, we’re very interested in feedback from our users, to know how much interest in WASM support is there.
IDE Support
Last, but not the least, we have recently announced an experimental CLion plugin supporting Kotlin/Native, see this post for more details.
Getting the bits
Binaries could be downloaded below:
* x86-64 Linux hosts
* x86-64 MacOS hosts
* x86-64 Windows hosts
Feedback
Please report bugs and issues in the Kotlin bug tracker. Questions are welcome on #kotlin-native channel on Slack.
Yay for WASM!
Yay!!
How does your GC interact with ARC and workaround the native/Java dependency loop problem?
At Codename One we solved most of this by avoiding ARC altogether and using a lightweight architecture but historically this has been a really tough problem once you open up the native access to 3rd party developers.
Memory management is cooperating with Objectve-C runtime memory mgmt, see (function runDeallocationHooks())
Very interested in WASM. I’d probably start using Kotlin if it fully supported.
My main interest in Kotlin/Native is definitely WASM – I hope you guys get it production ready sooner rather than later. The world is fed up with JS.
+1 for full WASM support
WASM all the way!!
Will check out the wasm support, but the idea definitely interests me.
WASM FTW!
WASM is the future and its going to be THE thing on the web for the forseable future. Its great that you are starting support now even though as you say the browsers are missing some important features. But they will come and then its good to be ready.
Great to hear about Kotlin+WASM – might become really interesting for Ethereum – and Kotlin+Ethereum is already a match made in heaven – was recently talking at DevCon3 about it in the WALLETH deep dive – unfortunately the video is not yet released – but you can find the slides here:
How about the memory management on linux ?
Kotlin/Native has same memory mgmt on all platforms – currently it’s reference counting + cycle collector on top of the memory provided by system allocator (malloc).
If you are serious with Kotlin native then the focus should be webassembly. You are not going to be able to take much marketshare from C++ if you plan to focus on that.
Think instead of having to learn java, javascript, angular/react, html, CSS, you would need to know Kotlin, html, CSS. This makes it simpler for development.
This would also reduce code and testing since a lot of code is shared between client and server.
To make story short, of course you should go for WASM.
WASM +1
WASM has enormous potential when combined w/ Kotlin. Given C++ coroutines only recently got added to clang, and clang llvm is still a work in progress for webassembly, I would LOVE to use Kotlin/WebAssembly now!
For ios w/ ARKit, where are the kotlin stubs?
ARKit is part of iOS 11, while pregenerated platform libs are targeting iOS 10. However, you could write your own .def file for it, take a look at
Any ETA on IOS 11?
I go for WASM. Unfortunately the compiler crashes with an error… (WIN 10/64 cmd)
Are there any infos on the environment to use, e.g. JDK Version etc?
Windows hosts not yet supported, please use OSX for now.
Absolutely WASM++ for me too!
YAY FOR WASM!!!!
wasm is very important!
native is also very important!!! so many applications can be written by kotlin which need to be written using c++ before. server, client, browser, pc and mobile, iot etc, will all use same excellent language, share same code.
support kotlin/native!
kotlin is excellent,
can
val shoppingList = arrayOf(“catfish”, “water”, “tulips”, “blue paint”)
be written
val shoppingList = [“catfish”, “water”, “tulips”, “blue paint”]
val occupations = mutableMapOf(
“Malcolm” to “Captain”,
“Kaylee” to “Mechanic”
)
val occupations = [
“Malcolm”: “Captain”,
“Kaylee”: “Mechanic”
]
thanks!
Discussed in here:, unrelated directly to Kotlin/Native.
WASM please.
Very interested in WASM support.
Hey! Posted an issue on github as well, but perhaps the writer or some readers have input. I cannot get the uikit sample to run, using xcode 9 beta 2 – I get the error message “A problem occurred evaluating root project ‘uikit’.
Any clues?
Is there planning something like borrow and ownership checking mechanism (uniqueness/linear typing maybe), control of scopes/namespaces lifetimes (also suitable for RAII), as in Rust, for example, for static checking and control of pointers/references for memory and thread safety in Kotlin/Native?
We’re actively discussing those features since inception of the Kotlin/Native project, and fully aware of importance of proper ownership mechanism. However, our current plan is to offload this analysis from programmers, and use compiler for global lifetime analysis. Thread safety story is already pretty good in current Kotlin/Native, as we do not share object heap between threads, and explicitly transfer object subgraphs ownership, so no concurrent mutation is ever possible. | https://blog.jetbrains.com/kotlin/2017/11/kotlinnative-v0-4-released-objective-c-interop-webassembly-and-more/ | CC-MAIN-2019-35 | refinedweb | 1,256 | 54.12 |
Minutes: Minutes (text): Log:
--------
16:59:31 <jds2001> #startmeeting FESCo meeting 7/31/09
16:59:33 <jds2001> #chair dgilmore jwb notting nirik sharkcz jds2001 j-rod skvidal Kevin_Kofler
16:59:40 * nirik is here.
16:59:51 * pingou around
16:59:53 * sharkcz here
17:00:05 * jwb is here
17:00:14 <Kevin_Kofler> Present.
17:01:00 <jds2001> sorry for the false alarm re: lunch
17:01:02 * dgilmore is here
17:01:07 * skvidal is here
17:01:09 <jds2001> got the food to go :)
17:01:17 <jds2001> anyhow, let's get started
17:01:35 <jds2001> #topic mikeb as sponsor
17:01:40 <jds2001> .fesco 211
17:01:52 <jds2001> looks like he withdrew his request.
17:01:52 <dgilmore> +1
17:01:52 * notting is here
17:02:26 <nirik> I disagree that we should base this on quantity. I think that him doing more reviews to gain visibility and prove his understanding would be good too tho.
17:02:29 <jds2001> but that being said, I'd still be +1
17:02:32 <dgilmore> because of the nosie made about it
17:02:47 <jds2001> yes, same here, i dont think that quantity is a good indicator.
17:03:14 <jwb> he withdrew his request
17:03:22 <dgilmore> he did
17:03:25 <jds2001> he also mentioned a desire to work on the backlog
17:03:26 <dgilmore> move on
17:03:26 <nirik> so, lets ask him to reapply in a month or something...
17:03:31 <jds2001> yeah
17:03:33 <Kevin_Kofler> Well, feedback from the sponsors list was overwhelmingly negative.
17:03:45 <jds2001> based on quantity
17:03:49 <Kevin_Kofler> If you think sponsors are basing their feedback on the wrong criteria, we need to get the criteria fixed.
17:03:52 <jds2001> based on flawed criteria.
17:04:17 <tibbs> Please don't ask our opinions if you don't like the answers you receive.
17:04:17 <Kevin_Kofler> Or rather, clearly defined, because it seems they aren't.
17:04:36 <jds2001> they aren't. And they shouldn't be.
17:04:57 <jds2001> it's a subjective thing, really
17:05:01 <Kevin_Kofler> I agree with tibbs, it's ridiculous to ask for feedback from the sponsors and then to ignore it.
17:05:02 <tibbs> So you won't define criteria, but yet you can easily say that someone else's criteria are flawed?
17:05:02 * nirik notes there was only one -1
17:05:04 <jwb> jds2001, then you can't really say their criteria is flawed
17:05:10 <tibbs> That's really a poor method of argumentation.
17:05:30 <Kevin_Kofler> And saying they're basing their decision on flawed criteria doesn't make sense when there are no criteria defined at all.
17:05:42 <jds2001> tibbs: sorry, I meant that other things should be taken into account
17:05:49 <nirik> there was one -1 (from someone who thinks quantity is important) and a bunch of discussion about the process from various other people.
17:05:51 <Kevin_Kofler> So I stay with my -1 vote.
17:05:54 * jds2001 notes the feedback was based on quantiy of reviews.
17:06:07 <jds2001> solely, and nothing else.
17:06:17 <skvidal> umm
17:06:27 <skvidal> why are we discussing a withdrawn item?
17:06:29 <Kevin_Kofler> nirik: What you call "discussion" was "I agree" to the person saying -1.
17:06:30 <skvidal> let's move along
17:06:44 <jds2001> agreed.
17:06:49 <Kevin_Kofler> And I think I've seen more than one explicit -1 too, but I may be remembering wrong.
17:07:07 <jds2001> #topic Fedora packages as canonical upstreams
17:07:11 <nirik> Kevin_Kofler: I just reread the thread. Thats the only -1. ;) anyhow...
17:07:11 <jds2001> .fesco 210
17:07:29 <tibbs> This was proposed to FPC, but it's not really FPC's decision.
17:07:45 <jds2001> so there's some sticky situations here.
17:07:46 <Kevin_Kofler> -1 to removing the exception, it makes no sense.
17:07:58 <jds2001> If there is code, then the exception needs to be removed.
17:08:07 <jwb> jds2001, huh?
17:08:14 <Kevin_Kofler> There are plenty of upstreams with no tarballs, only some SCM, some SRPMs or other source packages etc.
17:08:16 <jds2001> If there's not, then it can stay (I particularly looked at basesystem)
17:08:18 <nirik> what would be valid as a upstream here? a fedorapeople link?
17:08:32 <jds2001> nirik: or a fedorahosted project
17:08:36 <jds2001> nothing extravagant
17:08:38 <Kevin_Kofler> There are also plenty of upstreams which just dump a tar.bz2 into some directory.
17:08:43 <jds2001> but basesystem has nothing.
17:08:45 <nirik> so this is saying that every package needs a Source0: that is a url?
17:08:51 <jwb> Kevin_Kofler, those aren't want this is about...
17:08:51 <Kevin_Kofler> If we do that, is that really more helpful than just having it in the SRPM?
17:08:55 <jds2001> nirik: thats how i read it.
17:08:56 <tibbs> We complain when suse makes you pull sources out of one of their packages.
17:09:04 <jwb> Kevin_Kofler, ah, i see where you are going
17:09:07 <Kevin_Kofler> jwb: Why should we be held to higher standards than other upstreams?
17:09:13 <jwb> right, got it now
17:09:28 <nirik> I see advantages of this, but it could cause issues too.
17:09:39 <jds2001> at the time the exception was drafted, we didnt have something like fedorahosted.
17:09:54 <jds2001> pointing to a git repo is imo valid.
17:10:02 <jds2001> or an ftp site, or whatever
17:10:07 <jwb> ok, i have 3 issues with this
17:10:13 <jwb> 1) it's scoped to RH when it shouldn't be
17:10:13 <nirik> not wanting to wade thru a flamefest, I would like to defer this till next week and ask for feedback on fedora-devel. Perhaps there are situations here that we need to deal with?
17:10:32 <notting> still, i'm not sure that making people open up FH projects just to have a place to dump tarballs is efficient
17:10:32 <jwb> 2) i agree with Kevin_Kofler that it doesn't really seem to be needed
17:10:40 <jwb> and 3) what notting just said
17:10:49 <jwb> so i'm -1
17:11:01 <tibbs> To be fair, I guess Rahul should be asked why he thought this was necessary.
17:11:03 <jds2001> notting: they could put it on fedorapeople.
17:11:14 <jwb> jds2001, how is that better?
17:11:17 <nirik> jds2001: but thats true of anything right?
17:11:25 <jds2001> right
17:11:25 <notting> jds2001: do we have an example package?
17:11:35 <tibbs> He submitted the original draft. FPC has no opinion on the issue.
17:11:44 <notting> obviously, any $foo-filesystem that's just a spec doesn't count
17:11:49 <jds2001> i looked at basesystem last night. It has nothing
17:12:01 <tibbs> basesystem needs to just go away anyway.
17:12:05 <jds2001> it would stand to lose from removing this exception actually
17:12:07 * j-rod finally starts paying attention
17:12:09 <Kevin_Kofler> kde-settings has a fedorahosted.org SVN these days, but no tarballs.
17:12:17 <jds2001> tibbs: i agree...
17:12:22 <jds2001> Kevin_Kofler: that's fine.
17:12:23 <nirik> there are a number of packages that don't use a url. For various reasons.
17:12:28 * skvidal tries to understand which direction to vote
17:12:36 <tibbs> I don't believe tarballs are in any way required in any case.
17:12:37 <jds2001> it's a tricky issue
17:12:38 <skvidal> are we voting to remove the exception?
17:12:42 <nirik> perhaps we should ask the submitter to rewrite?
17:12:44 <Kevin_Kofler> We export from SVN and make new-sources with the resulting tarball.
17:12:47 <jds2001> skvidal: i say defer
17:12:51 <jds2001> NEEDINFO
17:12:51 <skvidal> if I vote +1 am I in favor of the exception or in favor of removing the exception
17:12:59 <jds2001> Why is this necessary?
17:13:04 <skvidal> jds2001: agreed - and I don't think this is particularly pressing
17:13:07 <jds2001> skvidal: youre in favor of removing it.
17:13:08 <dgilmore> +1 to removing the exception, there is no excuse not to publish tarballs
17:13:19 <tibbs> jds2001: I can't answer why it's necessary, sorry.
17:13:29 <tibbs> I don't know why Rahul wants this to go away.
17:13:33 <notting> Kevin_Kofler: you publish tarballs at
17:13:34 <j-rod> +1, no exception, you're not that special. :)
17:13:59 <jwb> j-rod, for RH packages, or anything?
17:14:03 <j-rod> ("you" being nebulous, not anyone in particular...)
17:14:09 <jwb> it's flawed as written
17:14:14 <jwb> it needs clarification at best
17:14:20 <Kevin_Kofler> notting: Sure, but that's the case for the packages covered by that exception as well. :-)
17:14:25 <jds2001> it is, if there's an exception it shouldnt be limited to RH
17:14:33 <j-rod> I probably need to re-read, I only vaguely recall details
17:14:40 <Kevin_Kofler> You necessarily have something in the lookaside cache to build your SRPM.
17:14:43 <tibbs> I don't believe the current exception is limited to Red Hat.
17:14:57 <tibbs> Red Hat is used by way of example.
17:15:13 <tibbs> Rahul's draft is simply incorrect on that point, which I mentioned in the FEESo ticket.
17:15:36 <nirik> it would also be nice to have a list of affected packages or how to identify them (can they be identified)?
17:15:42 <jwb> tibbs, i know you noted it. i don't want to vote on something that has noted incorrect information
17:15:53 <jds2001> i tried last night, but it's hard :(
17:15:55 <nirik> anyhow, I am 0 on this now, I want more info on it's goals and if it's RH specific, etc.
17:16:12 <tibbs> nirik: I don't believe it's easy to find packages which do this.
17:16:21 <jds2001> anyhow, shall we move on?
17:16:30 <tibbs> Technically they need a comment about it, but I doubt they all do and I doubt there's anything you could grep for.
17:16:36 <nirik> right.
Since no url could be us repackaging the source, or a vcs checkout, or other.
17:16:43 * notting is -1. i don't feel setting up a FH tarball repo is really any improvement over pointing people at cvs.fp.o/looksaide/<name>
17:17:24 <notting> though i'll laugh at the first package that actually puts that URL in their spec file
17:17:35 <jds2001> so let's defer.
17:17:38 <jds2001> notting: huh?
17:17:40 <j-rod> oh, hrm, if people can just fetch the tarball there...
17:18:03 <jds2001> notting: Source0 is supposed to be http://<wherever>/<whatever>/tarball.tar.gz
17:18:06 <tibbs> Well, the source has to be in the package regardless, so sure, you can grab it from the lookaside.
17:18:22 <tibbs> jds2001: Only when that actually makes sense.
17:18:24 * j-rod gets it :)
17:18:31 <notting> jds2001: using the lookaside URL as the URL tag in their spec file
17:18:36 <nirik> it could be that the intent here is to move projects to fedorahosted not just for the tar download, but for bugtracker, wiki, more active community, etc.
17:18:51 <jwb> jds2001, and that url can be cvs.fp.org/lookaside/pkgs/<package>/<tarball>
17:18:51 * nirik votes we ask mether to re-write and move on for now.
17:18:59 <jds2001> notting: oh yeah, I'd laugh at that to :)
17:19:08 <notting> nirik: yes, but forcing that on any project that may not have a current upstream is a bit out of our scope, i think
17:19:09 <jds2001> sounds good
17:19:15 <nirik> notting: agreed.
17:19:17 <jwb> jds2001, it illustrates how really silly this all is
17:19:29 <Kevin_Kofler> FWIW, another case where a URL doesn't make sense is kde-apps.org/kde-look.org where they have strange?url=s&with=parameters. You can't give such a URL and have it be a valid Source0...
17:19:34 <Kevin_Kofler> But that's already covered by the guidelines.
17:19:52 <jds2001> #agreed Proposal is deferred until additional information can be obtained.
17:20:04 <jds2001> I'll update the ticket with the concerns later
17:20:10 <j-rod> ok, so clarification on what upstreams this should actually apply to definitely needed
17:20:21 <jds2001> yep
17:20:29 <jds2001> #topic FPC report
17:20:33 <jds2001> .fesco 232
17:21:13 <jds2001> i need to read these....
17:21:37 <notting> one at a time?
17:21:40 <jds2001> yeah
17:21:46 <jds2001> dos2unix im +1 on
17:21:51 <notting>
17:21:55 <sharkcz> +1
17:21:56 <nirik> +1 on dos2unix. No brainer.
17:22:00 * notting is +1 to this.
17:22:10 <Kevin_Kofler> +1 from me too, makes a lot of sense and FC3 is dead.
17:22:11 <skvidal> +1 dos2unix
17:22:13 <jwb> +1
17:22:45 <j-rod> +1
17:23:13 <skvidal> ? on the autoprovides/req filtering
17:23:18 <skvidal> the summary says
17:23:30 <skvidal> MUST: When filtering automatically generated RPM dependency information, the filtering system implemented by Fedora must be used, except where there is a compelling reason to deviate from it.
17:23:37 <notting> #agreed dos2unix draft is approved
17:23:43 <skvidal> how will a "compelling reason" be defined?
17:23:49 <jds2001> skvidal: that means the macros below
17:23:59 <jds2001> skvidal: reviewer discretion, I guess
17:24:02 <Kevin_Kofler> I'm confused by the "These filtering macros MUST only be used with packages which meet the following criteria: ..." part.
17:24:24 <Kevin_Kofler> There are several packages currently using Provides/Requires filtering which don't fulfill those criteria.
17:24:26 <Kevin_Kofler> For example xchat.
17:24:52 <tibbs> The issue is that you break multilib when you disable the internal dependency generator.
17:25:13 <Kevin_Kofler> Also some packages which should use such filtering, like the KDE packages with plugins which are mentioned in the rationale.
17:25:19 <notting> as i understand it, until we have a solution that honors multiarch coloring, this is not recommended for any package that may end up needing it, just to avoid things breaking by accident
17:25:36 <notting> you could do it in xchat, as long as you know enough to know that the subpackages that put things in %{_bindir} will never be multilib
17:25:50 <jwb> i need to step away for a few min
17:25:54 <notting> but that's hard to put in a guideline, so it's more conservative by default
17:26:12 <notting> now, my concern is that by limiting it to noarch, you'll have no takers :)
17:26:29 <tibbs> Perl needs this all the time.
17:26:37 <Kevin_Kofler> What do we do about the KDE packages with plugins? Just ignore the bogus Provides as we've always done until we have a solution for the multiarch coloring problem?
17:26:57 <tibbs> Any such solution is going to have to come from the RPM folks.
17:27:05 <tibbs> Or at least will require changes in RPM itself.
17:27:12 <Kevin_Kofler> Understood.
17:27:45 <notting> so, i think the 'compelling reason' needs to explicitly list 'package does not meet the criteria listed below'. as i don't want to make those people *stop* filtering
17:27:53 <Kevin_Kofler> "<notting> you could do it in xchat" -> But the guideline says I MUST NOT do it. :-(
17:28:30 <Kevin_Kofler> Well, it says the macros are banned, but what xchat does now is the same done by hand.
17:29:18 * nirik wonders if the first MUST there could just be fixed by rpm only adding provides to .so's in the library path...
17:29:29 <tibbs> So you somehow know that xchat will never ever be multilib?
17:29:54 <tibbs> If you can tell us how you know that, we can modify the draft.
17:30:01 <Kevin_Kofler> Well, there's no xchat-devel, but some people have asked for one to build plugins.
17:30:18 <Kevin_Kofler> So we could end up with a mess.
17:30:34 <tibbs> Input given to FPC didn't indicate that a -devel package was required for something to be multilib.
17:31:09 <notting> tibbs: i, like kkofler, am concerned about the 'must'-ness of the proposal - i don't want to ban the people who cannot use it now
17:31:14 <tibbs> I suppose we could say it's acceptable if the package can never be multilib, without stating just how you'd know this.
17:31:21 <Kevin_Kofler> It's trivial to remove the Provides filtering from xchat, but then we'll have xchat providing perl.so and python.so and xchat-tcl providing tcl.so.
17:31:53 <Kevin_Kofler> (That's why the filtering got added in the first place.)
17:32:27 * nirik has another case in 'heartbeat'. :( It's not currently filtering, but perhaps it should be.
17:32:45 <tibbs> People generally don't like it when those things are in guidelines, but I don't think more than a couple of people actually understand the multilib driteria.
17:32:50 <tibbs> criteria.
17:32:54 <nirik> can't we get rpm to not add those when the .so's are not in a 'standard' place? then people who need to add them in weird places could be the exception.
17:32:55 <Kevin_Kofler> I think there are several affected packages which probably should be doing filtering.
17:33:06 <Kevin_Kofler> E.g. pretty much any KDE packages including KParts or other KDE plugins.
17:33:09 <dgilmore> nirik: likely
17:33:29 <Kevin_Kofler> (That's most of the core KDE modules and a few third-party KDE applications.)
17:33:48 <tibbs> And you're certain that none of those packages will ever be multilib?
17:34:02 <Kevin_Kofler> Most definitely not.
17:34:06 <tibbs> If you're not, the work needs to be directed at rpm to fix the underlying issue.
17:34:11 <notting> nirik: ld.so.conf.d could contain a file which magically makes your libraries part of the ystem
17:34:12 <Kevin_Kofler> That's exactly how this solution is failing us.
17:34:27 <tibbs> So you know of a better solition?
17:34:29 <Kevin_Kofler> So on one hand, we MUST filter and on the other hand we MUST NOT filter as we break multilib.
17:34:48 <nirik> heartbeat is multilib.
17:35:17 <tibbs> The draft as presented avoids that issue. It is not as useful as the utopian solution that does not exist. If you wish to vote against it based on that, you have the option.
17:35:24 <Kevin_Kofler> nirik: Some of the affected KDE packages are, too, at least the -libs subpackages are.
17:35:51 <tibbs> Maybe FESCo has the power to get the rpm folks to offer a better solution. FPC doesn't.
17:36:04 <jds2001> tibbs: that'd be nice :)
17:36:11 <jds2001> but unlikely to happen :)
17:36:44 <tibbs> So are there any unanswered questions relating to this draft that I could help with?
17:36:45 <Kevin_Kofler> Has rdieter commented on the KDE plugins (in partly-multilib packages) issue during the FPC discussion?
17:36:45 <jds2001> We can certaintly request that the RPM folks tell us their preferred solution, though
17:37:52 <tibbs> I do not recall any discussion relating to KDE in the discussion.
17:38:09 <tibbs> redundandancy.
17:38:16 <notting> tibbs: is it possible to say 'mandatory to use this if your packages meet the criteria. if they do not, use this only if you Really Know What You're Doing"?
17:38:27 <notting> lame, i know.
17:38:32 <tibbs> It is certainly possible to do that.
17:38:38 <tibbs> Or what I suggested earlier.
17:39:12 <tibbs> But I believe that any allowance of this with multilib packages, or packages that could ever become multilib, can break the distro.
17:39:16 <Kevin_Kofler> The rationale mentions KDE plugins as an example of stuff which should be filtered though (which is how that issue caught my attention).
17:39:25 <notting> yeah, a 'as long as none of your subpackages will ever be mutilib' is a fine clarification
17:40:02 <notting> with that, i'm +1
17:40:02 <Kevin_Kofler> If you define every multilib conflict as "breaking the distro", our distro is very "broken". ;-)
17:40:14 <tibbs> The problem, as I understand it, is that a package can become multilib even though no changes are made to it via dependencies that are multilib.
17:40:15 <Kevin_Kofler> There are still plenty of unfixed multilib conflicts. :-( 17:40:29 * nirik grumbles again about slaying multilib, then shuts up since that will never happen. 17:41:11 <Kevin_Kofler> And as multilibs are not installed by default nowadays unless some 32-bit app requires them, most people don't notice most conflicts. 17:42:42 <nirik> I guess I am +1 with the clarification... but we should press rpm devs for more changes there to help this. 17:42:48 <skvidal> Kevin_Kofler: umm 17:42:52 <skvidal> Kevin_Kofler: that's incorrect 17:42:53 <Kevin_Kofler> nirik: Same here. 17:43:02 <skvidal> the default is to only install the "best" arch for the platform 17:43:08 <skvidal> that's been the default for quite some time 17:43:23 <Kevin_Kofler> skvidal: That's what I mean. 17:43:36 <skvidal> multilibs are not installed by default 17:43:40 <skvidal> if I am on x86_64 17:43:44 <skvidal> and I run yum install foo 17:43:45 <Kevin_Kofler> If you install e.g. wine or some proprietary app, it'll drag in the 32-bit libs, otherwise it won't. 17:43:56 <skvidal> and there exists foo.i386 and foo.x86_64 17:44:01 <skvidal> yum will not install foo.i386 17:44:04 * nirik thinks this is digressing. 17:44:07 <Kevin_Kofler> I know. 17:44:09 <skvidal> and wine is i386 only 17:44:21 <j-rod> +1 begrudgingly, a la nirik 17:44:37 <sharkcz> same here, +1 17:44:42 <jds2001> +1 i guess 17:45:23 <j-rod> that's at least 5 +1 17:45:36 * jds2001 was still counting, thanks j-rod :) 17:45:48 <Kevin_Kofler> As for killing multilibs, a proposal for next week: restrict multilibs to wine, redhat-lsb and their dependencies. Rationale: way too much stuff which will never be needed multilib is multilib. The people who really need that stuff should just use the i?86 repo and deal with the conflicts. 17:46:08 <nirik> Kevin_Kofler: write up a proposal. ;) 17:46:25 <jds2001> #agreed AutoProvidesAndRequiresFiltering is approved 17:46:31 <nirik> next? 
17:46:36 <j-rod> yes please 17:46:40 <jds2001> +1 to the clarifications on no bundled libraries 17:46:42 <tibbs> With clarification; I will submit this to FPC on wednesday. 17:47:01 <j-rod> +1 on no bundled libs 17:47:07 <Kevin_Kofler> #info With clarification; tibbs will submit this to FPC on wednesday. 17:47:28 <nirik> +1 no bundled libs. Makes sense and is good to note in the guidelines with rationale, etc. 17:47:30 <notting> +1 on no bundled libs, although the formatting on the list items has gone wonky 17:47:34 <sharkcz> +1 17:47:43 <Kevin_Kofler> IMHO it should be clarified to allow significantly modified libraries. 17:48:02 <Kevin_Kofler> It's an unacceptable burden to force packages using such libraries to be ported to the system version. 17:48:03 <nirik> it would be nice to have some way to identify existing packages with them, but I suppose we will just need to file the bugs as we see them. 17:48:09 <Kevin_Kofler> Sometimes it's just even plain impossible. 17:48:15 <Kevin_Kofler> Changes are made to libraries for a reason. 17:48:22 <jwb> Kevin_Kofler, why can't those be packaged as forks? 17:48:26 <nirik> Kevin_Kofler: you mean forked libraries ? 17:48:34 <dgilmore> +1 i think there is no reason for any app to bundle libraries 17:48:36 <Kevin_Kofler> nirik: Significantly-forked ones. 17:48:50 <skvidal> ugh 17:48:53 <skvidal> stop forking the libraries 17:48:56 <jwb> Kevin_Kofler, so package the forked library for fedora... 17:49:05 <Kevin_Kofler> jwb: What's the point of libfoo-fork-for-app1, libfoo-fork-for-app2 etc. packages? 17:49:14 <Kevin_Kofler> (if nothing else uses the respective fork) 17:49:19 <jwb> maybe people will stop forking shit then? 17:49:21 <nirik> Kevin_Kofler: did you read the page? do you see why this is bad? 
17:49:24 <dgilmore> Kevin_Kofler: to make it clear people are doing dumb things 17:49:27 <jwb> sorry, i redact my shit comment 17:49:45 * jwb hall monitors himself 17:50:01 <Kevin_Kofler> nirik: The problem is that it's not our (the packagers') decision to fork the libraries. 17:50:24 <jwb> Kevin_Kofler, it is their decision to package the package though. it's part of being a maintainer 17:50:25 <Kevin_Kofler> And application developers are not going to let us interfere with how they develop stuff. 17:50:29 <nirik> Kevin_Kofler: true, but we should work hard with upstream to fix things... not just blindly ship the forked internal copy. 17:50:31 * jds2001 sets jwb -v :D 17:50:35 <Kevin_Kofler> They'll just end up in some third-party repository or we'll miss out on them entirely. 17:51:04 <Kevin_Kofler> jwb: How do you "fix" such a package? 17:51:20 <j-rod> educate upstream 17:51:24 <Kevin_Kofler> If it requires a forked library because it really requires the changes in the library, how do you suggest a packager handles that. 17:51:26 <j-rod> get them to send patches to the lib 17:51:28 <nirik> you talk with both upstream for the package and the base library. 17:51:32 <Kevin_Kofler> j-rod: Upstream won't let themselves be educated. 17:51:39 <jwb> package the forked library as a fork, make the package use that. if they don't want to do that, try to work with upstream. if upstream doesn't care, then the burden is on them as the maintainer 17:51:41 <Kevin_Kofler> They'll just go to another repository or ignore Fedora entirely. 17:51:42 <j-rod> that's not universally true 17:51:49 <jds2001> Kevin_Kofler: i think you have a very narrow world view here. 17:51:52 <nirik> Kevin_Kofler: do you have a specific example? or are you just playing devils advocate here? 17:52:07 <Kevin_Kofler> Some packages with forked libs are already in Fedora. 17:52:13 <Kevin_Kofler> Audacity has a forked PortAudio. 17:52:20 <Kevin_Kofler> It will not work at all without the patches. 
17:52:20 <jds2001> they are, and they need to be repaired. 17:52:24 <nirik> are bugs filed? 17:52:27 <jwb> sounds like an audit is required 17:52:32 <tibbs> I note that this draft includes only rationale. 17:52:38 <tibbs> It's not changing any policy at all. 17:52:44 <jds2001> yes, nothing is actually changing. 17:52:47 * nirik nods. 17:52:49 <Kevin_Kofler> nirik: I filed a bug, it was closed as NOTABUG as it cannot be fixed. 17:52:59 <jds2001> was effort made to resolve it? 17:53:04 <Kevin_Kofler> And PortAudio's upstream is almost dead. 17:53:04 <jds2001> with both upstreams? 17:53:05 <jwb> well, that is an incorrect resolution 17:53:17 <nirik> Kevin_Kofler: it should be reopened and blocking the bug in the page there. 17:53:19 <j-rod> maintainer fail 17:53:20 <Kevin_Kofler> jwb: Well, it could be CANTFIX too. 17:53:32 <nirik> then they should request an exception from fesco. 17:53:33 <jwb> also an incorrect resolution 17:53:50 <Kevin_Kofler> PortAudio didn't reply to my patch to support non-mmap ALSA at all either. 17:53:55 <Kevin_Kofler> Fedora is now carrying that as a patch. 17:53:59 <Kevin_Kofler> Upstream is basically dead. 17:54:08 <Kevin_Kofler> Good luck getting them to merge Audacity's invasive patch. 17:54:10 <jwb> the correct resolution for someone that is not following guidelines because it would incur additional work is CLOSED->LAZY 17:54:28 <Kevin_Kofler> jwb: It doesn't just incur additional work, it is IMPOSSIBLE. 17:54:35 <jwb> no it's not 17:54:37 <Kevin_Kofler> The application just plain WILL NOT WORK with the unpatched library. 17:54:38 <nirik> bug 471782 17:54:39 <buggbot> Bug medium, medium, ---, gemi, CLOSED NOTABUG, Please build audacity against the system portaudio 17:54:51 <jwb> Kevin_Kofler, so the maintainer packages the forked library and makes the package use that 17:54:55 <jwb> that is not IMPOSSIBLE 17:55:07 <Kevin_Kofler> The name, soname etc. will conflict. 17:55:09 <notting> portaudio is dead upstream? 
so, why not just drop it 17:55:15 * jds2001 doesnt get the point of the discussion about adding a few paragraphs of reasoning.\ 17:55:21 <jwb> Kevin_Kofler, not if properly forked 17:55:24 <Kevin_Kofler> Unless they make it static only, then only -devel will conflict, but that's also against the guidelines. 17:55:25 <jds2001> we are WAY off topic. 17:55:42 <Kevin_Kofler> The point is that libraries forked by applications are NEVER properly forked. 17:55:47 <Kevin_Kofler> They won't change the name or anything. 17:55:55 <Kevin_Kofler> They just patch it and link it in statically in one way or the other. 17:55:57 <nirik> there will be times when nothing can be done. Then the package should be removed or a exception granted by fesco 17:56:15 <nirik> but all other avenues should be followed first. 17:56:22 <jds2001> Shall we vote on adding the few paragraphs? 17:56:31 <j-rod> upstream portaudio seems to not be dead from what I can see 17:56:34 <jds2001> +1 17:56:44 <nirik> +1, makes sense I agree with it. 17:56:45 <j-rod> they had some issues, but look to be moving again 17:56:47 <Kevin_Kofler> We shall vote to remove the "clarifications" on forking and explicitly allowing forked libs instead. 17:56:58 <jds2001> -1 17:57:01 <Kevin_Kofler> s/allowing/allow/ 17:57:27 <nirik> allowing forked bundled libs should be the very very very last resort. 17:57:35 <jwb> what are we voting on 17:57:52 <jds2001> jwb: adding the paragraphs. 17:57:54 <skvidal> jwb: I have no idea at this point 17:57:57 <jds2001> to which im +1 17:58:09 <skvidal> jds2001: but you just said -1 17:58:11 <nirik> the FPC rationale at: 17:58:12 <skvidal> didn't you? 17:58:18 <jds2001> skvidal: to Kevin_Kofler's proposal 17:58:20 <dgilmore> +1 allowing forked/bundled libs should not happen. at least not without a plan to unfork/unbundle 17:58:32 <nirik> +1 to the PFC link above here. 
17:58:39 <sharkcz> +1 17:58:54 <j-rod> +1 for the fpc thing 17:59:02 * notting reaffirms his +1 vote to the proposal 17:59:09 <skvidal> +1 for the fpc guidelines 17:59:12 <Kevin_Kofler> 0 - the clarifications are good in principle, but I don't agree with the details on forking 17:59:16 <jds2001> #agreed Clarification on no bundled libraries is approved. 17:59:30 <jwb> +1 for the FPC guidelines 17:59:38 <jds2001> next for removing prebuilt binaries. 17:59:51 <jds2001> this proposal makes sense, +1 17:59:57 * jds2001 reads it as 17:59:58 <j-rod> yeah, seems sane 18:00:00 <j-rod> +1 18:00:04 <jds2001> 'remove them during %prep' 18:00:31 <Kevin_Kofler> +1 18:00:37 <jwb> so i have a dumb question 18:00:44 <nirik> it seems a bit odd to have FPC approve exceptions. 18:00:46 <jwb> does this include pre-generated configure scripts? 18:00:52 <jwb> because OMG pain... 18:00:54 <nirik> since fesco does all the other exceptions it seems like. 18:00:58 <jds2001> jwb: explicitly not 18:01:00 <jds2001> I think. 18:01:13 * jds2001 doesn't see that as a "program binary" 18:01:21 * nirik notes the BinaryFirmware link is broken/nonexistant there. 18:01:21 <jwb> configure is a program? 18:01:24 <notting> +1, this is much better than the one that was proposed before 18:01:32 <jds2001> but maybe tibbs can clarify on that 18:01:32 <notting> jwb: configure is PAIN 18:01:33 <Kevin_Kofler> configure is executable. 18:01:36 <pjones> jwb: not a very good one. 18:01:40 <dgilmore> +1 18:01:40 <Kevin_Kofler> "Is it executable? If so, it is probably a program binary." 18:01:41 <sharkcz> +1 18:01:49 <jwb> notting, pjones: beside the point 18:01:54 <notting> jwb: configure is a shell script 18:02:07 <jwb> notting, look at the criteria in the proposal... 18:02:07 <tibbs> It certainly wasn't the intention to somehow force removal of configure scripts. 
18:02:12 <dgilmore> having prebuilt binaries and shard objects is bad 18:02:15 <j-rod> perhaps there should be some additional guidance using 'file' 18:02:18 <jwb> "is it executable? if so, it's probably a program binary" 18:02:29 <Kevin_Kofler> The old guidelines weren't any different there though. 18:02:34 <dgilmore> Ive had sparc builds fail trying to link x86 objects to sparc ones 18:02:38 <Kevin_Kofler> (They just said "binary" instead of "program binary".) 18:02:52 <notting> jwb: if someone wants to start arguing about configure vs. configure.in and preferred form of modification, as to whether you're building from source... let me know and i can take a nap? (not to be too glib) 18:03:02 <j-rod> "is it executable and does the 'file' command believe its binary data? if so, its probably a program binary" 18:03:20 <dgilmore> Kevin_Kofler: we could have the clarify it as .so .a and arch specifc executables 18:03:27 <Kevin_Kofler> And Java classes. 18:03:39 <notting> 'executable compiled file'? 18:03:43 <Kevin_Kofler> (including JARs, which are just ZIPs thereof) 18:03:52 <nirik> tibbs: do you know why FPC wanted to do the exceptions here? I guess I don't have an issue with it, but seems odd. 18:04:06 <nirik> otherwise I am +1 here. 18:04:20 <jwb> notting, it may be splitting hairs, but someone is going to bitch about it and i don't see s/binary/program binary as being too overly descriptive 18:04:32 <jwb> nirik, good question 18:04:34 <tibbs> nirik: I'm afraid I don't understand the question. 18:04:46 <jwb> tibbs, why is FPC changing this guideline? 18:05:00 <Kevin_Kofler> jwb: I think the idea is that a shell script is not a binary because it's plain text. 
18:05:03 <jds2001> a:colors darkblue 18:05:03 <jds2001> " * Search & Replace 18:05:03 <jds2001> " make searches case-insensitive, unless they contain upper-case letters: 18:05:06 <jds2001> set ignorecase 18:05:09 <jds2001> set smartcase 18:05:11 <tibbs> It was submitted as a draft to us, and the committee thought it was a good idea. 18:05:11 <jds2001> " show the `best match so far' as search strings are typed: 18:05:14 <jds2001> set incsearch 18:05:15 <nirik> tibbs: in exceptions: " If you have a package which meets this criteria, contact the Fedora Packaging Committee for approval." 18:05:16 <jds2001> " assume the /g flag on :s substitutions to replace all matches in a line: 18:05:20 <jds2001> set gdefault 18:05:22 <notting> jds2001: ???? 18:05:22 <j-rod> uh 18:05:22 <jds2001> " * User Interface 18:05:25 <jds2001> " have syntax highlighting in terminals which can display colours: 18:05:26 <dgilmore> jds2001: 18:05:27 <jds2001> "if has('syntax') && (&t_Co > 2) syntax on 18:05:29 <jwb> Kevin_Kofler, big rats nest there 18:05:30 <jds2001> "endif 18:05:32 <jds2001> " have fifty lines of command-line (etc) history: 18:05:35 <jds2001> set history=50 18:05:37 <jds2001> " have command-line completion <Tab> (for filenames, help topics, option names) 18:05:40 <jds2001> " first list the available options and complete the longest common part, then 18:05:44 <jds2001> " have further <Tab>s cycle through the possibilities: 18:05:46 <jds2001> set wildmode=list:longest,full 18:05:46 * jwb looks at jds2001 18:05:47 * nirik sighs. 18:05:49 <jds2001> " display the current mode and partially-typed commands in the status line: 18:05:52 <jds2001> set showmode 18:05:55 <jds2001> set showcmd 18:05:57 <jds2001> " have the mouse enabled all the time: 18:06:00 <jds2001> "set mouse=a 18:06:03 <jds2001> " don't have files trying to override this .vimrc: 18:06:05 <jds2001> set nomodeline 18:06:07 <jds2001> gack! 
18:06:10 <jds2001> nirik: that's what i was trying to paste 18:06:21 <jds2001> let me know when my spam is done 18:06:27 <jwb> i think it's done 18:06:38 * jds2001 is happy that was nothing secret :) 18:06:52 <jds2001> anyhow. 18:07:05 <jds2001> i was trying to paste what nirik did 18:07:14 <Kevin_Kofler> Well, there's nothing more secret than your choice of editor. ^^ 18:07:20 <jds2001> :) 18:07:24 * notting notes we have 6 +1 votes for this 18:07:34 <nirik> tibbs: so, I was just asking why in this guideline exceptions should be handled by FPC instead of FESCo which deals with all the other ones that I can think of. 18:08:03 <notting> jwb: on the good side, if this fpc-written proposal is unclear, it refers all questions as to what's a binary back to fpc themselves 18:08:07 <jds2001> #agreed jds2001 is a moron 18:08:08 <tibbs> If FESCo wants to approve these then we can certainly change it. 18:08:20 <jwb> notting, good point 18:08:31 <jds2001> #agreed No pre-built binaries proposal is approved 18:08:32 <tibbs> I'm not sure that we really considered which committee should handle these things. 18:08:55 * nirik shrugs. I don't care much, but given that we meet each week and FPC meets... sometimes... it might be better for us to do them. Dunno. 18:09:55 <jds2001> next! 18:09:55 <abadger1999> Hmm... that line was in the old guideline as well so we didn't look too closely. 18:10:13 <jds2001> #topic Raduko perl 6 18:10:19 <jds2001> .fesco 218 18:10:25 <jds2001> this was just to check status here 18:10:39 <nirik> it now has a package in the review bug... 18:11:10 <notting> but no reviewer 18:11:15 <nirik> hang on. 18:11:49 <Kevin_Kofler> Why is the review not assigned to a reviewer yet? Wasn't there cwickert interested in the feature too? He's not the submitter, so he should be doing the review if he wants the feature in! 18:11:54 * nirik asked cwickert to drop in here... but then he dropped off the net. 
;( 18:12:05 * dgilmore says -1 the review should be complete by now 18:12:26 <nirik> if it will help, I could do the review today. 18:12:34 <nirik> there he is. 18:12:45 <cwickert> sorry, pidgin crashed 18:12:52 <Kevin_Kofler> 18:12:53 <buggbot> Bug 498390: medium, medium, ---, nobody, NEW, Review Request: rakudo - Rakudo - A Perl compiler for Parrot 18:13:01 <nirik> cwickert: so, whats the status of the review? it's not assigned yet or reviewed. ;( 18:13:06 <cwickert> I'm just doing the review and so far it looks good 18:13:29 <jds2001> ok, you should assign it to yourself 18:13:31 <cwickert> one little problem that needs to be fixed, I just called Gerd about it 18:13:33 <jds2001> and set fedora-review? 18:13:44 <cwickert> jds2001: mom, just came home from work 18:13:55 <cwickert> but I'm optimistic we can make it tonight 18:14:03 <jds2001> ok :) 18:14:49 <cwickert> ok, bug is updated 18:15:23 <cwickert> do we want to discuss this for half an hour again or should Gerd and me just go to work? 18:15:27 <jds2001> k, do we want to give this til tonight? 18:15:32 <jds2001> just go work :) 18:15:37 <cwickert> thanks 18:15:58 <Kevin_Kofler> Please make sure there are no bogus Provides/Requires which conflict with perl 5. 18:16:18 <cwickert> Kevin_Kofler: the only provide will be /usr/bin/perl6 18:16:38 <jds2001> k 18:16:47 <skvidal> cwickert: ?? 
the only explicit provide - but I assume you're not turning off the autoprov generator 18:16:59 <cwickert> skvidal: not explicit 18:17:16 <cwickert> so we won't turn of autoprov 18:17:56 <skvidal> cwickert: right - then you'll get more provides 18:18:20 <skvidal> check those 18:18:22 <cwickert> skvidal: ok, will do 18:18:22 <skvidal> make sure it is not going to be pulled in instead of perl5 for something 18:18:22 <skvidal> that's all of our concern is having it 'trickle' onto systems and blow things up 18:18:22 <cwickert> will try in a vm 18:18:28 <skvidal> when in doubt 18:18:34 <cwickert> skvidal: I share you concerns ;) 18:18:36 <Kevin_Kofler> cwickert: rpm -qp --provides rakudo*.rpm 18:18:37 <jds2001> hold on, $WORK coworker here. 18:18:43 <skvidal> for line in rpm -qp --provides *.rpm 18:18:47 <skvidal> yum resolvdep $line 18:18:54 * nirik notes his review checklist always checks provides and requires. 18:19:53 <nirik> ok, so anything else on this? or shall we move on? 18:20:23 <Kevin_Kofler> If you break Perl, you'll break the KDE spin and an angry Sebastian Vahl might show up at your door with a baseball bat. ;-) 18:20:58 <cwickert> Kevin_Kofler: I guess you and Rex are coming, too ;) 18:21:30 * notting notes rex would have a slightly longer commute 18:22:52 <notting> i think we can move on? 18:23:13 <Kevin_Kofler> Move on++. :-) 18:23:56 <nirik> jds2001 is busy it seems. 18:24:02 <nirik> I think 209 is the next item. 18:24:21 <notting> #agreed this will be worked on today. will revisit if it doesn't land. 18:24:52 <Kevin_Kofler> #topic #209 Request to become provenpackager - otaylor 18:24:52 <nirik> #topic Request to become provenpackager - otaylor - 18:25:02 <nirik> oops. Sorry. ;) 18:25:17 * notting is +1 to owen 18:25:31 <Kevin_Kofler> +1 here too. 18:25:41 <nirik> +1 here. I think he knows his way around packaging, and will help out with desktop packages nicely. 
18:25:50 <skvidal> +1 to owen 18:25:53 <sharkcz> +1 18:26:15 <j-rod> +1 18:26:31 <jds2001> +1 to owen, sorry for the delay 18:26:32 <dgilmore> +1 18:26:53 <jwb> 0 18:27:00 * nirik notes that one thing mentioned recently was that we do too many +1's without rationale. :) Might be good to note at least a short thing on why you vote the way you do. ;) 18:27:09 <jwb> yes 18:27:09 <nirik> anyhow, this passes. 18:27:25 <nirik> #agreed otaylor approved for provenpackager. 18:27:43 <nirik> #topic Request to become provenpackager - pingou 18:27:49 <notting> nirik: ok, +1 to owen because i trust him to not make stupid mistakes more than i trust myself 18:27:53 <nirik> .fesco 233 18:28:11 <jds2001> so there were objections here 18:28:21 <cwickert> although I'm not allowed to vote on that -1: to many open bugs without any reaction 18:28:36 <dgilmore> 0, i dont know his work and there is no supporting data to look it up 18:28:38 <notting> the one that gets me is "* 3 FTBFS with no reaction for 6 weeks". on that basis alone, i'm -1. 18:28:47 <jds2001> yeah, i have to agree, -1 18:28:56 <nirik> yeah, he should be caught up on his packages/bugs before fixing others. 18:29:04 <pingou> ! 18:29:29 * jds2001 not sure what ! means 18:29:35 <Kevin_Kofler> pingou: Well, I have to agree that you should fix your own packages first before worrying about other people's... 18:29:37 <pingou> Those packages need new dependancies that are not ready yet (I know I'm late on those) 18:29:55 <cwickert> pingou: then you should at least write this in the bug 18:30:02 <pingou> cwickert: I will then 18:30:22 <Kevin_Kofler> And is there really no way to get the existing packages to build for now (e.g. by a backported patch or something?). 18:31:00 <pingou> I should look more into this but I would prefer bring the new one in 18:31:36 <pingou> but I accept whatever decision you make :) 18:31:56 <jds2001> how bout -1 for now, come back later? 
18:32:13 <pingou> fine for me :) 18:32:43 <j-rod> yeah, do some housekeeping w/your own stuff, get that all in order, then reapply 18:33:34 * nirik nods. 18:33:48 <Kevin_Kofler> I agree with that: -1 for now, will reconsider when your own packages are sorted out. 18:34:15 * sharkcz also nods 18:34:38 <nirik> I think thats the last thing on the meeting agenda? 18:34:46 <jds2001> #agreed pingou provenpackager nomination is declined for now, please reapply when packages are in order 18:34:51 <jds2001> #topic open floor 18:34:59 <jds2001> yep 18:35:02 <jds2001> anything else? 18:35:10 * dgilmore added an iteam but it should get discussion and be considered next week 18:35:34 <jds2001> dgilmore: agreed 18:35:49 <jds2001> but that's more for rel-eng, when they present the schedule to us, no? 18:36:04 <dgilmore> jds2001: i dont think so 18:36:08 <dgilmore> but maybe 18:36:15 <nirik> dgilmore: I think that may cut into devel time too much. 18:36:22 <dgilmore> nirik: how 18:36:23 <nirik> but we can discuss next week. 18:36:37 <dgilmore> nirik: its just having proposals in before feature freeze 18:36:51 <notting> i think there probably needs to be a sliding scale where as time to freeze approaches, minimum percentage done must increase. but we can talk more next week. 18:36:56 <nirik> well, do you know a month before freeze that something will land in time for this cycle? 18:37:28 <dgilmore> nirik: maybe not. but having it as a feature makes it visable. and FESCo or others can help toget it done 18:37:35 <nirik> true. 18:37:40 <dgilmore> throwing it there last minute makes it hard for others to help 18:38:35 <jds2001> anyhoo 18:38:38 <jds2001> anything more? 18:39:03 <notting> alpha freeze next tuesday. please fix your bugs so we can ship. 18:39:23 <jds2001> #info alpha freeze next tuesday. please fix your bugs so we can ship. 18:39:48 <jds2001> #info slips are bad, mmkkkayyy? 18:40:06 <notting> #info Please remember to update your feature status as well. 
18:40:16 <Kevin_Kofler> I think dgilmore's proposal should link to KDE's soft freeze policy for comparison and credit. 18:40:39 <dgilmore> Kevin_Kofler: why for credit when thats not where it came from 18:40:49 <dgilmore> Kevin_Kofler: i dont know what KDE's policies are 18:41:08 <dgilmore> Kevin_Kofler: but feel free to add it for comparison 18:42:10 <Kevin_Kofler> Hmmm, actually there's apparently no canonical reference, it just shows up in their schedules. 18:42:21 <Kevin_Kofler> E.g. 18:42:59 <jds2001> maybe KDE should do a better job about writing down their policies..... 18:43:09 <jds2001> oh, wait a minute, maybe we should too :) 18:43:30 <notting> anything else for today? 18:43:36 * skvidal hopes no 18:43:40 <jds2001> nothing here 18:43:59 <jwb> i have something 18:44:00 * jds2001 ends the meeting and sends the log.... 18:44:04 <jds2001> or not 18:44:09 <jds2001> whats up jwb 18:44:12 <jwb> unfortunately, skvidal will now hate me 18:44:25 <jwb> where are we with critical path? 18:44:47 <jds2001> good question, I thought you would know best :) 18:44:53 <skvidal> that one is on me 18:45:09 <skvidal> I was supposed to put the requirements in bodhi and I completely blanked on it from wednesday 18:45:17 <skvidal> I'll take care of that today before I call it a day 18:45:56 <jds2001> anything else? 18:46:13 * jds2001 really ends the meeting 18:46:19 <jds2001> #endmeeting | https://www.redhat.com/archives/fedora-devel-list/2009-July/msg02088.html | CC-MAIN-2014-10 | refinedweb | 8,453 | 70.73 |
When.
Could this use TargetNamespace::NoRegister?
I suppose that works. I checked for our target and NoRegister is indeed assigned value 0, so the semantics would be the same. But I am not sure whether that would work for all targets. If it does, I do believe your suggestion to be a better solution than to simply output value 0. Perhaps someone more knowledgeable regarding TableGen could comment on this?
That's generated for all targets. 0 is always $noreg, with an added NoRegister entry.
I also think zero_reg should really be renamed to $noreg or something closer to what it really is
Agree zero_reg is not a good name if this refers to NoRegister (register number zero in the internal representation).
I would have guessed that it mapped to a register always being read as zero (if the target got such a register).
Changing the name is perhaps outside the scope of this patch. But we better (make sure) clarify that zero_reg is NoRegister (holding an undefined value, rather than holding the value zero).
Looking into the Target/Target.td file where zero_reg is defined, it says the following:
/// zero_reg definition - Special node to stand for the zero register.
///
def zero_reg;
Is "zero register" and "NoRegister" really the same thing? If not, what should then be output?
MIRLangRef also talks about "The null registers".
I think that TargetNamespace::NoRegister, zero_reg, null registers, _ (in MIR and dumps) and $noreg more or less all refer to the same thing - register number zero. But perhaps some targets could map a register always being read as zero to $noreg, e.g. translating that specifier to register number 31 in the asm printer.
It is confusing to have that many different names for the same thing. And to me zero_reg and null register is most confusing as there are targets that have registers always being read as zero (and the register number for such a register could be any number). So unless zero_reg actually is supposed to refer to a register being read as zero, it would be better to for example call it noreg or reg_null.
Looks like SystemZ got one use of zero_reg, and the rest belongs to ARM.
Then I suppose we will continue outputting "NoRegister"; I just need to figure out how to retrieve the "NameSpace" from the zero_reg directive, since that's not immediately available...
Actually I think the code would work as is for this. If you look at CopyOrAddZeroRegRenderer,
<< MatchTable::NamedValue(
(ZeroRegisterDef->getValue("Namespace")
? ZeroRegisterDef->getValueAsString("Namespace")
: ""),
I have looked into that but RegisterDef does not have "Namespace" in the case of zero_reg, so I need to retrieve the namespace information elsewhere.
So this check for the namespace is pointless and should be removed?
[TableGen][GlobalISel] Fix handling of zero_reg
Forgot to update testcase.
In the case of ZeroRegisterDef it seems to be a different story, as ZeroRegisterDef takes the value of GIZeroRegister whenever it is used, which could be any kind of register which may or may not have a Namespace. In the case of zero_reg, however, it always does not have a Namespace.
I have now updated the patch to look over the defs present in the target in search of a valid Namespace, so now we can use NoRegister (although the fix for this feels hacky, to me).
Maybe one can add something like this to CodeGenTaget (putting the code close to the getInstNamespace support):
StringRef CodeGenTarget::getRegNamespace() const {
auto &RegClasses = RegBank.getRegClasses();
return RegClasses.size() > 0 ? RegClasses.front().Namespace : "";
}
and then use Target.getRegNamespace() to get it.
Updated patch according to bjope's suggestion.
Is using a different namespace from the overall target actually supported? Can't you just take the namespace from CodeGenTarget directly?
Just like in the header file. I'd rather put this next to getInstNamespace. This would also show more clearly that we currently fetch the namespaces in different ways, and by that we perhaps support different namespaces for regs and instructions etc. I have no idea if that is used (or useful) in reality.
I figure that we at least wouldn't break anything as long as we follow what CodeGenRegisters is using when defining the enum with NoRegister, when fetching the getRegNamespace.
I'd put this next to getInstNamespace(). Or if for some reason it would be better to keep "inst" helpers and "reg" helpers separated, then I'd put it next to something related to registers such as getRegBank or getRegRegisterByName. But I can't see any logic in the current order of methods (it's even mixing public and private declarations), so that's why I'd got for keeping the namespace-related functions close to each other.
Reordered functions as per bjope's feedback
LGTM
I landed this on behalf of Gabriel. | http://reviews.llvm.org/D86215?id=287932 | CC-MAIN-2021-10 | refinedweb | 799 | 55.54 |
I thought I would give an example of using customized face values with the Custom Blinklib, as I think this is an easier way to create more advanced games than using datagrams. The example below shows how to create the usual “single click changes color across the cluster” example but with a custom face value. Note that this example does not need (and does not use) everything the face value provides, but since all we want to show is how to use it, this is fine.
The first thing to do is to create a project in the usual way but making sure that you use the Custom Blinklib instead of the official one.
Now let's define our face value. Create a file called face_value.h with the following contents:
```cpp
#ifndef FACE_VALUE_H_
#define FACE_VALUE_H_

#include <blinklib.h>

struct FaceValue {
  // Our face value has a single data member which is an array of 5 bytes.
  byte data[5];

  // Because structs are not directly comparable, we need to overload the
  // equality operator.
  bool operator==(const FaceValue& f) const {
    // Save some storage space by interpreting the first 4 bytes as a uint32
    // and comparing that. In the end this is simply comparing every single
    // byte in the current face value and the one we are being compared to.
    return (*((uint32_t*)(data)) == *((uint32_t*)(f.data))) &&
           (data[4] == f.data[4]);
  }

  // Implement inequality based on equality. This should have been done by the
  // compiler automatically but it was not. So we do it here.
  bool operator!=(const FaceValue& f) const { return !(*this == f); }
};

#endif  // FACE_VALUE_H_
```
The above is simpler than it looks. I just wanted to use some tricks to show types of things you can do but you could as well, for example, just manually compare each byte in the array.
Now that we have our face value, we just need to tell the Custom Blinklib to use it. Create a sub-directory called config inside your project directory and create a file called blinklib_config.h with the following contents:
#ifndef BLINKLIB_CONFIG_H_ #define BLINKLIB_CONFIG_H_ #include "face_value.h" // Our FaceValue. // We are not using datagrams so we can save both space and memory by fully // disabling them. #define BGA_CUSTOM_BLINKLIB_DISABLE_DATAGRAM // Tell the Custom Blinklib to use our face value. #define BGA_CUSTOM_BLINKLIB_FACE_VALUE_TYPE FaceValue // Tweak timeouts as the bigger than usual face value would cause faces to timeout if // we do not do this. The right value to use will depend on the specific program and // how much work it does on every loop iteration. #define BGA_CUSTOM_BLINKLIB_SEND_PROBE_TIMEOUT_MS 210 #define BGA_CUSTOM_BLINKLIB_FACE_EXPIRATION_TIMEOUT_MS 260 #endif // BLINKLIB_CONFIG_H_
Now we can go straight to our program. Create a file called face_value.ino in your project directory (or any name.ino you might want) with the following contents:
#include <blinklib.h> #include <face_value.h> void setup() {} static FaceValue value_; // Our current face value. static FaceValue previous_values_[FACE_COUNT]; // Face value on all faces. // Colors we will use. static Color colors_[] = {RED, GREEN, BLUE, YELLOW, MAGENTA, ORANGE, WHITE}; // Note that calls to all face value related functions look exactly the same. Only // handling the value changes. void loop() { FOREACH_FACE(face) { if (isValueReceivedOnFaceExpired(face)) continue; if (previous_values_[face] != getLastValueReceivedOnFace(face)) { value_ = getLastValueReceivedOnFace(face); previous_values_[face] = value_; } } if (buttonSingleClicked()) { // We only sue the first byte for this example. value_.data[0]++; } // Set color based on the first byte of the face value data. setColor(colors_[value_.data[0] % 7]); // Send our face value on all faces. setValueSentOnAllFaces(value_); FOREACH_FACE(face) { // Check again if faces are expired and, if they are , set them to OFF. This helps // tweak the timeouts in blinklib_config.h. This could have bene doen in the other // face loop but I wanted to show this explicitly. if (isValueReceivedOnFaceExpired(face)) setColorOnFace(OFF, face); } }
And that is it. Now you have a 5 byte face value to do as you please.
| https://forum.move38.com/t/using-a-custom-face-value-with-the-custom-blinklib/904 | CC-MAIN-2021-17 | refinedweb | 635 | 58.28 |
Jesse,
Speaking from experience, Pik is a viable solution if you want to run both
versions of Ruby on the same Windows box (at least until RVM 2 is released
with Windows support). It should only require adding the Ruby from the
RailsInstaller installation to Pik so that it is aware. Then, you should be
able to use Pik and install other versions of Ruby.
The other option that is a bit more complicated is using Cygwin. This will
actually allow you to install RVM and run it in a sandboxed Unix
environment on Windows. It's something that I've been meaning to write up
how to do but just haven't gotten around to it.
Cheers,
Evan
Things to check in order:
Locks on the server side.
Network interruption and timeout.
Responsiveness on the server.
Long-running queries.
Well, I seem to have finally gotten to the root of the problem. It seems
that when you execute a cURL call to the same server as the script that is
doing the calling, and if both the "caller" and the "callee" scripts are
trying to use the same session id, a deadlock occurs, causing both scripts
to wait till the other releases the session. I ended up adding a test to
see if there's already a session id in use, and if so, the calling script
doesn't start a session. If there is no session id, then the caller starts
a session, obtains the session id, then destroys the session, which allows
the "callee" script unfettered access to said session, thus removing the
deadlock situation. Below is the code I used to do this:
$convo_id = (isset ($request_vars['convo_id'])) ? $request_vars['convo_id']
Without going and writing code to check, here are some likely culprits.
1) Your 5 second timeout is not long enough to download the full file
"boo.mp3", so the timeout stops the operation.
2) Your web server is taking too long to respond (unlikely, but possible
over a mobile network)
It might be best to either remove the timeout value altogether, or set it
to a more realistic value.
Well, I never figured out why the build was failing for me. However, as of
video.js version 4.4.2, I am able to build the project without problems.
Maybe it was a configuration problem on my end, or maybe some bugs got
fixed on their end, but this issue is now resolved.
register_Timer.SynchronizingObject = this;
This completely defeats the reason for using System.Timers.Timer. It
prevents the Elapsed event handler from being raised on a threadpool
thread, the property ensures it will run on the UI thread. Which is what
you wanted.
But you still get all the disadvantages of that Timer class. Particularly
its habit for swallowing exceptions without a diagnostic is very ugly. As
well as continuing to raise the Elapsed event after the form is closed,
ensuring this cannot happen is a very difficult problem to solve, there are
two inherent race conditions. .NET 1.0 had some design mistakes related to
threading, this was one of them.
Just don't do this, use a System.Windows.Forms.Timer instead. It will work
exactly like your timer, minus all the d
The SCM (the component which handles services) has built-in auto-restart
logic that you can take advantage of to restart your service, as necessary.
Additionally and/or alternatively, you can configure 'custom actions' to be
associated with a failure of the service - the custom action can include
launching a program of your own, which could then log the failure, and
perhaps manually restart your service.
You can read more about such custom actions on MSDN by looking at the
documentation of the structure used to configure such actions:
SERVICE_FAILURE_ACTIONS. Once you fill that structure, you notify the SCM
by calling the ChangeServiceConfig2 function.
Please don't ask "well, what happens if my failure handler program crashes"
:)
I am told that the data will remain intact if you make no changes to the
schema of the db.
Here is the link from a Microsoft forum:
Take a look at the MSDN topic REINSTALLMODE property. The "v" is needed by
the minor upgrade, the "o" tells it to only overwrite older files and the
"a" tells it to reinstall all files regardless of hash or version.
So REINSTALLMODE=vamus should be what you need. However, if the files in
the installer were properly versioned this shouldn't be a problem in the
first place.
Windows Server 2012 includes latest version of SMB protocol 3.0 and it full
of really interesting updates and improvements (details and some relevant
discussion here). As with any serious improvements there is a compromise in
terms of legacy clients support.
Probably there is no support for SMB 3.0 on your Samba client side or some
backward compatibility should be enabled on Server 2012 side.
This error can occur in every Language-Version and depending on the
Word-Version you use, it may not be easy to prevent hidden dialogs. Which
Word-Version do you use?
But first, your parameters are off by one (i think). ReadOnly is the third
parameter and thats why _isVisible my not be working.
I tried to set objWord.Application.Visible to true and it worked for me.
Maybe something else is wrong too?
One quick solution may be setting OpenAndRepair to true. Its the 13.
Parameter, right behind isVisible.
Otherwise have a look at this Link:
How To Dismiss a Dialog Box Displayed by an Office Application with Visual
Basic.
Note that you do not need MinGW to run pandas, it's a optional.
But if you like to use it than check this page:
Quoting one of my previous answers:
HTTP Upgrade is used to indicate a preference or requirement to
switch to a different version of HTTP or to another protocol, if
possible:.
According to the IANA regist
The typical design pattern for database updates in an app goes something
like the code below and every time you update your application where a
database change is required, you bump the database version used in your
SQLiteOpenHelper-derived class.
This, of course, presumes you used SQLiteOpenHelper to manage getting a
reference to your SQLite DB in your provider:
public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
if (oldVersion == 1) {
// DO WORK TO UPGRADE FROM VERSION 1 to 2
oldVersion += 1;
}
if (oldVersion == 2) {
// DO WORK TO UPGRADE FROM VERSION 2 to 3
oldVersion += 1;
}
if (oldVersion == 3) {
// DO WORK TO UPGRADE FROM VERSION 3 to 4
oldV
This has seen when a related entity in another context (and on another
thread) has been modified but not yet persisted.
The scenario:
A --> B
Due to a bug B had pending changes, in another context, on another thread.
The Bug caused B to hang around instead of saving or rolling it back.
Attempting to save A in the current context/thread will will cause the wait
for the other thread to release the lock on B.
Only successful way to trouble shoot was to list all pending entities and
compare to ones in the blocked thread. Took a while :(
I am still looking for something that list all locks on the database and
entities.
The current version of MPJ-Express uses Java Service Wrapper binaries to
launch daemons on remote nodes. The current version (v0.38 I assume) does
not have hard/soft float ABIs for ARM. Please wait for the next public
release, I hope they will incorporate ARM support for that.
It's a bit late but the log command is probably trying to page results and
is waiting for stdin, try the following to turn off the pager:
git = sh.git.bake("--no-pager", _cwd=cwd)
git.log('-n 1', '--pretty=%H')
Project + Properties, Debug tab, tick the "Enable native code debugging"
option. Ensure that you've got the Microsoft Symbol server enabled (Tools
+ Options, Debugging, Symbols), I know you do :)
You can now use Debug + Break All and see what's going on inside the
browser. Do so repeatedly to get the lay of the land. You'll see that
Javascript is executing (jscript8.dll) and constantly forcing the layout
engine to recalculate layout, never getting the job done.
Browsers in general are vulnerable to Javascript that's stuck in an endless
loop. A regular browser tends to put up a dialog after several dozen
seconds to allow the user to abort the JS, but WebBrowser doesn't have that
feature. Nor does it have the dev tools so debugging the JS isn't going to
be a great joy either. This is
From the script I can see this comment:
# need to use "java.ext.dirs" because "-jar" causes classpath to be ignored
# might need more memory, e.g. -Xmx128M
but anyway both command work for me:
java -Djava.ext.dirs=. -jar draw9patch.jar
java -jar draw9patch.jar
Probably you have a failing jar. Can you try with mine?
[I'm on a Mac running Mountain Lion 10.8.4 but I had no problem with Lion
10.7.5]
You can do this, otherwise your gui thread will have to wait until the
endless while loop finishes; consider adding a break function if the
following thread isn't what you want.
using System.Threading;
Thread th = new Thread(new ThreadStart(delegate
{
//while loop
}));
th.Start();
GUI elements from a Windows Service are shown on Session 0. On Windows XP
& 2003, users were allowed to log in to Session 0 and interact normally
with the windows created by a service, but Microsoft put a knife in the
heart of interactive services in Vista (and beyond) by isolating Session 0.
So, to answer your specific questions:
Is a process running in the context of a Service (the Service itself
or any process it starts) "allowed" to make Windows API calls that
display a visible GUI?
Will most Windows API calls (e.g. creating and showing a window,
using Shell_NotifyIcon, etc.) behave the same in the invisible session
of the service?
Yes, GUI calls are allowed and should succeed as normal. The only notable
exceptions that I know of are those related to tray icons b
First: Note that go test will create, compile and run a test program which
intercept output from each test so you will not see output from your tests
until the test finishes (in this moment the output is printed).
Two issues with your code:
If you cannot start the tcp server t is not notified, you only do Printf
here and continue as if everything was fine.
You call t.FailNow from a different goroutine than your test runs in. You
must not do this. See
Fixing those might at least show what else goes wrong. Also: Take a look at
how package net/http does it's testing.
Check this issue
Quoting Isaac Schlueter:
The good news is that you're no longer leaking memory. The bad news is
that you do indeed need to add error listeners to your objects.
In most apps, you can usually treat ECONNRESET as roughly the same as
a graceful close. Just make sure that you actually stop using that
socket, since it's now closed. However, it's not a graceful close,
since it's not in any sense "graceful". Prior to v0.8.20, node would
happily buffer all writes to reset sockets, causing memory explosion
death.
If you just try calling update yourself, instead of passing it to, you'll see the problem immediately:
>>> update('')
UnboundLocalError: local variable 'count' referenced before assignment
If you want to update a global variable, you have to mark it global
explicitly:
def update(block):
global count
count += 1
print count
See the FAQ entry Why am I getting an UnboundLocalError when the variable
has a value? and the following question What are the rules for local and
global variables in Python?, and the docs on global, for more details.
A better way to solve this would be to write a class:
class FtpHandler(object):
def __init__(self):
self.count = 0
def update(self, block):
self.count += 1
print self.count
Then, to
Looks like you may have a cartesian join here.
I'm not sure exactly how your database keys are constructed so I just
guessed below, but it may give you an idea of how you need to link your
tables. These are ANSI joins, not Oracle-specific. I recommend you learn
to use these types of joins so your code is more portable. I hope this
helps.
For outer joins use FULL JOIN for inner joins use JOIN or INNER JOIN. For
left/right joins use LEFT JOIN or RIGHT JOIN.
FROM tickets t
JOIN customer_care_tickets cct ON t.ticket_id = cct.ticket_id
JOIN accounts a ON cct.account_id = a.account_id
JOIN orders o ON o.ticket_id = t.ticket_id
WHERE t.owner IN (SELECT cont.contact_id
FROM contacts cont
WHERE ( c
Please read the javadocs... it will only have n > 0 if a some OP was
selected. This will happen in your example if a socket is accepted. Just do
a telnet localhost 1234 and you will see it in action.
I am editing your code, try this:
WebClient client = new WebClient();
string url = lines[i];
try
{
string downloadString = client.DownloadString(url);
client.Dispose(); //dispose the object because you don't need it
anynmore
findNewListings(downloadString, url);
}
catch (Exception exce)
{
MessageBox.Show("Error downlaoding page:
" + exce.Message);
}
If an object is no longer in use, its better to Dispose it.
I've had exactly the same problem and could solve it using the answer here which works by moving the
projects out of the workspace and back in again after Eclipse has been
started and stopped.
Starting it with -clean -data started eclipse but whenever I switched the
workspace using the eclipse gui it wouldn't load.
cat hangs if no input is given -- as well as many other programs. One of
your paths are most probably empty, as you're able to go through the if
statement:
$ if [ -f ]; then echo "foo"; fi
foo
Once you inside the block, you hang on the call cat <empty>. As
pointed by @ruakh, you should get it to work double-quoting the variable.
It's because you have never actually launched your task. You call
[task launchPath];
That just returns the task's path as a string, it doesn't actually launch
the task. You want
[task launch];
Well it's a streaming API, so it will stream infinitely. Hence if you are
trying to read to the end of stream, it will block forever until you run
out of memory or get disconnected due to network error etc.
You need to process the data while connected to the stream. If I were you
I'd just use a client library (see).
bash-3.2$
?? That looks like the postgres prompt to me. You should be at a prompt
like:
~/rails_projects/my_app$
Try typing:
bash-3.2$ exit
to get back to a prompt you can recognize.
While I cannot speak to this specifically, what seems to have happened is
that I went home for the night and suspended my computer. When I came into
work this morning and resumed my computer it seems to be working exactly as
expected. I can only assume that the "suspend/resume" is equivalent to a
reboot which has forced a correct reference to the base Gtk libraries. But
this is only a guess. I miss linux :(
try to debug it. pause the program when it hangs and see where it hangs. if
it hangs on the SQL query you might want to put a timeout on it.
also you shouldn't create the query like this
cmd.CommandText = "SELECT last_name +', '+ first_name +' ('+major_key+') '
from name where id ='" + textBox1.Text.Replace(@"L", "") + "'";
it's open to sql injections.
use parameterized sql or other form of protection:
SqlCommand command = new SqlCommand("SELECT last_name +', '+ first_name +'
('+major_key+') ' from name where id =@Name"
command.Parameters.Add(new SqlParameter("Name", textBox1.Text.Replace(@"L",
"")));
EDIT
if you want to put a timeout on the connection you can look here at MSDN:
you can set the timeout parameter in the connection string or simply
DBConnection.ConnectionTimeo
Easy answer after much head scratching.
Don't use Cygwin for github access. An alternative is to do all your
normal terminal functions in Cygwin and then use Windows Command Line for
git push origin
Be sure to have ssh keys added to your account. Here are steps to add ssh
to github. Also be sure your ssh keys have a passphrase.
Your regex is causing catastrophic backtracking.
Briefly, your regex has terms that can both capture the same part of the
input, but fails to do so. The regex engine must try all combinations
before failing and due to the matching tree created, each extra character
doubles the number of ways the match can be made. Creating and traversing
this tree leads to a geometrically exponential execution time proportional
to 2^n - which you are seeing.
You may find that changing your dual expression to a possessive quantifier
(ie ++ rather than +) stops this behaviour, because with ++ once characters
are consumed, they stay consumed.
Incidentally, this expression
[-a-zA-z0-9_''^&+?:]
may be rewritten as:
[-w''^&+?:]
Because:
inside a character class (almost) all characters lose t
When you need lots of threads it is better to use thread pool. I found that
there is python binding for that, so definitely you should try this. Here
is similar topic.
Sadly there is no python binding for QtConcurrent. It is nice API but it
uses templates and it is badly documented.
I think I figured it out.
The command I was running included an extra line break after the user name.
As in, I was trying to execute this
pg_restore --verbose --clean --no-acl --no-owner -h localhost -U myusername
-d mydb latest.dump
instead of this
pg_restore --verbose --clean --no-acl --no-owner -h localhost -U myusername
-d mydb latest.dump
For some reason that extra linebreak was gumming things up. Once I removed
it pg_restore worked properly.
So you have |r1| = 50,000 and |r2| = 1,000,000. If you want to compare each
r1 against each r2, you have 50,000 * 1,000,000 = 50,000,000,000. 50
BILLION comparisons you need to make. If each comparison takes 1 ms, it's
still going to take you 50,000,000 (fifty million) seconds to perform this
comparison. This is 578 days.
The only possible way I can see you reducing this complexity would be if
you were to create a r1 and r2 map on node p keyed on SOMEPROPERTY. Then
you would just need to get the list of r1 for SOMEPROPERTY and the list of
r2 by SOMEPROPERTY.
I think the way the program is written, it's expecting data to be passed in
from stdin:
ms.load(sys.stdin)
so you need:
something | quarqd_messages.py --header > message-headers.h | http://www.w3hello.com/questions/pip-hangs-after-upgrade-to-7-1-2-on-Windows-x64 | CC-MAIN-2018-17 | refinedweb | 3,183 | 72.76 |
# Tips and tricks from my Telegram-channel @pythonetc, April 2019

It is a new selection of tips and tricks about Python and programming from my Telegram-channel @pythonetc.
[Previous publications](https://habr.com/ru/search/?q=%5Bpythonetc%20eng%5D&target_type=posts).
---
Storing and sending object via network as bytes is a huge topic. Let’s discuss some tools that are usually used for that in Python and their advantages and disadvantages.
As an example I’ll try to serialize the Cities object which contains some City objects as well as their order. Here is four method you can use:
1. JSON. It’s human readable, easy to use, but consumes a lot of memory. The same is true for other formats like YAML or XML.
```
class City:
def to_dict(self):
return dict(
name=self._name,
country=self._country,
lon=self._lon,
lat=self._lat,
)
class Cities:
def __init__(self, cities):
self._cities = cities
def to_json(self):
return json.dumps([
c.to_dict() for c in self._cities
]).encode('utf8')
```
2. Pickle. Pickle is native for Python, can be customized and consumes less memory than JSON. The downside is you have to use Python to unpickle the data.
```
class Cities:
def pickle(self):
return pickle.dumps(self)
```
3. Protobuf (and other binary serializers such as msgpack). Consumes even less memory, can be used from any other programming languages, but require custom schema:
```
syntax = "proto2";
message City {
required string name = 1;
required string country = 2;
required float lon = 3;
required float lat = 4;
}
message Cities {
repeated City cities = 1;
}
class City:
def to_protobuf(self):
result = city_pb2.City()
result.name = self._name
result.country = self._country
result.lon = self._lon
result.lat = self._lat
return result
class Cities:
def to_protobuf(self):
result = city_pb2.Cities()
result.cities.extend([
c.to_protobuf() for c in self._cities
])
return result
```
4. Manual. You can manually pack and unpack data with the struct module. It allow you to consume the absolute minimum amount of memory, but protobuf still can be a better choice since it supports versioning and explicit schemas.
```
class City:
def to_bytes(self):
name_encoded = self._name.encode('utf8')
name_length = len(name_encoded)
country_encoded = self._country.encode('utf8')
country_length = len(country_encoded)
return struct.pack(
'BsBsff',
name_length, name_encoded,
country_length, country_encoded,
self._lon, self._lat,
class Cities:
def to_bytes(self):
return b''.join(
c.to_bytes() for c in self._cities
)
```
---
If a function argument has the default value of `None` and is annotated as `T`, `mypy` automatically treats it as `Optional[T]` (in other words, `Union[T, None]`).
That doesn't work with other types, so you can't have something like `f(x: A = B())`. It also doesn't work with a variable assignment: `a: A = None` will cause an error.
```
def f(x: int = None):
reveal_type(x)
def g(y: int = 'x'):
reveal_type(y)
z: int = None
reveal_type(z)
$ mypy test.py
test.py:2: error: Revealed type is 'Union[builtins.int, None]'
test.py:4: error: Incompatible default for argument "y" (default has type "str", argument has type "int")
test.py:5: error: Revealed type is 'builtins.int'
test.py:7: error: Incompatible types in assignment (expression has type "None", variable has type "int")
test.py:8: error: Revealed type is 'builtins.int'
```
---
In Python 3, once the `except` block is exited, the variables that store caught exceptions are removed from `locals()` even if they previously existed:
```
>>> e = 2
>>> try:
... 1/0
... except Exception as e:
... pass
...
>>> e
Traceback (most recent call last):
File "", line 1, in
NameError: name 'e' is not defined
```
If you want to save a reference to the exception, you have to use another variable:
```
>>> error = None
>>> try:
... 1/0
... except Exception as e:
... error = e
...
>>> error
ZeroDivisionError('division by zero',)
```
This is not true for Python 2.
---
You may have your own `pypi` repository. It lets you release packages inside your project and install them with pip as though they are regular packages.
It is remarkable that you don’t have to install any specific software, but can use a regular http-server instead. Here is how it works for me, for example.
Let’s have a trivial package named `pythonetc`.
```
setup.py:
from setuptools import setup, find_packages
setup(
name='pythonetc',
version='1.0',
packages=find_packages(),
)
pythonetc.py:
def ping():
return 'pong'
```
Let’s release it to the ~/pypi directory:
```
$ python setup.py sdist bdist_wheel
…
$ mv dist ~/pypi/pythonetc
```
Now server this on the `pypi.pushtaev.ru` domain with nginx:
```
$ cat /etc/nginx/sites-enabled/pypi
server {
listen 80;
server_name pypi.pushtaev.ru;
root /home/vadim/pypi;
index index.html index.htm index.nginx-debian.html;
location / {
autoindex on;
try_files $uri $uri/ =404;
}
}
```
It now can be installed:
```
$ pip install -i http://pypi.pushtaev.ru --trusted-host pypi.pushtaev.ru pythonetc
…
Collecting pythonetc
Downloading http://pypi.pushtaev.ru/pythonetc/pythonetc-1.0-py3-none-any.whl
Installing collected packages: pythonetc
Successfully installed pythonetc-1.0
$ python
Python 3.7.0+ (heads/3.7:0964aac, Mar 29 2019, 00:40:55)
[GCC 4.9.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pythonetc
>>> pythonetc.ping()
'pong'
```
---
It’s quite often when you have to declare a dictionary with all keys equal to the local variables with the same name. Something like this:
```
dict(
context=context,
mode=mode,
action_type=action_type,
)
```
ECMAScript even has the special form of object literal for such cases (it’s called Object Literal Property Value Shorthand):
```
> var a = 1;
< undefined
> var b = 2;
< undefined
> {a, b}
< {a: 1, b: 2}
```
It is possible to create a similar helper in Python (alas, it looks not even closely as good as the ECMAScript notation):
```
def shorthand_dict(lcls, names):
return {k: lcls[k] for k in names}
context = dict(user_id=42, user_ip='1.2.3.4')
mode = 'force'
action_type = 7
shorthand_dict(locals(), [
'context',
'mode',
'action_type',
])
```
You may wonder why we have to pass `locals()` as a parameter in the previous example. Is it possible to get the `locals` of the caller in the callee? It is indeed, but you have to mess with the `inpsect` module:
```
import inspect
def shorthand_dict(names):
lcls = inspect.currentframe().f_back.f_locals
return {k: lcls[k] for k in names}
context = dict(user_id=42, user_ip='1.2.3.4')
mode = 'force'
action_type = 7
shorthand_dict([
'context',
'mode',
'action_type',
])
```
You can go even further and use something like this —<https://github.com/alexmojaki/sorcery>:
```
from sorcery import dict_of
dict_of(context, mode, action_type)
``` | https://habr.com/ru/post/450864/ | null | null | 1,078 | 50.23 |
table of contents
NAME¶
insque, remque - insert/remove an item from a queue
SYNOPSIS¶
#include <search.h>
void insque(void *elem, void *prev); void remque(void *elem);
insque(), remque():
_XOPEN_SOURCE >= 500
|| /* Glibc since 2.19: */ _DEFAULT_SOURCE
|| /* Glibc <= 2.19: */ _SVID_SOURCE
DESCRIPTION¶
The insque() and remque() functions manipulate doubly linked lists. Each element in the list is a structure of which the first two elements are a forward and a backward pointer. The linked list may be linear (i.e., NULL forward pointer at the end of the list and NULL backward pointer at the start of the list) or circular.
The insque() function inserts the element pointed to by elem immediately after the element pointed to by prev.
If the list is linear, then the call insque(elem, NULL) can be used to insert the initial list element, and the call sets the forward and backward pointers of elem to NULL.
If the list is circular, the caller should ensure that the forward and backward pointers of the first element are initialized to point to that element, and the prev argument of the insque() call should also point to the element.
The remque() function removes the element pointed to by elem from the doubly linked list.
ATTRIBUTES¶
For an explanation of the terms used in this section, see attributes(7).
CONFORMING TO¶
POSIX.1-2001, POSIX.1-2008.
NOTES¶ several versions of UNIX. The above is the POSIX version. Some systems place them in <string.h>.
BUGS¶.
EXAMPLES¶
The program below demonstrates the use of insque(). Here is an example run of the program:
$ ./a.out -c a b c Traversing completed list:
a
b
c That was a circular list
Program source¶
#include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <search.h> struct element {
struct element *forward;
struct element *backward;
char *name; }; static struct element * new_element(void) {
struct element *e = malloc(sizeof(*e));
if (e == NULL) {
fprintf(stderr, "malloc() failed\n");
exit(EXIT_FAILURE);
}
return e; } int main(int argc, char *argv[]) {
struct element *first, *elem, *prev;
int circular, opt, errfnd;
/* The "); }
SEE ALSO¶
COLOPHON¶
This page is part of release 5.13 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. | https://manpages.debian.org/unstable/manpages-dev/remque.3.en.html | CC-MAIN-2022-33 | refinedweb | 381 | 65.62 |
TOTD #117: Invoke a JAX-WS Web service from a Rails app deployed in GlassFish
By arungupta on Jan 12, 2010
A user on the GlassFish Forum tried invoking a JAX-WS Web service from a Rails application and faced some issues. This Tip Of The Day (TOTD) will discuss the different approaches and show their current status.
A Rails app can be deployed on GlassFish in 3 different ways:
- Directory Deployment in GlassFish v3 Server - TOTD #72 explains how to deploy a trivial Rails application (with just a scaffold) on the GlassFish v3 server. Even though the blog uses a Rails application, any Rack-based application can be deployed on the server. This server is also the Reference Implementation for Java EE 6 and can run Grails and Django applications as well.
- Directory Deployment using light-weight GlassFish Gem - GlassFish Gem is a light-weight version of the full-blown server and is stripped to run, just like the server, any Rack-based application such as Merb, Rails, and Sinatra. TOTD #70 shows how to deploy the same application using GlassFish Gem.
- WAR file in GlassFish v2.x or v3 - TOTD #73 explains how to deploy a Rails application as a WAR file on GlassFish v2. The JNDI connection pooling part of the blog may be skipped to simplify the steps, but the concepts are still valid. TOTD #44 shows how to do JNDI connection pooling for GlassFish v3. As GlassFish v2 has in-built support for session replication, TOTD #92 demonstrates how a Rails application can leverage that functionality.
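All three models run not just Rails but any Rack-based application. As a reminder of how small the Rack contract is, here is a sketch of the handler a bare config.ru would hand to `run` (the response text is made up for illustration):

```ruby
# A Rack application is any object that responds to call(env) and
# returns a [status, headers, body] triple -- this is what "run" in a
# config.ru would be handed.
app = lambda do |env|
  [200, { 'Content-Type' => 'text/plain' }, ['Hello from Rack']]
end

# Exercising the contract directly, without a server:
status, headers, body = app.call({})
puts status       # 200
puts body.first   # Hello from Rack
```

In a real config.ru the file would simply end with `run app`; the server then drives `call` for each incoming request.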
Now let's get to the issue the user reported with these 3 deployment models.
First, let's deploy a simple Web service endpoint and generate a JAR file of the client-side artifacts:
- This blog will use a simple Web service as defined in screencast #ws7. The Web service endpoint looks like:
package server;

import javax.jws.WebService;

/**
 * @author arungupta
 */
@WebService()
public class HelloService {
    public String sayHello(String name) {
        return "Hello " + name;
    }
}
- Generate Web service client-side artifacts as:
~/samples/v3/rails/webservice/tmp >wsimport -keep
parsing WSDL...
generating code...
compiling code...
- Create a Web service client jar file as:
jar cvf wsclient.jar ./server
Now let's write a Rails application and invoke this Web service:
- Create a simple Rails application as:
jruby -S rails webservice
Optionally you may specify "-d mysql" to use the MySQL database. Or better, uncomment the following line:
# config.frameworks -= [ :active_record, :active_resource, :action_mailer ]
in "config/environment.rb" as no database interaction is required.
- Create a controller and view as:
jruby script/generate controller home index
- Update the Controller in "app/controllers/home_controller.rb" as:
include Java

class HomeController < ApplicationController
  def index
    service = Java::server.HelloServiceService.new
    port = service.getHelloServicePort
    @result = port.sayHello("Duke")
  end
end
- Change the View in "app/views/home/index.html.erb" as:
<h1>Home#index</h1>
<p>Find me in app/views/home/index.html.erb</p>
<%= @result %>
Now let's deploy this Web service using the 3 different deployment models mentioned above.
GlassFish v3 allows a directory-based deployment of Rails applications. This application needs to locate the Web service client classes. The "wsclient.jar" can be copied to the "lib" directory of the Rails application ("webservice/lib" in our case), to "domains/domain1/lib/ext", or to "JRUBY_HOME/lib". The library can also be passed during deployment using the "--libraries" switch. None of these approaches seems to work correctly, as explained in issue# 11408. So for now, invoking a JAX-WS Web service from a Rails application deployed directly on GlassFish v3 is not possible, at least until the bug is fixed.
In order to deploy the same application using GlassFish Gem, you can copy "wsclient.jar" to the "lib" directory of your Rails application. And also add the following line to "app/controllers/home_controller.rb":
require 'lib/wsclient.jar'
Alternatively you can copy it to the "JRUBY_HOME/lib" directory if this Web service client is accessed by multiple applications. In this case there is no need to add any "require" statement to your Controller. Either way, running the application as:
jruby -S glassfish
and accessing "" shows the following output:
And finally as explained in TOTD #73, bundle up your original Rails application as WAR and then deploy on GlassFish v3 as:
asadmin deploy webservice.war
Make sure to copy "wsclient.jar" to the "lib" directory of your Rails application and then Warbler will copy it to "WEB-INF/lib" of the generated WAR file. The output is shown as below:
So if you want to invoke a Metro/JAX-WS Web service from a Rails application, run your Rails application using the GlassFish Gem or deploy it as a WAR file. It'll also work on the GlassFish v3 server once issue# 11408 is fixed.
Here are some additional links:
- TOTD #104 also shows how popular Rails applications such as Redmine, Typo, and Substruct can be easily deployed on GlassFish.
- Rails applications can be easily clustered using Apache + mod_proxy or nginx.
A complete archive of all the TOTDs is available here.
Technorati: totd glassfish v3 jruby rails webservice jax-ws metro
hi,
sorry to post here because i don't find your mail in this amazing blog.
i would like to know how to invoke a distant webs services eg i have the same web service deployed in two machines
how to create a client that can invoke this 2 web services?
thank you in advance
Posted by Marouene on March 17, 2010 at 05:37 AM PDT #
Marouene,
Generate the client and change the address at runtime as described at:
Feel free to follow up at users@metro.dev.java.net.
Posted by Arun Gupta on March 31, 2010 at 04:09 AM PDT # | https://blogs.oracle.com/arungupta/entry/totd_117_invoke_a_jax | CC-MAIN-2014-15 | refinedweb | 944 | 54.63 |
30 September 2007 10:21 [Source: ICIS news]
BERLIN (ICIS news)--BP Acetyls, the global acetic acid producer, could complete the divestment of its ethyl acetate (ETAC) and vinyl acetate monomer (VAM) units in the UK within the next six months, a senior company official said on Sunday.
“The divestment of the ETAC and VAM business is on track and we are closer to a deal in the next few months,” Guy Moeyens, chief executive of the global acetyls business, said on the sidelines of the European Petrochemical Conference (EPCA).
The sale will be in line with the company’s strategy not to compete with its customers.
BP Acetyls produces around 300,000 tonnes/year of VAM and 250,000 tonnes/year of ET | http://www.icis.com/Articles/2007/09/30/9066345/epca-07-bp-to-tie-up-etac-vam-sale-in-6-mths.html | CC-MAIN-2014-15 | refinedweb | 126 | 58.45 |
Post Syndicated from Lennart Poettering original
With the new v221 release of
systemd
we are declaring the
sd-bus
API shipped with
systemd
stable. sd-bus is our minimal D-Bus
IPC C library, supporting as
back-ends both classic socket-based D-Bus and
kdbus. The library has been
part of systemd for a while, but has only been used internally, since
we wanted to have the liberty to still make API changes without
affecting external consumers of the library. However, now we are
confident to commit to a stable API for it, starting with v221.
In this blog story I hope to provide you with a quick overview on
sd-bus, a short reiteration on D-Bus and its concepts, as well as a
few simple examples how to write D-Bus clients and services with it.
What is D-Bus again?
Let’s start with a quick reminder what
D-Bus actually is: it’s a
powerful, generic IPC system for Linux and other operating systems. It
knows concepts like buses, objects, interfaces, methods, signals,
properties. It provides you with fine-grained access control, a rich
type system, discoverability, introspection, monitoring, reliable
multicasting, service activation, file descriptor passing, and
more. There are bindings for numerous programming languages that are
used on Linux.
D-Bus has been a core component of Linux systems for more than 10
years. It is certainly the most widely established high-level local
IPC system on Linux. Since systemd’s inception it has been the IPC
system it exposes its interfaces on. And even before systemd, it was
the IPC system Upstart used to expose its interfaces. It is used by
GNOME, by KDE and by a variety of system components.
D-Bus refers to both a
specification,
and a reference
implementation. The
reference implementation provides both a bus server component, as well
as a client library. While there are multiple other, popular
reimplementations of the client library – for both C and other
programming languages –, the only commonly used server side is the
one from the reference implementation. (However, the kdbus project is
working on providing an alternative to this server implementation as a
kernel component.)
D-Bus is mostly used as local IPC, on top of AF_UNIX sockets. However,
the protocol may be used on top of TCP/IP as well. It does not
natively support encryption, hence using D-Bus directly on TCP is
usually not a good idea. It is possible to combine D-Bus with a
transport like ssh in order to secure it. systemd uses this to make
many of its APIs accessible remotely.
A frequently asked question about D-Bus is why it exists at all,
given that AF_UNIX sockets and FIFOs already exist on UNIX and have
been used for a long time successfully. To answer this question let’s
make a comparison with popular web technology of today: what
AF_UNIX/FIFOs are to D-Bus, TCP is to HTTP/REST. While AF_UNIX
sockets/FIFOs only shovel raw bytes between processes, D-Bus defines
actual message encoding and adds concepts like method call
transactions, an object system, security mechanisms, multicasting and
more.
From our 10+ years of experience with D-Bus we know today that while there
are some areas where we can improve things (and we are working on
that, both with kdbus and sd-bus), it generally appears to be a very
well designed system, that stood the test of time, aged well and is
widely established. Today, if we’d sit down and design a completely
new IPC system incorporating all the experience and knowledge we
gained with D-Bus, I am sure the result would be very close to what
D-Bus already is.
Or in short: D-Bus is great. If you hack on a Linux project and need a
local IPC, it should be your first choice. Not only because D-Bus is
well designed, but also because there aren’t many alternatives that
can cover similar functionality.
Where does sd-bus fit in?
Let’s discuss why sd-bus exists, how it compares with the other
existing C D-Bus libraries and why it might be a library to consider
for your project.
For C, there are two established, popular D-Bus libraries: libdbus, as
it is shipped in the reference implementation of D-Bus, as well as
GDBus, a component of GLib, the low-level tool library of GNOME.
Of the two libdbus is the much older one, as it was written at the
time the specification was put together. The library was written with
a focus on being portable and to be useful as back-end for higher-level
language bindings. Both of these goals required the API to be very
generic, resulting in a relatively baroque, hard-to-use API that lacks
the bits that make it easy and fun to use from C. It provides the
building blocks, but few tools to actually make it straightforward to
build a house from them. On the other hand, the library is suitable
for most use-cases (for example, it is OOM-safe making it suitable for
writing lowest level system software), and is portable to operating
systems like Windows or more exotic UNIXes.
GDBus
is a much newer implementation. It has been written after considerable
experience with using a GLib/GObject wrapper around libdbus. GDBus is
implemented from scratch, shares no code with libdbus. Its design
differs substantially from libdbus, it contains code generators to
make it specifically easy to expose GObject objects on the bus, or
talking to D-Bus objects as GObject objects. It translates D-Bus data
types to GVariant, which is GLib’s powerful data serialization
format. If you are used to GLib-style programming then you’ll feel
right at home, hacking D-Bus services and clients with it is a lot
simpler than using libdbus.
With sd-bus we now provide a third implementation, sharing no code
with either libdbus or GDBus. For us, the focus was on providing kind
of a middle ground between libdbus and GDBus: a low-level C library
that actually is fun to work with, that has enough syntactic sugar to
make it easy to write clients and services with, but on the other hand
is more low-level than GDBus/GLib/GObject/GVariant. To be able to use
it in systemd’s various system-level components it needed to be
OOM-safe and minimal. Another major point we wanted to focus on was
supporting a kdbus back-end right from the beginning, in addition to
the socket transport of the original D-Bus specification (“dbus1”). In
fact, we wanted to design the library closer to kdbus’ semantics than
to dbus1’s, wherever they are different, but still cover both
transports nicely. In contrast to libdbus or GDBus portability is not
a priority for sd-bus, instead we try to make the best of the Linux
platform and expose specific Linux concepts wherever that is
beneficial. Finally, performance was also an issue (though a secondary
one): neither libdbus nor GDBus will win any speed records. We wanted
to improve on performance (throughput and latency) — but simplicity
and correctness are more important to us. We believe the result of our
work delivers our goals quite nicely: the library is fun to use,
supports kdbus and sockets as back-end, is relatively minimal, and the
performance is substantially
better
than both libdbus and GDBus.
To decide which of the three APIs to use for you C project, here are
short guidelines:
If you hack on a GLib/GObject project, GDBus is definitely your
first choice.
If portability to non-Linux kernels — including Windows, Mac OS and
other UNIXes — is important to you, use either GDBus (which more or
less means buying into GLib/GObject) or libdbus (which requires a
lot of manual work).
Otherwise, sd-bus would be my recommended choice.
(I am not covering C++ specifically here, this is all about plain C
only. But do note: if you use Qt, then QtDBus is the D-Bus API of
choice, being a wrapper around libdbus.)
Introduction to D-Bus Concepts
To the uninitiated D-Bus usually appears to be a relatively opaque
technology. It uses lots of concepts that appear unnecessarily complex
and redundant on first sight. But actually, they make a lot of
sense. Let’s have a look:
A bus is where you look for IPC services. There are usually two
kinds of buses: a system bus, of which there’s exactly one per
system, and which is where you’d look for system services; and a
user bus, of which there’s one per user, and which is where you’d
look for user services, like the address book service or the mail
program. (Originally, the user bus was actually a session bus, so
that you get multiple of them if you log in many times as the same
user; on most setups it still is, but we are working on
moving things to a true user bus, of which there is only one per
user on a system, regardless of how many times that user happens to log in.)
A service is a program that offers some IPC API on a bus. A
service is identified by a name in reverse domain name
notation. Thus, the
org.freedesktop.NetworkManager service on the
system bus is where NetworkManager's APIs are available and
org.freedesktop.login1 on the system bus is where
systemd-logind‘s APIs are exposed.
A client is a program that makes use of some IPC API on a bus. It
talks to a service, monitors it and generally doesn’t provide any
services on its own. That said, lines are blurry and many services
are also clients to other services. Frequently the term peer is
used as a generalization to refer to either a service or a client.
An object path is an identifier for an object on a specific
service. In a way this is comparable to a C pointer, since that’s
how you generally reference a C object, if you hack object-oriented
programs in C. However, C pointers are just memory addresses, and
passing memory addresses around to other processes would make
little sense, since they of course refer to the address space of
the service; the client couldn't make sense of them. Thus, the D-Bus
designers came up with the object path concept, which is just a
string that looks like a file system path. Example:
/org/freedesktop/login1 is the object path of the 'manager'
object of the org.freedesktop.login1 service (which, as we
remember from above, is still the service
systemd-logind
exposes). Because object paths are structured like file system
paths they can be neatly arranged in a tree, so that you end up
with a venerable tree of objects. For example, you’ll find all user
sessions
systemd-logind manages below the
/org/freedesktop/login1/session sub-tree, for example called
/org/freedesktop/login1/session/_7,
/org/freedesktop/login1/session/_55 and so on. How services
precisely label their objects and arrange them in a tree is
completely up to the developers of the services.
Each object that is identified by an object path has one or more
interfaces. An interface is a collection of signals, methods, and
properties (collectively called members), that belong
together. The concept of a D-Bus interface is actually pretty
much identical to what you know from programming languages such as
Java, which also knows an interface concept. Which interfaces an
object implements is up to the developers of the service. Interface
names are in reverse domain name notation, much like service
names. (Yes, that’s admittedly confusing, in particular since it’s
pretty common for simpler services to reuse the service name string
also as an interface name.) A couple of interfaces are standardized
though and you’ll find them available on many of the objects
offered by the various services. Specifically, those are
org.freedesktop.DBus.Introspectable,
org.freedesktop.DBus.Peer
and
org.freedesktop.DBus.Properties.
An interface can contain methods. The word “method” is more or
less just a fancy word for “function”, and is a term used pretty
much the same way in object-oriented languages such as Java. The
most common interaction between D-Bus peers is that one peer
invokes one of these methods on another peer and gets a reply. A
D-Bus method takes a couple of parameters, and returns others. The
parameters are transmitted in a type-safe way, and the type
information is included in the introspection data you can query
from each object. Usually, method names (and the other member
types) follow a CamelCase syntax. For example,
systemd-logind
exposes an
ActivateSession method on the
org.freedesktop.login1.Manager interface that is available on the
/org/freedesktop/login1 object of the org.freedesktop.login1
service.
A signature describes a set of parameters a function (or signal,
property, see below) takes or returns. It’s a series of characters
that each encode one parameter by its type. The set of types
available is pretty powerful. For example, there are simpler types
like s for string, or u for 32-bit integer, but also complex
types such as as for an array of strings or a(sb) for an array
of structures consisting of one string and one boolean each. See
the D-Bus specification
for the full explanation of the type system. The
ActivateSession method mentioned above takes a single string as
parameter (the parameter signature is hence s), and returns
nothing (the return signature is hence the empty string). Of
course, the signature can get a lot more complex, see below for
more examples.
A signal is another member type that the D-Bus object system
knows. Much like a method it has a signature. However, they serve
different purposes. While in a method call a single client issues a
request on a single service, and that service sends back a response
to the client, signals are for general notification of
peers. Services send them out when they want to tell one or more
peers on the bus that something happened or changed. In contrast to
method calls and their replies they are hence usually broadcast
over a bus. While method calls/replies are used for duplex
one-to-one communication, signals are usually used for simplex
one-to-many communication (note however that that’s not a
requirement, they can also be used one-to-one). Example:
systemd-logind broadcasts a SessionNew signal from its manager
object each time a user logs in, and a SessionRemoved signal
every time a user logs out.
A property is the third member type that the D-Bus object system
knows. It’s similar to the property concept known by languages like
C#. Properties also have a signature, and are more or less just
variables that an object exposes, that can be read or altered by
clients. Example:
systemd-logind exposes a property Docked of
the signature b (a boolean). It reflects whether
systemd-logind
thinks the system is currently in a docking station of some form
(only applies to laptops …).
So much for the various concepts D-Bus knows. Of course, all these new
concepts might be overwhelming. Let’s look at them from a different
perspective. I assume many of the readers have an understanding of
today’s web technology, specifically HTTP and REST. Let’s try to
compare the concept of a HTTP request with the concept of a D-Bus
method call:
A HTTP request you issue on a specific network. It could be the
Internet, or it could be your local LAN, or a company
VPN. Depending on which network you issue the request on, you’ll be
able to talk to a different set of servers. This is not unlike the
“bus” concept of D-Bus.
On the network you then pick a specific HTTP server to talk
to. That’s roughly comparable to picking a service on a specific bus.
On the HTTP server you then ask for a specific URL. The “path” part
of the URL (by which I mean everything after the host name of the
server, up to the last “/”) is pretty similar to a D-Bus object path.
The “file” part of the URL (by which I mean everything after the
last slash, following the path, as described above), then defines
the actual call to make. In D-Bus this could be mapped to an
interface and method name.
Finally, the parameters of a HTTP call follow the path after the
“?”, they map to the signature of the D-Bus call.
Of course, comparing an HTTP request to a D-Bus method call is a bit
like comparing apples and oranges. However, I think it's still useful to
get a bit of a feeling of what maps to what.
From the shell
So much about the concepts and the gray theory behind them. Let’s make
this exciting, let’s actually see how this feels on a real system.
For a while now systemd has included a tool
busctl that is useful to
explore and interact with the D-Bus object system. When invoked
without parameters, it will show you a list of all peers connected to
the system bus. (Use
--user to see the peers of your user bus
instead):
$ busctl
NAME                           PID PROCESS         USER    CONNECTION UNIT                    SESSION DESCRIPTION
:1.1                             1 systemd         root    :1.1       -                       -       -
:1.11                          705 NetworkManager  root    :1.11      NetworkManager.service  -       -
:1.14                          744 gdm             root    :1.14      gdm.service             -       -
:1.4                           708 systemd-logind  root    :1.4       systemd-logind.service  -       -
:1.7200                      17563 busctl          lennart :1.7200    session-1.scope         1       -
[…]
org.freedesktop.NetworkManager 705 NetworkManager  root    :1.11      NetworkManager.service  -       -
org.freedesktop.login1         708 systemd-logind  root    :1.4       systemd-logind.service  -       -
org.freedesktop.systemd1         1 systemd         root    :1.1       -                       -       -
org.gnome.DisplayManager       744 gdm             root    :1.14      gdm.service             -       -
[…]
(I have shortened the output a bit, to keep things brief).
The list begins with a list of all peers currently connected to the
bus. They are identified by peer names like “:1.11”. These are called
unique names in D-Bus nomenclature. Basically, every peer has a
unique name, and they are assigned automatically when a peer connects
to the bus. They are much like an IP address if you so will. You’ll
notice that a couple of peers are already connected, including our
little busctl tool itself as well as a number of system services. The
list then shows all actual services on the bus, identified by their
service names (as discussed above; to discern them from the unique
names these are also called well-known names). In many ways
well-known names are similar to DNS host names, i.e. they are a
friendlier way to reference a peer, but on the lower level they just
map to an IP address, or in this comparison the unique name. Much like
you can connect to a host on the Internet by either its host name or
its IP address, you can also connect to a bus peer either by its
unique or its well-known name. (Note that each peer can have as many
well-known names as it likes, much like an IP address can have
multiple host names referring to it).
OK, that’s already kinda cool. Try it for yourself, on your local
machine (all you need is a recent, systemd-based distribution).
Let’s now go the next step. Let’s see which objects the
org.freedesktop.login1 service actually offers:
$ busctl tree org.freedesktop.login1
└─/org/freedesktop/login1
  ├─/org/freedesktop/login1/seat
  │ ├─/org/freedesktop/login1/seat/seat0
  │ └─/org/freedesktop/login1/seat/self
  ├─/org/freedesktop/login1/session
  │ ├─/org/freedesktop/login1/session/_31
  │ └─/org/freedesktop/login1/session/self
  └─/org/freedesktop/login1/user
    ├─/org/freedesktop/login1/user/_1000
    └─/org/freedesktop/login1/user/self
Pretty, isn’t it? What’s actually even nicer, but what the output
does not show, is that there’s full command line completion
available: as you press TAB the shell will auto-complete the service
names for you. It’s a real pleasure to explore your D-Bus objects that
way!
The output shows some objects that you might recognize from the
explanations above. Now, let’s go further. Let’s see what interfaces,
methods, signals and properties one of these objects actually exposes:
$ busctl introspect org.freedesktop.login1 /org/freedesktop/login1/session/_31
NAME                           TYPE      SIGNATURE RESULT/VALUE     FLAGS
org.freedesktop.login1.Session interface -         -                -
.Activate                      method    -         -                -
.Kill                          method    si        -                -
.Lock                          method    -         -                -
.PauseDeviceComplete           method    uu        -                -
.ReleaseControl                method    -         -                -
.ReleaseDevice                 method    uu        -                -
.SetIdleHint                   method    b         -                -
.TakeControl                   method    b         -                -
.TakeDevice                    method    uu        hb               -
.Terminate                     method    -         -                -
.Unlock                        method    -         -                -
.Active                        property  b         true             emits-change
.Audit                         property  u         1                const
.Class                         property  s         "user"           const
.Desktop                       property  s         ""               const
.Display                       property  s         ""               const
.Id                            property  s         "1"              const
.IdleHint                      property  b         true             emits-change
.IdleSinceHint                 property  t         1434494624206001 emits-change
.IdleSinceHintMonotonic        property  t         0                emits-change
.Leader                        property  u         762              const
.Name                          property  s         "lennart"        const
.Remote                        property  b         false            const
.RemoteHost                    property  s         ""               const
.RemoteUser                    property  s         ""               const
.Scope                         property  s         "session-1.scope" const
.Seat                          property  (so)      "seat0" "/org/freedesktop/login1/seat... const
.Service                       property  s         "gdm-autologin"  const
.State                         property  s         "active"         -
.TTY                           property  s         "/dev/tty1"      const
.Timestamp                     property  t         1434494630344367 const
.TimestampMonotonic            property  t         34814579         const
.Type                          property  s         "x11"            const
.User                          property  (uo)      1000 "/org/freedesktop/login1/user/_1... const
.VTNr                          property  u         1                const
.Lock                          signal    -         -                -
.PauseDevice                   signal    uus       -                -
.ResumeDevice                  signal    uuh       -                -
.Unlock                        signal    -         -                -
As before, the busctl command supports command line completion, hence
both the service name and the object path used are easily put together
on the shell simply by pressing TAB. The output shows the methods,
properties, signals of one of the session objects that are currently
made available by
systemd-logind. There’s a section for each
interface the object knows. The second column tells you what kind of
member is shown in the line. The third column shows the signature of
the member. In case of method calls that’s the input parameters, the
fourth column shows what is returned. For properties, the fourth
column encodes the current value of them.
So far, we just explored. Let’s take the next step now: let’s become
active – let’s call a method:
# busctl call org.freedesktop.login1 /org/freedesktop/login1/session/_31 org.freedesktop.login1.Session Lock
I don’t think I need to mention this anymore, but anyway: again
there’s full command line completion available. The third argument is
the interface name, the fourth the method name, both can be easily
completed by pressing TAB. In this case we picked the
Lock method,
which activates the screen lock for the specific session. And yupp,
the instant I pressed enter on this line my screen lock turned on
(this only works on DEs that correctly hook into
systemd-logind. GNOME works fine, and KDE should work too).
The
Lock method call we picked is very simple, as it takes no
parameters and returns none. Of course, it can get more complicated
for some calls. Here’s another example, this time using one of
systemd’s own bus calls, to start an arbitrary system unit:
# busctl call org.freedesktop.systemd1 /org/freedesktop/systemd1 org.freedesktop.systemd1.Manager StartUnit ss "cups.service" "replace"
o "/org/freedesktop/systemd1/job/42684"
This call takes two strings as input parameters, as we denote in the
signature string that follows the method name (as usual, command line
completion helps you getting this right). Following the signature the
next two parameters are simply the two strings to pass. The specified
signature string hence indicates what comes next. systemd’s StartUnit
method call takes the unit name to start as first parameter, and the
mode in which to start it as second. The call returned a single object
path value. It is encoded the same way as the input parameter: a
signature (just
o for the object path) followed by the actual value.
Of course, some method call parameters can get a ton more complex, but
with
busctl it’s relatively easy to encode them all. See the man
page for
details.
busctl knows a number of other operations. For example, you can use
it to monitor D-Bus traffic as it happens (including generating a
.cap file for use with Wireshark!) or you can set or get specific
properties. However, this blog story was supposed to be about sd-bus,
not
busctl, hence let’s cut this short here, and let me direct you
to the man page in case you want to know more about the tool.
busctl (like the rest of systemd) is implemented using the sd-bus
API. Thus it exposes many of the features of sd-bus itself. For
example, you can use it to connect to remote or container buses. It
understands both kdbus and classic D-Bus, and more!
sd-bus
But enough! Let’s get back on topic, let’s talk about sd-bus itself.
The sd-bus set of APIs is mostly contained in the header file
sd-bus.h.
Here’s a random selection of features of the library, that make it
compare well with the other implementations available.
Supports both kdbus and dbus1 as back-end.
Has high-level support for connecting to remote buses via ssh, and
to buses of local OS containers.
Powerful credential model, to implement authentication of clients
in services. Currently 34 individual fields are supported, from the
PID of the client to the cgroup or capability sets.
Support for tracking the life-cycle of peers in order to release
local objects automatically when all peers referencing them
disconnected.
The client builds an efficient decision tree to determine which
handlers to deliver an incoming bus message to.
Automatically translates D-Bus errors into UNIX style errors and
back (this is lossy though), to ensure best integration of D-Bus
into low-level Linux programs.
Powerful but lightweight object model for exposing local objects on
the bus. Automatically generates introspection as necessary.
The API is currently not fully documented, but we are working on
completing the set of manual pages. For details
see all pages starting with
sd_bus_.
Invoking a Method, from C, with sd-bus
So much about the library in general. Here’s an example for connecting
to the bus and issuing a method call:
#include <stdio.h>
#include <stdlib.h>
#include <systemd/sd-bus.h>

int main(int argc, char *argv[]) {
        sd_bus_error error = SD_BUS_ERROR_NULL;
        sd_bus_message *m = NULL;
        sd_bus *bus = NULL;
        const char *path;
        int r;

        /* Connect to the system bus */
        r = sd_bus_open_system(&bus);
        if (r < 0) {
                fprintf(stderr, "Failed to connect to system bus: %s\n", strerror(-r));
                goto finish;
        }

        /* Issue the method call and store the response message in m */
        r = sd_bus_call_method(bus,
                               "org.freedesktop.systemd1",         /* service to contact */
                               "/org/freedesktop/systemd1",        /* object path */
                               "org.freedesktop.systemd1.Manager", /* interface name */
                               "StartUnit",                        /* method name */
                               &error,                             /* object to return error in */
                               &m,                                 /* return message on success */
                               "ss",                               /* input signature */
                               "cups.service",                     /* first argument */
                               "replace");                         /* second argument */
        if (r < 0) {
                fprintf(stderr, "Failed to issue method call: %s\n", error.message);
                goto finish;
        }

        /* Parse the response message */
        r = sd_bus_message_read(m, "o", &path);
        if (r < 0) {
                fprintf(stderr, "Failed to parse response message: %s\n", strerror(-r));
                goto finish;
        }

        printf("Queued service job as %s.\n", path);

finish:
        sd_bus_error_free(&error);
        sd_bus_message_unref(m);
        sd_bus_unref(bus);

        return r < 0 ? EXIT_FAILURE : EXIT_SUCCESS;
}
Save this example as
bus-client.c, then build it with:
$ gcc bus-client.c -o bus-client `pkg-config --cflags --libs libsystemd`
This will generate a binary
bus-client you can now run. Make sure to
run it as root though, since access to the
StartUnit method is
privileged:
# ./bus-client Queued service job as /org/freedesktop/systemd1/job/3586.
And that’s it already, our first example. It showed how we invoked a
method call on the bus. The actual function call of the method is very
close to the
busctl command line we used before. I hope the code
excerpt needs little further explanation. It’s supposed to give you a
taste of how to write D-Bus clients with sd-bus. For more
information please have a look at the header file, the man page or
even the sd-bus sources.
Implementing a Service, in C, with sd-bus
Of course, just calling a single method is a rather simplistic
example. Let’s have a look on how to write a bus service. We’ll write
a small calculator service, that exposes a single object, which
implements an interface that exposes two methods: one to multiply two
64bit signed integers, and one to divide one 64bit signed integer by
another.
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <string.h>
#include <systemd/sd-bus.h>

static int method_multiply(sd_bus_message *m, void *userdata, sd_bus_error *ret_error) {
        int64_t x, y;
        int r;

        /* Read the parameters */
        r = sd_bus_message_read(m, "xx", &x, &y);
        if (r < 0) {
                fprintf(stderr, "Failed to parse parameters: %s\n", strerror(-r));
                return r;
        }

        /* Reply with the response */
        return sd_bus_reply_method_return(m, "x", x * y);
}

static int method_divide(sd_bus_message *m, void *userdata, sd_bus_error *ret_error) {
        int64_t x, y;
        int r;

        /* Read the parameters */
        r = sd_bus_message_read(m, "xx", &x, &y);
        if (r < 0) {
                fprintf(stderr, "Failed to parse parameters: %s\n", strerror(-r));
                return r;
        }

        /* Return an error on division by zero */
        if (y == 0) {
                sd_bus_error_set_const(ret_error, "net.poettering.DivisionByZero", "Sorry, can't allow division by zero.");
                return -EINVAL;
        }

        return sd_bus_reply_method_return(m, "x", x / y);
}

/* The vtable of our little object, implements the net.poettering.Calculator interface */
static const sd_bus_vtable calculator_vtable[] = {
        SD_BUS_VTABLE_START(0),
        SD_BUS_METHOD("Multiply", "xx", "x", method_multiply, SD_BUS_VTABLE_UNPRIVILEGED),
        SD_BUS_METHOD("Divide",   "xx", "x", method_divide,   SD_BUS_VTABLE_UNPRIVILEGED),
        SD_BUS_VTABLE_END
};

int main(int argc, char *argv[]) {
        sd_bus_slot *slot = NULL;
        sd_bus *bus = NULL;
        int r;

        /* Connect to the user bus this time */
        r = sd_bus_open_user(&bus);
        if (r < 0) {
                fprintf(stderr, "Failed to connect to system bus: %s\n", strerror(-r));
                goto finish;
        }

        /* Install the object */
        r = sd_bus_add_object_vtable(bus,
                                     &slot,
                                     "/net/poettering/Calculator",  /* object path */
                                     "net.poettering.Calculator",   /* interface name */
                                     calculator_vtable,
                                     NULL);
        if (r < 0) {
                fprintf(stderr, "Failed to issue method call: %s\n", strerror(-r));
                goto finish;
        }

        /* Take a well-known service name so that clients can find us */
        r = sd_bus_request_name(bus, "net.poettering.Calculator", 0);
        if (r < 0) {
                fprintf(stderr, "Failed to acquire service name: %s\n", strerror(-r));
                goto finish;
        }

        for (;;) {
                /* Process requests */
                r = sd_bus_process(bus, NULL);
                if (r < 0) {
                        fprintf(stderr, "Failed to process bus: %s\n", strerror(-r));
                        goto finish;
                }
                if (r > 0) /* we processed a request, try to process another one, right-away */
                        continue;

                /* Wait for the next request to process */
                r = sd_bus_wait(bus, (uint64_t) -1);
                if (r < 0) {
                        fprintf(stderr, "Failed to wait on bus: %s\n", strerror(-r));
                        goto finish;
                }
        }

finish:
        sd_bus_slot_unref(slot);
        sd_bus_unref(bus);

        return r < 0 ? EXIT_FAILURE : EXIT_SUCCESS;
}
Save this example as
bus-service.c, then build it with:
$ gcc bus-service.c -o bus-service `pkg-config --cflags --libs libsystemd`
Now, let’s run it:
$ ./bus-service
In another terminal, let’s try to talk to it. Note that this service
is now on the user bus, not on the system bus as before. We do this
for simplicity reasons: on the system bus access to services is
tightly controlled so unprivileged clients cannot request privileged
operations. On the user bus however things are simpler: as only
processes of the user owning the bus can connect, no further policy
enforcement will complicate this example. Because the service is on
the user bus, we have to pass the
--user switch on the
busctl
command line. Let’s start with looking at the service’s object tree.
$ busctl --user tree net.poettering.Calculator
└─/net/poettering/Calculator
As we can see, there’s only a single object on the service, which is
not surprising, given that our code above only registered one. Let’s
see the interfaces and the members this object exposes:
$ busctl --user introspect net.poettering.Calculator /net/poettering/Calculator
NAME                      TYPE      SIGNATURE RESULT/VALUE FLAGS
net.poettering.Calculator interface -         -            -
.Divide                   method    xx        x            -
.Multiply                 method    xx        x            -
The sd-bus library automatically added a couple of generic interfaces,
as mentioned above. But the first interface we see is actually the one
we added! It shows our two methods, and both take “xx” (two 64bit
signed integers) as input parameters, and return one “x”. Great! But
does it work?
$ busctl --user call net.poettering.Calculator /net/poettering/Calculator net.poettering.Calculator Multiply xx 5 7
x 35
Woohoo! We passed the two integers 5 and 7, and the service actually
multiplied them for us and returned a single integer 35! Let’s try the
other method:
$ busctl --user call net.poettering.Calculator /net/poettering/Calculator net.poettering.Calculator Divide xx 99 17
x 5
Oh, wow! It can even do integer division! Fantastic! But let’s trick
it into dividing by zero:
$ busctl --user call net.poettering.Calculator /net/poettering/Calculator net.poettering.Calculator Divide xx 43 0
Sorry, can't allow division by zero.
Nice! It detected this nicely and returned a clean error about it. If
you look in the source code example above you’ll see how precisely we
generated the error.
And that’s really all I have for today. Of course, the examples I
showed are short, and I don’t get into detail here on what precisely
each line does. However, this is supposed to be a short introduction
into D-Bus and sd-bus, and it’s already way too long for that …
I hope this blog story was useful to you. If you are interested in
using sd-bus for your own programs, I hope this gets you started. If
you have further questions, check the (incomplete) man pages, and
inquire us on IRC or the systemd mailing list. If you need more
examples, have a look at the systemd source tree, all of systemd’s
many bus services use sd-bus extensively. | https://noise.getoto.net/tag/hacking/page/18/ | CC-MAIN-2020-05 | refinedweb | 5,540 | 56.05 |
Opened 5 years ago
Closed 5 years ago
Last modified 5 years ago
#24937 closed Bug (fixed)
DateTimeRangeField.value_to_string raises TypeError
Description
DateTimeRangeField's
value_to_string method raises
TypeError because
json.dumps is unable to serialize
datetime.datetime objects.
Steps to replicate:
Step 1:
# myapp/models.py
from django.contrib.postgres.fields import DateTimeRangeField
from django.db.models import Model


class DateTimeRangeTest(Model):
    date_time_range = DateTimeRangeField()
Step 2: migrate etc.
Step 3:
from django.contrib.postgres.fields import DateTimeRangeField
from myapp.models import DateTimeRangeTest

t = DateTimeRangeTest.objects.create(
    date_time_range=('2015-06-05 16:00:00+0200', '2015-06-05 17:00:00+0200'))
dtrf = DateTimeRangeField()
dtrf.attname = 'date_time_range'
dtrf.value_to_string(t)
# raises: TypeError: datetime.datetime(2015, 6, 5, 14, 00, 00, tzinfo=<UTC>) is not JSON serializable
The
value_to_string method is used with the
manage.py dumpdata command, so this bug effectively breaks
dumpdata for models with
DateTimeRangeFields.
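The underlying failure is easy to reproduce outside Django: `json.dumps` refuses `datetime` objects unless an encoder handles them. A minimal illustration (plain `json` with a `default` hook standing in for what an encoder like `DjangoJSONEncoder` does; the names here are illustrative, not Django's code):

```python
import datetime
import json

# A stand-in for the range bounds pulled out of a DateTimeRangeField.
value = {'lower': datetime.datetime(2015, 6, 5, 14, 0),
         'upper': datetime.datetime(2015, 6, 5, 15, 0)}

try:
    json.dumps(value)              # fails: datetime is not JSON serializable
except TypeError as err:
    print('TypeError:', err)

def encode(obj):
    if isinstance(obj, datetime.datetime):
        return obj.isoformat()     # mirror DateField's isoformat() behaviour
    raise TypeError(repr(obj) + ' is not JSON serializable')

print(json.dumps(value, default=encode))
```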
Change History (10)
comment:1 Changed 5 years ago by
comment:2 Changed 5 years ago by
comment:3 Changed 5 years ago by
I'll add a test to show this, and hopefully a patch.
comment:4 Changed 5 years ago by
PR at – I used DjangoJSONEncoder on the base value_to_string so that it should hopefully handle both DateRangeField and DateTimeRangeField (the changed test did fail on both those ranges). The PR does pass, the failure is in the last commit on master, I'm afraid :)
DateField is serializing its value through isoformat(), so I guess date-based range fields should do something similar.
Brief items
About 100 patches have been merged into the mainline git repository since -rc4, as of this writing. They are fixes, mostly in the architecture, ALSA, and networking subsystems.
The current -mm tree is 2.6.20-rc3-mm1. Recent changes to -mm include a bunch of KVM work (see below), another set of workqueue API changes, and the virtualization of struct user.
The current stable 2.6 kernel is 2.6.19.2, released on January 10. It contains a long list of fixes, including the fix for the file corruption problem and several with security implications.
For older kernels: 2.6.16.38-rc1 was released on January 9 with a long list of fixes - many of which are security-related.
Kernel development news
Discussion on the mailing lists reveal that the kernel.org servers (there are two of them) often run with load averages in the range of 2-300. So it's not entirely surprising that they are not always quite as responsive as one would like. There is talk of adding servers, but there is also a sense that the current servers should be able to keep up with the load. So the developers have been looking into what is going on.
The problem seems to originate with git. Kernel.org hosts quite a few git repositories and a version of the gitweb system as well - though gitweb is often disabled when the load gets too high. The git-related problems, in turn, come down to the speed with which Linux can read directories. According to kernel.org administrator H. Peter Anvin:
Clearly, something is not quite right with the handling of large filesystems under heavy load. Part of the problem may be that Linux is not dedicating enough memory to caching directories in this situation, but the real problems are elsewhere. It turns out that:
It has been reported that the third of the above-listed problems can be addressed by moving to XFS, which does a better job at keeping directories together. Kernel.org could make such a switch - at the cost of about a week's downtime for each server. So one should not expect it to happen overnight.
The first priority for improving the situation is, most likely, the implementation of some sort of directory readahead. That change would cut the amount of time spent waiting for directory I/O and, crucially, would require no change to existing filesystems - not even a backup and restore - to get better performance. An early readahead patch has been circulated, but this issue looks complex enough that a few iterations of careful work will be required to arrive at a real solution. So look for something to show up in the 2.6.21 time frame.
While K.
As an example, imagine that a user wanted to mount a distribution DVD full of packages. It would be nice to be able to add updated packages to close today's security holes, but the DVD is a read-only medium. The solution is a union filesystem. A system administrator can take a writable filesystem and join it with the read-only DVD, creating a writable filesystem with the contents of both. If the user then adds packages, they will go into the writable filesystem, which can be smaller than would be needed if it were to hold the entire contents.
The unionfs patch posted by Josef Sipek provides this capability. With unionfs in place, the system administrator could construct the union with a command sequence like:
mount -r /dev/dvd /mnt/media/dvd
mount /dev/hdb1 /mnt/media/dvd-overlay
mount -t unionfs \
     -o dirs=/mnt/media/dvd-overlay=rw:/mnt/media/dvd=ro \
     /writable-dvd
The first two lines just mount the DVD and the writable partition as normal filesystems. The final command then joins them into a single union, mounted on /writable-dvd. Each "branch" of a union has a priority, determined by the order in which they are given in the dirs= option. When a file is looked up, the branches are searched in priority order, with the first occurrence found being returned to the user. If an attempt is made to write a read-only file, that file will be copied into the highest-priority writable branch and written there.
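The priority-ordered lookup and whiteout behaviour described above can be sketched abstractly (a toy model in Python, not unionfs code; dicts stand in for branches):

```python
def union_lookup(path, branches, whiteouts=()):
    """Return the file found in the highest-priority branch, or None.

    branches is ordered highest priority first, mirroring the dirs= option;
    whiteouts mask files deleted from read-only branches.
    """
    if path in whiteouts:
        return None
    for branch in branches:            # search in priority order
        if path in branch:
            return branch[path]        # first occurrence wins
    return None

overlay = {'pkg/foo-1.1.rpm': 'updated package'}   # rw branch
dvd = {'pkg/foo-1.0.rpm': 'original package'}      # ro branch

assert union_lookup('pkg/foo-1.1.rpm', [overlay, dvd]) == 'updated package'
assert union_lookup('pkg/foo-1.0.rpm', [overlay, dvd]) == 'original package'
assert union_lookup('pkg/foo-1.0.rpm', [overlay, dvd],
                    whiteouts={'pkg/foo-1.0.rpm'}) is None
```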
As one might imagine, there is a fair amount of complexity required to make all of this actually work. Joining together filesystem hierarchies, copying files between them, and inserting "whiteouts" to mask files deleted from read-only branches are just a few of the challenges which must be met. The unionfs code seems to handle most of them well, providing convincing Unix semantics in the joined filesystem.
Reviewers immediately jumped on one exception, which was noted in the documentation:
What this means is that it is dangerous to mess directly with the filesystems which have been joined into a union mount. Andrew Morton pointed out that, as user-friendly interfaces go, this one is a little on the rough side. Since bind mounts don't have this problem, he asked, why should unionfs present such a trap to its users? Josef responded:
That, in turn, led to some fairly definitive statements that unionfs should be implemented at the virtual filesystem level. Without that, it's not clear that it will ever be possible to keep the namespace coherent in the face of modifications at all levels of the union. So it seems clear that, to truly gain the approval of the kernel developers, unionfs needs a rewrite. Andrew Morton has been heard to wonder if the current version should be merged anyway in the hopes that it would help inspire that rewrite to happen. No decisions have been made as of this writing, so it's far from clear whether Linux will have unionfs support in the near future or not.
Patches and updates
Kernel trees
Core kernel code
Development tools
Device drivers
Documentation
Filesystems and block I/O
Memory management
Security-related
Virtualization and containers
Page editor: Jonathan Corbet
Next page: Distributions>>
Linux is a registered trademark of Linus Torvalds | https://lwn.net/Articles/216388/ | CC-MAIN-2017-30 | refinedweb | 1,020 | 61.46 |
Job orchestration framework based on individual idempotent python classes and dependencies.
Project description
DoJobber
========
DoJobber is a python task orchestration framework based on writing
small single-task idempotent classes (Jobs), defining
interdependencies, and letting python do all the work of running
them in the "right order".
DoJobber builds an internal graph of Jobs. It will run
Jobs that have no unmet dependencies, working up the chain
until it either reaches the root or cannot go further due to
Job failures.
Each Job serves a single purpose, and must be idempotent,
i.e. it will produce the same results if executed once or
multiple times, without causing any unintended side effects.
Because of this you can run your python script multiple times
and it will get closer and closer to completion as any
previously-failed Jobs succeed.
Here's an example of how one might break down the overall
goal of inviting friends over to watch a movie - this
is the result of the ``tests/dojobber_example.py`` script.
.. image::
:alt: DoJobber example graph
:width: 90%
:align: center
Rather than a yaml-based syntax with many plugins, DoJobber
lets you write in native python, so anything you can code
you can plumb into the DoJobber framework.
DoJobber is conceptually based on a Google program known as
Masher that was built for automating service and datacenter
spinups, but shares no code with it.
Job Structure
=============
Each Job is is own class. Here's an example::
class FriendsArrive(Job):
DEPS = (InviteFriends,)
def Check(self, *dummy_args, **dummy_kwargs):
# Do something to verify that everyone has arrived.
pass
def Run(self, *dummy_args, **dummy_kwargs):
pass
Each Job has a DEPS attribute, ``Check`` method, and ``Run`` method.
DEPS
----
DEPS defines which other Jobs it is dependent on. This is used
for generating the internal graph.
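In effect, DEPS describes a dependency graph that gets walked in topological order — dependencies before dependents. A standalone sketch of that ordering (illustrative only, not DoJobber's actual code)::

```python
def run_order(jobs):
    """Return job classes ordered so every DEPS entry precedes its dependents."""
    seen, order = set(), []

    def visit(job):
        if job in seen:
            return
        seen.add(job)
        for dep in getattr(job, "DEPS", ()):
            visit(dep)          # dependencies first
        order.append(job)

    for job in jobs:
        visit(job)
    return order


class A:
    DEPS = ()

class B:
    DEPS = (A,)

class Root:
    DEPS = (B,)

print([j.__name__ for j in run_order([Root])])  # ['A', 'B', 'Root']
```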
Check
-----
``Check`` executes and, if it does not raise an Exception, is considered
to have passed. If it passes then the Job passed and the next Job will
run. Its purpose is to verify that we are in the desired state for
this Job. For example if the job was to create a user, this may
look up the user in /etc/passwd.
Run
---
``Run`` executes if ``Check`` failed. Its job is to do something to achieve
our goal. DoJobber doesn't care if it returns anything, throws an
exception, or exits - all this is ignored.
An example might be creating a user account, or adding a database
entry, or launching an ansible playbook.
Recheck
-------
The Recheck phase simply executes the ``Check`` method again. Hopefully
the ``Run`` method did the work that was necessary, so ``Check`` will verify
all is now well. If so (i.e. ``Check`` does not raise an Exception) then
we consider this Job a success, and any dependent Jobs are not blocked
from running.
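Putting the three phases together, a single check/run/recheck pass over one Job behaves roughly like this (a simplified sketch, not DoJobber's implementation)::

```python
def check_n_run_once(job):
    """One check/run/recheck pass; True means the Job reached its goal."""
    try:
        job.Check()
        return True                  # already in the desired state
    except Exception:
        pass                         # Check failed; Run gets a chance to fix it
    try:
        job.Run()                    # whatever Run returns or raises is ignored
    except Exception:
        pass
    try:
        job.Check()                  # the "Recheck": the same Check, run again
        return True
    except Exception:
        return False                 # still failing; dependents stay blocked

class Flaky:
    fixed = False
    def Check(self):
        assert self.fixed, 'not in desired state yet'
    def Run(self):
        self.fixed = True            # idempotent: safe to run again

assert check_n_run_once(Flaky()) is True
```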
Job Features
============
Job Arguments
-------------
Jobs can take both positional and keyword arguments. These are set via the
set_args method::
dojob = dojobber.DoJobber()
dojob.configure(RootJob, ......)
dojob.set_args('arg1', 'arg2', foo='foo', bar='bar', ...)
Because of this it is best to accept both in your ``Check`` and ``Run``
methods::
def Check(self, *args, **kwargs):
....
def Run(self, *args, **kwargs):
....
If you're generating your keyword arguments from argparse or optparse,
then you can be even lazier - send it in as a dict::
myparser = argparse.ArgumentParser()
myparser.add_argument('--movie', dest='movie', help='Movie to watch.')
...
args = myparser.parse_args()
dojob.set_args(**args.__dict__)
An then in your ``Check``/``Run`` you can use them by name::
def Check(self, *args, **kwargs):
if kwargs['movie'] == 'Zardoz':
raise Error('Really?')
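The ``**args.__dict__`` idiom works because an argparse namespace is just an object whose attributes become keyword arguments when unpacked. A standalone sketch (``set_args`` here is a stand-in for the real method)::

```python
import argparse

myparser = argparse.ArgumentParser()
myparser.add_argument('--movie', dest='movie', help='Movie to watch.')
args = myparser.parse_args(['--movie', 'Zardoz'])

def set_args(*posargs, **kwargs):
    """Stand-in for dojob.set_args(): surface what Check/Run would receive."""
    return kwargs

assert set_args(**args.__dict__) == {'movie': 'Zardoz'}
```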
Local Job Storage
-----------------
Local Storage allows you to share information between
a Job's ``Check`` and ``Run`` methods. For example a ``Check``
may do an expensive lookup or initialization which
the ``Run`` may then use to speed up its work.
To use Local Job Storage, simply use the
``self.storage`` dictionary from your ``Check`` and/or
``Run`` methods.
Local Storage is not available to any other Jobs. See
Global Job Storage for how you can share information
between Jobs.
Example::
class UselessExample(Job):
def Check(self, *dummy_args, **dummy_kwargs):
if not self.storage.get('sql_username'):
self.storage['sql_username'] = (some expensive API call)
(check something)
def Run(self, *dummy_args, **kwargs):
subprocess.call(COMMAND + [self.storage['sql_username']])
Global Job Storage
------------------
Global Storage allows you to share information between
Jobs. Naturally it is up to you to assure any
Job that requires Global Storage is defined as
dependent on the Job(s) that set Global Storage.
To use Global Job Storage, simply use the
``self.global_storage`` dictionary from your
``Check`` and/or ``Run`` methods.
Global Storage is available to all Jobs. It is up to
you to avoid naming collisions.
Example::
# Store the number of CPUs on this machine for later
# Jobs to use for nefarious purposes.
class CountCPUs(Job):
def Check(self, *dummy_args, **dummy_kwargs):
self.global_storage['num_cpus'] = len(
[x
for x in open('/proc/cpuinfo').readlines()
if 'vendor_id' in x])
# FixFanSpeed is dependent on CountCPUs
class FixFanSpeed(Job):
DEPS = (CountCPUs,)
def Check(self, *args, **kwargs):
for cpu in range(self.global_storage['num_cpus']):
....
Cleanup
-------
Jobs can have a Cleanup method. After checknrun is complete,
the Cleanup method of each Job that ran (i.e. ``Run`` was executed)
will be excuted. They are run in LIFO order, so Cleanups 'unwind'
everything.
You can pass the cleanup=False option to DoJobber() to prevent
Cleanup from happening and run it manually if you prefer::
dojob = dojobber.DoJobber()
dojob.configure(RootJob, cleanup=False, ......)
dojob.checknrun()
dojob.cleanup()
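The LIFO unwinding can be pictured with a plain list used as a stack (an illustrative sketch, not DoJobber's internals)::

```python
cleanup_stack = []                 # Jobs whose Run actually executed, in order

class StepJob:
    def __init__(self, name, log):
        self.name, self.log = name, log
    def Run(self):
        cleanup_stack.append(self)     # remember that this Job did work
    def Cleanup(self):
        self.log.append(self.name)

log = []
for name in ('mount', 'copy', 'configure'):
    StepJob(name, log).Run()

for job in reversed(cleanup_stack):    # LIFO: undo the most recent work first
    job.Cleanup()

print(log)   # ['configure', 'copy', 'mount']
```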
Creating Jobs Dynamically
-------------------------
You can dynamically create Jobs by making new Job classes
and adding them to the DEPS of an existing class. This is
useful if you need to create new Jobs based on commandline
options. Dynamically creating many small single-purpose jobs
is a better pattern than creating one large monolithic
job that dynamically determines what it needs to do and check.
Here's an example of how you could create a new Job dynamically.
We start with a base Job, ``SendInvite``, which has uninitialized
class valiables ``EMAIL`` and ``NAME``::
# Base Job
class SendInvite(Job):
NAME = None
def Check(self, *args, **kwargs):
r = requests.get(
'' + self.EMAIL)
assert(r.status_code == 200)
def Run(self, *args, **kwargs):
requests.post(
'' + self.EMAIL)
This example Job has ``Check``/``Run`` methods which use class
attribute ``EMAIL`` and ``NAME`` for their configuration.
So to get new Jobs based on this class, you create them and them
to the ``DEPS`` of an existing Job such that they appear in the graph::
class InviteFriends(DummyJob):
"""Job that will become dynamically dependent on other Jobs."""
DEPS = []
def invite_friends(people):
"""Add Invite Jobs for these people.
People is a list of dictionaries with keys email and name.
"""
for person in people:
job = type('Invite {}'.format(person['name']),
(SendInvite,), {})
job.EMAIL = person['email']
job.NAME = person['name']
InviteFriends.DEPS.append(job)
def main():
# do a bunch of stuff
...
# Dynamically add new Jobs to the InviteFriends
invite_friends([
{'name': 'Wendell Bagg', 'email': 'bagg@example.com'},
{'name': 'Lawyer Cat', 'email': 'lawyercat@example.com'}
])
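The three-argument ``type()`` call used in ``invite_friends`` is ordinary Python and can be tried on its own; a minimal standalone sketch with a toy base class::

```python
class SendInvite:
    EMAIL = None
    NAME = None
    def describe(self):
        # Stand-in for the real Check/Run work: report who we would invite.
        return '{} <{}>'.format(self.NAME, self.EMAIL)

deps = []
for person in ({'name': 'Wendell Bagg', 'email': 'bagg@example.com'},
               {'name': 'Lawyer Cat', 'email': 'lawyercat@example.com'}):
    job = type('Invite {}'.format(person['name']), (SendInvite,), {})
    job.EMAIL = person['email']    # each dynamic class gets its own attributes
    job.NAME = person['name']
    deps.append(job)

print(deps[0].__name__)        # Invite Wendell Bagg
print(deps[1]().describe())    # Lawyer Cat <lawyercat@example.com>
```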
Retry Logic
===========
DoJobber is meant to be able to be retried over and over until
you achieve success. You may be tempted to write something like
this::
...
retry = 5
while retry:
dojob.checknrun()
if dojob.success():
break
print('Trying again...')
retry -= 1
However this is not necessary, and in fact is a waste of computing
cycles. The above code would cause us to check even the already
successful nodes unnecessarily, slowing everything down.
Instead, you can use two class attribute to configure retry
parameters. ``TRIES`` specifies how many times your Job can
be run before we give up, and ``RETRY_DELAY`` specifies the
minimum amount of time between retries.
Retries are useful for those cases where an action in ``Run``
fails due to a temporary condition (maybe the remote server
is unavailable briefly), or where the activities triggered
in the ``Run`` take time to complete (maybe an API call
returns immediately, but background fullfillment takes 30
seconds).
By relying on retry logic, instead of adding in arbirtary
``sleep`` cycles in your code, you can have a more robust
Job graph.
Storage Considerations
----------------------
When a Job is retried, it will be created from scratch. This means
that ``storage`` **is not available between runs**, however ``global_storage``
is. This is done to keep things as pristine as possible between
Job executions.
TRIES Attribute
--------------
TRIES defines the number of tries (check/run/recheck cycles)
that the Job is allowed to do before giving up. It must be >= 1.
The TRIES default if unspecified is 3, which can be changed
in ``configure()`` via the ``default_tries=###`` argument, for
example::
class Foo(Job):
TRIES = 10
...
class Bar(Job):
DEPS = (Foo,)
... # No TRIES attribute
...
dojob = dojobber.DoJobber()
dojob.configure(Foo, default_tries=1)
In the above case, Foo can be tried 10 times, while Bar can only be
tried 1 time, since it has no ``TRIES`` specified and ``default_tries``
in configure is 1.
RETRY_DELAY
-----------
RETRY_DELAY defines the minimum amount of time to wait between
tries (check/run/recheck cycles) of **this** Job. It is measured
in seconds, and may be any non-negative numeric value, including
0 and fractional seconds like 0.02.
The RETRY_DELAY default if unspecified is 1, which can be
changed in ``configure()`` via the ``default_retry_delay=###`` argument,
for example::
class Foo(Job):
RETRY_DELAY = 10.5 # A long but strangely precise value...
...
class Bar(Job):
DEPS = (Foo,)
... # No RETRY_DELAY attribute
...
dojob = dojobber.DoJobber()
dojob.configure(Foo, default_retry_delay=0.5)
In the above case, Foo will never start unless at least 10.5 seconds
have passed since the previous Foo attempt, while Bar only required
0.5 seconds have passed since it has no ``RETRY_DELAY`` specified
and ``default_retry_delay`` in configure is 0.5.
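Taken together, the two attributes amount to a simple per-Job gating rule; roughly (a sketch of the policy, not DoJobber's internals)::

```python
import time

def may_try_again(tries_used, last_attempt, TRIES=3, RETRY_DELAY=1, now=None):
    """True if a failed Job is still allowed another check/run/recheck pass."""
    if tries_used >= TRIES:
        return False                   # out of tries: permanent failure
    now = time.time() if now is None else now
    if last_attempt is not None and now - last_attempt < RETRY_DELAY:
        return False                   # too soon: revisit on a later pass
    return True

assert may_try_again(0, None)                                        # fresh Job
assert not may_try_again(3, None)                                    # exhausted
assert not may_try_again(1, last_attempt=100.0, RETRY_DELAY=10, now=105.0)
assert may_try_again(1, last_attempt=100.0, RETRY_DELAY=10, now=111.0)
```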
Delay minimization
------------------
When a Job has a failure it is not immediately retried.
Instead we will hit all Jobs in the graph that are still
awaiting check/run/recheck. Once every reachable Job has
been hit we will 'start over' on the Jobs that failed.
In practice this means that you aren't wasting the full
RETRY_DELAY because other Jobs were likely doing work
between retries of this Job. (Unless your graph is
highly linear and there are no unblocked Jobs.)
You can see how Job retries are interleaved by looking
at the example code::
$ tests/dojobber_example.py -v | grep 'recheck'
PopcornBowl.recheck: fail "Dishwasher cycle not done yet."
PopcornBowl.recheck: fail "Dishwasher cycle not done yet."
Popcorn.recheck: fail "Still popping..."
Popcorn.recheck: fail "Still popping..."
Note initially we have several Jobs that fail on
distinct branches, and these can be retried in a round-robin
sort of fashion. Only once we end up at strict dependencies
of PopcornBowl and Popcorn do we see single Jobs being retried
without others getting their time.
Job Types
=========
There are several DoJobber Job types:
Job
---
Job requires a ``Check``, ``Run``, and may have optional Cleanup::
class CreateUser(Job):
"""Create our user's account."""
def Check(self, *_, **kwargs):
"""Verify the user exists"""
import pwd
pwd.getpwnam(kwargs['username'])
def Run(self, *_, **kwargs):
"""Create user given the commandline username/gecos arguments"""
import subprocess
subprocess.call([
'sudo', '/usr/sbin/adduser',
'--shell', '/bin/tcsh',
'--gecos', kwargs['gecos'],
kwargs['username'])
### Optional Cleanup method
#def Cleanup(self):
# """Do something to clean up."""
# pass
DummyJob
--------
DummyJob has no ``Check``, ``Run``, nor Cleanup. It is used simply to
have a Job for grouping dependent or dynamically-created Jobs.
So a DummyJob may look as simple as this::
class PlaceHolder(DummyJob):
DEPS = (Dependency1, Dependency2, ...)
RunonlyJob
----------
A ``RunonlyJob`` has no check, just a ``Run``, which will run every time.
If ``Run`` raises an exception then the Job is considered failed.
They cannot succeed in no_act mode, because
in this mode the ``Run`` is never executed.
So an example ``Run`` may look like this::
class RemoveDangerously(RunonlyJob):
DEPS = (UserAcceptedTheConsequences,)
def Run(...):
os.system('rm -rf /')
In general, avoid ``RunonlyJobs`` - it's better if you can understand if
a change even needs making.
Debugging and Logging
=====================
There are two types of logging for DoJobber: runtime information
about Job success/failure for anyone wanting more details
about the processing of your Jobs, and developer DoJobber
debugging which is useful when writing your DoJobber code.
Runtime Debugging
-----------------
To increase verbosity of Job success and failures you
pass `verbose` or `debug` keyword arguments to `configure`::
dojob = dojobber.DoJobber()
dojob.configure(RootJob, verbose=True, ....)
# or
dojob.configure(RootJob, debug=True, ....)
Setting `verbose` will show a line of check/run/recheck status
to stderr, as well as any failure output from rechecks, such as::
FindTVRemote.check: fail
FindTVRemote.run: pass
FindTVRemote.recheck: pass
TurnOnTV.check: fail
TurnOnTV.run: pass
...
Using `debug` will additionally show a full stacktrace of
any failure of check/run/recheck phases.
Development Debugging
---------------------
When writing your DoJobber code you may want to turn on
the developer debugging capabilities. This is enabled
when DoJobber is initialized by passing the `dojobber_loglevel`
keyword argument::
import logging
dojob = DoJobber(dojobber_loglevel=logging.DEBUG)
DoJobber's default is to show `CRITICAL` errors only.
Acceptable levels are those defined in the logging module.
This can help identify problems when writing your code,
such as passing a non-iterable as a `DEPS` variable,
watching as your Job graph is created from the
classes, etc.
Examples
========
The ``tests/dojobber_example.py`` script in the source directory is
fully-functioning suite of tests with numerous comments strewn
throughout.
See Also
========
`Bri Hatch <>`_ gave a talk
about DoJobber at LinuxFestNorthwest in 2018. You can find his
`presentation <>`_
on his website, and the
`presentation video <>`_ is
available on YouTube.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/dojobber/ | CC-MAIN-2022-05 | refinedweb | 2,269 | 57.16 |
How, jboss, oracle) one is
1) Express and 2) MDM(All master data... schema to my express application how can i link with the mdm schema.How the pojos...Hibernate Dear sir
Thanks for your previous answers its one-to-one relationships
Hibernate one-to-one relationships How does one to one relationship work in Hibernate?
Hibernate Mapping One-to-One
Hibernate provides facility of mapping. You can do mapping through hbm.xml or through annotation
Hibernate
Hibernate Can we write more than one hibernate.cfg.xml file... ? if so how can we call and use it.?
can we connect to more than one DataBase from a single Hibernate program
java - Hibernate
java HI guys can any one tell me,procedure for executing spring and hibernate in myeclipse ide,plz very urgent for me,thank's in advance. .../hibernate/runninge-xample.shtml
Thanks Give me a brief description about Hibernate?How we will use it in programming?give me one example
HIBERNATE
HIBERNATE What is mean by cache? How many types are there give me one example
hibernate - Hibernate
hibernate wts the use of inverse in hibernate?
can any one explain the association mapping(1-1,1-many,many-1,many-many code problem - Hibernate
in Build path.
I think it is unable to create session Object.
Can u help me.....How can i solve this problem.
Am i need to Kept any other jars in path. Hi friend,
I thinks, add hibernate-annotation.jar
if you have any
Hibernate - Hibernate
Hibernate when I run Hibernate Code Generation wizard in eclipse I'm... getting this error. please let me know, how to resolve.../hibernate/index.shtml
Hope that it will be helpful for you.
Thanks
Hibernate
Hibernate Please tell me the difference between hibernate and jdbc
select query using hibernate
select query using hibernate Hi,
can any one tell me to select records from table using hibernate.
Thanks
Please visit the following link:
Hibernate Tutorials
The above link will provide you different please give me the link where i can freely download hibernate software(with dependencies)
Learn hibernate, if you want to learn hibernate, please visit the following link:
Hibernate Tutorials
how can i get output pls urget tell me
how can i get output pls urget tell me HTTP Status 500 -
type...: Cannot find bean: "helloWorldForm" in any scope...
javax.servlet.jsp.JspException: Cannot find bean: "helloWorldForm" in any scope
Can anybody tell me how to resolve this issue?
Can anybody tell me how to resolve this issue? java.lang.Exception: Exception :
java.lang.Exception: Generic Errors = java.util.MissingResourceException: Can't find bundle for base name Connection, locale en_US
Can you tell me how it worked?
Can you tell me how it worked? public class Ball
{
private static final int DIAMETER = 30;
int x = 0;
int y = 0;
int xa = 1;
int ya = 1;
private... Rectangle(x, y, DIAMETER, DIAMETER);
}
}
How is the 'getBounds()' method called
hibernate
hibernate Is there any other way to call procedure in hibernate other than named query????? if we are using session object to get the connection then why hibernate we can directly call by using a simple java class??????? please
Tell me - Struts
Struts tutorial to learn from beginning Tell me, how can i learn the struts from beginning One-to-one Relationships
-to-one relationships in
Hibernate.
In next section we will learn how to create and run the one-to-many mapping in
Hibernate.
Download the code example... Hibernate One-to-one Relationships
Hibernate application - Hibernate
Hibernate application Hi,
Could you please tell me how to implement hibernate application in eclipse3.3.0
Thanks,
Kalaga. Hi Friend,
Please visit the following link:...()){
System.out.println("No any data!");
}
else{
while(it.hasNext....
Can you provide me Hibernate configuration tutorial?
Can you provide me Hibernate configuration tutorial? Hello There,
I want to learn about Hibernate configuration, Can anyone provide me with hibernate configuration tutorial>
Thanks
Hibernate one-to-many relationships.
Hibernate one-to-many relationships. How does one-to-many relationships works in hibernate
How to learn hibernate in 24 hours?
How to learn hibernate in 24 hours? Hi,
I am assigned a new project... of Hibernate in 24 hours but just wanted to learn the basic concepts so that I can start working on the Hibernate after 2 days.
Is there any tutorial of learning
please tell me
:iterate>
</logic:notEmpty>
</logic:present>
how can i...please tell me
<tr>
<td><html:hidden<
About Hibernate - Hibernate
About Hibernate What is Hibernate? How can i learn it fast. ...://"></a>And run all... object/relational persistence and query service for Java. Hibernate lets you develop
Hibernate
Hibernate Can we rename hibernate.cfg.xml file name to somename.cfg.xml?
if so how can we use
Please tell me how can i convert string to timer
Please tell me how can i convert string to timer Please tell me how can i convert string to timer
hibernate - Hibernate
hibernate hi friends i had one doubt how to do struts with hibernate in myeclipse ide
its urgent
hibernate - Hibernate
hibernate how to write join quaries using hql Hi satish
I am sending links to you where u can find the solution regarding your query:
please any one can help me to write a code for this question?
please any one can help me to write a code for this question? 1) Copy one file content to other?
2) Count the number of words in a file
hi - Hibernate
;
}
}
}
pls let me know how to delete one particular record using this code...hi hi all,
I am new to hibernate.
could anyone pls let me know...() and delete()
* operations can directly support Spring container-managed Criteria Count Distinct Example
Criteria Count Distinct. Can anyone tell me the best place to learn Hibernate Criteria....
This tutorial will teach you how to use the count distinct in Hibernate?
Check the Latest...Hibernate Criteria Count Distinct Example I am learning Hibernate
How to get hibernate configuration from sessionfactory?
How to get hibernate configuration from sessionfactory? Hi,
I need to get hibernate configuration from sessionfactory, how can i do it?
Thanks... about Hibernate 4 here.
Let me know if you still face any problem
hibernate on netbeans - Hibernate
hibernate on netbeans is it possible for me to run the hibernate program on Netbeans IDE
hibernate - Hibernate
the application I got an exception that it antlr..... Exception.Tell me the answer plz.If any application send me thank u (in advance). Hi friend....
For read more information: Overview
will be able to understand the Hibernate framework.
Hibernate is one of the most... association and join example
relationships. You will also learn how to present these relationships in
Hibernate... the relationship then you can use any
of the following annotations as per your requirement...Example program of Associations and Joins in Hibernate framework
hibernate - Hibernate
|timestamp)?,(property|many-to-one|one-to-one|component|dynamic-component|properties|any... Hibernate;
import org.hibernate.Session;
import org.hibernate.*;
import...
Hi Radhika,
i think, you hibernate configuration
how to run jsp appication in eclipse
not contain any resources that can run on server". plase tell me i m new to handle eclipse, tell me step by step solution
Nasreen Ustad...how to run jsp appication in eclipse after setting the tomcat query - Hibernate Interview Questions
Hibernate query Hi,In my one interview i face one question that is how to write query without using hql and sql.If any body knows please give the answer thank u
HIBERNATE- BASICS
services offered, the
significance of the surging interest in Hibernate can... troublesome and tedious.It can be said that any
design which makes us dependent... tools like OJB, JDO and Hibernate
can be used not only in EJB containers
Please , can any one answer this Question?
Please , can any one answer this Question? How to cummunicate one web application to another web application in different places
spring hibernate
table through jsp using spring an hibernate....and the fields in the registration jsp are in different tables???can any one help or is there any sample code... the following link:
Hibernate compilation - Hibernate
Hibernate compilation please explain how to compile and run Hibernate at command prompt ( in dos) with sample example
Complete Hibernate 3.0 and Hibernate 4 Tutorial
of Hibernate ORM
Framework
Setup Hibernate Environment - Learn How you can setup... you can use Native SQL with hibernate. You will learn how to use Native... will show you how to run the example code.
Hibernate Introduction
please tell me about command line arguments in java?
please tell me about command line arguments in java? please tell me... arguments are the arguments which are sent to the program being called. You can take any number of command-line arguments which may be necessary for the program
Python program to find out or detect the mobile number from the given paragraph or string
In this tutorial, we will see how to detect a mobile/phone number in a given paragraph or string using Python. To solve this problem, we will use the re module. The user supplies a paragraph, and our program will find the mobile number in it. Before using the re module, let's learn a little bit about it.
What is the re module?
Python has an inbuilt re module which allows us to solve various problems based on pattern matching and string manipulation.
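Before tackling the phone-number problem, a quick illustration of the module in action (the sample string here is made up):

```python
import re

# re.search() scans the whole string and returns the first match
# of the pattern, or None when nothing matches.
match = re.search(r'\d+', 'Room 42 is on floor 3')
print(match.group())  # 42
```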
Detect or find out the mobile number from the given paragraph or string in Python
We will take any paragraph from which we have to find a mobile number.
Python program:-
import re

paragraph = 'My name is Bipin Kumar. I am from East Champaran district. Currently, I am pursuing Mechanical Engineering from Motihari College of Engineering, Motihari. My contact number is +919852458339. I love to spend my time to learn the Python program.'
Phonenumber = re.compile(r'\+\d\d\d\d\d\d\d\d\d\d\d\d')
m = Phonenumber.search(paragraph)
print('mobile number-', m.group())
Output:-
mobile number- +919852458339
- Initially, we have included the re module in the program by using the import statement. In the re module, the \d token matches a digit in the given string.
- We all know that the mobile number (with the +91 country code) is twelve digits long, so in the above code we have repeated the \d token twelve times.
- After doing all these steps, we have finally printed the mobile number found in the given paragraph.
- Here, I have taken a small paragraph; if you want to find the mobile number in a long paragraph, no problem: just replace the paragraph.
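As a side note, the twelve repeated \d tokens can be written more compactly with a {12} quantifier. A small sketch of the same idea (the sample sentence is made up):

```python
import re

paragraph = 'You can reach me at +919852458339 any time.'
# '+' followed by exactly 12 digits, same as twelve \d tokens in a row.
Phonenumber = re.compile(r'\+\d{12}')
m = Phonenumber.search(paragraph)
print('mobile number-', m.group())  # mobile number- +919852458339
```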
So guys, I hope you find it useful.
Extern variables: belong to the external storage class and are stored in main memory. extern is used when we have to refer to a function or variable that is implemented in another file of the same project. The scope of extern variables is global.
Example:
#include <stdio.h>
extern int x;
int main()
{
printf("value of x %d", x);
return 0;
}
int x = 3;
Here, main refers to variable x before its definition appears. The extern declaration tells the compiler the datatype of x, while the definition int x = 3; allocates its storage; that definition could equally live in another file of the same project.
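The declaration/definition split can be shown compactly in one translation unit: extern only tells the compiler the type, and the later definition allocates the storage. Moving that definition into a second .c file compiled alongside this one changes nothing for the compiler's view here. A minimal sketch with illustrative names:

```c
#include <stdio.h>

extern int counter;        /* declaration: no storage allocated yet */

int next_value(void)
{
    return counter + 1;    /* legal: the type of counter is known */
}

int counter = 41;          /* definition: storage allocated here */
```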
Global variables: are variables which are declared above the main( ) function. These variables are accessible throughout the program. They can be accessed by all the functions in the program. Their default value is zero.
Example:
#include <stdio.h>

int x = 0; /* Variable x is a global variable.
              It can be accessed throughout the program. */

void increment(void)
{
    x = x + 1;
    printf("\n value of x: %d", x);
}

int main()
{
    printf("\n value of x: %d", x);
    increment();
    return 0;
}
From Linux-VServer
Revision as of 10:38, 2 October 2008
Getting High with Lenny
The aim here is to set up some highly available services on Debian Lenny (at this moment, October 1st, still due to be released).
There is a lot of buzz going on for a while now about virtualisation and High Availability, and while Vserver is very well capable of this job, the number of documented examples compared to some other virtualisation techniques is a little lacking, so i thought i'd do my share.
I prefer to use Vserver for the "virtualisation" because of its configurability, shared memory and cpu resources and basically the raw speed. DRBD8 and Heartbeat should take care of the availability magic in case a machine shuts down unexpectedly. In my experience it takes a few seconds to have several Vservers fail over to another machine with this setup.
The main attempt here is to give a single working example without going too much into the details of every option; the scenario is relatively simple but different variations can be made.
For this set up we will have
* 2 machines
* both machines have 1 single large DRBD partition
* primary/secondary: there is always 1 machine active and 1 on standby
* 1 LVM partition per Vserver on top of the DRBD partition, for quota support from within the guest and LVM snapshots
* the Vservers /etc/vservers and /var/lib/vservers directories will be placed on the DRBD partition.
In case the main machine that runs the Vservers goes down, the synchronized second machine should take over and automatically start the Vservers.
Basically this is an on-line RAID solution that can keep your services running in case of hardware failure, it is NOT a back-up replacement.
The cost for this setup is that you always have 1 idle machine on standby; this cost can be justified by the fact that Linux-Vserver enables you to make full use of the 1 machine that is running. You could also consider running this on a little less expensive (reliable) hardware.
Also note that i will be using R1 style configuration for heartbeat. R1 style can be considered deprecated when using Heartbeat2, but i could not get my head around the R2 xml configuration, so if you want R2 you might want to have a look at the Heartbeat fail-over documentation.
The partitioning looks as follows
 c0d0p1   Boot   Primary   Linux ext3              10001.95
 c0d0p5          Logical   Linux swap / Solaris     1003.49
 c0d0p6          Logical   Linux                  280325.77
* machine1 will use the following names.
* hostname = node1
* IP number = 192.168.1.100
* is primary for r0 on disk c0d0p6
* physical volume on r0 is /dev/drbd0
* volume group on /dev/drbd0 is called drbdvg0
* machine2 will use the following names.
* hostname = node2
* IP number = 192.168.1.200
* is secondary for r0 on disk c0d0p6
The Volume Group and the Physical Volume will be identical on node2 if this one becomes the primary for r0.
Loadbalance-Failover the network cards
Maybe not very specific to Vserver, Heartbeat or DRBD, but loadbalancing your network cards for failover is always useful. Some more in-depth details by Carla Schroder can be found here. [[1]] I did not do it for the DRBD crossover cable between the nodes, while this is actually highly recommended. We need both mii-tool and ethtool.
apt-get install ethtool ifenslave-2.6
nano /etc/modprobe.d/arch/i386
To load the modules with the correct options at boot time.
alias bond0 bonding options bond0 mode=balance-alb miimon=100
And set the interfaces eth0 and eth1 as slaves to bond0, also eth2 is set here for the crossover cable for the DRBD connection to the fail over machine.
nano /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto bond0
iface bond0 inet static
        address 123.123.123.100
        netmask 255.255.255.0
        network 123.123.123.0
        broadcast 123.123.123.255
        gateway 123.123.123.1
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 123.123.123.45
        dns-search example.com
        up /sbin/ifenslave bond0 eth0 eth1
        down ifenslave -d bond0 eth0 eth1

auto eth2
iface eth2 inet static
        address 192.168.1.100
        netmask 255.255.255.0
This way the system needs to be rebooted before the changes take effect; otherwise you should load the drivers and ifdown eth0 and eth1 first before ifup bond0, but i'm planning to install a new kernel anyway in the next step.
Install the Vserver packages
apt-get install linux-image-2.6-vserver-686-bigmem util-vserver vserver-debiantools
As usual a reboot is needed to boot this kernel.
With Etch i found that the Vserver kernel often ended up as second in the grub list; not so in Lenny, but to be safe check the kernel stanza in /boot/grub/menu.lst, especially when doing this from a remote location.
Install DRBD8, LVM2 and Heartbeat
apt-get install drbd8-modules-2.6-vserver-686-bigmem drbd8-module-source lvm2 heartbeat-2
not sure about this, but DRBD always needed to be compiled against the running kernel, is this still the case with the kernel specific modules? I did not check but it would be good to know in case of a kernel upgrade.
Build DRBD8
Although packages are available in the repositorie for DRBD8, the purpose of these packages is that you can built it easily from source and patch the running kernel.
To do this we just issue this command
m-a a-i drbd8
And to load it into the kernel..
depmod -ae
modprobe drbd
Configure DRBD8
Now that we have the essentials installed, we can configure DRBD. Again, i will not go into the details of all the options here, so check out the default config to find a match for your set up.
mv /etc/drbd.conf /etc/drbd.conf.original
nano /etc/drbd.conf
global { usage-count no; }
common { syncer { rate 100M; } }

resource r0 {
        startup {
                degr-wfc-timeout 120;    # 2 minutes.
        }
        disk { on-io-error detach; }
        net {
                after-sb-0pri disconnect;
                after-sb-1pri disconnect;
                after-sb-2pri disconnect;
                rr-conflict disconnect;
        }
        syncer {
                rate 100M;
                al-extents 257;
        }
        on node1 {
                device /dev/drbd0;
                disk /dev/cciss/c0d0p6;
                address 192.168.1.100:7788;
                meta-disk internal;
        }
        on node2 {
                device /dev/drbd0;
                disk /dev/cciss/c0d0p6;
                address 192.168.1.200:7788;
                meta-disk internal;
        }
}
Before we start DRBD we change some permissions, otherwise it will ask for it. So on both nodes
chgrp haclient /sbin/drbdsetup
chmod o-x /sbin/drbdsetup
chmod u+s /sbin/drbdsetup
chgrp haclient /sbin/drbdmeta
chmod o-x /sbin/drbdmeta
chmod u+s /sbin/drbdmeta
Create the DRBD devices
On both nodes
node1
drbdadm create-md r0
node2
drbdadm create-md r0
node1
drbdadm up r0
node2
drbdadm up r0
The following should be done on the node that will be the primary!
On node1
drbdadm -- --overwrite-data-of-peer primary r0
watch cat /proc/drbd should show you something like this
version: 8.0.13 (api:86/proto:86)
GIT-hash: ee3ad77563d2e87171a3da17cc002ddfd1677dbe build by phil@fat-tyre, 2008-08-04 15:28:07
 0: cs:SyncSource st:Primary/Secondary ds:UpToDate/Inconsistent C r---
    ns:62059328 nr:0 dw:3298052 dr:58770141 al:2102 bm:3641 lo:1 pe:261 ua:251 ap:0
        [===>................] sync'ed: 22.1% (208411/267331)M
        finish: 4:04:44 speed: 14,472 (12,756) K/sec
        resync: used:1/61 hits:4064317 misses:5172 starving:0 dirty:0 changed:5172
        act_log: used:0/257 hits:822411 misses:46655 starving:110 dirty:44552 changed:2102
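If you prefer scripting over watching the progress by hand, the sync percentage can be pulled out of a /proc/drbd status line with standard tools. A sketch using a captured sample line; on a real node you would read /proc/drbd itself:

```shell
# Sample status line as it appears in /proc/drbd during a resync.
line="        [===>................] sync'ed: 22.1% (208411/267331)M"

# Strip everything except the percentage value.
pct=$(printf '%s\n' "$line" | sed -n "s/.*sync'ed: *\([0-9.]*\)%.*/\1/p")
echo "resync at ${pct}%"
```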
Configure LVM2
<note important> LVM will normally scan all available devices under /dev, but since /dev/cciss/c0d0p6 and /dev/drbd0 are basically the same this will lead to errors where LVM reads and writes the same data to both devices. So to limit it to scan /dev/drbd devices only we do the following on both nodes.
</note>
cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.original
nano /etc/lvm/lvm.conf
#filter = [ "a/.*/" ]
filter = [ "a|/dev/drbd|", "r|.*|" ]
to re-scan with the new settings on both nodes
vgscan
Create the Physical Volume
The following only needs to be done on the node that is the primary!!
On node1
pvcreate /dev/drbd0
Create the Volume Group
The following only needs to be done on the node that is the primary!!
One node1
vgcreate drbdvg0 /dev/drbd0
Create the Logical Volume
Yes, again only on the node that is primary!!!
For this example about 50GB, this leaves plenty of space to expand the volumes or to add extra volumes later on.
On node1
lvcreate -L50000 -n web drbdvg0
Then we put a file system on the logical volumes
mkfs.ext3 /dev/drbdvg0/web
create the directory where we want to mount the Vservers
mkdir -p /VSERVERS/web
and mount the volume group to the mount point
mount -t ext3 /dev/drbdvg0/web /VSERVERS/web/
Get informed
Of course we want to be informed later on by heartbeat in case a node goes down, so we install postfix to send the mail.
This should be done on both nodes
apt-get install postfix mailx
and go for the defaults: "internet site" and "node1.example.com".
We don't want postfix to listen to all interfaces,
nano /etc/postfix/main.cf
and change the line at the bottom to read like this, otherwise we get into trouble with postfix blocking port 25 for all the Vservers later.
inet_interfaces = loopback-only
Heartbeat
Get aquinted
Add the other node in the hosts file of both nodes, this way Heartbeat knows who is who.
so for node1 do
nano /etc/hosts
and add node2
192.168.1.200 node2
Get intimate
Set up some keys on both boxes so we can ssh login without a password (defaults, no passphrase)
ssh-keygen
then copy over the public keys
scp /root/.ssh/id_rsa.pub 192.168.1.100:/root/.ssh/authorized_keys
scp /root/.ssh/id_rsa.pub 192.168.1.200:/root/.ssh/authorized_keys
Configure Heartbeat
Without the ha.cf file Heartbeat will not start; this should only be done on 1 of the nodes.
nano /etc/ha.d/ha.cf
autojoin none
#crm on                 #enables heartbeat2 cluster manager - we want that!
use_logd on
logfacility syslog
keepalive 1
deadtime 10
warntime 10
udpport 694
auto_failback on        #resources move back once node is back online
mcast bond0 239.0.0.43 694 1 0
bcast eth2
node node1              #hostnames of the nodes
node node2
This one also on 1 of the nodes
nano /etc/ha.d/authkeys
auth 3
3 md5 failover    ## this is just a string, enter what you want !
                  ## auth 3 / 3 md5 uses md5 encryption
chmod 600 /etc/ha.d/authkeys
<note> We will be using heartbeat R1-style configuration here simply because i don't understand the R2 xml based syntax. </note> We only did the above 2 config files on 1 node but we need them on both; heartbeat can do that for us.
/usr/lib/heartbeat/ha_propagate
Heartbeat behavior
After the above 2 files are set, the haresources file is where we control Heartbeat's behaviour. This is an example for 1 Vserver that we will set up later on.
nano /etc/ha.d/haresources
node1 drbddisk::r0 LVM::drbdvg0 Filesystem::/dev/drbdvg0/web::/VSERVERS/web::ext3 Vserver-web SendArp::123.123.123.125/bond0 MailTo::randall@songshu.org::DRBDFailure
The above will default the Vserver named web to node1 and specify the mount points; the Vserver-web script will start and stop the Vserver itself, and the SendArp entry notifies the network that this IP can now be found somewhere else than before (i have added the SendArp an extra time below for better results).
Another example for more than 1 Vserver: we only specify 1 default node here for all Vservers, and the same DRBD disk and Volume Group; the individual start scripts and mount points are specified separately. Mind the \, it's all in 1 logical line. The last mail command is only needed once.
node1 \
    drbddisk::r0 \
    LVM::drbdvg0 \
    Filesystem::/dev/drbdvg0/web::/VSERVERS/web::ext3 \
    Filesystem::/dev/drbdvg0/ns1::/VSERVERS/ns1::ext3 \
    Vserver-web \
    Vserver-ns1 \
    SendArp::123.123.123.125/bond0 \
    SendArp::123.123.123.126/bond0 \
    MailTo::randall@songshu.org::DRBDFailure
start/stop script
The vserver-web script as specified to be called by heartbeat above is basically a demolished version of the original R2 style agent by Martin Fick from here.
What i did is remove the sensible top part and replace "$OCF_RESKEY_vserver" with the specific Vserver name, also added an extra
/etc/ha.d/resource.d/SendArp 123.123.123.126/bond0 start
to the start part because i had various results when done by Heartbeat in the first tests i did, not sure if it is still needed but i guess it doesn't hurt.
nano /etc/ha.d/resource.d/Vserver-web
#!/bin/sh
#
# License:  GNU General Public License (GPL)
# Author:   Martin Fick <mogulguy@yahoo.com>
# Date:     04/19/07
# Version:  1.1
#
#   This script manages a VServer instance
#
#   It can start or stop a VServer
#
#   usage: $0 {start|stop|status|monitor|meta-data}
#
#
#   OCF parameters are as below
#   OCF_RESKEY_vserver
#
#######################################################################
# Initialization:
#
#. /usr/lib/heartbeat/ocf-shellfuncs
#
#
#<resource-agent
#  <version>1.0</version>
#  <longdesc lang="en">
#This script manages a VServer instance.
#It can start or stop a VServer.
#  </longdesc>
#  <shortdesc lang="en">OCF Resource Agent compliant VServer script.</shortdesc>
#
#  <parameters>
#
#  <parameter name="vserver" unique="1" required="1">
#    <longdesc lang="en">
#The vserver name is the name as found under /etc/vservers
#    </longdesc>
#    <shortdesc lang="en">VServer Name</shortdesc>
#    <content type="string" default="" />
#  </parameter>
#
#  </parameters>
#
#  <actions>
#    <action name="start" timeout="2m" />
#    <action name="stop" timeout="1m" />
#    <action name="monitor" depth="0" timeout="1m" interval="5s" start-
#    <action name="status" depth="0" timeout="1m" interval="5s" start-
#    <action name="meta-data" timeout="1m" />
#  </actions>
#</resource-agent>
#END
#}

vserver_reload() {
    vserver_stop || return
    vserver_start
}

vserver_stop() {
    #
    # Is the VServer already stopped?
    #
    vserver_status
    [ $? -ne 0 ] && return 0

    /usr/sbin/vserver "web" "stop"

    vserver_status
    [ $? -ne 0 ] && return 0

    return 1
}

vserver_start() {
    vserver_status
    [ $? -eq 0 ] && return 0

    /usr/sbin/vserver "web" "start"

    vserver_status
    /etc/ha.d/resource.d/SendArp 123.123.123.125/bond0 start
}

vserver_status() {
    /usr/sbin/vserver "web" "status"
    rc=$?

    if [ $rc -eq 0 ]; then
        echo "running"
        return 0
    elif [ $rc -eq 3 ]; then
        echo "stopped"
    else
        echo "unknown"
    fi
    return 7
}

vserver_monitor() {
    vserver_status
}

vserver_usage() {
    echo $USAGE >&2
}

vserver_info() {
    cat - <<!INFO
Abstract=VServer Instance takeover
Argument=VServer Name

Description:
A Vserver is a simulated server which is fairly hardware independent
so it can be easily setup to run on several machines.

Please rerun with the meta-data command for a list of \\
valid arguments and their defaults.
!INFO
}

#
# Start or Stop the given VServer...
#
if [ $# -ne 1 ] ; then
    vserver_usage
    exit 2
fi

case "$1" in
    start|stop|status|monitor|reload|info|usage)
        vserver_$1
        ;;
    meta-data)
        meta_data
        ;;
    validate-all|notify|promote|demote)
        exit 3
        ;;
    *)
        vserver_usage
        exit 2
        ;;
esac
To make this file executable by Heartbeat
chmod a+x /etc/ha.d/resource.d/Vserver-web
not needed????
There is some more interesting discussion going on at Advanced_DRBD_mount_issues, for those who have multiple Vservers on multiple DRBD devices. Not sure if it also applies to this setup, but i'm using it without any drawbacks at the moment.
Below is a changed version of option 4 by Christian Balzer
nano /etc/ha.d/resource.d/drbddisk
stop)
        # Kill off any vserver mounts that might hog this
        VNSPACE=/usr/sbin/vnamespace
        for CTX in `/usr/sbin/vserver-stat | tail -n +2 |

        # exec, so the exit code of drbdadm propagates
        exec $DRBDADM secondary $RES
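The stanza above walks vserver-stat output to find the running contexts. The extraction step (header stripped, first column kept) can be exercised on its own with a captured sample; the sample text below is illustrative, not real output:

```shell
# vserver-stat prints one header line followed by one line per
# running context; the first column is the context ID.
sample='CTX   PROC    VSZ    RSS  UPTIME NAME
40000    12  58.2M  21.1M  1d02h  web
40001     8  41.7M  15.9M  1d02h  ns1'

ctxs=$(printf '%s\n' "$sample" | tail -n +2 | awk '{print $1}')
echo $ctxs
```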
Create a Vserver
Note that we have already mounted the LVM partition on /VSERVERS/web in an earlier step. We're going to place both the /var and /etc directories on the mountpoint and symlink to them; this way the complete Vserver and its config are available on the other node when it is mounted.
mkdir -p /VSERVERS/web/etc
mkdir -p /VSERVERS/web/barrier/var
When making the Vserver it will be in the default location /var/lib/vservers/web and its config in /etc/vservers/web
newvserver --hostname web --domain example.com --ip 123.123.123.125/24 --dist etch --mirror --interface bond0
enter the root password
Create a normal user account now? <No>
Choose software to install: <Ok>
On node1 we move the Vserver directories to the LVM volume on the DRBD disks and make symlinks from the normal locations.
On node1
mv /etc/vservers/web/* /VSERVERS/web/etc/
rmdir /etc/vservers/web/
ln -s /VSERVERS/web/etc /etc/vservers/web
mv /var/lib/vservers/web/* /VSERVERS/web/barrier/var
rmdir /var/lib/vservers/web/
ln -s /VSERVERS/web/barrier/var /var/lib/vservers/web
We need to set the same symlinks on node2, but we need the Vserver directories available there first. The mounting should be handled by heartbeat by now, so we make our resources move to the other machine.
On node1
/etc/init.d/heartbeat stop
On node2
ln -s /VSERVERS/web/etc /etc/vservers/web
ln -s /VSERVERS/web/barrier/var /var/lib/vservers/web
On node1
/etc/init.d/heartbeat start
vserver web start
and enjoy! | http://www.linux-vserver.org/index.php?title=Getting_high_with_lenny&diff=3383&oldid=3382 | CC-MAIN-2014-41 | refinedweb | 2,912 | 51.99 |
What is Material UI?
Material UI is a React component library for building web interfaces faster and with ease. It features a large set of commonly used UI components so that developers can focus on adding functionality to applications instead of spending so much time on UI implementation. It makes use of principles from the material design guide created by Google, and with over 59k stars and 1,800 contributors on GitHub, it is one of the most loved UI libraries for React developers.
How to get started with Material UI
This article assumes that you’ll be using create-react-app or any of the other React toolchains. If you’ll be setting up your own toolchain and build pipeline, be sure to include a plugin or loader for loading fonts.
To get started, install create-react-app:
// with npm
npx create-react-app font-app

// with yarn
yarn create react-app font-app
To use material UI in your application, install it via npm or yarn:
// with npm
npm install @material-ui/core

// with yarn
yarn add @material-ui/core
Then, add some UI components to work within App.js:
import React from 'react';
import Button from '@material-ui/core/Button';
import Typography from '@material-ui/core/Typography';
import './App.css';

function App() {
  return (
    <div className="App">
      <div>
        <Typography variant="h2" gutterBottom>
          Welcome to React
        </Typography>
        <Button variant="contained" color="secondary">Ready To Go</Button>
      </div>
    </div>
  );
}

export default App;
Using your browser’s inspector to inspect the button and header, you’ll see that they’re rendered using the default font family of Roboto. So, how do we change that?
How to add custom fonts to your Material UI project
Below, we’ll go through three different ways to add any font of your choice to your Material UI project.
Method 1: Use Google Fonts CDN
Head over to Google Fonts and select a font family of your choice. I'll be using the Chilanka cursive font. Copy the CDN link and add it to the <head> of the public/index.html file:
<link href="" rel="stylesheet">
To be able to use the font, you'll have to initialize it using createMuiTheme, an API provided by Material UI that generates a custom theme based on the options received, and ThemeProvider, a component used to inject custom themes into your application.
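One small detail worth noting in the theme definitions that follow: the font stack is kept as an array and then collapsed with .join(',') into the single comma-separated string CSS expects, with fallbacks listed in order:

```javascript
// The font stack is kept as an array for readability, then joined
// into the exact string that ends up in the CSS font-family rule;
// fallback families come after the primary one.
const fontFamily = ['Chilanka', 'cursive'].join(',');
console.log(fontFamily); // Chilanka,cursive
```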
Add this to your App.js file:
import { ThemeProvider, createMuiTheme } from '@material-ui/core/styles';

const theme = createMuiTheme({
  typography: {
    fontFamily: [
      'Chilanka',
      'cursive',
    ].join(','),
  },
});
Then wrap your components with the default Material UI ThemeProvider component, passing into it a theme prop. The value of the theme prop should be the name of your defined theme:
<ThemeProvider theme={theme}>
  <div className="App">
    <div>
      <Typography variant="h2" gutterBottom>
        Welcome to React
      </Typography>
      <Button variant="contained" color="secondary">Ready To Go</Button>
    </div>
  </div>
</ThemeProvider>
Inspecting the components now with the browser inspector tool, you’ll find that the font family has changed to Chilanka.
Method 2: Self host fonts using google-webfonts-helper
There are some benefits to self-hosting your fonts: it is significantly faster, and your fonts load offline.
Google-webfonts-helper is an amazing tool that makes self-hosting fonts hassle-free. It provides font files and font-face declarations based on the fonts, charsets, styles, and browser support you select.
Simply search for any google-font on it and select the desired font weights. I’ll be using Josefin-sans.
Copy the resulting font-face declarations into the src/index.css file. You can customize the font file location; the default assumes ../fonts/. We'll be using ./fonts because we'll be placing the downloaded fonts in the src/fonts directory.
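For orientation, the declarations generated by the helper look roughly like the following; the exact file names, weights, and src entries depend on what you selected, so treat this as an illustrative sketch rather than the exact output:

```css
/* josefin-sans-regular - latin */
@font-face {
  font-family: 'Josefin Sans';
  font-style: normal;
  font-weight: 400;
  src: local(''),
       url('./fonts/josefin-sans-regular.woff2') format('woff2'),
       url('./fonts/josefin-sans-regular.woff') format('woff');
}
```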
Finally, download your files, unzip them, and place them in your project in the appropriate location: src/fonts.
Like before, you'll have to define the font family using createMuiTheme and wrap your components with the ThemeProvider component:
const theme = createMuiTheme({
  typography: {
    fontFamily: [
      'Josefin Sans',
      'sans-serif',
    ].join(','),
  },
});
Inspecting now, you should see that the font family has changed to Josefin Sans.
Method 3: Self host fonts using the Typefaces NPM packages
Typefaces is a collection of NPM packages for google-fonts and some other open-source typefaces created by Kyle Matthews. Fonts, just like other dependencies, can be added to a project by installing them with NPM.
This is my favorite method because all web(site|app) dependencies should be managed through NPM whenever possible.
Simply search through the repo for your choice of typeface and click on the font folder to find the necessary npm installation command. I’ll go with Cormorant:
npm install typeface-cormorant
Then, import the package into your project's entry file (src/index.js in our case):
import "typeface-cormorant";
Also, like before, you'll have to define your font family using createMuiTheme and wrap your components with the ThemeProvider component:
const theme = createMuiTheme({
  typography: {
    fontFamily: [
      'Cormorant',
      'serif',
    ].join(','),
  },
});
Inspecting now, you’ll see that the font family has changed to Cormorant.
Bonus
What if you want to define different fonts for the header and the button, say a primary font and a secondary font? All you need to do is define two theme constants and wrap the intended components with the ThemeProvider component, each with a theme prop of the corresponding font.
For example, if you want to use the Cormorant font for the heading and Josefin Sans for the button, you'll first define two themes, for the Cormorant and Josefin Sans fonts respectively:
const headingFont = createMuiTheme({
  typography: {
    fontFamily: [
      'Cormorant',
      'serif',
    ].join(','),
  },
});

const buttonFont = createMuiTheme({
  typography: {
    fontFamily: [
      'Josefin Sans',
      'sans-serif',
    ].join(','),
  },
});
Then, wrap the target components with a ThemeProvider component for the required font, like below:
function App() {
  return (
    <div className="App">
      <div>
        <ThemeProvider theme={headingFont}>
          <Typography variant="h2" gutterBottom>
            Welcome to React
          </Typography>
        </ThemeProvider>
        <ThemeProvider theme={buttonFont}>
          <Button variant="contained" color="secondary">
            Ready To Go
          </Button>
        </ThemeProvider>
      </div>
    </div>
  );
}
And voilà! You'll now see that the heading is rendered with the Cormorant font and the button with the Josefin Sans font.
Conclusion
In this article, we covered three ways to add custom fonts to a Material UI project without much hassle. We also looked at how to define separate fonts for different components. I hope reading this has helped answer whatever questions you had on the topic.
Now, go on and build something great!
What version of Material UI are you using in this blog post?
Version ^4.11.0 | https://blog.logrocket.com/3-ways-to-add-custom-fonts-to-your-material-ui-project/ | CC-MAIN-2021-04 | refinedweb | 1,089 | 51.78 |
MoviePy is a Python library which can help us work with video files. In this tutorial, we will introduce how to get the duration of a video with it. You can learn by following along with our tutorial.
Install moviepy
pip install moviepy
Import libraries
from moviepy.editor import VideoFileClip
import datetime
Create a VideoFileClip object with video file
video = 'D:\\demo.mp4'
clip = VideoFileClip(video)
Get video duration
duration = clip.duration
print("video duration is " + str(duration) + " seconds")
The output is:
video duration is 856.86 seconds
Convert duration seconds to hour, minute and second
video_time = str(datetime.timedelta(seconds = int(duration)))
print(video_time)
The duration of this video is: 0:14:16
This value is correct.
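The seconds-to-H:M:S conversion on its own needs nothing from moviepy; here is a minimal standalone sketch (the 856.86-second value is taken from the output above):

```python
import datetime

def format_duration(seconds: float) -> str:
    # Truncate to whole seconds; timedelta renders as H:MM:SS.
    return str(datetime.timedelta(seconds=int(seconds)))

print(format_duration(856.86))  # 0:14:16
```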
| https://www.tutorialexample.com/best-practice-to-python-get-video-duration-with-moviepy-python-tutorial/ | CC-MAIN-2021-31 | refinedweb | 114 | 51.85 |
Failure is an option in Perl 6
In the eighth article in this series comparing Perl 5 to Perl 6, learn about their differences in creating and handling exceptions.
This is the eighth in a series of articles about migrating code from Perl 5 to Perl 6. This article looks at the differences in creating and handling exceptions between Perl 5 and Perl 6.
The first part of this article describes working with exceptions in Perl 6, and the second part explains how you can create your own exceptions and how failure is an option in Perl 6.
Exception-handling phasers
In Perl 5, you can use eval to catch exceptions in a piece of code. In Perl 6, this functionality is covered by try:
In Perl 5, you can also use the return value of eval in an expression:
This works the same way in Perl 6 for try:
# Perl 6
my $foo = try { 42 / $something }; # Nil if $something is 0
and it doesn't even have to be a block:
# Perl 6
my $foo = try 42 / $something; # Nil if $something is 0
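Python has no try expression, but a tiny helper (hypothetical, not from the article) captures the same idea of getting a null value instead of an exception:

```python
def attempt(thunk, default=None):
    # Evaluate thunk(); return default (None, like Perl 6's Nil) on error.
    try:
        return thunk()
    except Exception:
        return default

something = 0
foo = attempt(lambda: 42 / something)
print(foo)  # None, because the division raised ZeroDivisionError
```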
In Perl 5, if you need finer control over what to do when an exception occurs, you can use special signal handlers $SIG{__DIE__} and $SIG{__WARN__}.
In Perl 6, these are replaced by two exception-handling phasers, which due to their scoping behaviour must always be specified with curly braces. These exception-handling phasers (CATCH and CONTROL) are applicable only to the surrounding block, and you can have only one of each type in a block.
Catching exceptions
The $SIG{__DIE__} pseudo-signal handler in Perl 5 is no longer recommended. There are several competing CPAN modules that provide try/catch mechanisms (such as Try::Tiny and Syntax::Keyword::Try). Even though these modules differ completely in implementation, they provide very similar syntax with only very minor semantic differences, so they're a good way to compare Perl 6 and Perl 5 features.
In Perl 5, you can catch an exception only in conjunction with a try block:
Perl 6 doesn't require a try block. The code inside a CATCH phaser will be called whenever an exception is thrown in the immediately surrounding lexical scope:
Again, you do not need a try statement to catch exceptions in Perl 6. You can use a try block on its own, if you want, but it's just a convenient way to disregard any exceptions thrown inside that block.
Also, note that $_ will be set to the Exception object inside the CATCH block. In this example, execution will continue with the statement after the one that caused the Exception to be thrown. This is achieved by calling the resume method on the Exception object. If the exception is not resumed, it will be thrown again and possibly caught by an outer CATCH block (if there is one). And if there are no outer CATCH blocks, the exception will result in program termination.
The when statement makes it easy to check for a specific exception:
# Perl 6
{
CATCH {
when X::NYI { # Not Yet Implemented exception thrown
say "aw, too early in history";
.resume;
}
default {
say "WAT?";
.rethrow; # throw the exception again
}
}
X::NYI.new(feature => "Frobnicator").throw; # caught, resumed
now / 0; # caught, rethrown
say "back to the future";
}
# aw, too early in history
# WAT?
# Attempt to divide 1234.5678 by zero using /
In this example, only X::NYI exceptions will resume; all the others will be thrown to any outer CATCH block and will probably result in program termination. And we'll never go back to the future.
Catching warnings
If you do not want any warnings to emanate when a piece of code executes, you can use the no warnings pragma in Perl 5:
In Perl 6, you can use a quietly block:
The quietly block will catch any warnings that emanate from that block and disregard them.
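Python's closest counterpart to a quietly block (a hypothetical analogy, not part of the article) is the standard warnings context manager:

```python
import warnings

def noisy():
    warnings.warn("be careful")
    return 42

with warnings.catch_warnings():
    warnings.simplefilter("ignore")  # discard warnings, like quietly
    result = noisy()

print(result)  # 42, and the warning is never shown
```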
If you want finer control on which warnings you want to see, you can select the warning categories you want enabled or disabled with use warnings or no warnings, respectively, in Perl 5. For example:
If you want to have finer control in Perl 6, you will need a CONTROL phaser.
CONTROL
The CONTROL phaser is very much like the CATCH phaser, but it handles a special type of exception called the "control exception." A control exception is thrown whenever a warning is generated in Perl 6, which you can catch with the CONTROL phaser. This example will not show warnings for using uninitialized values in expressions:
There are currently no warning categories defined in Perl 6, but they are being discussed for future development. In the meantime, you will have to check the actual message of the control exception (type CX::Warn), as shown above.
The control exception mechanism is used for quite a lot of other functionality in addition to warnings. The following statements (in alphabetical order) also create control exceptions:
Control exceptions generated by these statements will also show up in any CONTROL phaser. Luckily, if you don't do anything with the given control exception, it will be rethrown when the CONTROL phaser is finished and ensure its intended action is performed.
Failure is an option
In Perl 5, you need to prepare for a possible exception by using eval or some version of try when using a CPAN module. In Perl 6, you can do the same with try (as seen before).
But Perl 6 also has another option: Failure, which is a special class for wrapping an Exception. Whenever a Failure object is used in an unanticipated way, it will throw the Exception it is wrapping. Here is a simple example:
The open function in Perl 6 returns an IO::Handle if it successfully opens the requested file. If it fails, it returns a Failure. This, however, is not what throws the exception—if we actually try to use the Failure in an unanticipated way, then the Exception will be thrown.
There are only two ways of preventing the Exception inside a Failure to be thrown (i.e., anticipating a potential failure):
- Call the .defined method on the Failure
- Call the .Bool method on the Failure
In either case, these methods will return False (even though technically the Failure object is instantiated). Apart from that, they will also mark the Failure as "handled," meaning that if the Failure is later used in an unanticipated way, it will not throw the Exception but simply return False.
Calling .defined or .Bool on most other instantiated objects will always return True. This gives you an easy way to find out if something you expected to return a "real" instantiated object returned something you can really use.
However, it does seem like a lot of work. Fortunately, you don't have to explicitly call these methods (unless you really want to). Let's rephrase the above code to more gently handle not being able to open the file:
# Perl 6
my $handle = open "non-existing file";
say "tried to open the file";
if $handle { # "if" calls .Bool, True on an IO::Handle
say "we opened the file";
.say for $handle.lines; # read/show all lines one by one
}
else { # opening the file failed
say "could not open file";
}
say "but still in business";
# tried to open the file
# could not open file
# but still in business
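There is no direct Python equivalent of Failure, but as a rough, hypothetical analogy, we can build a wrapper that is falsy when tested (which marks it handled) and raises the stored exception on any other use:

```python
class Failure:
    # Rough analogy of Perl 6's Failure: the wrapped exception is only
    # raised if the object is used in an unanticipated way.
    def __init__(self, exc):
        self._exc = exc
        self.handled = False

    def __bool__(self):
        # Testing truthiness "anticipates" the failure and disarms it.
        self.handled = True
        return False

    def __getattr__(self, name):
        # Any other use raises the wrapped exception.
        raise self._exc

def open_file(path):
    try:
        return open(path)
    except OSError as e:
        return Failure(e)

handle = open_file("non-existing file")
if handle:
    print("we opened the file")
else:
    print("could not open file")  # reached; nothing is raised
```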
Throwing exceptions
As in Perl 5, the simplest way to create an exception and throw it is to use the die function. In Perl 6, this is a shortcut to creating an X::AdHoc Exception and throwing it:
There are some subtle differences between die in Perl 5 and Perl 6, but semantically they are the same: they immediately stop an execution.
Returning with a failure
Perl 6 has added the fail function. This will immediately return from the surrounding subroutine/method with the given Exception: if a string is supplied to the `fail` function (rather than an `Exception` object), then an X::AdHoc exception will be created.
Suppose you have a subroutine taking one parameter, a value that is checked for truthiness:
# Perl 6
sub maybe-alas($expected) {
fail "Not what was expected" unless $expected;
return 42;
}
my $a = maybe-alas(666);
my $b = maybe-alas("");
say "values gathered";
say $a; # ok
say $b; # will throw, because it has a Failure
say "still in business"; # will not be shown
# values gathered
# 42
# Not what was expected
# in sub maybe-alas at ...
Note that you do not have to provide try or CATCH: the Failure will be returned from the subroutine/method in question as if all is normal. Only if the Failure is used in an unanticipated way will the Exception embedded in it be thrown. An alternative way of handling this would be:
# Perl 6
sub maybe-alas($expected) {
fail "Not what was expected" unless $expected;
return 42;
}
my $a = maybe-alas(666);
my $b = maybe-alas("");
say "values gathered";
say $a ?? "got $a for a" !! "no valid value returned for a";
say $b ?? "got $b for b" !! "no valid value returned for b";
say "still in business";
# values gathered
# got 42 for a
# no valid value returned for b
# still in business
Note that the ternary operator ?? !! calls .Bool on the condition, so it effectively disarms the Failure that was returned by fail.
You can think of fail as syntactic sugar for returning a Failure object:
Creating your own exceptions
Perl 6 makes it very easy to create your own (typed) Exception classes. You just need to inherit from the Exception class and provide a message method. It is customary to make custom classes in the X:: namespace. For example:
# Perl 6
class X::Frobnication::Failed is Exception {
has $.reason; # public attribute
method message() {
"Frobnication failed because of $.reason"
}
}
You can then use that exception in your code in any die or fail statement:
which you can check for inside a CATCH block and introspect if necessary:
# Perl 6
CATCH {
when X::Frobnicate::Failed {
if .reason eq 'too much interference' {
.resume # too much interference is ok
}
}
} # all others will re-throw
You are completely free in how you set up your Exception classes; the only thing the class needs to provide is a method called message that returns a string. How that string is created is entirely up to you. If you prefer working with error codes, you can:
# Perl 6
my @texts =
"unknown error",
"too much interference",
;
my constant TOO_MUCH_INTERFERENCE = 1;
class X::Frobnication::Failed is Exception {
has Int $.reason = 0;
method message() {
"Frobnication failed because of @texts[$.reason]"
}
}
As you can see, this quickly becomes more elaborate, so your mileage may vary.
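For comparison, the same pattern in Python (a hypothetical analogy, not part of the original article): subclass the base exception class and compute the message from an attribute set at construction time:

```python
class FrobnicationFailed(Exception):
    # Analogy of X::Frobnication::Failed: message built from an attribute.
    def __init__(self, reason):
        super().__init__(reason)
        self.reason = reason

    def __str__(self):
        return f"Frobnication failed because of {self.reason}"

try:
    raise FrobnicationFailed("too much interference")
except FrobnicationFailed as e:
    if e.reason == "too much interference":
        print("too much interference is ok")
```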
Summary
Catching exceptions and warnings are handled by phasers in Perl 6, not by eval or signal handlers, as in Perl 5. Exceptions are first-class objects in Perl 6.
Perl 6 also introduces the concept of a Failure object, which embeds an Exception object. If the Failure object is used in an unanticipated way, the embedded Exception will be thrown.
You can easily check for a Failure with if, ?? !! (which checks for truthiness by calling the .Bool method) and with (which checks for definedness by calling the .defined method).
You can also create Exception classes very easily by inheriting from the Exception class and providing a message method.
3 Comments
Very informative post...Thanks for sharing...
i like python try catch. In perl 6 difficult to understand what is happening. Maybe over time I will understand how it works.
Thank you for continuing to try! | https://opensource.com/article/18/11/failure-option-perl-6 | CC-MAIN-2021-04 | refinedweb | 1,958 | 59.84 |
Department of the Treasury Internal Revenue Service
1120-F
Name
U.S. Income Tax Return of a Foreign Corporation
For calendar year 1995, or tax year beginning , 1995, and ending , 19
OMB No. 1545-0126
See separate instructions.
Employer identification number
Please type or print
Number, street, and room or suite no. (see page 6 of instructions)
Check applicable boxes: Initial return Amended return Change of address
City or town, state and ZIP code, or country
Final return
A Country of incorporation B Foreign country under whose laws the income reported on this return is subject to tax C Date incorporated D The corporation’s books and records are maintained by: Name Address ZIP code or country E If the corporation had an agent in the United States at any time during the tax year, enter: Kind of agent Name Address
G Check method of accounting:
(3) Other (specify)
(1)
Cash
(2)
Accrual Yes No
H Did the corporation file a U.S. income tax return for the preceding tax year? I Was the corporation at any time during the tax year engaged in a trade or business in the United States? J Did the corporation at any time during the tax year have a permanent establishment in the United States for purposes of applying section 894(b) and any applicable tax treaty between the United States and a foreign country? If “Yes,” enter the name of the foreign country: K Is the corporation a foreign personal holding company? (See section 552 for definition.) If “Yes,” have you filed Form 5471? (Sec. 6035) L Did the corporation have any transactions with related parties? If “Yes,” you may have to file Form 5472 (section 6038A and section 6038C).
F Refer to the list on the last page of the instructions and state the
corporation’s principal: (1) Business activity code number (2) Business activity (3) Product or service
Enter number of Forms 5472 attached Note: Additional information is required at the bottom of pages 2 and 5.
Computation of Tax Due or Overpayment
1 2 3 4 5 6 a b c e f g h i 7 8 9 10 Tax from Section I, line 11, page 2 Tax from Section II, Schedule J, line 9, page 4 Tax from Section III (add lines 6 and 10 on page 5) Personal holding company tax (attach Schedule PH (Form 1120))—see page 6 of instructions Total tax. Add lines 1 through 4 Payments: 1994 overpayment credited to 1995 1995 estimated tax payments Less 1995 refund applied for on Form 4466 Tax deposited with Form 7004 Credit from regulated investment companies (attach Form 2439) Credit for Federal tax on fuels (attach Form 4136). See instructions U.S. income tax paid or withheld at source (add line 12, page 2, and amounts from Forms 8288-A and 8805 (attach Forms 8288-A and 8805)) Total payments. Add lines 6d through 6h Estimated tax penalty (see page 7 of instructions). Check if Form 2220 is attached Tax due. If line 6i is smaller than the total of lines 5 and 7, enter amount owed Overpayment. If line 6i is larger than the total of lines 5 and 7, enter amount overpaid Enter amount of line 9 you want: Credited to 1996 estimated tax 6a 6b 6c ( ) Bal 6d 6e 6f 6g 6h 6i 7 8 9 Refunded 10 1 2 3 4 5
Date Date
Title Check if selfemployed EIN ZIP code Cat. No. 11470I Form Preparer’s social security number
For Paperwork Reduction Act Notice, see page 1 of separate instructions.
1120-F
(1995)
SECTION I.—Certain Gains, Profits, and Income From U.S. Sources That Are NOT Effectively Connected With the Conduct of a Trade or Business in the United States (See instructions.)
If you are required to complete Section II or are using Form 1120-F as a claim for refund of tax withheld at source, include below ALL income from U.S. sources that is NOT effectively connected with the conduct of a trade or business in the United States. Otherwise, include only those items of income on which the U.S. income tax was not fully paid at the source. The rate of tax on each item of gross income listed below is 30% (4% for the gross transportation tax) or such lower rate specified by tax treaty. No deductions are allowed against these types of income. Fill in treaty rates where applicable. If the corporation claimed a lower treaty rate, also complete Item W, page 5.
Name of treaty country, if any
(a) Nature of income (b) Gross income (c) Rate of tax (%) (d) Amount of tax (e) Amount of U.S. income tax paid or withheld at the source
1 Interest 2 Dividends 3 Rents 4 Royalties 5 Annuities 6 Gains from disposal of timber, coal, or domestic iron ore with a retained economic interest (attach supporting schedule) 7 Gains from sale or exchange of patents, copyrights, etc. 8 Fiduciary distributions (attach supporting schedule) 9 Gross transportation income (see instructions) 10 Other fixed or determinable annual or periodic gains, profits, and income
4
11 Total. Enter here and on line 1, page 1 12 Total. Enter here and include on line 6h, page 1
Additional Information Required (continued from page 1)
M Is the corporation a personal holding company? (See section 542 for definition.) N Is the corporation a controlled foreign corporation? (See section 957 for definition.) O Is the corporation a personal service corporation? (See page 8 of instructions for definition.) P Enter tax-exempt interest received or accrued during $ the tax year (see instructions) Q Did the corporation at the end of the tax year own, directly or indirectly, 50% or more of the voting stock of a U.S. corporation? (See section 267(c) for rules of attribution.) If “Yes,” attach a schedule showing (1) name and identifying number of such U.S. corporation; (2) percentage owned; and (3) taxable income or (loss) before NOL and special deductions of such U.S. corporation for the tax year ending with or within your tax year. R If the corporation has a net operating loss (NOL) for the tax year and is electing to forego the carryback period, check here Yes No S Enter the available NOL carryover from prior tax years. (Do not reduce it by any deduction on line 30a, $ page 3.) T Is the corporation a subsidiary in a parent-subsidiary controlled group? If “Yes,” enter the name and employer identification number of the parent corporation U Did any individual, partnership, corporation, estate, or trust at the end of the tax year own, directly or indirectly, 50% or more of the corporation’s voting stock? (See section 267(c) for attribution rules.) If “Yes,” complete the following: (1) Attach a schedule showing the name and identifying number. (Do not include any information already entered in T above). (2) Enter percentage owned Note: Additional information is required at the bottom of page 5. Yes No
SECTION II.—Income Effectively Connected With the Conduct of a Trade or Business in the United States (See instructions.)
IMPORTANT—Fill in all applicable lines and schedules. If you need more space, see Attachments on page 5 of instructions. 1c 1a Gross receipts or sales b Less returns and allowances c Bal 2 2 Cost of goods sold (Schedule A, line 8) 3 3 Gross profit (subtract line 2 from line 1c) 4 4 Dividends (Schedule C, line 14) 5 5 Interest 6 6 Gross rents 7 7 Gross royalties 8 8 Capital gain net income (attach Schedule D (Form 1120)) 9 9 Net gain or (loss) from Form 4797, Part II, line 20 (attach Form 4797) 10 10 Other income (see page 9 of instructions—attach schedule) 11 Total income. Add lines 3 through 10 11 12 12 Compensation of officers (Schedule E, line 4). Deduct only amounts connected with a U.S. business 13 13 Salaries and wages (less employment credits) 14 14 Repairs and maintenance 15 15 Bad debts 16 16 Rents 17 17 Taxes and licenses 18 18 Interest deduction allowable under Regulations section 1.882-5 19 19 Charitable contributions (see page 11 of instructions for 10% limitation) 20 20 Depreciation (attach Form 4562) 21 Less depreciation claimed on Schedule A and elsewhere on return 21 22 22 Balance (subtract line 21 from line 20) 23 23 Depletion 24 24 Advertising 25 25 Pension, profit-sharing, etc., plans 26 26 Employee benefit programs 27 27 Other deductions (see page 12 of instructions—attach schedule) 28 28 Total deductions. Add lines 12 through 27 29 29 Taxable income before NOL deduction and special deductions (subtract line 28 from line 11) 30 Less: a Net operating loss deduction (see page 13 of instructions) 30a b Special deductions (Schedule C, line 15) 30b 30c Deductions (See instructions for limitations on deductions.)
Income
31 Taxable income or (loss). Subtract line 30c from line 29
31
Schedule A
1 2 3 4 5 6 7 8 9a
Cost of Goods Sold (See instructions.)
1 Inventory at beginning of year 2 Purchases 3 Cost of labor 4 Additional section 263A costs (see page 13 of instructions—attach schedule) 5 Other costs (attach schedule) 6 Add lines 1 through 5 7 Inventory at end of year 8 Cost of goods sold. Subtract line 7 from line 6. Enter here and on Section II, line 2 Check all methods used for valuing closing inventory: (1) Cost as described in Regulations section 1.471-3 (2) Lower of cost or market as described in Regulations section 1.471-4 Other (Specify method used and attach explanation.) (3) b Check if there was a writedown of subnormal goods as described in Regulations section 1.471-2(c) c Check if the LIFO inventory method was adopted this tax year for any goods If checked, attach Form 970. d If the LIFO inventory method was used for this tax year, enter percentage (or amounts) of closing 9d inventory computed under LIFO Yes Yes No No
e Do the rules of section 263A (for property produced or acquired for resale) apply to the corporation? f Was there any change in determining quantities, cost, or valuations between opening and closing inventory? If “Yes,” attach explanation.
Schedule C
Dividends and Special Deductions (See instructions.)
1 Dividends from less-than-20%-owned domestic corporations that are subject to the 70% deduction (other than debt-financed stock) 2 Dividends from 20%-or-more-owned domestic corporations that are subject to the 80% deduction (other than debt-financed stock) 3 Dividends on debt-financed stock of domestic and foreign corporations (section 246A) 4 Dividends on certain preferred stock of less-than-20%-owned public utilities 5 Dividends on certain preferred stock of 20%-or-more-owned public utilities 6 Dividends from less-than-20%-owned foreign corporations that are subject to the 70% deduction 7 Dividends from 20%-or-more-owned foreign corporations that are subject to the 80% deduction 8 Total. Add lines 1 through 7. See page 14 of instructions for limitation 9 0ther dividends from foreign corporations not included on lines 3, 6, and 7 10 Foreign dividend gross-up (section 78) 11 IC-DISC and former DISC dividends not included on lines 1, 2, or 3 (section 246(d)) 12 Other dividends 13 Deduction for dividends paid on certain preferred stock of a public utility 14 Total dividends. Add lines 1 through 12. Enter here and on line 4, page 3 15 Total deductions. Add lines 8 and 13. Enter here and on line 30b, page 3
(a) Dividends received
(b) %
(c) Special deductions: (a) (b)
70 80
see instructions
42 48 70 80
Schedule E
Compensation of Officers (Complete Schedule E only if total receipts (line 1a plus lines 4 through 10 of Section II) are $500,000 or more. See Line 12. Compensation of officers on page 10 of instructions.)
(b) Social security number (c) Percent of time devoted to business Percent of corporation stock owned (d) Common (e) Preferred (f) Amount of compensation
(a) Name of officer
1
% % % % % % % Total compensation of officers Compensation of officers claimed on Schedule A and elsewhere on this return Subtract line 3 from line 2. Enter the result here and on line 12, page 3
% % % % % % %
% % % % % % %
2 3 4 1 2a
Schedule J
Tax Computation (See page 15 of instructions.)
b
3 4a b
Check if the corporation is a member of a controlled group (see sections 1561 and 1563) Important: Members of a controlled group, see instructions on page 15. If the box on line 1 is checked, enter the corporation’s share of the $50,000, $25,000, and $9,925,000 taxable income bracket amounts (in that order): (2) $ (3) $ (1) $ Enter the corporation’s share of: $ (1) Additional 5% tax (not more than $11,750) $ (2) Additional 3% tax (not more than $100,000) Income tax. Check this box if the corporation is a qualified personal service corporation (see page 16 of the instructions) 4a Foreign tax credit (attach Form 1118) Nonconventional source fuel credit Check: 4b QEV credit (attach Form 8834)
3
c General business credit. Enter here and check which forms are attached: 3800 3468 5884 6765 6478 8586 8830 8826 8835 8844 4c 8845 8847 8846 4d d Credit for prior year minimum tax (attach Form 8827) 5 Total credits. Add lines 4a through 4d 6 Subtract line 5 from line 3 7 Recapture taxes. Check if from: Form 4255 Form 8611 8a Alternative minimum tax (attach Form 4626) b Environmental tax (attach Form 4626) 9 Total tax under section 882(a). Add lines 6 through 8b. Enter here and on line 2, page 1
5 6 7 8a 8b 9
SECTION III.—Branch Profits Tax and Tax on Excess Interest (See instructions beginning on page 17.) Part I—Branch Profits Tax
1 2 3 4a b c d Enter the amount from Section II, line 29 Enter total adjustments made to get effectively connected earnings and profits. (Attach a schedule showing the nature and amount of adjustments.) (See instructions.) Effectively connected earnings and profits. Combine line 1 and line 2. Enter the result here Enter U.S. net equity at the end of the current tax year. (Attach schedule.) Enter U.S. net equity at the end of the prior tax year. (Attach schedule.) Increase in U.S. net equity. If line 4a is greater than or equal to line 4b, subtract line 4b from line 4a. Enter the result here and skip to line 4e Decrease in U.S. net equity. If line 4b is greater than line 4a, subtract line 4a from line 4b. Enter the result here 1 2 3 4a 4b 4c 4d
e Non-previously taxed accumulated effectively connected earnings and profits. Enter excess, if any, of effectively connected earnings and profits for preceding tax years beginning after 1986 over any dividend equivalent amounts for those tax years 5 Dividend equivalent amount. Subtract line 4c from line 3. Enter the result here. If zero or less, enter -0-. If no amount is entered on line 4c, add the lesser of line 4d or line 4e to line 3 and enter the total here Branch profits tax. Multiply line 5 by 30% (or lower treaty rate if the corporation is a qualified resident or otherwise qualifies for treaty benefits). Enter here and include on line 3, page 1. (See instructions.) Also complete Items W and X below
4e
5
6
6 7a 7b 7c
Part II—Tax on Excess Interest
7a Enter the interest from Section II, line 18 b Enter the interest apportioned to the effectively connected income of the foreign corporation that is capitalized or otherwise nondeductible c Add lines 7a and 7b Enter the branch interest (including capitalized and other nondeductible interest). (See instructions for definition.) If the interest paid by the foreign corporation’s U.S. trade or business was increased because 80% or more of the foreign corporation’s assets are U.S. assets, check this box 9a Excess interest. Subtract line 8 from line 7c. If zero or less, enter -0b If the foreign corporation is a bank, enter the excess interest treated as interest on deposits. Otherwise, enter -0-. (See instructions.) c Subtract line 9b from line 9a 10 Tax on excess interest. Multiply line 9c by 30% or lower treaty rate (if the corporation is a qualified resident or otherwise qualifies for treaty benefits). (See instructions.) Enter here and include on line 3, page 1. Also complete Items W and X below
Additional Information Required (continued from page 2) Yes No Yes No
8
8 9a 9b 9c
10
V
Is the corporation claiming a reduction in, or exemption from, the branch profits tax due to: (1) A complete termination of all U.S. trades or businesses? (2) The tax-free liquidation or reorganization of a foreign corporation? (3) The tax-free incorporation of a U.S. trade or business? If (1) applies or (2) applies and the transferee is domestic, attach Form 8848. If (3) applies, attach the statement required by Regulations section 1.884-2T(d)(5).
W Is the corporation taking a position on this return that a U.S. tax treaty overrules or modifies an Internal Revenue law of the United States thereby causing a reduction of tax? If “Yes,” complete and attach Form 8833. Note: Failure to disclose a treaty-based return position may result in a $10,000 penalty (see section 6712). X If the corporation is claiming it is a qualified resident of its country of residence for purposes of computing its branch profits tax and excess interest tax, check the basis for that claim: Stock ownership and base erosion test Publicly traded test Active trade or business test Private letter ruling
Additional schedules to be completed for Section II or Section III (See instructions.) Beginning of tax year End of tax year Schedule L Balance Sheets
ASSETS 1 Cash 2a Trade notes and accounts receivable b Less allowance for bad debts 3 Inventories 4 U.S. government obligations 5 Tax-exempt securities (see instructions) 6 Other current assets (attach schedule) 7 Loans to stockholders 8 Mortgage and real estate loans 9 Other investments (attach schedule) 10a Buildings and other fixed STOCKHOLDERS’ EQUITY 16 Accounts payable 17 Mtges., notes, bonds payable in less than 1 year 18 Other current liabilities (attach schedule) 19 Loans from stockholders 20 Mtges., notes, bonds payable in 1 year or more 21 Other liabilities (attach schedule) 22 Capital stock: a Preferred stock b Common stock 23 Paid-in or capital surplus 24 Retained earnings—Appropriated (attach schedule) 25 Retained earnings—Unappropriated 26 Less cost of treasury stock 27 Total liabilities and stockholders’ equity
(a) (b) (c) (d)
(
)
(
)
( (
) )
( (
) )
(
)
(
)
(
)
(
)
Note: The corporation is not required to complete Schedules M-1 and M-2 below if the total assets on Schedule L, line 15, column (d) are less than $25,000. Schedule M-1 Reconciliation of Income or (Loss) per Books With Income per Return
1 2 3 4 Net income (loss) per books Federal income tax Excess of capital losses over capital gains Income subject to tax not recorded on books this year (itemize): 7 Income recorded on books this year not included on this return (itemize): a Tax-exempt interest $ Deductions on this return not charged against book income this year (itemize): a Depreciation $ b Contributions carryover $ Add lines 7 and 8 Income (line 29, page 3)—line 6 less line 9 Distributions: a Cash b Stock c Property Other decreases (itemize): Add lines 5a through 6 Balance at end of year (line 4 less line 7)
8
5 Expenses recorded on books this year not deducted on this return (itemize): $ a Depreciation b Contributions carryover $ c Travel and entertainment $ 6 Add lines 1 through 5
9 10
Schedule M-2
Analysis of Unappropriated Retained Earnings per Books (Schedule L, line 25)
5
1 Balance at beginning of year 2 Net income (loss) per books 3 Other increases (itemize):
6 7 8
4 Add lines 1, 2, and 3
Printed on recycled paper | https://www.scribd.com/document/540992/US-Internal-Revenue-Service-f1120f-1995 | CC-MAIN-2018-30 | refinedweb | 3,412 | 57.81 |
Introduction
In our Windows applications we commonly use modal windows. Let's recall the idea. Using Windows Forms, once a window is created, we can choose to show it in a modal manner (form.ShowDialog()). The window then becomes THE front window among all the other windows of our application. Moreover, all the other windows seem to be disabled. To be exact, the other windows no longer respond to any input event (keyboard, mouse, etc.), but they are still able to respond to other events like paint.
The user must close the modal window by validating or canceling to come back to the previous state. You can repeat this model and then have a stack of modal windows.
This mechanism also exists using Windows Presentation Foundation. Let’s remind that WPF windows are “real” windows even if they are hosting a DirectX surface.
Therefore, WPF brings a bunch of new functionalities that are mainly taking advantage of the control tree (events, datacontext, datatemplates, styles, resources, commandbindings, etc). So it’s quite interesting to stay in the same window having an unique control tree.
Moreover, the natural vectorial capabilities of WPF let us imagine a complete nested world inside a single window, recreating a workspace with its own internal windows logic, like we know in games interfaces.
Obviously, you cannot define a child control as being modal. In this article, I will try to offer a solution to simulate this behavior.
How to block controls ?
The first step is to disable all the child controls of a same container, excepted the front one.
Disabling the input interaction will be easily done using:
control.IsEnabled = false;
Let’s imagine a method that would add a UserControl on the top of a child control collection of a Grid control, disabling existing children:
void NavigateTo(UserControl uc)
{
foreach (UIElement item in modalGrid.Children)
item.IsEnabled = false;
modalGrid.Children.Add(uc);
}
To “close” the UserControl as we could close a modal window, re-enabling the previous child control we could write:
void GoBackward()
{
modalGrid.Children.RemoveAt(modalGrid.Children.Count – 1);
UIElement element = modalGrid.Children[modalGrid.Children.Count – 1];
element.IsEnabled = true;
}
This part is done. Those two methods allow to simulate a stack of graphic controls with a modal behavior. This solution supports pushing multiple controls.
That was the easy part. The next step is more complex.
How to block the calling code ?
Using Windows Forms, calling form.ShowDialog() is blocking.
This means that the following instructions will only be executed when the modal windows will close and return its modal value.
if (f.ShowDialog() == DialogResult.Ok)
{
//Action OK
}
else
{
//Action not OK
}
The following actions will only be executed when the modal window is closed, creating a sequential execution, simple and comfortable for the developer.
Creating such a behavior in a single window using WPF is really too complex, almost impossible. Though we will try to simulate it.
We want to run an action at the moment when the modal control is closed. We will use a delegate to represent this action. This delegate will be invoked by the one that is closing the modal control. We will offer him a boolean to represent the modal result.
public delegate void BackNavigationEventHandler(bool dialogReturn);
Thanks to anonymous methods, we will keep a very comparable syntax that we had with windows forms:
NavigateTo(new UserControl1(), delegate(bool returnValue) {
if (returnValue)
MessageBox.Show(“Return value == true“);
else
MessageBox.Show(“Return value == false“);
});
The NavigateTo() method now accepts a second parameter we will have to store somewhere to call it later when closing the control.
As this method will have to support successive calls, an unique value will not be enough to store this delegate. We will use a stack to keep all these delegates:
private Stack<BackNavigationEventHandler> _backFunctions
= new Stack<BackNavigationEventHandler>();
The NavigateTo() implementation becomes:
void NavigateTo(UserControl uc, BackNavigationEventHandler backFromDialog)
{
foreach (UIElement item in modalGrid.Children)
item.IsEnabled = false;
modalGrid.Children.Add(uc);
_backFunctions.Push(backFromDialog);
}
We now need to get the delegate back from the stack (Pop) when calling GoBackward().
void GoBackward(bool dialogReturnValue)
{
modalGrid.Children.RemoveAt(modalGrid.Children.Count – 1);
UIElement element = modalGrid.Children[modalGrid.Children.Count – 1];
element.IsEnabled = true;
BackNavigationEventHandler handler= _backFunctions.Pop();
if (handler != null)
handler(dialogReturnValue);
}
The one that is closing the control just need to call GoBackward(true); or GoBackward(false);
Make the access global
Last step, it would be useful to provide a global access to these two methods across the application. Doing such, any UserControl could easily call NavigateTo() to push a control and GoBackward() to close it, without knowing the modal context.
Let’s group these functionnalities into an interface:
public interface IModalService
{
void NavigateTo(UserControl uc, BackNavigationEventHandler backFromDialog);
void GoBackward(bool dialogReturnValue);
}
In our sample, we will simply implement this interface in our main window “Window1”. It’s quite a natural choice since “modalGrid” is contained in Window1.
A public static scope will provide a global access to the interface:
public class GlobalServices
{
public static IModalService ModalService
{
get
{
return (IModalService) Application.Current.MainWindow;
}
}
}
Here we are !
We can now call anywhere in our code:
GlobalServices.ModalService.NavigateTo(new UserControl1(), delegate(bool returnValue)
{
if (returnValue)
MessageBox.Show(“Return value == true“);
else
MessageBox.Show(“Return value == false“);
});
and
GlobalServices.ModalService.GoBackward(true);
Conclusion
Windows Presentation Foundation graphic possibilities are incredible. We now have to create smart ergonomic solutions to take advantage of this engine.
Anonymous methods are really surprising. Strange at first use, they are bringing new and incredible possibilities.
C# 3.0 is coming really soon…
Excuse my English I speak Spanish
I found the sample very good but I like to know how to call a method or set or get a property no from the usercontrol
but from the container that hold the control
I am developing a app that can load many usercontrol but I like to know how I can acces from my app how to interop with the control
Thank you
Reiner reinerra@hotmail.com
Congratulations, You have kept this very simple. How much more complex would it be to write a single method
similar to ShowDialog() that doesn’t return a value until the Control is closed? Then it could be called from a procedural program that would receive the input data at the end of the call before it proceeds to its next call. Using LOTS of modal Forms to sequentially enter data is not much more attractive than writing a console program. Console programs are pretty ugly, to say the least. I’ve tried threading without any success.
Hi William,
It’s very complex. Blocking the UI is very different from blocking a single call in a console program. In a console program, the UI is "static" and does not have to respond to any else than the user. A "windowed" application (WinForms or WPF) has to maintain UI interaction even if the window is "frozened" by a modal call. So you can’t block the main thread as you could do with a console program.
Maybe we could interact with the WPF dispatcher to simulate a modal behavior like winforms does with the windows message handler. I will try to have a look on such a solution.
So in the brave new world of WPF, you have to jump through hoops do something as routine as creating a modal dialog. This is progress?? I must be missing something…
Hi Marky,
Imagine you are calling a method M() and you want M() to sleep for a while. During this time, you want your application to run normally except the caller of M() that must wait for M() to finish. All this with a single thread of course. This is not a WPF pb. I recommand you to read the ShowDialog() code from winforms or WPF (using reflector). Doing it between controls inside a single window is more complex.
Thanks for the sample Mitsu.
Marky-
Please realize that the WPF Window object has a .ShowDialog[1] method (as well as just a normal .Show) which gives you a modal window.
I believe Mitsu’s example is interesting in cases where you would like to embed a dialog inside an existing Window or Page. For example, when running in an XBAP (a WPF app running in the browser), you can’t show other Windows (popups!) so this is useful.
Thanks,
Rob Relyea | Program Manager, WPF & Xaml Language Team
robrelyea.com | /blog | /wpf | /xaml
[1]
Thanks for great hint,
Isn’t any way to close dialog, if user clicked outside it, for example clicking outside a messagebox closes it?
It’s not an usual use of modal window but you can do it easilly. you have to catch the click on the last ‘frame’ and add a common behavior to close the window (with a cancel information I can imagine).
Note: if you want to get the click information on a particular area, this area must be painted, even with a transparent Brush (Color = Transparent). (no event if no content/no brush)
Good Work done,
You also remove drawback of memory leak while calling showdialog method
Simulate Modal Windows Inside WPF Window Using Anonymous Methods You can read the original post in Mitsu's
Nice sample, clean interface.
Do you also have a nice solution for managing the focus when simulating modal window behaviour? By default the focus will not be limited to the "popup-control" I suppose (actually I only disabled the direct parent in the visual tree, not all "siblings" because I may like to scroll through some lists while still one part of the dialog is "blocked" by the pseudo-modal control).
All tries to focus the first input-element of the popup-control did only sometimes succeed, probably due to race conditions. When breakpoint is set before calling the focus-method, it works, when running at normal speed, it has no effect.
Hi Mitsu
It’s just great what you have done simulating modal window in WPF Application.
I was wondering…. do you think is it posible to create a sort of control that you as developer could drag and drop from toolbox to accelerate your control creation within a WPF form?
Hi mitsu.
I apologize for my english, I am only a french guy 😉
When I read your post, I find it really easy understanding, but when I had think about how to do it I did not imagine how to succeed.
Than you so much for that post! Keep on trying to help guys like me…
Bye little genius
Why not just use nested message pumps to create modal controls…/WPF-Modal-Controls-Via-DispatcherFrame-%28Nested-Message-Pumps%29.aspx
aoa i used this in my project but on user control my all button ,textbox,image apear too large i have set the width height also but its not woking.
mean how i make items(button,texbox) more precise and commertial look | https://blogs.msdn.microsoft.com/mitsu/2007/05/07/how-to-simulate-modal-windows-inside-a-single-wpf-window-using-anonymous-methods/ | CC-MAIN-2016-30 | refinedweb | 1,819 | 55.95 |
Sean, Colm could you please hold off on doing any changes to the
Canonicalizers for a day or two. Those were the classes that most
heavily used the == so I have some local changes here that I'll be
submitting a patch for quite soon.
On 8/5/10 10:16 AM, bugzilla@apache.org wrote:
>
>
> Summary: exc-c14n damages namespaces of XML
> Product: Security
> Version: Java 1.4.2
> Platform: All
> OS/Version: All
> Status: NEW
> Severity: normal
> Priority: P2
> Component: Canonicalization
> AssignedTo: security-dev@xml.apache.org
> ReportedBy: aklitzing@gmail.com
>
>
> The canonicalizer (java) with exc-c14n produces an invalid XML document here.
> It removes a namespace from an attribute that is still used in that element. It
> attach an example xsd and xml file.
> If I use canonicalize this xml file with exc-c14n it will remove the namespace
> xmlns:xs="". So the attribute
> ns:type="xs:string" won't be valid afterwards.
> Even if I add the namespace to the root element (bla:document) it will be
> removed.
>
> Validated with xmllint --noout --schema example.xsd example.xml
>
> Is this really correct for this canonicalization method to damage the xml file?
>
--
Chad La Joie
trusted identities, delivered | http://mail-archives.apache.org/mod_mbox/santuario-dev/201008.mbox/%3C4C5ACAEC.1070802@itumi.biz%3E | CC-MAIN-2018-47 | refinedweb | 200 | 51.24 |
:
from collections import defaultdict import csv episodes = defaultdict(list) with open("data/import/sentences.csv", "r") as sentences_file: reader = csv.reader(sentences_file, delimiter=',') reader.next() for row in reader: episodes[row[1]].append(row[4]) for episode_id, text in episodes.iteritems(): episodes[episode_id] = "".join(text) corpus = [] for id, episode in sorted(episodes.iteritems(), key=lambda t: int(t[0])): corpus.append(episode)
corpus contains 208 entries (1 per episode), each of which is a string containing the transcript of that episode. Next it's time to train our TF/IDF model which is only a few lines of code:
from sklearn.feature_extraction.text import TfidfVectorizer tf = TfidfVectorizer(analyzer='word', ngram_range=(1,3), min_df = 0, stop_words = 'english')
The most interesting parameter here is ngram_range - we're telling it to generate 2 and 3 word phrases along with the single words from the corpus.
e.g. if we had the sentence "Python is cool" we'd end up with 6 phrases - 'Python', 'is', 'cool', 'Python is', 'Python is cool' and 'is cool'.
Let's execute the model against our corpus:
tfidf_matrix = tf.fit_transform(corpus) feature_names = tf.get_feature_names() >>> len(feature_names) 498254 >>> feature_names[50:70] [u'00 does sound', u'00 don', u'00 don buy', u'00 dressed', u'00 dressed blond', u'00 drunkenly', u'00 drunkenly slurred', u'00 fair', u'00 fair tonight', u'00 fall', u'00 fall foliage', u'00 far', u'00 far impossible', u'00 fart', u'00 fart sure', u'00 friends', u'00 friends singing', u'00 getting', u'00 getting guys', u'00 god']
So we're got nearly 500,000 phrases and if we look at tfidf_matrix we'd expect it to be a 208 x 498254 matrix - one row per episode, one column per phrase:
>>> tfidf_matrix <208x498254 sparse matrix of type '<type 'numpy.float64'>' with 740396 stored elements in Compressed Sparse Row format>
This is what we've got although under the covers it's using a sparse representation to save space. Let's convert the matrix to dense format to explore further and find out why:
dense = tfidf_matrix.todense() >>> len(dense[0].tolist()[0]) 498254:
episode = dense[0].tolist()[0] phrase_scores = [pair for pair in zip(range(0, len(episode)), episode) if pair[1] > 0] >>> len(phrase_scores) 4823:
>>> sorted(phrase_scores, key=lambda t: t[1] * -1)[:5] [(419207, 0.2625177493269755), (312591, 0.19571419072701732), (267538, 0.15551468983363487), (490429, 0.15227880637176266), (356632, 0.1304175242341549)]
The first value in each tuple is the phrase's position in our initial vector and also corresponds to the phrase's position in feature_names which allows us to map the scores back to phrases. Let's look up a couple of phrases:
>>> feature_names[419207] u'ted' >>> feature_names[312591] u'olives' >>> feature_names[356632] u'robin'
Let's automate that lookup:
sorted_phrase_scores = sorted(phrase_scores, key=lambda t: t[1] * -1) for phrase, score in [(feature_names[word_id], score) for (word_id, score) in sorted_phrase_scores][:20]: print('{0: <20} {1}'.format(phrase, score)) ted 0.262517749327 olives 0.195714190727 marshall 0.155514689834 yasmine 0.152278806372 robin 0.130417524234 barney 0.124411751867 lily 0.122924977859 signal 0.103793246466 goanna 0.0981379875009 scene 0.0953423604123 cut 0.0917336653574 narrator 0.0864622981985 flashback 0.078295921554 flashback date 0.0702825260177 ranjit 0.0693927691559 flashback date robin 0.0585687716814 ted yasmine 0.0585687716814 carl 0.0582101172888 eye patch 0.0543650529797 lebanese 0.0543650529797
We see all the main characters names which aren't that interested - perhaps they should be part of the stop list - but 'olives' which is where the olive theory is first mentioned. I thought olives came up more often but a quick search for the term suggests it isn't mentioned again until Episode 9 in Season 9:
$ grep -rni --color "olives" data/import/sentences.csv | cut -d, -f 2,3,4 | sort | uniq -c 16 1,1,1 3 193,9,9
'yasmine' is also an interesting phrase in this episode but she's never mentioned again:
$ grep -h -rni --color "yasmine" data/import/sentences.csv 49:48,1,1,1,"Barney: (Taps a woman names Yasmine) Hi, have you met Ted? (Leaves and watches from a distance)." 50:49,1,1,1,"Ted: (To Yasmine) Hi, I'm Ted." 51:50,1,1,1,Yasmine: Yasmine. 53:52,1,1,1,"Yasmine: Thanks, It's Lebanese." 65:64,1,1,1,"[Cut to the bar, Ted is chatting with Yasmine]" 67:66,1,1,1,Yasmine: So do you think you'll ever get married? 68:67,1,1,1,"Ted: Well maybe eventually. Some fall day. Possibly in Central Park. Simple ceremony, we'll write our own vows. But--eh--no DJ, people will dance. I'm not going to worry about it! Damn it, why did Marshall have to get engaged? (Yasmine laughs) Yeah, nothing hotter than a guy planning out his own imaginary wedding, huh?" 69:68,1,1,1,"Yasmine: Actually, I think it's cute." 79:78,1,1,1,"Lily: You are unbelievable, Marshall. No-(Scene splits in half and shows both Lily and Marshall on top arguing and Ted and Yasmine on the bottom mingling)" 82:81,1,1,1,Ted: (To Yasmine) you wanna go out sometime? 85:84,1,1,1,[Cut to Scene with Ted and Yasmine at bar] 86:85,1,1,1,Yasmine: I'm sorry; Carl's my boyfriend (points to bartender)
It would be interesting to filter out the phrases which don't occur in any other episode and see what insights we get from doing that. For now though we'll extract phrases for all episodes and write to CSV so we can explore more easily:
with open("data/import/tfidf_scikit.csv", "w") as file: writer = csv.writer(file, delimiter=",") writer.writerow(["EpisodeId", "Phrase", "Score"]) doc_id = 0 for doc in tfidf_matrix.todense(): print "Document %d" %(doc_id) word_id = 0 for score in doc.tolist()[0]: if score > 0: word = feature_names[word_id] writer.writerow([doc_id+1, word.encode("utf-8"), score]) word_id +=1 doc_id +=1
And finally a quick look at the contents of the CSV:
$ tail -n 10 data/import/tfidf_scikit.csv 208,york apparently laughs,0.012174304095213192 208,york aren,0.012174304095213192 208,york aren supposed,0.012174304095213192 208,young,0.013397275854758335 208,young ladies,0.012174304095213192 208,young ladies need,0.012174304095213192 208,young man,0.008437685963000223 208,young man game,0.012174304095213192 208,young stupid,0.011506395106658192 208,young stupid sighs,0.012174304095213192
About the author
Mark Needham is a Developer Relations Engineer for Neo4j, the world's leading graph database. | https://markhneedham.com/blog/2015/02/15/pythonscikit-learn-calculating-tfidf-on-how-i-met-your-mother-transcripts/ | CC-MAIN-2019-13 | refinedweb | 1,086 | 69.68 |
Hi,
I am trying to build program which uses include files from another folder. I use -I key to specifiy path to that folder, but if I use quotes in file name it fails to build on GPU (it is ok on CPU). Without quotes I cannot use path with spaces. Is this a known restriction or a bug?
System: Win7 x64; GPU Driver: 9.18.10.3165
Log:
Platform: Intel(R) OpenCL
Device: Intel(R) HD Graphics 4000
Units: 16
Kernel file: kernel_median.cl
- Kernel build log ------------
-I "C:\_app_data\Intel\IPP\ipp\opencl\" -D BLK_X=128 -D BLK_Y=8
:11:10: fatal error: 'clpp.cl' file not found
#include "clpp.cl"
^
error: front end compiler failed build.
-------------------------------
Error -11 in clBuildProgram: CL_BUILD_PROGRAM_FAILURE | http://software.intel.com/en-us/forums/topic/429848 | CC-MAIN-2014-10 | refinedweb | 124 | 68.97 |
With the big unknowns answered, I am ready to dive into converting the ICE Code Editor to Dart. The unknowns were the ability to interact with JavaScript libraries from Dart (js-interop does this with aplomb) and the ability to read existing localStorage data, which is compressed.
The Dart version of the ICE code editor will be a pub package. I have been messing about in that directory with some exploratory code, but I think now is the time to think about layout. Well, maybe not so much think about it as follow the layout documentation
So I make the sub-directories that I believe that I will need:
➜ ice-code-editor git:(master) ✗ mkdir docs lib packages test exampleAnd create the metadata files:
➜ ice-code-editor git:(master) ✗ touch README.md LICENSE pubspec.yaml
While I am at it, I tell git to ignore the pubspec.lock file as well as the packages directory:
➜ ice-code-editor git:(master) ✗ cat <<-YAML > .gitignore heredocd> packages heredocd> pubspec.lock heredocd> YAMLTo sanity check my layout, I start work in the
examplesub-directory. I create a simple
app.dartweb server and a
publicsub-directory underneath that to hold an
index.htmlweb page:
<head> <script src="js>I will create
ice.dartback in the
<PACKAGE_ROOT>/libdirectory in a bit, but first I struggle with the location for the ACE code editor JavaScript files. I love Dart and all, but there is no way that I am going to attempt to reproduce ACE in Dart. Instead, I will continue to call the JavaScript code from Dart. But where to put the JavaScript code?
The pub documentation would seem to suggest that I put it in
<PACKAGE_ROOT>/web, but I have no idea how to source that from a
<script>tag if I follow that approach. I table that for now and include the JavaScript directly in
<PACKAGE_ROOT>/example/public/js/ace. This is not a long term solution as anyone wanting to use the ICE code would also have to manually copy ACE into their application. Still, this ought to allow me to verify that everything else is in place.
Now, I can switch to
<PACKAGE_ROOT>/libto create the
ICE.Full()constructor that is used in the example page. In
ice.dart, I add:
import 'package:js/js.dart' as js; class ICE { ICE.Full(el) { var context = js.context; context.ace.edit(el); } }With that, I can start the sample app up, load the homepage in Dartium and I see:
Nice! It just works. I have myself a decent start on the overall structure of my package. But where to put those JavaScript files?
The answer to that would seem to come from the
browserpackage, which bundles the
dart.jsJavaScript file. If the Dart maintainers feel no compunction about including JavaScript directly in
<PACKAGE_ROOT>/lib, then why should I? So I move the
acesub-directory under
<PACKAGE_ROOT>/lib/ace. This ensures that ACE will be bundled with ICE (and that the ACE JavaScript API will be fixed in the package).
With that, I can modify the example page to point to the bundled ACE:
<head> <script src="packages/ice_code_editor>And everything still works.
This seems like a good stopping point for tonight. I will pick back up with some tests tomorrow.
Day #736 | https://japhr.blogspot.com/2013/04/initial-layout-for-dart-version-of-ice.html | CC-MAIN-2017-47 | refinedweb | 550 | 66.23 |
Avoiding code bloat
Nathan has worked on the ISO/ANSI C++ Standard since 1993, designing most of what is in Chapter 22 of the Draft Standard. He can be contacted at.
The (Draft) Standard C++ Library is filled with useful templates, including extended versions of those found in the Standard Template Library (STL). These templates offer great flexibility, yet they are optimized for best performance in the common case. Not only can you use these templates directly, but they can stand as examples of effective design, and as a source of inspiration for ways to make your own components efficient and flexible.
Some of the ways that they offer flexibility involve "empty" classes -- classes.
Due to an unfortunate detail of the language definition, instances of empty classes usually occupy storage. In members of other classes, this overhead can make otherwise small objects large enough to be unusable in some applications. If this overhead couldn't be avoided in the construction of the Standard Library, the cost of the library's flexibility would drive away many of its intended users. But the optimization techniques used in the Standard library limit this overhead, and can be equally useful in your own code.
Empty Member Bloat
In the Standard C++ Library, each STL Container constructor takes as a parameter, and copies, an allocator object..
The listings resent simplified code for a possible implementation of these components. In Listing One, the standard default allocator allocator<> has only function members. In Listing Two, the generic list<> container template keeps a private alloc_ member, copied from its constructor argument. The list<> constructor is declared "explicit" so that the compiler will not use it as an automatic conversion (this is a recent language feature).
The member list<>::alloc_ usually occupies. Wasted space makes slower programs.
Empty Objects
How can this overhead be avoided? First, you need to know why the overhead is there. The Standard C++ language definition says:
A class with an empty sequence of [data] members and base class objects is an empty class. Complete objects and member subobjects of an empty class type shall have nonzero size.
Why must objects with no member data occupy storage? Consider Listing Three. If sizeof(Bar) were zero, f.b and the elements of f.a[] might all have the same address. If you were keeping track of separate objects by their addresses, f.b and f.a[0] would appear to be the same object. The C++ standards(a).
How can you avoid this overhead? The Draft says (in a footnote) that "a base class subobject of an empty class type may have zero size." In other words, if you declared Baz2 like this,
struct Baz2 : Bar { int* p; };
then a compiler is allowed to reserve zero bytes for the empty base class Bar. Hence, sizeof(Baz2) can be just 4 on most architectures; see Figure 1(b).
Compiler implementers are not required to do this optimization, and many don't -- yet. However, you can expect that most standard-conforming compilers will, because the efficiency of so many components of the Standard Library (not only the containers) depends on it.
Eliminating Bloat
This principle eliminates the space overhead. How do you apply it? Let's consider how to fix the implementation of our example template list<>. You could just derive from Alloc, as in Listing Four, and it would work, mostly. Code in the list<> member functions would get storage by calling this->allocate() instead of alloc_.allocate(). However, the Alloc type supplied by the user is allowed to have virtual members, and these could conflict with a list<> member. (Imagine a private member void list<>::init(), and a virtual member bool Alloc::init().)
A better approach is to package the allocator with a list<> data member, such as the pointer to the first list node (as in Listing Five), so that the allocator's interface cannot leak out.
Now, list<> members get storage by calling head_.allocate(), and mention the first list element by calling head_.p. This works perfectly, there's no unnecessary overhead of any kind, and users of list<> can't tell the difference. Like most good optimizations, it makes the implementation a bit messier, but doesn't affect the interface.
A Packaged Solution
There is still room for improvement and, as usual, the improvement involves a template. In Listing Six, I've packaged the technique so that it is clean and easy o use.
The declaration of list<> in Listing Seven is no bigger or messier than the unoptimized version I Four.
Finally
This technique can be used with any compiler that has sturdy template support. Not all C++ compilers support the empty-base optimization yet (the Sun, HP, IBM, and Microsoft compilers do), but the technique costs nothing extra even on those that don't. When you get a standard-conforming compiler, it probably will do the optimization, and if your code uses this technique, it will automatically become more efficient.
Acknowledgment
Fergus Henderson contributed an essential refinement to this technique.
Listing One
template <class T>class allocator { // an empty class static T* allocate(size_t n) { return (T*) ::operator new(n * sizeof T); } . . . };
Listing Two
template <class T, class Alloc = allocator<T> >class list { Alloc alloc_; struct Node { . . . }; Node* head_; public: explicit list(Alloc const& a = Alloc()) : alloc_(a) { . . . } . . . };
Listing Three
struct Bar { };struct Foo { struct Bar a[2]; struct Bar b; }; Foo f;
Listing Four
template <class T, class Alloc = allocator<T> >class list : private Alloc { struct Node { . . . }; Node* head_; public: explicit list(Alloc const& a = Alloc()) : Alloc(a) { . . . } . . . };
Listing Five
template <class T, class Alloc = allocator<T> >class list { struct Node { . . . }; struct P : public Alloc { P(Alloc const& a) : Alloc(a), p(0) { } Node* p; }; P head_; public: explicit list(Alloc const& a = Alloc()) : head_(a) { . . . } . . . };
Listing Six
template <class Base, class Member>struct BaseOpt : Base { Member m; BaseOpt(Base const& b, Member const& mem) : Base(b), m(mem) { } };
Listing Seven
template <class T, class Alloc = allocator<T> >class list { struct Node { . . . }; BaseOpt<Alloc,Node*> head_; public: explicit list(Alloc const& a = Alloc()) : head_(a,0) { . . . } . . . }; </p>
DDJ | http://www.drdobbs.com/cpp/the-empty-member-c-optimization/184410250 | CC-MAIN-2015-40 | refinedweb | 1,019 | 56.25 |
Last week, I talked about engineering new features from columns you already have (I used Haversine's Formula as an example). This week, I'd like to discuss using log-transformations to test whether the underlying patterns in the data are caused by exponential behavior (y = A*B^x) or perhaps a power model (y = A*x^n). NOTE: This is not about using log-transformations to assist with normalizing the data; for that, you'll need to look here or here.
Fast-forward 10 years, I'm no longer a tutor, but I have decided to take my mathematical expertise and pivot into computers, specifically, data science. Let's tie it all together. Imagine we've done some basic analysis on a few columns in pandas, and we're trying to decide on a column-by-column basis, which growth pattern is in play. Let's start with the necessary imports:
import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns # Sidenote: if you haven't tried using the Seaborn library, it's wonderful! I highly recommend that you give it a try. from scipy import stats from exp_or_poly import mystery_scatter # Secret function I wrote, don't cheat and look! %matplotlib inline
Here is some imaginary data. This function returns a pandas dataframe which we will save as
```python
df = mystery_scatter(n=500, random=False)
df.describe()
```
Mysteries #1, #2, and #3 all appear quite similar, and of course #4 certainly appears linear. We could just run a bunch of different regression models on columns 1, 2, and 3, keep track of the R² values, and then just pick the best. In theory, this should work (and with computing power being so cheap, it's certainly not a bad idea), but I'd like to give us a more precise way to decide, instead of the "shotgun" approach where you just try all possible models. The mathematics for why the following strategy works is rather elegant, as it involves taking complicated equations and turning them into the simple linear form y = m*x + b. We won't be delving into the algebra here, but you can go here instead. For now, we'll just practice how to make it all happen in Python.
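(If all you want is the punchline of that algebra, here it is: taking logs of both model forms collapses each one into a straight line.)

```latex
\begin{aligned}
y = A \cdot B^{x} \;&\Rightarrow\; \log y = \log A + (\log B)\,x   &&\text{(linear in } x\text{)}\\
y = A \cdot x^{n} \;&\Rightarrow\; \log y = \log A + n \log x       &&\text{(linear in } \log x\text{)}
\end{aligned}
```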
The basic idea is as follows: when looking at data that curves upward as the x variable increases, we can plot two different scatter plots, and test each of them for linearity. Whichever scatterplot is closest to a straight line will tell us which underlying growth pattern is on display. If log(x) vs log(y) becomes linear, then the underlying pattern came from power model growth, but if x vs log(y) gives the more linear scatterplot, then the original data was probably from an exponential model.
First, let's restrict our dataframe to only the curved lines:
curved_data = df.drop('linear', axis=1)
Now, we'll build a function that can display the graphs side-by-side, for comparison.
def linearity_test(df, x=0): '''Take in a dataframe, and the index for data along the x-axis. Then, for each column, display a scatterplot of the (x, log(y)) and also (log(x), log(y))''' df_x = df.iloc[:, x].copy() df_y = df.drop(df.columns[x], axis=1) for col in df_y.columns: plt.figure(figsize=(18, 9), ) # Is it exponential? plt.subplot(1,2,1) plt.title('Exp Test: (X , logY)') sns.regplot(df_x, np.log(df[col])) # Is it power? plt.subplot(1,2,2) plt.title('Pwr Test: (logX , logY)') sns.regplot(np.log(df_x), np.log(df[col])) plt.suptitle(f'Which Model is Best for {col}?') plt.show() plt.savefig(col+'.png') plt.close() print('') linearity_test(curved_data)
Well,
mystery_1 seems a bit inconclusive, we'll come back to that later.
mystery_2 certainly seems to have a linear relationship on the Exp Test, but NOT for the Power Test, which means the growth pattern for that column was caused by exponential growth, and
mystery_3 is the opposite, it is very obviously linear for the Pwr Test, but not for Exp Test. Let's peek under the hood and take a look at the function I used to build this data:
import numpy as np import matplotlib.pyplot as plt import seaborn as sns import pandas as pd def mystery_scatter(n=100, var=200, random=False): '''Randomly generate data, one of which is quadratic, another is exponential, and a third is linear. Plot all sets of data side-by-side, then return a pandas dataframe''' if random == False: np.random.seed(7182818) '''Generate some data, not much rhyme or reason to these numbers, just wanted everything to be on the same scale and the two curved datasets should be hard to tell apart''') '''Graph the plots''' plt.figure(figsize=(14, 14), ) plt.subplot(2,2,1) sns.scatterplot(x, y1) plt.title('Mystery #1') plt.subplot(2,2,2) sns.scatterplot(x, y2) plt.title('Mystery #2') plt.subplot(2,2,3) sns.scatterplot(x, y3) plt.title('Mystery #3') plt.subplot(2,2,4) sns.scatterplot(x, y4) plt.title('Linear') plt.suptitle('Mystery Challenge: \nOne of these is Exponential, one is Power, \nand the other is Quadratic, but which is which?') plt.show() plt.close() df = pd.DataFrame([x, y1, y2, y3, y4]).T df.columns = ['x', 'mystery_1', 'mystery_2', 'mystery_3', 'linear'] df.sort_values('x', inplace=True) df.reset_index(drop=True, inplace=True) return df
in particular, look at how the data was generated:)
So we were right about
mystery_2 (which was the same as y2) being caused by exponential growth, it was
y2 = 5000*1.03^x.
Meanwhile,
mystery_1 was
y1 = 11.45*x^2 - 234.15*x + 5000, definitely built from a quadratic equation, which is somewhat related to the Power Model, but not exactly the same. Technically, the
log(x) vs log(y) test only forces linearity for Power Models. In order to find the exact solution for this one, we would rule out exponential growth and power growth first, then start applying the "reiterative common differences" strategy to determine the degree of the polynomial, but we won't be getting in to that here.
Refocusing again on power growth: it looks like
mystery_3 was generated with
y3 = 5000*x^1.9 which definitely qualifies as a power model.
P.S. If you're wondering what the extra
np.random.normal(...) stuff at the back part of each of those lines is doing, it's just adding in some random noise so the scatter plots wouldn't be too "perfect".
Discussion (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/upwardtrajectory/something-is-growing-and-it-s-growing-very-fast-but-how-fast-1li6 | CC-MAIN-2021-43 | refinedweb | 1,242 | 63.29 |
Demystifying Crucial Statistics in Python
If you have little experience in applying machine learning algorithm, you would have discovered that it does not require any knowledge of Statistics as a prerequisite.
However, knowing some statistics can be beneficial to understand machine learning technically as well intuitively. Knowing some statistics will eventually be required when you want to start validating your results and interpreting them. After all, when there is data, there are statistics. Like Mathematics is the language of Science. Statistics is one of a kind language for Data Science and Machine Learning.
Statistics is a field of mathematics with lots of theories and findings. However, there are various concepts, tools, techniques, and notations are taken from this field to make machine learning what it is today. You can use descriptive statistical methods to help transform observations into useful information that you will be able to understand and share with others. You can use inferential statistical techniques to reason from small samples of data to whole domains. Later in this post, you will study descriptive and inferential statistics. So, don't worry.
Before getting started, let's walk through ten examples.
Source: Statistical Methods for Machine Learning
Isn't that fascinating?
This post will give you a solid background in the essential but necessary statistics required for becoming a good machine learning practitioner.
In this post, you will study:
- Introduction to Statistics and its types
- Statistics for data preparation
- Statistics for model evaluation
- Gaussian and Descriptive stats
- Variable correlation
- Non-parametric Statistics
You have a lot to cover, and all of the topics are equally important. Let's get started!
Introduction to Statistics and its types:
Let's briefly study how to define statistics in simple terms.
Statistics is considered a subfield of mathematics. It refers to a multitude of methods for working with data and using that data to answer many types of questions.
When it comes to the statistical tools that are used in practice, it can be helpful to divide the field of statistics into two broad groups of methods: descriptive statistics for summarizing data, and inferential statistics for concluding samples of data (Statistics for Machine Learning (7-Day Mini-Course)).
- Descriptive Statistics: Descriptive statistics are used to describe the essential features of the data in a study. They provide simple summaries about the sample and the measures. Together with simple graphics analysis, they form the basis of virtually every quantitative analysis of data. The below infographic provides a good summary of descriptive statistics:
Source: IntellSpot
Inferential Statistics: Inferential statistics are methods that help in quantifying properties of the domain or population from a tinier set of obtained observations called a sample. Below is an infographic which beautifully describes inferential statistics:
Source: Analytics Vidhya
In the next section, you will study the use of statistics for data preparation.
Statistics for data preparation:
Statistical methods are required in the development of train and test data for your machine learning model.
This includes techniques for:
- Outlier detection
- Missing value imputation
- Data sampling
- Data scaling
- Variable encoding
A basic understanding of data distributions, descriptive statistics, and data visualization is required to help you identify the methods to choose when performing these tasks.
Let's analyze each of the above points briefly.
Outlier detection:
Let's first see what an outlier is.
An outlier is considered an observation that appears to deviate from other observations in the sample. The following figure makes the definition more prominent.
Source: MathWorks
You can spot the outliers in the data as given the above figure.
Many machine learning algorithms are sensitive to the range and distribution of attribute values in the input data. Outliers in input data can skew and mislead the training process of machine learning algorithms resulting in longer training times, less accurate models and ultimately more mediocre results.
Identification of potential outliers is vital for the following reasons:
An outlier could indicate the data is bad. In example, the data maybe coded incorrectly, or the experiment did not run correctly. If it can be determined that an outlying point is, in fact, erroneous, then the value that is outlying should be removed from the analysis. If it is possible to correct that is another option.
In a few cases, it may not be possible to determine whether an outlying point is a bad data point. Outliers could be due to random variation or could possibly indicate something scientifically interesting. In any event, you typically do not want to just delete the outlying observation. However, if the data contains significant outliers, you may need to consider the use of robust statistical techniques.
So, outliers are often not good for your predictive models (Although, sometimes, these outliers can be used as an advantage. But that is out of the scope of this post). You need the statistical know-how to handle outliers efficiently.
Missing value imputation:
Well, most of the datasets now suffer from the problem of missing values. Your machine learning model may not get trained effectively if the data that you are feeding to the model contains missing values. Statistical tools and techniques come here for the rescue.
Many people tend to discard the data instances which contain a missing value. But that is not a good practice because during that course you may lose essential features/representations of the data. Although there are advanced methods for dealing with missing value problems, these are the quick techniques that one would go for: Mean Imputation and Median Imputation.
It is imperative that you understand what mean and median are.
Say, you have a feature X1 which has these values - 13, 18, 13, 14, 13, 16, 14, 21, 13
The mean is the usual average, so I'll add and then divide:
(13 + 18 + 13 + 14 + 13 + 16 + 14 + 21 + 13) / 9 = 15
Note that the mean, in this case, isn't a value from the original list. This is a common result. You should not assume that your mean will be one of your original numbers.
The median is the middle value, so first, you will have to rewrite the list in numerical order:
13, 13, 13, 13, 14, 14, 16, 18, 21
There are nine numbers in the list, so the middle one will be the (9 + 1) / 2 = 10 / 2 = 5th number:
13, 13, 13, 13, 14, 14, 16, 18, 21
So the median is 14.
Data sampling:
Data is considered the currency of applied machine learning. Therefore, its collection and usage both are equally significant.
Data sampling refers to statistical methods for selecting observations from the domain with the objective of estimating a population parameter. In other words, sampling is an active process of gathering observations with the intent of estimating a population variable.
Each row of a dataset represents an observation that is indicative of a particular population. When working with data, you often do not have access to all possible observations. This could be for many reasons, for example:
- It may be difficult or expensive to make more observations.
- It may be challenging to gather all the observations together.
- More observations are expected to be made in the future.
Many times, you will not have the right proportion of the data samples. So, you will have to under-sample or over-sample based on the type of problem.
You perform under-sampling when the data samples for a particular category are very high compared to other meaning you discard some of the data samples from the category where they are higher. You perform over-sampling when the data samples for a particular type are decidedly lower compared to the other. In this case, you generate data samples.
This applies to multi-class scenarios as well. (A Gentle Introduction to Statistical Sampling and Resampling).
Data Scaling:
Often, the features of your dataset may widely vary in ranges. Some features may have a scale of 0 to 100 while the other may have ranges of 0.01 - 0.001, 10000- 20000, etc.
This is very problematic for efficient modeling. Because a small change in the feature which has a lower value range than the other feature may not have a significant impact on those other features. It affects the process of good learning. Dealing with this problem is known as data scaling.
There are different data scaling techniques such as Min-Max scaling, Absolute scaling, Standard scaling, etc.
Variable encoding:
At times, your datasets contain a mixture of both numeric and non-numeric data. Many machine learning frameworks like
scikit-learn expect all the data to be present in all numeric format. This is also helpful to speed up the computation process.
Again, statistics come for saving you.
Techniques like Label encoding, One-Hot encoding, etc. are used to convert non-numeric data to numeric.
It's time to apply the techniques!
You have covered a lot of theory for now. You will apply some of these to get the real feel.
You will start off by applying some statistical methods to detect Outliers.
You will use the
Z-Score index to detect outliers, and for this, you will investigate the Boston House Price dataset. Let's start off by importing the dataset from sklearn's utilities, and as you go along, you will start the necessary concepts.
import pandas as pd import numpy as np from sklearn.datasets import load_boston # Load the Boston dataset into a variable called boston boston = load_boston()
# Separate the features from the target x = boston.data y = boston.target
To view the dataset in a standard tabular format with the all the feature names, you will convert this into a
pandas dataframe.
# Take the columns separately in a variable columns = boston.feature_names # Create the dataframe boston_df = pd.DataFrame(boston.data) boston_df.columns = columns boston_df.head()
It is a common practice to start with univariate outlier analysis where you consider just one feature at a time. Often, a simple box-plot of a particular feature can give you good starting point. You will make a box-plot using
seaborn and you will use the
DIS feature.
import seaborn as sns sns.boxplot(x=boston_df['DIS']) import matplotlib.pyplot as plt plt.show()
<matplotlib.axes._subplots.AxesSubplot at 0x8abded0>
To view the box-plot, you did the second import of
matplotlib since
seaborn plots are displayed like ordinary matplotlib plots.
The above plot shows three points between 10 to 12, these are outliers as they're are not included in the box of other observations. Here you analyzed univariate outlier, i.e., you used DIS feature only to check for the outliers.
Let's proceed with Z-Score now.
"The Z-score is the signed number of standard deviations by which the value of an observation or data point is above the mean value of what is being observed or measured." - Wikipedia
The idea behind Z-score is to describe any data point regarding their relationship with the Standard Deviation and Mean for the group of data points. Z-score is about finding the distribution of data where the mean is 0, and the standard deviation is 1, i.e., normal distribution.
Wait! How on earth does this help in identifying the outliers?
Well, while calculating the Z-score you re-scale and center the data (mean of 0 and standard deviation of 1) and look for the instances which are too far from zero. These data points that are way too far from zero are treated as the outliers. In most common cases the threshold of 3 or -3 is used. In example, say the Z-score value is greater than or less than 3 or -3 respectively. This data point will then be identified as an outlier.
You will use the
Z-score function defined in
scipy library to detect the outliers.
from scipy import stats z = np.abs(stats.zscore(boston_df)) print(z)
[[0.41771335 0.28482986 1.2879095 ... 1.45900038 0.44105193 1.0755623 ] [0.41526932 0.48772236 0.59338101 ... 0.30309415 0.44105193 0.49243937] [0.41527165 0.48772236 0.59338101 ... 0.30309415 0.39642699 1.2087274 ] ... [0.41137448 0.48772236 0.11573841 ... 1.17646583 0.44105193 0.98304761] [0.40568883 0.48772236 0.11573841 ... 1.17646583 0.4032249 0.86530163] [0.41292893 0.48772236 0.11573841 ... 1.17646583 0.44105193 0.66905833]]
It is not possible to detect the outliers by just looking at the above output. You are more intelligent! You will define the threshold for yourself, and you will use a simple condition for detecting the outliers that cross your threshold.
threshold = 3 print(np.where(z > 3))
(array([ 55, 56, 57, 102, 141, 142, 152, 154, 155, 160, 162, 163, 199, 200, 201, 202, 203, 204, 208, 209, 210, 211, 212, 216, 218, 219, 220, 221, 222, 225, 234, 236, 256, 257, 262, 269, 273, 274, 276, 277, 282, 283, 283, 284, 347, 351, 352, 353, 353, 354, 355, 356, 357, 358, 363, 364, 364, 365, 367, 369, 370, 372, 373, 374, 374, 380, 398, 404, 405, 406, 410, 410, 411, 412, 412, 414, 414, 415, 416, 418, 418, 419, 423, 424, 425, 426, 427, 427, 429, 431, 436, 437, 438, 445, 450, 454, 455, 456, 457, 466], dtype=int32), array([ 1, 1, 1, 11, 12, 3, 3, 3, 3, 3, 3, 3, 1, 1, 1, 1, 1, 1, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 5, 3, 3, 1, 5, 5, 3, 3, 3, 3, 3, 3, 1, 3, 1, 1, 7, 7, 1, 7, 7, 7, 3, 3, 3, 3, 3, 5, 5, 5, 3, 3, 3, 12, 5, 12, 0, 0, 0, 0, 5, 0, 11, 11, 11, 12, 0, 12, 11, 11, 0, 11, 11, 11, 11, 11, 11, 0, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11], dtype=int32))
Again, a confusing output! The first array contains the list of row numbers and the second array contains their respective column numbers. For example,
z[55][1] have a Z-score higher than 3.
print(z[55][1])
3.375038763517309
So, the 55th record on column
ZN is an outlier. You can extend things from here.
You saw how you could use Z-Score and set its threshold to detect potential outliers in the data. Next, you will see how to do some missing value imputation.
You will use the famous Pima Indian Diabetes dataset which is known to have missing values. But before proceeding any further, you will have to load the dataset into your workspace.
You will load the dataset into a DataFrame object data.
data = pd.read_csv("",header=None) print(data.describe())
0 1 2 3 4 5 \ count 768.000000 768.000000 768.000000 768.000000 768.000000 768.000000 mean 3.845052 120.894531 69.105469 20.536458 79.799479 31.992578 std 3.369578 31.972618 19.355807 15.952218 115.244002 7.884160 min 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 25% 1.000000 99.000000 62.000000 0.000000 0.000000 27.300000 50% 3.000000 117.000000 72.000000 23.000000 30.500000 32.000000 75% 6.000000 140.250000 80.000000 32.000000 127.250000 36.600000 max 17.000000 199.000000 122.000000 99.000000 846.000000 67.100000 6 7 8 count 768.000000 768.000000 768.000000 mean 0.471876 33.240885 0.348958 std 0.331329 11.760232 0.476951 min 0.078000 21.000000 0.000000 25% 0.243750 24.000000 0.000000 50% 0.372500 29.000000 0.000000 75% 0.626250 41.000000 1.000000 max 2.420000 81.000000 1.000000
You might have already noticed that the column names are numeric here. This is because you are using an already preprocessed dataset. But don't worry, you will discover the names soon.
Now, this dataset is known to have missing values, but for your first glance at the above statistics, it might appear that the dataset does not contain missing values at all. But if you take a closer look, you will find that there are some columns where a zero value is entirely invalid. These are the values that are missing.
Specifically, the below columns have an invalid zero value as the minimum:
- Plasma glucose concentration
- Diastolic blood pressure
- Triceps skinfold thickness
- 2-Hour serum insulin
- Body mass index
Let's confirm this by looking at the raw data, the example prints the first 20 rows of data.
data.head(20)
Clearly there are 0 values in the columns 2, 3, 4, and 5.
As this dataset has missing values denoted as 0, so it might be tricky to handle it by just using the conventional means. Let's summarize the approach you will follow to combat this:
- Get the count of zeros in each of the columns you saw earlier.
- Determine which columns have the most zero values from the previous step.
- Replace the zero values in those columns with
NaN.
- Check if the NaNs are getting appropriately reflected.
- Call the fillna() function with the imputation strategy.
# Step 1: Get the count of zeros in each of the columns print((data[[1,2,3,4,5]] == 0).sum())
1 5 2 35 3 227 4 374 5 11 dtype: int64
You can see that columns 1,2 and 5 have just a few zero values, whereas columns 3 and 4 show a lot more, nearly half of the rows.
# Step -2: Mark zero values as missing or NaN data[[1,2,3,4,5]] = data[[1,2,3,4,5]].replace(0, np.NaN) # Count the number of NaN values in each column print(data.isnull().sum())
0 0 1 5 2 35 3 227 4 374 5 11 6 0 7 0 8 0 dtype: int64
Let's get sure at this point of time that your NaN replacement was a hit by taking a look at the dataset as a whole:
# Step 4 data.head(20)
You can see that marking the missing values had the intended effect.
Up till now, you analyzed essential trends when data is missing and how you can make use of simple statistical measures to get a hold of it. Now, you will impute the missing values using Mean Imputation which is essentially imputing the mean of the respective column in place of missing values.
# Step 5: Call the fillna() function with the imputation strategy data.fillna(data.mean(), inplace=True) # Count the number of NaN values in each column to verify print(data.isnull().sum())
0 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 dtype: int64
Excellent!
This DataCamp article effectively guides you in implementing data scaling as a data preprocessing step. Be sure to check it out.
Next, you will do variable encoding.
Before that, you need a dataset which actually contains non-numeric data. You will use the famous Iris dataset for this.
# Load the dataset to a DataFrame object iris iris = pd.read_csv("",header=None)
# See first 20 rows of the dataset iris.head(20)
You can easily convert the string values to integer values using the LabelEncoder. The three class values (Iris-setosa, Iris-versicolor, Iris-virginica) are mapped to the integer values (0, 1, 2).
In this case, the fourth column/feature of the dataset contains non-numeric values. So you need to separate it out.
# Convert the DataFrame to a NumPy array iris = iris.values # Separate Y = iris[:,4]
# Label Encode string class values as integers from sklearn.preprocessing import LabelEncoder label_encoder = LabelEncoder() label_encoder = label_encoder.fit(Y) label_encoded_y = label_encoder.transform(Y)
Now, let's study another area where the need for elementary knowledge of statistics is very crucial.
Statistics for model evaluation:
You have designed and developed your machine learning model. Now, you want to evaluate the performance of your model on the test data. In this regards, you seek help of various statistical metrics like Precision, Recall, ROC, AUC, RMSE, etc. You also seek help from multiple data resampling techniques such as k-fold Cross-Validation.
Statistics can effectively be used to:
- Estimate a hypothesis accuracy
- Determine the error of two hypotheses
- Compare learning algorithms using McNemar's test
It is important to note that the hypothesis refers to learned models; the results of running a learning algorithm on a dataset. Evaluating and comparing the hypothesis means comparing learned models, which is different from evaluating and comparing machine learning algorithms, which could be trained on different samples from the same problem or various problems.
Let's study Gaussian and Descriptive statistics now.
Introduction to Gaussian and Descriptive stats:
A sample of data is nothing but a snapshot from a broader population of all the potential observations that could be taken from a domain or generated by a process.
Interestingly, many observations fit a typical pattern or distribution called the normal distribution, or more formally, the Gaussian distribution. This is the bell-shaped distribution that you may be aware of. The following figure denotes a Gaussian distribution:
Source: HyperPhysics
Gaussian processes and Gaussian distributions are whole another sub-fields unto themselves. But, you will now study two of the most essential ingredients that build the entire world of Gaussian distributions in general.
Any sample data taken from a Gaussian distribution can be summarized with two parameters:
- Mean: The central tendency or most likely value in the distribution (the top of the bell).
- Variance: The average difference that observations have from the mean value in the distribution (the spread).
The term variance also gives rise to another critical term, i.e., standard deviation, which is merely the square root of the variance.
The mean, variance, and standard deviation can be directly calculated from data samples using
numpy.
You will first generate a sample of 100 random numbers pulled from a Gaussian distribution with a mean of 50 and a standard deviation of 5. You will then calculate the summary statistics.
First, you will import all the dependencies.
# Dependencies from numpy.random import seed from numpy.random import randn from numpy import mean from numpy import var from numpy import std
Next, you set the random number generator seed so that your results are reproducible.
seed(1)
# Generate univariate observations data = 5 * randn(10000) + 50
# Calculate statistics print('Mean: %.3f' % mean(data)) print('Variance: %.3f' % var(data)) print('Standard Deviation: %.3f' % std(data))
Mean: 50.049 Variance: 24.939 Standard Deviation: 4.994
Close enough, eh?
Let's study the next topic now.
Variable correlation:
Generally, the features that are contained in a dataset can often be related to each other which is very obvious to happen in practice. In statistical terms, this relationship between the features of your dataset (be it simple or complex) is often termed as correlation.
It is crucial to find out the degree of the correlation of the features in a dataset. This step essentially serves you as feature selection which concerns selecting the most important features from a dataset. This step is one of the most vital steps in a standard machine learning pipeline as it can give you a tremendous accuracy boost that too within a lesser amount of time.
For better understanding and to keep it more practical let's understand why features can be related to each other:
- One feature can be a determinant of another feature
- One feature could be associated with another feature in some degree of composition
- Multiple features can combine and give birth to another feature
Correlation between the features can be of three types: - Positive correlation where both the feature change in the same direction, Neutral correlation when there is no relationship of the change in the two features, Negative correlation where both the features change in opposite directions.
Correlation measurements form the fundamental of filter-based feature selection techniques. Check this article if you want to study more about feature selection.
You can mathematically the relationship between samples of two variables using a statistical method called Pearson’s correlation coefficient, named after the developer of the method, Karl Pearson.
You can calculate the Pearson's correlation score by using the
corr() function of
pandas with the
method parameter as
pearson. Let's study the correlation between the features of the Pima Indians Diabetes dataset that you used earlier. You already have the data in good shape.
# Data data.head()
# Create the matrix of correlation score between the features and the label scoreTable = data.corr(method='pearson')
# Visulaize the matrix data.corr(method='pearson').style.format("{:.2}").background_gradient(cmap=plt.get_cmap('coolwarm'), axis=1)
You can clearly see the Pearson's correlation between all the features and the label of the dataset.
In the next section, you will study non-parametric statistics.
Non-parametric statistics:
A large portion of the field of statistics and statistical methods is dedicated to data where the distribution is known.
Non-parametric statistics comes in handy when there is no or few information available about the population parameters. Non-parametric tests make no assumptions about the distribution of data.
In the case where you are working with nonparametric data, specialized nonparametric statistical methods can be used that discard all information about the distribution. As such, these methods are often referred to as distribution-free methods.
Bu before a nonparametric statistical method can be applied, the data must be converted into a rank format. Statistical methods that expect data in a rank format are sometimes called rank statistics. Examples of rank statistics can be rank correlation and rank statistical hypothesis tests. Ranking data is exactly as its name suggests.
A widely used nonparametric statistical hypothesis test for checking for a difference between two independent samples is the Mann-Whitney U test, named for Henry Mann and Donald Whitney.
You will implement this test in Python via the
mannwhitneyu() which is provided by
SciPy.
# The dependencies that you need from scipy.stats import mannwhitneyu from numpy.random import rand # seed the random number generator seed(1)
# Generate two independent samples data1 = 50 + (rand(100) * 10) data2 = 51 + (rand(100) * 10) # Compare samples stat, p = mannwhitneyu(data1, data2) print('Statistics = %.3f, p = %.3f' % (stat, p)) # Interpret alpha = 0.05 if p > alpha: print('Same distribution (fail to reject H0)') else: print('Different distribution (reject H0)')
Statistics = 4077.000, p = 0.012 Different distribution (reject H0)
alpha is the threshold parameter which is decided by you. The
mannwhitneyu() returns two things:
statistic: The Mann-Whitney U statistic, equal to min(U for x, U for y) if alternative is equal to None (deprecated; exists for backward compatibility), and U for y otherwise.
pvalue: p-value assuming an asymptotic normal distribution.
If you want to study the other methods of Non-parametric statistics, you can do it from here.
The other two popular non-parametric statistical significance tests that you can use are:
That calls for a wrap up!
You have finally made it to the end. In this article, you studied a variety of essential statistical concepts that play very crucial role in your machine learning projects. So, understanding them is just important.
From mere an introduction to statistics, you took it to statistical rankings that too with several implementations. That is definitely quite a feat. You studied three different datasets, exploited
pandas and
numpy functionalities to the fullest and moreover, you used
SciPy as well. Next are some links for you if you want to take things further:
Following are the resources I took help from for writing this blog:
- Machine Learning Mastery mini course on Statistics
- A Gentle Introduction to Statistical Sampling and Resampling
-
- Statistical Learning course by Stanford University
Let me know your views/queries in the comments section. Also, check out DataCamp's course on "Statistical Thinking in Python" which is very practically aligned. | https://www.datacamp.com/community/tutorials/demystifying-crucial-statistics-python | CC-MAIN-2021-49 | refinedweb | 4,635 | 55.34 |
I
I have a stored procedure which is built on PIVOT fuction. I am using the stored procedure to generate a report.
some time the stored procedure will generate the variable no of colomns, in ssrs dataet is updating with variable no of colomns, at that time report is not supporting.
i am using table control in SSRS. some one suggested me matrix control, but i need to change my stored procedure to work with matirx control.
i dont want to change the stored procedure.
can any one suggest me how to resolve this in SSRS.
Hi,
I want to periodically insert data from an Excel file into a relational table using an SSIS package. However, each time the number of columns in the Excel file may vary. For example, the first time the Excel file may have 100 rows, the second time 105 rows, etc.
Can this be done in SSIS and if yes what is the solution?
Thanks in advance,
CK
Hi there, I've just started looking into CLR stored procedures.
I'd like to create a stored procedure that takes in a variable number of parameters in the same way that
sp_executesql does.
I've tried creating the CLR proc as follows;
using System;
using System.Data.SqlTypes;
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;
public class StoredProcedures
{
[Microsoft.SqlServer.Server.SqlProcedure]
public static void PriceSum(params SqlString[] parms)
{
}
}
but this results in the following error;
Error 1 Column, parameter, or variable #1: Cannot find data type SqlString[]. StringFactory
so I'm guessing it can't be done.
Is there a way to mimic sp_executesql ?
I'm trying to use a variable in DATEADD(dd, @dt1, GETDATE()). This always throws an error because it can't convert the variable properly; even when I try something like DATEADD(dd, CONVERT(int, @dt1), GETDATE()), it still errors.
Any help is appreciated!
I have a schema where one of my columns in the XML can appear 0 or 1 times.
That is fairly easy when it is a subquery. But I don't know how to do it in a main query.
Here is an example of what I want:
<Foo>
<Id>3</Id>
</Foo>
<Foo>
<Id>4</Id>
<Name>Bar</Name>
</Foo>
- The Docker executor
- Workflow
- The
imagekeyword
- The
serviceskeyword
- Define image and services from
.gitlab-ci.yml
- Define image and services in
config.toml
- Define an image from a private Docker registry
- Accessing the services
- Configuring services
- Mounting a directory in RAM
- Build directory in service
- The builds and cache storage
- The persistent storage
- The persistent storage for builds
- The privileged mode
- The ENTRYPOINT
- How pull policies work
- Docker vs Docker-SSH (and Docker+Machine vs Docker-SSH+Machine)
The Docker executor
GitLab Runner can use Docker to run builds on user provided images. This is possible with the use of Docker executor.
The Docker executor when used with GitLab CI, connects to Docker Engine
and runs each build in a separate and isolated container using the predefined
image that is set up in
.gitlab-ci.yml and in accordance with
config.toml.
That way you can have a simple and reproducible build environment that can also run on your workstation. The added benefit is that you can test all the commands that we will explore later from your shell, rather than having to test them on a dedicated CI server.
Workflow
The Docker executor divides the build into multiple steps:
- Prepare: Create and start the services.
- Pre-build: Clone, restore cache and download artifacts from previous stages. This is run on a special Docker Image.
- Build: User build. This is run on the user-provided docker image.
- Post-build: Create cache, upload artifacts to GitLab. This is run on a special Docker Image.
The special Docker Image is based on Alpine Linux and contains all the tools required to run the prepare step of the build: the Git binary, and the Runner binary for supporting caching and artifacts. You can find the definition of this special image in the official Runner repository.
The
image keyword
The
image keyword is the name of the Docker image that is present in the
local Docker Engine (list all images with
docker images) or any image that
can be found at Docker Hub. For more information about images and Docker
Hub please read the Docker Fundamentals documentation.
In short, with
image we refer to the docker image, which will be used to
create a container on which your build will run.
If you don't specify the namespace, Docker implies
library, which includes all
official images. That's why you'll often see the
library part omitted in
.gitlab-ci.yml and
config.toml.
For example you can define an image like
image: ruby:2.1, which is a shortcut
for
image: library/ruby:2.1.
Then, for each Docker image there are tags, denoting the version of the image.
These are defined with a colon (
:) after the image name. For example, for
Ruby you can see the supported tags at. If you
don't specify a tag (like
image: ruby),
latest is implied.
The
services keyword
The
services keyword defines just another Docker image that is run during
your build and is linked to the Docker image that the
image keyword defines. This allows you to access the service image during build time. You can see some widely used services examples in the relevant documentation of CI services examples.
How is service linked to the build
To better understand how the container linking works, read Linking containers together.
To summarize, if you add
mysql as service to your application, this image
will then be used to create a container that is linked to the build container.
According to the workflow this is the first step that is performed
before running the actual builds.
The service container for MySQL will be accessible under the hostname
mysql.
So, in order to access your database service you have to connect to the host
named
mysql instead of a socket or
localhost.
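For example, a minimal `.gitlab-ci.yml` along these lines starts MySQL as a service and reaches it through the `mysql` hostname (the database name, credentials, and test job here are illustrative assumptions, not prescribed values):

```yaml
image: ruby:2.1

services:
- mysql:latest

variables:
  # Passed to the service container; the values are only examples.
  MYSQL_DATABASE: test_db
  MYSQL_ROOT_PASSWORD: example

test:
  script:
  # Connect to the host named "mysql", not localhost or a socket.
  - mysql --host=mysql --user=root --password="$MYSQL_ROOT_PASSWORD" test_db -e "SELECT 1"
```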
Define image and services in
config.toml
Look for the
[runners.docker] section:
[runners.docker]
  image = "ruby:2.1"
  services = ["mysql:latest", "postgres:latest"]
The image and services defined this way will be added to all builds run by
that Runner, so even if you don't define an
image inside
.gitlab-ci.yml,
the one defined in
config.toml will be used.
Define an image from a private Docker registry
Starting with GitLab Runner 0.6.0, you are able to define images located in private registries that could also require authentication.
All you have to do is be explicit on the image definition in
.gitlab-ci.yml.
image: my.registry.tld:5000/namespace/image:tag
In the example above, GitLab Runner will look at
my.registry.tld:5000 for the
image
namespace/image:tag.
If the repository is private you need to authenticate your GitLab Runner in the registry. Read more on using a private Docker registry.
Accessing the services
Let's say that you need a Wordpress instance to test some API integration with your application.
You can then use for example the tutum/wordpress as a service image in your
.gitlab-ci.yml:
services:
- tutum/wordpress:latest
When the build is run,
tutum/wordpress will be started first and you will have
access to it from your build container under the hostname
tutum__wordpress
and
tutum-wordpress.
The GitLab Runner creates two alias hostnames for the service that you can use alternatively. The aliases are taken from the image name following these rules:
- Everything after
:is stripped
- For the first alias, the slash (
/) is replaced with double underscores (
__)
- For the second alias, the slash (
/) is replaced with a single dash (
-)
Using a private service image will strip any port given and apply the rules as
described above. A service
registry.gitlab-wp.com:4999/tutum/wordpress will
result in hostname
registry.gitlab-wp.com__tutum__wordpress and
registry.gitlab-wp.com-tutum-wordpress. Note that secure variables are only passed to the build container.
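These naming rules are simple enough to sketch in a few lines of Python (an illustration of the rules above, not the Runner's actual code; the function name is made up):

```python
import re

def service_aliases(image):
    """Derive the two service hostname aliases from an image name."""
    # Drop the tag: a ":" in the last path component (e.g. ":latest").
    name = re.sub(r':[^/]*$', '', image)
    # Drop a registry port: a ":port" in the host component, before the first "/".
    name = re.sub(r'^([^/:]+):[0-9]+/', r'\1/', name)
    # First alias uses "__", second uses "-".
    return name.replace('/', '__'), name.replace('/', '-')

print(service_aliases("tutum/wordpress:latest"))
# → ('tutum__wordpress', 'tutum-wordpress')
```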
Mounting a directory in RAM
You can mount a path in RAM using tmpfs. This can speed up the time required to test if there is a lot of I/O related work, such as with databases.
If you use the
tmpfs and
services_tmpfs options in the runner configuration, you can specify multiple paths, each with its own options. See the docker reference for details.
This is an example
config.toml to mount the data directory for the official Mysql container in RAM.
[runners.docker]
  # For the main container
  [runners.docker.tmpfs]
      "/var/lib/mysql" = "rw,noexec"

  # For services
  [runners.docker.services_tmpfs]
      "/var/lib/mysql" = "rw,noexec"
Build directory in service
Since version 1.5 GitLab Runner mounts a
/builds directory to all shared services.
See an issue:
PostgreSQL service example
See the specific documentation for using PostgreSQL as a service.
MySQL service example
See the specific documentation for using MySQL as a service.
The services health check
After the service is started, GitLab Runner waits some time for the service to be responsive. Currently, the Docker executor tries to open a TCP connection to the first exposed service in the service container.
You can see how it is implemented in this Dockerfile.
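The same idea can be sketched in a few lines of Python: poll until a TCP connect succeeds or a deadline passes (this is an illustration of the approach, not the Runner's actual implementation):

```python
import socket
import time

def wait_for_service(host, port, timeout=30.0):
    """Return True once a TCP connection to (host, port) succeeds, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # Try to open (and immediately close) a TCP connection.
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.5)  # service not up yet; retry
    return False
```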
The builds and cache storage
The Docker executor by default stores all builds in
/builds/<namespace>/<project-name> and all caches in
/cache (inside the
container).
You can overwrite the
/builds and
/cache directories by defining the
builds_dir and
cache_dir options under the
[[runners]] section in
config.toml. This will modify where the data are stored inside the container.
If you modify the
/cache storage path, you also need to make sure to mark this
directory as persistent by defining it in
volumes = ["/my/cache/"] under the
[runners.docker] section in
config.toml.
Read the next section of persistent storage for more information.
The persistent storage
The Docker executor can provide a persistent storage when running the containers.
All directories defined under
volumes = will be persistent between builds.
The
volumes directive supports 2 types of storage:
- <path> - the dynamic storage. The <path> is persistent between subsequent runs of the same concurrent job for that project. The data is attached to a custom cache container: runner-<short-token>-project-<id>-concurrent-<job-id>-cache-<unique-id>.
- <host-path>:<path>[:<mode>] - the host-bound storage. The <path> is bound to <host-path> on the host system. The optional <mode> can specify that this storage is read-only or read-write (default).
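As a sketch, both storage types can appear side by side in `config.toml` (the host paths below are illustrative assumptions):

```toml
[runners.docker]
  volumes = [
    # dynamic storage: persisted in a cache container between runs
    "/ccache",
    # host-bound storage: /mnt/runner-cache on the host, read-write
    "/mnt/runner-cache:/cache:rw",
    # host-bound storage, read-only
    "/etc/ssl/certs:/etc/ssl/certs:ro",
  ]
```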
The persistent storage for builds
If you make the
/builds to be the host-bound storage, your builds will be stored in:
/builds/<short-token>/<concurrent-id>/<namespace>/<project-name>, where:
<short-token>is a shortened version of the Runner's token (first 8 letters)
<concurrent-id>is a unique number, identifying the local job ID on the particular Runner in context of the project
The privileged mode
The Docker executor supports a number of options that allow you to fine-tune the
build container. One of these options is the
privileged mode.
Use docker-in-docker with privileged mode
The configured
privileged flag is passed to the build container and all
services, making it easy to use the docker-in-docker approach.
First, configure your Runner (config.toml) to run in
privileged mode:
[[runners]]
  executor = "docker"
  [runners.docker]
    privileged = true
Then, make your build script (
.gitlab-ci.yml) to use Docker-in-Docker
container:
image: docker:git

services:
- docker:dind

build:
  script:
  - docker build -t my-image .
  - docker push my-image
The ENTRYPOINT
The Docker executor doesn't overwrite the
ENTRYPOINT of a Docker image.
That means that if your image defines the
ENTRYPOINT and doesn't allow to run
scripts with
CMD, the image will not work with the Docker executor.
With the use of
ENTRYPOINT it is possible to create a special Docker image that
would run the build script in a custom environment, or in secure mode.
You may think of creating a Docker image that uses an
ENTRYPOINT that doesn't
execute the build script, but does execute a predefined set of commands, for
example to build the docker image from your directory. In that case, you can
run the build container in privileged mode, and make
the build environment of the Runner secure.
Consider the following example:
Create a new Dockerfile:
FROM docker:dind
ADD entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/bin/sh", "/entrypoint.sh"]
Create a bash script (
entrypoint.sh) that will be used as the
ENTRYPOINT:
#!/bin/sh

dind docker daemon --host=unix:///var/run/docker.sock \
    --host=tcp://0.0.0.0:2375 \
    --storage-driver=vfs &

docker build -t "$BUILD_IMAGE" .
docker push "$BUILD_IMAGE"
Push the image to the Docker registry.
Run Docker executor in
privilegedmode. In
config.tomldefine:
[[runners]]
  executor = "docker"
  [runners.docker]
    privileged = true
In your project use the following
.gitlab-ci.yml:
variables:
  BUILD_IMAGE: my.image

build:
  image: my/docker-build:image
  script:
  - Dummy Script
This is just one of the examples. With this approach the possibilities are limitless.
How pull policies work
When using the
docker or
docker+machine executors, you can set the
pull_policy parameter which defines how the Runner will work when pulling
Docker images (for both
image and
services keywords).
Note: If you don't set any value for the
pull_policy parameter, then the Runner will use the
always pull policy as the default value.
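For example, to select the `if-not-present` policy described below, a Runner's `config.toml` would contain something along these lines:

```toml
[[runners]]
  executor = "docker"
  [runners.docker]
    pull_policy = "if-not-present"
```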
Now let's see how these policies work.
Using the
never pull policy
The
never pull policy disables images pulling completely. If you set the
pull_policy parameter of a Runner to
never, then users will be able
to use only the images that have been manually pulled on the docker host
the Runner runs on.
If an image cannot be found locally, then the Runner will fail the build with an error similar to:
Pulling docker image local_image:latest ...
ERROR: Build failed: Error: image local_image:latest not found
When to use this pull policy?
This pull policy should be used if you want or need to have a full control on which images are used by the Runner's users. It is a good choice for private Runners that are dedicated to a project where only specific images can be used (not publicly available on any registries).
When not to use this pull policy?
This pull policy will not work properly with most auto-scaled
Docker executor use cases. Because of how auto-scaling works, the
never
pull policy may be usable only when using a pre-defined cloud instance
image for the chosen cloud provider. The image needs to contain an installed
Docker Engine and a local copy of the used images.
Using the
if-not-present pull policy
When the
if-not-present pull policy is used, the Runner will first check
if the image is present locally. If it is, then the local version of
the image will be used. Otherwise, the Runner will try to pull the image.
When to use this pull policy?
This pull policy is a good choice if you want to use images pulled from remote registries while reducing the time spent analyzing image layer differences, for example when using heavy and rarely updated images. In that case, you will occasionally need to remove the image manually from the local Docker Engine store to force an update of the image.
It is also a good choice if you need to use images that are built and available only locally, while still allowing images to be pulled from remote registries.
When not to use this pull policy?
This pull policy should not be used if your builds use images that are updated frequently and need to be used in their most recent versions. In such a situation, the network load reduction created by this policy may be outweighed by the need to delete local copies of images very frequently.
This pull policy should also not be used if your Runner can be used by different users who should not have access to each other's private images. In particular, do not use this pull policy for shared Runners.
To understand why the
if-not-present pull policy creates security issues
when used with private images, read the
security considerations documentation.
Using the
always pull policy
The
always pull policy will ensure that the image is always pulled.
When
always is used, the Runner will try to pull the image even if a local
copy is available. If the image is not found, then the build will
fail with an error similar to:
Pulling docker image registry.tld/my/image:latest ...
ERROR: Build failed: Error: image registry.tld/my/image:latest not found
Note: For versions prior to
v1.8, when using the
always pull policy, it could fall back to a local copy of an image and print a warning:
Pulling docker image registry.tld/my/image:latest ...
WARNING: Cannot pull the latest version of image registry.tld/my/image:latest : Error: image registry.tld/my/image:latest not found
WARNING: Locally found image will be used instead.
That is changed in version
v1.8. To understand why we changed this and how incorrect usage of it may be revealed, please look into issue #1905.
When to use this pull policy?
This pull policy should be used if your Runner is publicly available and configured as a shared Runner in your GitLab instance. It is the only pull policy that can be considered as secure when the Runner will be used with private images.
This is also a good choice if you want to force users to always use the newest images.
Also, this will be the best solution for an auto-scaled configuration of the Runner.
When not to use this pull policy?
This pull policy will definitely not work if you need to use locally stored images. In this case, the Runner will skip the local copy of the image and try to pull it from the remote registry. If the image was built locally and doesn't exist in any public registry (and especially in the default Docker registry), the build will fail with:
Pulling docker image local_image:latest ...
ERROR: Build failed: Error: image local_image:latest not found
Docker vs Docker-SSH (and Docker+Machine vs Docker-SSH+Machine)
Note: Starting with GitLab Runner 10.0, both docker-ssh and docker-ssh+machine executors are deprecated and will be removed in one of the upcoming releases.
We provide support for a special type of Docker executor, namely Docker-SSH (and the autoscaled version: Docker-SSH+Machine). Docker-SSH uses the same logic as the Docker executor, but instead of executing the script directly, it uses an SSH client to connect to the build container.
Docker-ssh then connects to the SSH server that is running inside the container using its internal IP.
This executor is no longer maintained and will be removed in the near future.
Many people have been frustrated by the fact that there seems to be no way to let the browser display arbitrary html generated by an Applet. Ideally what would be needed is something akin to the java.applet.AppletContext.showDocument() method, but which takes e.g. an InputStream instead of a URL. Since this does not exist here are some ways of getting the desired results that I'm aware of. The first one requires swing, which may mean a large download; the second one is more work but is probably smaller; the third requires help from a server; and finally the fourth one only works under Netscape (version 3.0b7 and later) and uses Javascript.
Swing's JEditorPane is capable of displaying HTML 3.0. If you are willing to require users to already have Swing, or to download it (it's a couple of MB), then this is the easiest option:
JEditorPane disp = new JEditorPane("text/html", html_text);
disp.setEditable(false);
Actually some are starting to come out. One you can try and use is the JavaBrowser - this is really a mini web browser written in Java and therefore contains a simple (but usable) display engine. You should be able to extract the necessary parts from there. Another option is the ICE Browser which offers better html support, but is not free for commercial use. A third possiblity is to use Netscape's IFCs - this also contains an html displayer, but the whole thing is rather large. Note that the JavaBrowser and Netscape's IFCs only display simple HTML (something like HTML 1.0), so you can't display fancy things like tables (you'll have to use the ICE Browser or write your own code for that).
You can POST (or PUT) your html to a server (e.g. using URLConnection) and have the server put the html in a file; then use AppletContext.showDocument() to retrieve that file and have it displayed in a new window (or frame).
Another possibility for very short html is to put the html in the query string of a url which is fetched via a showDocument(); the url itself then points to a simple echo cgi-script which returns the query string.
If you have Netscape version 3.0b7 or later you can use the Javascript interface (see LiveConnect) or the "javascript:" URL to display html. To help you get started here is an example applet (Note: you must have Javascript enabled for these to work).
When you click on either button it will cause the browser to display
the html code in the corresponding text field as a new document.
The LiveConnect button does this via the LiveScript support,
and the showDocument() button via the applet context's
showDocument() method given a
javascript: URL.
Here is the whole Applet:
import java.awt.Event;
import java.awt.Button;
import java.awt.TextField;
import java.awt.FlowLayout;
import java.applet.Applet;
import java.net.URL;
import java.net.MalformedURLException;
import netscape.javascript.JSObject;
import netscape.javascript.JSException;

public class JScriptExample extends Applet {
    private String text1 = "<HTML><HEAD><TITLE>Wow</TITLE></HEAD><BODY><H1>Gimme more!</H1></BODY></HTML>";
    private String text2 = "<HTML><HEAD><TITLE>Wowee</TITLE></HEAD><BODY><H1>Gimme more!</H1></BODY></HTML>";
    private TextField txt1, txt2;
    private JSObject win, doc;

    public void init() {
        setLayout(new FlowLayout(FlowLayout.LEFT));
        txt1 = new TextField(text1, 40);
        txt2 = new TextField(text2, 40);
        add(txt1);
        add(new Button("LiveConnect"));
        add(txt2);
        add(new Button("showDocument()"));
    }

    public void start() {
        win = JSObject.getWindow(this);
        doc = (JSObject) win.getMember("document");
    }

    public boolean action(Event evt, Object obj) {
        if (obj.equals("LiveConnect")) {
            Object[] args = { txt1.getText() };
            doc.call("writeln", args);
            return true;
        }
        if (obj.equals("showDocument()")) {
            URL js;
            try {
                js = new URL("javascript:\"" + txt2.getText() + "\"");
            } catch (MalformedURLException mue) {
                return true;
            }
            getAppletContext().showDocument(js);
            return true;
        }
        return super.action(evt, obj);
    }
}
Using LiveConnect basically what you do is get a handle on the current document (the getWindow() and getMember() methods) and then display the html with the Javascript writeln function (via the doc.call() method).
Since this uses the netscape.javascript package you must include the java_301.zip class archive in your classpath when compiling the applet. I actually just use a 'javac -classpath "java_301.zip:." ...'. Furthermore the Applet tag must include the MAYSCRIPT attribute:
<APPLET CODE="JScriptExample.class" WIDTH=500 HEIGHT=80 MAYSCRIPT> </APPLET>
Paul Houle also has a nice demonstration of how to use LiveConnect; if you have trouble with the above example you might want to try his instead. It's similar, except that is uses the eval() call to open a window, write to it, and then close it. | https://www.innovation.ch/java/HTTPClient/disp_html.html | CC-MAIN-2018-34 | refinedweb | 773 | 58.58 |
I.
Preparation
Here are some steps to take before writing any ASP.NET code:
- Create new ASP.NET MVC application.
- Download jQuery webcam plugin and extract it.
- Put jquery.webcam.js, jscam.swf and jscam_canvas_only.swf files to Scripts folder of web application.
Now we are ready to go.
Create webcam page
We start with creating default page of web application. I’m using Index view of Home controller.
@{ ViewBag. </script> <script> $("#Camera").webcam({ width: 320, height: 240, mode: "save", swffile: "@Url.Content("~/Scripts/jscam.swf")", onTick: function () { }, onSave: function () { }, onCapture: function () { webcam.save("@Url.Content("~/Home/Capture")/"); }, debug: function () { }, onLoad: function () { } }); </script> } <h2>Index</h2> <input type="button" value="Shoot!" onclick="webcam.capture();" /> <div id="Camera"></div>
We initialize webcam plugin in additional scripts block offered by layout view. To send webcam capture to server we have to use webcam plugin in save mode. onCapture event is the one where we actually give command to send captured image to server. Button with value “Shoot!” is the one we click at right moment.
Saving image to server hard disk
Now let’s save captured image to server hard disk. We add new action called Capture to Home controller. This action reads image from input stream, converts it from hex dump to byte array and then saves the result to disk.
Credits for String_To_Bytes2() method that I quickly borrowed go to Kenneth Scott and his blog posting Convert Hex String to Byte Array and Vice-Versa.
public class HomeController : Controller { public ActionResult Index() { return View(); } public void Capture() { var stream = Request.InputStream; string dump; using (var reader = new StreamReader(stream)) dump = reader.ReadToEnd(); var path = Server.MapPath("~/test.jpg"); System.IO.File.WriteAllBytes(path, String_To_Bytes2(dump)); } private byte[] String_To_Bytes2(string strInput) { int numBytes = (strInput.Length) / 2; byte[] bytes = new byte[numBytes]; for (int x = 0; x < numBytes; ++x) { bytes[x] = Convert.ToByte(strInput.Substring(x * 2, 2), 16); } return bytes; } }
Before running the code make sure you can write files to disk. Otherwise nasty access denied errors will come.
Testing application
Now let’s run the application and see what happens.
Whoops… we have to give permission to use webcam and microphone to Flash before we can use webcam. Okay, it is for our security.
After clicking Allow I was able to see picture that was forgot to protect with security message.
This tired hacker in dark room is actually me, so it seems like JQuery webcam plugin works okay :)
Conclusion
jQuery webcam plugin is simple and easy to use plugin that brings basic webcam functionalities to your web application. It was pretty easy to get it working and to get image from webcam to server hard disk. On ASP.NET side we needed simple hex dump conversion to make hex dump sent by webcam plugin to byte array before saving it as JPG-file.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/using-jquery-webcam-plugin | CC-MAIN-2016-07 | refinedweb | 486 | 59.4 |
Hi,

Current values are taken from the floating-point reference code. So if they were designed with a 16-bit fixed-point implementation in mind, this is not my fault :)

> > +/**
> > + * \brief rouding function from reference code
>
> rouNding

This...

> > + * FIXME: found system replacement for it
>
> You mean "find" instead of "found"?

...this...

> > +static inline int g729_round(float f)
> > +{
> > +    int i=ceil(f*2);
> > +    return i>>1;
> > +}
>
> Uh, is that just ordinary "round to nearest", which probably is the
> default for most FPUs anyway?

...and this were fixed by removing the rounding routine. It was used in the early code for maximum compliance with the fixed-point code's result. In the current code, rounding is done only at the final stage, and +/-1 in the resulting PCM signal is not an issue, imho.

> > +/**
> > + * \brief pseudo random number generator
> > + */
> > +static inline uint16_t g729_random(G729A_Context* ctx)
> > +{
> > +    return ctx->rand_seed = (uint16_t)(31821 * (uint32_t)ctx->rand_seed + 13849 + ctx->rand_seed);
>
> AFAICT rand_seed can not be negative, so why not just make rand_seed
> unsigned instead of int and:
>     return ctx->rand_seed = 31822*ctx->rand_seed + 13849;

Fixed as suggested; moreover, the previous implementation was wrong.

> > +/**
> > + * \brief Check parity bit (3.7.2)
> > + * \param P1 Pitch delay first subframe
> > + * \param P0 Parity bit for Pitch delay
> > + *
> > + * \return 1 if parity check is ok, 0 - otherwise
> > + */
> > +int g729_parity_check(int P1, int P0)
> > +{
> > +    int P=P1>>2;
> > +    int S=P0&1;
> > +    int i;
> > +
> > +    for(i=0; i<6; i++)
> > +    {
> > +        S ^= P&1;
> > +        P >>= 1;
> > +    }
> > +    S ^= 1;
> > +    return (!S);
> > +}
>
> I am fairly certain that we already have a function for that, since I
> have seen a much less naive implementation...
> That may have been in MPlayer though, so the idea is:
>     P1 >>= 1;
>     P1 = (P1 & ~1) | (P0 & 1);
>     P1 ^= P1 >> 4;
>     P1 ^= P1 >> 2;
>     P1 ^= P1 >> 1;
>     return P1 & 1;

I have already seen the above code somewhere on the mailing list too. Fixed as suggested.

> > +/**
> > + * \brief Decoding of the adaptive-codebook vector delay for first subframe (4.1.3)
> > + * \param ctx private data structure
> > + * \param P1 Pitch delay first subframe
> > + * \param intT [out] integer part of delay
> > + * \param frac [out] fractional part of delay [-1, 0, 1]
> > + */
> > +static void g729_decode_ac_delay_subframe1(G729A_Context* ctx, int P1, int* intT, int* frac)
> > +{
> > +    /* if no parity error */
> > +    if(!ctx->bad_pitch)
> > +    {
> > +        if(P1<197)
> > +        {
> > +            *intT=1.0*(P1+2)/3+19;
> > +            *frac=P1-3*(*intT)+58;
> > +        }
> > +        else
> > +        {
> > +            *intT=P1-112;
> > +            *frac=0;
> > +        }
> > +    }
> > +    else{
> > +        *intT=ctx->intT2_prev;
> > +        *frac=0;
> > +    }
> > +    ctx->intT1=*intT;
>
> Why returning the value both via ctx and intT?
>
> > +    //overflow can occure in 4.4k case
>
> "occur", without "e"

Fixed.

> > +/**
> > + * \brief Decoding of the adaptive and fixed codebook gains from previous subframe (4.4.2)
> > + * \param ctx private data structure
> > + * \param gp pointer to variable receiving quantized fixed-codebook gain (gain pitch)
> > + * \param gc pointer to variable receiving quantized adaptive-codebook gain (gain code)
> > + */
> > +static void g729_get_gain_from_previous(G729A_Context *ctx, float* gp, float* gc)
> > +{
> > +    /* 4.4.2, Equation 93 */
> > +    *gc=0.98*ctx->gain_code;
> > +    ctx->gain_code=*gc;
> > +
> > +    /* 4.4.2, Equation 94 */
> > +    *gp=FFMIN(0.9*ctx->gain_pitch, 0.9);
> > +    ctx->gain_pitch = *gp;
> > +}
>
> Again, why returning the value both ways? If performance is an issue I
> have the feeling there are better optimization-opportunities (not to
> mention that the compiler should inline this anyway).

Will be fixed.
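As an aside, the parity-check rewrite suggested above can be cross-checked quickly in Python (a sketch for verification only, not part of the patch): the naive loop and the branch-free XOR-fold agree for every 8-bit pitch index and parity bit.

```python
def parity_naive(P1, P0):
    # Direct transcription of the original loop-based check.
    P = P1 >> 2
    S = P0 & 1
    for _ in range(6):
        S ^= P & 1
        P >>= 1
    S ^= 1
    return int(not S)

def parity_fast(P1, P0):
    # The suggested branch-free XOR-fold version.
    P1 >>= 1
    P1 = (P1 & ~1) | (P0 & 1)
    P1 ^= P1 >> 4
    P1 ^= P1 >> 2
    P1 ^= P1 >> 1
    return P1 & 1

assert all(parity_naive(p1, p0) == parity_fast(p1, p0)
           for p1 in range(256) for p0 in range(2))
```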
> > +static void g729_update_gain(G729A_Context *ctx)
> > +{
> > +    float avg_gain=0;
> > +    int i;
> > +
> > +    /* 4.4.3. Equation 95 */
> > +    for(i=0; i<4; i++)
> > +        avg_gain+=ctx->pred_vect_q[i];
> > +
> > +    avg_gain = FFMAX(avg_gain * 0.25 - 4.0, -14);
> > +
> > +    for(i=3; i>0; i--)
> > +        ctx->pred_vect_q[i]=ctx->pred_vect_q[i-1];
>
> Maybe use memmove?

Not so sure if it's a good idea.

> > +static void g729_lp_synthesis_filter(G729A_Context *ctx, float* lp, float *in, float *out, float *filter_data)
> > +{
> > +    float* tmp_buf=av_mallocz((10+ctx->subframe_size)*sizeof(float));
>
> Not exactly an inner loop, but malloc here does not look too nice to me,
> maybe a fixed-sized buffer in the context would do (or does it even fit
> on the stack)?

Replaced with "float tmp_buf[SUBFRAME_MAX_SIZE+10]".

> > +static float g729_get_signal_gain(G729A_Context *ctx, float *speech)
> > +{
> > +    int n;
> > +    float gain;
> > +
> > +    gain=0;
> > +    for(n=0; n<ctx->subframe_size; n++)
> > +        gain+=speech[n]*speech[n];
>
>     float gain = speech[0] * speech[0];
>     for (n = 1;...
>
> unless ctx->subframe_size can be 0.

subframe_size can never be zero. Fixed as suggested. Moreover, corr_t0 and corr_0 were using exactly the same code; all duplications were replaced with a get_signal_gain call.

> > +static void g729a_weighted_filter(G729A_Context *ctx, float* Az, float gamma, float *Azg)
> > +{
> > +    float gamma_tmp;
> > +    int n;
> > +
> > +    gamma_tmp=gamma;
> > +    for(n=0; n<10; n++)
> > +    {
> > +        Azg[n]=Az[n]*gamma_tmp;
> > +        gamma_tmp*=gamma;
> > +    }
>
> gamma_pow or so would be more descriptive than gamma_tmp.

Fixed.

> > +    float corellation, corr_max;
>
> correlation, 2r, 1l.

Fixed.

All fixes are done in my local tree. An updated patch will be sent later as a reply to Michael's mail.
--
Regards,
Vladimir Voroshilov
mailto:voroshil at gmail.com
JID: voroshil at gmail.com, voroshil at jabber.ru
ICQ: 95587719
"Russ P." <Russ.Paielli at gmail.com> writes: >. No. I mean that using classes as a unit of access control is wrong. A class is a unit of behaviour, but that behaviour can (and often should) come from a number of places. Common Lisp gets this right. Classes define slots for their instances; slots are named by symbols. If you can write the symbol, you can access the slot. But symbols are managed by packages: packages can export some symbols and keep others internal; and they can import symbols from other packages. The same mechanism works for functions, variables, classes, types, macros, and all the other random namespaces that Lisp (like most languages) has. Python keeps access control separate from classes. And I think I'd like it to stay that way. (Lisp's package system also solves the problem that a class's -- and its superclasses' -- attributes and methods form a namespace which isn't well-managed in many languages. Since CL uses symbols for these, and symbols belong to packages, MDW::MUMBLE isn't the same symbol as RUSS-P::MUMBLE and so they name different slots.) -- [mdw] | https://mail.python.org/pipermail/python-list/2009-January/521159.html | CC-MAIN-2016-30 | refinedweb | 190 | 73.27 |
This document is also available in these non-normative formats: XML.
Copyright © 2006 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.
This Finding addresses the question of whether or not adding new names to a (published) namespace is a sound practice.
This document has been produced by the W3C Technical Architecture Group (TAG). This finding addresses TAG issue nameSpaceState-48. The TAG approved this finding at its 3 January 2006 teleconference. Additional TAG findings, both approved and in draft state, may also be available.
Please send comments on this finding to the publicly archived TAG mailing list www-tag@w3.org (archive).
The terms MUST, SHOULD, and SHOULD NOT are used in this document in accordance with [RFC 2119].
Perhaps someone more fluent in Python‘s Multiprocessing Pool code could help me out. I am trying to connect to several hosts on my network simultaneously (N at any one time) over a socket connection and execute some RPC’s. As one host finishes, I want to add the next host into the Pool to run until all are complete.
I have a class, HClass, with some methods to do so, and a list of hostnames contained in hostlist. But I am failing to grok any of the docs.python.org examples for Pool to get this working.
A short snippet of code to illustrate what I’ve got so far:
hostlist = [h1, h2, h3, h4, ....]
poolsize = 2

class HClass:
    def __init__(self, hostname="default"):
        self.hostname = hostname

    def go(self):
        # do stuff
        # do more stuff
        ....

if __name__ == "__main__":
    objs = [HClass(hostname=current_host) for current_host in hostlist]
    pool = multiprocessing.pool(poolsize)
    results = pool.apply_async(objs.go())
So far I am blessed with this traceback:
Exception in thread Thread-2:
  ... line 319, in _handle_tasks
    put(task)
PicklingError: Can't pickle <type 'generator'>: attribute lookup __builtin__.generator failed
Where the process just hangs until I Control-C out of it.
Best answer
I would try to keep interprocess communication down to a minimum. It looks like all you really need to send is the hostname string:
for host in hostlist:
    pool.apply_async(worker, args = (host,), callback = on_return)
For example,
import multiprocessing as mp
import time
import logging

logger = mp.log_to_stderr(logging.INFO)

hostlist = ['h1', 'h2', 'h3', 'h4']*3
poolsize = 2

class HClass:
    def __init__(self, hostname="default"):
        self.hostname = hostname

    def go(self):
        logger.info('processing {h}'.format(h = self.hostname))
        time.sleep(1)
        return self.hostname

def worker(host):
    h = HClass(hostname = host)
    return h.go()

result = []
def on_return(retval):
    result.append(retval)

if __name__ == "__main__":
    pool = mp.Pool(poolsize)
    for host in hostlist:
        pool.apply_async(worker, args = (host,), callback = on_return)
    pool.close()
    pool.join()
    logger.info(result)
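Since the per-host work in the question is mostly network I/O, threads are also a reasonable fit. Here is a sketch of the same pattern using the standard library's concurrent.futures thread pool; the go() body is a placeholder for the real RPC calls, and hostlist mirrors the answer's test data:

```python
from concurrent.futures import ThreadPoolExecutor

hostlist = ['h1', 'h2', 'h3', 'h4'] * 3
poolsize = 2

class HClass:
    def __init__(self, hostname="default"):
        self.hostname = hostname

    def go(self):
        # Placeholder for the real socket/RPC work.
        return self.hostname

def worker(host):
    return HClass(hostname=host).go()

# The executor keeps at most `poolsize` workers busy at once and
# feeds in the next host as each one finishes.
with ThreadPoolExecutor(max_workers=poolsize) as pool:
    result = list(pool.map(worker, hostlist))

print(result)
```

pool.map returns results in input order, so no callback bookkeeping is needed.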
Arbitration and Translation, Part 2
Building on yesterday's post, I'm going to try to explain how Windows copes with machines with strange resource translations. I'll use two examples in this post, one related to I/O port resources and one related to interrupts.
Just for convenience, I'll duplicate the diagram from my last post, which diagrammed the address space translations in a fairly complex multi-PCI-root machine.
Into such a machine, imagine that there’s a NIC plugged into the secondary root PCI bus and an UART plugged into the ISA/LPC bus, probably soldered onto the motherboard. The resulting PnP tree would look like this:
Of course, a fully populated PnP tree would be much more complicated. If you want to see the real thing, in full, look in Device Manager and choose “Show Devices by Connection.” (I took flack a few years ago for admitting that internally, we called this “Show as God Intended.” I still think of it that way, even though I understand why no user could use it that way.) Alternatively, you can see the same thing in the kernel debugger by typing “!devnode 0 1”.
For this example, assume the following things are true:
· The UART is not an ISA PnP device. It’s enumerated by the ACPI BIOS.
· The ACPI BIOS claims (through the _PRS object under the UART) that the device requires eight consecutive I/O ports, at one of several locations.
· The ACPI BIOS claims that the device can use one of two IRQs, 2 or 5.
· The ACPI BIOS contains a “control method” (labeled _SRS) which allows the ACPI driver to set the resources of the device.
· This device lies under the PCI root bus which is “Bus 0” in the example above. It has a native I/O port address space.
These things will cause the ACPI driver to respond to IRP_MN_QUERY_RESOURCE_REQUIREMENTS for this device with a structure that means “this device should be assigned one of three I/O port blocks which is eight bytes long and it needs one IRQ, which can be either 2 or 5, not shareable, edge triggered.”
For a full description on how this statement is constructed, see the documentation on IO_RESOURCE_REQUIREMENTS_LIST in the WDK. In short, I/O Resource Requirements lists are the “set of all possible sets of resources that a device could use.” For more detail on ACPI, see the spec.
As for the NIC, assume the following:
· It is a PCI device, not PCI-X or PCI Express. The upstream bridge is a PCIe to PCI-X bridge, which allows PCI devices to be plugged in.
· It has one PCI Base Address register and that BAR is of type “I/O,” implying that it must use the I/O address space. That BAR also implies that the registers of the NIC lie in a block that is 0x100 bytes long.
· It has a “1” in its Interrupt Pin register, implying that it will trigger its INTA signal with level-triggered semantics.
· This device lies under the PCI root “Bus 1” above. It has its I/O port space mapped into memory space.
These things will cause the PCI driver to respond to IRP_MN_QUERY_RESOURCE_REQUIREMENTS with “this device should be assigned one block of I/O ports which is naturally aligned and 0x100 bytes long. It can use any single IRQ, shareable and level-triggered.”
Upon receiving the response to these IRPs, the PnP manager starts trying to satisfy the requirements. To do this, it works its way toward the root of the PnP tree looking first for bus drivers which expose an “arbiter interface” for each device type. It also queries for a “translator interface.” I’ll cover arbiters in my next post. Today’s is really only about translators. But they’re somewhat intertwined, so I’ll define arbiters today as “something which knows about a specific resource type and knows the bus-local rules for deciding how these resources are allocated.” Allocating I/O ports on a PCI bus is different from allocating them on an ISA bus.
Once the PnP manager has searched to the root of the PnP tree, it will have found some interfaces.
The exact details have changed a little bit over the years and from release to release. I believe that I’ve accurately represented the state of affairs since Vista. Incidentally, you can see these in the debugger by typing “!translator” and “!arbiter.”
Translating from ISA to Interrupt Controller Input Pins
Since the ISA/LPC bridge devnode responded with an interrupt translator interface, the PnP manager needs to translate interrupts from ISA to the parent PCI. To really understand what this means, we need to have a little history lesson.
Thirtyish years ago, somebody at IBM decided that they were going to build a “personal computer” which had a single interrupt controller chip called the “8259 Programmable Interrupt Controller (PIC).” It had eight inputs. Each of these inputs was exposed in every expansion slot. The output pins were directly connected to the processor.
A few years later, some other guy at IBM designed the “IBM PC/AT.” When they built the AT, they used an 80286 processor which had a sixteen-bit expansion bus. They also added a few I/O devices. Since the expansion bus was wider, and since they needed more interrupt controller inputs now, they added a second 8259 to the machine. This second one was chained onto the first one. Its output pin was connected to IRQ 2 on the first one. Interestingly IRQ2 was still exposed in the older part of the expansion bus, so they connected that signal to Input 1 on the second PIC. So any old eight-bit device which was triggering the IRQ2 pin on the bus was actually going to cause IRQ9 to interrupt the processor.
Fast forward twenty-six or -seven years. We still have code to comprehend this, and it’s called a “translator interface for interrupts on the ISA devnode.”
The PnP manager invokes the translator from the ISA devnode and hands it two IO_RESOURCE_REQUIREMENTS, one saying “IRQ 2” and one saying “IRQ 5,” both edge-triggered and non-shareable. The ISA devnode modifies the first one to say IRQ 9. It leaves everything else alone.
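To make the two-way mapping concrete, here is a toy Python model of just this ISA translator. The mapping table is only the IRQ 2 to IRQ 9 chaining described above; real translators are HAL/ACPI code, not this:

```python
# Toy model of the ISA devnode's interrupt translator: bus-relative IRQ 2
# is wired to input 1 of the second PIC, which the processor sees as IRQ 9.
ISA_TO_SYSTEM = {2: 9}
SYSTEM_TO_ISA = {v: k for k, v in ISA_TO_SYSTEM.items()}

def translate_irq(bus_irq):
    # Bus view -> system view, applied on the way up to the arbiter.
    return ISA_TO_SYSTEM.get(bus_irq, bus_irq)

def untranslate_irq(system_irq):
    # System view -> bus view, applied when building the raw resource list.
    return SYSTEM_TO_ISA.get(system_irq, system_irq)

requirements = [2, 5]   # the UART's alternatives, as reported by the BIOS
translated = [translate_irq(irq) for irq in requirements]
print(translated)          # [9, 5]
print(untranslate_irq(9))  # 2
```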
The PnP manager keeps looking toward the root of the tree. The PCI driver really knows very little about interrupts. (This is because the PCI spec is nearly silent on the topic. Don’t get me started on how many years I’ve spent on filling that gap.) So the PCI driver doesn’t provide translator or arbiter interfaces for interrupts. The ACPI driver, on the other hand, knows quite a bit about interrupts, as the ACPI spec has quite a bit of text allowing BIOSes to describe the ways that the motherboard designer handled interrupts in a specific machine. So the ACPI driver exposes both interfaces.
The PnP manager, at this point, can stop translating interrupts from both devices because it has reached a common parent in the PnP with exposes an arbiter for interrupts. The arbiter is then invoked to choose which resources each device will be assigned. (Again, more on that in my next post.)
Translating from I/O Ports – Step 1
For both devices, the PnP manager starts looking for translators and arbiters for the device’s I/O port claims. It finds arbiters at the PCI layer, as PCI knows how to sub-allocate I/O port space to its children. Those rules are, thankfully, laid out quite clearly in the PCI spec, and aside from a few chipsets where the chipset designer didn’t think that the PCI spec applied to him, we can successfully figure out what configuration will work at that level.
Note that no translation has happened yet. We’re still talking about I/O ports as viewed on the buses which contain the devices, where the bus cycles will definitely be tagged as “I/O.”
Translation after Arbitration
Assume that for this example, the arbiters picked this set of choices:
UART: IRQ 9 and I/O ports 0x2040 through 0x2047
NIC: IRQ 11 and I/O ports 0x2000 through 0x20FF
No, that’s not a typo. Their I/O port claims actually seem like they overlap. This is fine, as they’re disjoint address spaces on different buses. (This can’t really happen on most PCs, but it can and does happen on some machines. See my last post.)
Now that the PnP manager has a resource assignment, it has to figure out how to present that choice to two separate audiences with two very different sets of needs. The first audience is the bus drivers. Now that we’ve chosen a resource set for each device, we need to program the devices so that they actually embody those choices. For the PCI device, this involves writing 0x2000 to its I/O BAR. For the LPC-attached UART, this involves executing the _SRS control method in the ACPI namespace underneath the UART device. Both of them need to be in bus-relative terms.
The second audience is the functional drivers, for the NIC and the UART. They don’t need to see the bus-relative view, as the driver can’t really directly generate bus traffic. The FDOs are made up of driver code running on the processor, so they need the processor-relative view of those resource claims.
To achieve that, I need to show you something we internally call the “checkmark diagram.” To truly understand this diagram, I have to apologize for the fact that, in house, all the PnP trees are drawn on whiteboards with the “root” at the top and the devices are leaves down at the bottom. This corresponds nicely with diagrams of physical machines where the processors and memory are at the top and the I/O devices hang down below like little appendages. The DDK/WDK tech writers convinced us that all public documentation should have the “root” of a “tree” firmly planted in the “ground.” Oh well.
I’ve already described steps 1 through 3. After arbitration, though, the PnP manager has to put these claims back in terms of the I/O bus. The only resource that went through translation on the way to arbitration was the IRQ for the UART. So now the translator interface from the ISA devnode reverses that process and changes that 9 back into a 2.
So the resulting “raw resource” assignments are now in bus-relative terms. They’re also now in terms of CM Resource Lists. Those are documented in the WDK, too. Again, in short, a CM Resource List is a single complete set of resources that a device either is using or could be using.
The raw resource lists for the devices are:
UART: IRQ 2 and I/O Ports 0x2040 through 0x2047
NIC: IRQ 11 and I/O Ports 0x2000 through 0x20ff
Lastly, the PnP manager goes back to toward the root of the PnP tree, passing the various resource assignments to any translators that may be at each node of the tree, trying to build a different CM Resource List, this time in terms of the processor.
The ISA devnode’s Interrupt translator immediately reverses itself again, and changes that 2 back into a 9. But there’s another interrupt translator in the tree, too, at the ACPI level. That translator is actually privy to some internal choices that the interrupt arbiter made, involving the IRQL and IDT entries (and in Windows 7 and later, IOMMU Interrupt Redirection Table entries) that the arbiter chose. So that translator can translate into processor-relative terms.
For the root PCI bus which maps its I/O Port space into processor memory, ACPI supplies an I/O Port translator interface. (It knows to do this based on contents of the ACPI namespace.)
Thus the “translated resource lists” for these end up looking like this:
UART: IRQL 11, Vector 0xb3, Affinity (target processor set) 0xF0 and I/O Ports 0x2040 through 0x2047
NIC: IRQL 10, Vector 0xa9, Affinity 0x0F and memory range 0x1’00002000 through 0x1’000020FF
Presenting Resources to Drivers
When all of this is complete, there are two CM Resource Lists in the PnP manager for the device. Both get sent as part of IRP_MN_START_DEVICE.
As explained in my last post, the driver contract is that the bus driver (or a bus filter like ACPI, sometimes) programs the device using the raw resources. The function driver calls MmMapIoSpace, IoConnectInterrupt, etc., using only the translated resources.
My next post will go into detail on what arbiters do.
- Jake Oshins | http://blogs.msdn.com/b/doronh/archive/2010/05/06/translation-and-windows.aspx | CC-MAIN-2014-49 | refinedweb | 2,117 | 70.23 |
[ ]
Doug Cutting resolved HADOOP-28:
--------------------------------
Resolution: Fixed
The JSP pages are now pre-compiled to servlets that can access package-private classes.
> webapps broken
> --------------
>
> Key: HADOOP-28
> URL:
> Project: Hadoop
> Type: Bug
> Components: mapred
> Reporter: Owen O'Malley
> Attachments: jspc.patch
>
> Changing the classes to private broke the webapps.
> The required public classes are:
> org.apache.hadoop.mapred.JobInProgress
> org.apache.hadoop.mapred.JobProfile
> org.apache.hadoop.mapred.JobStatus
> org.apache.hadoop.mapred.TaskTrackerStatus
> To fix, we need one of:
> 1. The classes need to be made public again
> 2. The functionality needs to be made available through the classes that are public
> 3. The webapps need to move into the mapred package.
--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
-
For more information on JIRA, see: | http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/200602.mbox/%3C986277918.1139431303299.JavaMail.jira@ajax.apache.org%3E | CC-MAIN-2016-44 | refinedweb | 138 | 51.24 |
Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?
On the surface, this is a call to keep your code simple, but there is a corollary: debuggability is the limiting factor in software development. Whatever your metric for the “goodness” of your software - performance, features, size, velocity - if you double your effectiveness at debugging you’ll double the “goodness” of your software.
Over the last year or so, I shared on the gdbWatchPoint resource center numerous ways to help you increase the pace of your debugging.
In this article, I pick five of my favorite GDB topics that I believe can help you spend fewer hours debugging and write better software faster.
Let’s dive in.
1. GDB TUI Mode
GDB's command-line isn't intuitive, but if you spent some time discovering and learning the commands, then you'll find it very powerful.
I guess, one of the least known, but perhaps the most useful things in GDB is TUI mode, i.e. Text User Interface mode.
The trouble with the GDB command-line is that it doesn't give you the context of where you are in the program; for example, if the program hits a breakpoint, then GDB returns you just one line.
If you type Ctrl-x-a (or tui enable), then another text window opens; the source window shows the source file of the program, as well as the current line and the breakpoints. You're now in GDB's TUI mode.
It's not all sunshine though. Sometimes the screen messes up a bit, but Ctrl-l refreshes the screen.
It's unlikely that the source window alone provides you with all the information you need. Typing Ctrl-x-2 helps you cycle through different text windows and layouts:
- source window only,
- assembly window only,
- split-view with source and assembly,
- split-view with source and registers, or
- split-view assembly and registers.
These windows are not all visible at the same time. The command window with the GDB prompt and GDB output is always visible.
Ctrl-x-1 returns the layout to the command window with the source or assembly pane.
Finally, you'll notice that the arrow keys scroll the active window in TUI mode, which means that you can't use the up and down arrows to get to your previous commands as you normally would do in the GDB command-line. To move through your command history, use Ctrl-p (previous command) and Ctrl-n (next command) instead; Ctrl-b and Ctrl-f respectively move the cursor back (left) and forwards (right) to edit the line.
2. GDB Breakpoint Types
A breakpoint lets you stop the execution of the program at a time of your choice. It's an essential (and I reckon probably the most used) command in your debugging flow.
There are different breakpoint types.
Basic breakpoints
You set a basic breakpoint with the following command:
break [LOCATION], or just b [LOCATION]

Where LOCATION can be a line number, a function name, or "*" followed by an address.
(gdb) break hello.c:7
Once your program hits a breakpoint, it waits for instruction. You can navigate or step through the program using continue, stepi, and other commands. To speed up your typing, you can abbreviate all of these commands: n for next, c for continue, si for stepi - you get it.
Temporary breakpoints
A temporary breakpoint only stops the execution of the program once. When the program hits a temporary breakpoint, it deletes automatically, which saves you the trouble of managing and cleaning up obsolete breakpoints.
You set a temporary breakpoint with the following command:
tbreak [LOCATION]
(gdb) tbreak main
Tip: Using the start command instead of the usual run command sets a temporary breakpoint at main().
Conditional breakpoints
Instead of stepping through the execution of the program until it hits the breakpoint again, you can specify a condition for a breakpoint that stops the execution if met. You can write pretty much any condition you want in the programming language of the program that you're debugging, which makes conditional breakpoints very powerful and efficient.
You set a conditional breakpoint with the following command:
break [LOCATION] if [CONDITION]
Here [CONDITION] is a boolean expression, which, in GDB, is TRUE if the result is nonzero; otherwise, it is FALSE. The condition can include a function call, the value of a variable, or the result of any GDB expression.
(gdb) break my_func if i==5
You can make all breakpoint types conditional by adding the suffix if [CONDITION].
There is more you can do. You can set a condition on an existing breakpoint by using the breakpoint number as a reference.
condition [BREAKPOINT #] [CONDITION]
(gdb) condition 2 i==5
Typing condition [BREAKPOINT #] removes the condition from a breakpoint.
REGEX breakpoints
The regex breakpoint command sets an unconditional breakpoint on all functions that match a regular expression (regex) and prints a list of all breakpoints it set. Once set, these breakpoints are treated just like the breakpoints you set with the normal break command. You can delete them, disable them, or make them conditional the same way as any other breakpoint.
You set a regex breakpoint with the following command:
rbreak REGEXP
(gdb) rbreak hello.c::my_func*
When debugging C++ programs, rbreak is particularly useful for setting breakpoints on overloaded functions that are not members of any special classes.
rbreak saves you a lot of time typing and trying to remember exact function names.
In this recording, I cover the various breakpoint types, and how to use each of them in your debugging.
3. GDB’s tight integration with Python
GDB has tight integration with Python. You can do all kinds of smart things to help make detecting (and resolving) thorny bugs a breeze. Plus, there are lots of other tricks you can do to customize GDB to your particular project and debugging needs - for example, scripting routine actions, writing your own commands, and creating pretty-printers.
Not taking advantage of Python is something you may regret later, a missed opportunity to increase your debugging speed. It’s a small investment in time that pays back quickly and, over time, significantly.
You can start small and expand your Python resources over time.
A simple way to get started is to invoke the interpreter by typing python. You can now use any Python code; for example, typing print('Hello World'), followed by end, calls the Python print() function.
The Python code isn't passed to another process but runs inside the GDB process. Python truly integrates with GDB.
But to use Python in GDB properly, you must import the GDB module, which gives you all you need.
(gdb) python import gdb
Switching to TUI mode, you can use the gdb.execute command to create a breakpoint or, even better, create an object instance of gdb.Breakpoint. We can now manipulate and interrogate this object, enable or disable it, etc.
Pretty much anything you can do on the GDB command-line you can do with a breakpoint from Python. You can append commands to a breakpoint, make it conditional, or make the breakpoint specific to a thread.
The same is true for most other commands in GDB; if you can do it from the GDB command-line, then you can do it from Python.
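For example, from the GDB prompt (gdb.Breakpoint and its condition/enabled attributes are part of GDB's documented Python API; the session itself is a hypothetical sketch):

```
(gdb) python bp = gdb.Breakpoint("my_func")
(gdb) python bp.condition = "i == 5"
(gdb) python bp.enabled = False
```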
Tip: Consider making a gdbinit per project, and committing it into your project's source control so that everyone working on that project can benefit.
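A minimal per-project gdbinit might look like this (the sourced file name is a placeholder for wherever your project keeps its GDB scripts):

```
# .gdbinit committed at the project root (sketch)
set history save on        # keep command history between sessions
set print pretty on        # indent structures when printing
source tools/printers.py   # placeholder: project pretty-printers and commands
```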
In this video, I dip into the Python integration with GDB to get you started. If you've struggled with imagining what Python can do, this one helps you out.
4. GDB Pretty-Printers
We all use structures and classes in the code we write, and GDB endeavors to display what these are but misses the contextual information; when you use a handful of members, and in particular unions, then interpreting these more complicated structures can get overwhelming.
When GDB prints a value, it checks whether there is a pretty-printer registered for that value first. If there is, then GDB uses that pretty-printer to display the value. Otherwise, the value prints in the usual way.
In the example below, I didn't have a pretty-printer for the siginfo_t structure, which caused the print info command to return all the data in the structure, including expanding the unions it uses.
What do you think?
Messy and not easy to read.
I promise, creating a simple pretty-printer saves you much time staring at your computer screen.
Most pretty-printers comprise two main elements:
- The lookup function to identify the value type, and
- the printer function itself.
To see how this works, let's write a basic pretty-printer in Python that returns the si_signo value from the siginfo_t structure for an interrupt signal that the program receives.
# Start off with defining the printer as a Python object.
class SiginfoPrinter:
    # The constructor takes the value and stores it for later.
    def __init__(self, val):
        self.val = val

    # The to_string method returns the value of the
    # si_signo member of the structure.
    def to_string(self):
        signo = self.val['si_signo']
        return str(signo)

# Next, define the lookup function that returns the
# printer object when it receives a siginfo_t.
# The function takes the GDB value-type, which, in
# our example is used to look for the siginfo_t.
def my_pp_func(val):
    if str(val.type) == 'siginfo_t':
        return SiginfoPrinter(val)

# Finally, append the lookup function to
# the list of registered GDB printers.
gdb.pretty_printers.append(my_pp_func)

# Our pretty-printer is now available when we debug
# the inferior program in GDB.
Now, run any program and hit Ctrl-c to quit. The pretty-printer returns the value 2; the value for a SIGINT.
(gdb) source prettyprint.py
(gdb) print info
$4 = 2
(gdb)
Much easier to read.
Of course, this is just a simple pretty-printer program, but the possibilities are endless. You can extract and print any value of a member in a structure with your pretty-printer.
In my video, I show you a quick way to pretty-print structures in GDB and build-out this basic pretty-printer some more. You'll learn that it isn’t difficult to extrapolate this handy printer to display whatever structure you want.
5. Reversible Debugging
To quote Kernighan once more:
Debugging involves backward reasoning, like solving murder mysteries. Something impossible occurred, and the only solid information is that it really did occur. So we must think backward from the result to discover the reasons.
Or in other words, we really want the debugger to be able to tell us what happened in the past - reality has diverged from your expectations, and debugging means figuring out at what point your expectations and reality diverged.
You need to know what your program actually did as opposed to what you expected it was going to do. This is why debugging typically involves reproducing the bug many times, slowly teasing out more and more information until you pin it down.
Reversible debugging takes away all that guesswork and trial and error; the debugger can tell you directly what just happened.
Not everyone knows about it, but GDB has built-in reversible debugging since release 7.0 (2009). It works pretty well, but you've to be ready to sacrifice performance (a lot of it); it's dramatically slower than any of the purpose-built time-travel debuggers out there like UDB.
But it's built right in, so why not use it?
Type record to start reversible-debugging.
(gdb) record
You can use reverse commands to go backward, for example:
reverse-next (rn) - step the program backward, running through subroutine calls

reverse-nexti (rni) - step backward one instruction, but run through called subroutines

reverse-step (rs) - step the program backward until it reaches the beginning of a previous source line

reverse-stepi (rsi) - step backward one instruction

reverse-continue (rc) - continue the program, but run it in reverse

reverse-finish (rf) - execute the program backward until just before the point where the current function was called
The challenge is that in most cases, you can't predict when the program faults, which means that you may have to run (and record) the program repeatedly until it eventually stalls.
Some GDB commands can help you out here.
Breakpoints and watchpoints work in reverse, which can help you, for example, to continue directly to a previous point in the program at which specific variable changes. Reverse watchpoints can be incredibly powerful. I know of several instances of bugs that alluded a developer for months or even years that were solved in a few hours with the power of reverse watchpoints.
Another cool GDB thing is that you can trigger a command (or series of commands) when the program hits a breakpoint.
Type command [breakpoint #], where [breakpoint #] is the identifier for your breakpoint.
command 1
Type commands for breakpoint(s) 1, one per line.
End with a line saying just "end".
record
continue
end
In this video, I demonstrate live on stage reversible-debugging in GDB. Check it out, it shows you step-by-step (in reverse) how to trace a fault in my program.
That's it.
I shared five easy ways to reduce your debugging hours. Perhaps a little overwhelming now, but the good news is that you don't have to adopt all at once. That probably would have the opposite effect. Sometimes little tweaks to your usual debugging habits and routines can already make a big difference. Just start with small steps.
Take your time to read the different tutorials and watch the videos that I shared in this article and adopt the things that you feel can improve your debugging. And, make sure to share your takeaways with your project members and organization so everyone benefits.
To make sure you do not miss my next GDB tutorial, sign up for the gdbWatchPoint mailing list now.
I usually design my Python programs so that if a program needs to read or write to a file, the functions will take a filename argument that can be either a path string or a file-like object already open for reading / writing.
(I think I picked up this habit from Mark Pilgrim’s Dive Into Python, in particular chapter 10 about scripts and streams.)
This has the great advantage of making tests easier to write. Instead of having to create dummy temporary files on disk I can wrap strings in StringIO() and pass that instead.
But the disadvantage is I then have a bit of boiler-plate at the top of the function:
def read_something(filename):
    # Tedious but not heinous boiler-plate
    if isinstance(filename, basestring):
        filename = open(filename)
    return filename.read()
The other drawback is that code doesn't close the file it opened. You could have filename.close() before returning, but that will also close file-like objects that were passed in, which may not be what the caller wants. I think the decision whether to close the file belongs to the caller when the argument is a file-like object.
You could set a flag when opening the file, and then close the file afterwards if the flag is set, but that is yet more boiler-plate and quite ugly.
So here is a context manager which behaves like open(). If the argument is a string it handles opening and closing the file cleanly. If the argument is anything else then it just reads the contents.
class open_filename(object):
    """Context manager that opens a filename and closes it on exit, but does
    nothing for file-like objects.
    """
    def __init__(self, filename, *args, **kwargs):
        self.closing = kwargs.pop('closing', False)
        if isinstance(filename, basestring):
            self.fh = open(filename, *args, **kwargs)
            self.closing = True
        else:
            self.fh = filename

    def __enter__(self):
        return self.fh

    def __exit__(self, exc_type, exc_val, exc_tb):
        if self.closing:
            self.fh.close()
        return False
And then you use it like this:
from io import StringIO file1 = StringIO(u'The quick brown fox...') file2 = 'The quick brown fox' with open_filename(file1) as fh1, open_filename(file2) as fh2: foo, bar = fh1.read(), fh2.read()
If you always want the file to be closed on leaving the block you use the closing keyword argument set to
True (the default of
False means the file will only be closed if it was opened by the context manager).
file1 = StringIO(u'...jumps over the lazy dog.') assert file1.closed == False with open_filename(file1, closing=True) as fh: foo = fh.read() assert file1.closed == True
Today is my brother’s birthday. If I had asked him what he wanted for a present I am pretty certain he would have asked for a blog post about closing files in a computer programming language.
Pingback: Python:How to accept both filenames and file-like objects in Python functions? – IT Sprite | https://buxty.com/b/2012/06/a-context-manager-for-files-or-file-like-objects/ | CC-MAIN-2019-22 | refinedweb | 488 | 72.97 |
In this blog, I am going to explain pandas which is an open source library for data manipulation, analysis, and cleaning.
Pandas is a high-level data manipulation tool developed by Wes McKinney. The name Pandas is derived from the word Panel Data – an Econometrics from Multidimensional data. Pandas is built on the top of NumPy.
Five typical steps in the processing and analysis of data, regardless of the origin of data are load, prepare, manipulate, model, and analyze.
You can install pandas using pip or conda:
pip install pandas
or
conda install pandas
Start using pandas by importing it as:
import pandas as pd
Pandas deals with the following three data structures :
- Series
- DataFrame
- Panel
Series
Series is a value mutable, size immutable one-dimensional array like structure with homogeneous data.
- To create an empty series:
pd.Series()
Series([], dtype: float64)
- To create a series with default indexes:
import numpy as np
data = np.array([‘a’,’b’,’c’,’d’])
series = pd.Series(data)
print(series)
0 a
1 b
2 c
3 d
dtype: object
- To create a series with custom indexes:
pd.Series(data,index=[10,11,12,13])
print(series)
10 a
11 b
12 c
13 d
dtype: object
- To create a series using a dictionary:
data = {‘a‘ : 0., ‘b’ : 1., ‘c’ : 2.}
series = pd.Series(data)
print(series)
a 1.0
c 2.0
d 0.0
dtype: float64
- To create a series using a dictionary with custom indexes. Here, Index order is persisted and the missing element is filled with NaN:
data = {‘a’ : 0., ‘b’ : 1., ‘c’ : 2.}
series = pd.Series(data,index=[‘b’,’c’,’d’,’a’])
print(series)
b 1.0
c 2.0
d NaN
a 0.0
dtype: float64
- To access a single element of series:
print(series[0])
1.0
- To access multiple elements of the series:
print(series[[0,1]])
b 1.0
c 2.0
dtype: float64
DataFrame
DataFrames allows to store and manipulate tabular data in rows of observations and columns of variables. It is a size mutable, data mutable two-dimensional array with heterogeneous data. For example, an employee management system column can be employeeName and row can be names of employees. Pandas DataFrame can be created by loading the datasets from existing storage like SQL Database, CSV file, and Excel file. Pandas DataFrame can also be created from the lists, dictionary, and from a list of dictionary etc.
- To create a DataFrame from csv:
df = pd.read_csv(file)
- To create a DataFrame from the database:
df = pd.read_sql_query(query,conn)
- To create a DataFrame from excel:
df = pd.read_excel(file,sheetname=’ ‘)
- To create an empty DataFrame: FROM EXCEL:
print(pd.DataFrame())
Empty DataFrame
Columns: []
Index: []
- To create a DataFrame from the list:
print(pd.DataFrame([1,2]))
0
0 1
1 2
- To create a DataFrame with the given column names:
data = [[‘Joe’,10],[‘Bob’,12]]
print(pd.DataFrame(data,columns=[‘Name’,’Age’],dtype=float))
Name Age
0 Joe 10.0
1 Bob 12.0
- To filter the DataFrame:
df.loc[df[‘Name’] == ‘Joe’]
Name Age
0 Joe 10.0
- To create DataFrame from the Series:
print(pd.DataFrame( {‘one’ : pd.Series([1, 2, 3], index=[‘a’, ‘b’, ‘c’]), ‘two’ : pd.Series([1, 2, 3, 4], index=[‘a’, ‘b’, ‘c’, ‘d’])}))one two a 1.0 1 b 2.0 2 c 3.0 3 d NaN 4
A lot of operations can be performed on DataFrames like filtering, adding or deleting new rows and columns. Also, many functions and properties are used with DataFrames like – isnull(), head(), tail(), empty, axes, values, size, transpose.
Panel
The panel is a 3D container of data. The names for the 3 axes:
- items: axis 0, each item corresponds to a DataFrame contained inside.
- major_axis: axis 1, it is the rows of each of the DataFrames.
- minor_axis: axis 2, it is the columns of each of the DataFrames.
print(pd.Panel())
<class ‘pandas.core.panel.Panel’>
Dimensions: 0 (items) x 0 (major_axis) x 0 (minor_axis)
Items axis: None
Major_axis axis: None
Minor_axis axis: None
Thanks for reading! | https://blog.knoldus.com/data-analysis-using-python-pandas/ | CC-MAIN-2021-04 | refinedweb | 678 | 68.26 |
New attachment handling in couchdb-python module
Tuesday, September 09, 2008
Labels: couchdb, python 1 comments
I've been trying out the new methods for the Database class in couchdb-python, in the svn repository.
I tried put_attachment.
Guess you already have a couchdb database called blog. To get a reference to the db you have to...
from couchdb.schema import *
class Post(Document):
author = TextField()
subject = TextField()
content = TextField()
tags = ListField( TextField() )
comment_author = TextField(),
comment = TextField(),
comment_date = DateTimeField()
)))
date = DateTimeField()
from couchdb import Server
from datetime import datetime
import binascii
s = Server("")
s.create("blog")
blog = s["blog"]
p = Post( author = "Me", subject = "Whatever for the subject ", content = "Any content",date = datetime.now(), tags = ["Python", "Couchdb", "Blog"])
p.store(blog)
f = open("apythonfile.py", "rb")
foo = binascii.b2a_base64(f.read()) # this convert the content of the file to a encode 64 string
#put_attachment only works right now with an encoded string as I see and with a dictionary object for the document ( first parameter ) and not for a Document instance as used here , so...
blog = s["blog"]
adoc = blog[p.id]
and finally call the method...
blog.put_attachment( adoc, "apythonfile.py", foo, "text/python")
#you can't do blog.put_attachment( p, "apythonfile.py", f, "text/python") which I taught by reading the doc inside the method.
Heya, passing in f works fr me[tm]. I added a test to the suite to verify. Please report bugs to
Cheers
Jan
-- | http://batok.blogspot.com/2008/09/new-attachment-handling-in-couchdb.html | CC-MAIN-2017-13 | refinedweb | 240 | 68.57 |
What will you do if your data is a bit more complicated than a straight line? A good alternative for you is that you can use a linear model to fit in a nonlinear data. You can use the add powers of ever feature as the new features, and then you can use the new set of features to train a Linear Model. In Machine Learning, this technique is known as Polynomial Regression.
Let’s understand Polynomial Regression from an example. I will first generate a nonlinear data which is based on a quadratic equation. A quadratic equation is in the form of ax2+bx+c; I will first import all the necessary libraries then I will create a quadratic equation: import numpy.random as rnd np.random.seed(42)
Now let’s make a quadratic equation:
Code language: Python (python)Code language: Python (python)
m = 100 X = 6 * np.random.rand(m, 1) - 3 y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1) plt.plot(X, y, "b.") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_data_plot") plt.show()
Polynomial Regression
A straight line will never fit on a nonlinear data like this. Now, I will use the Polynomial Features algorithm provided by Scikit-Learn to transfer the above training data by adding the square all features present in our training data as new features for our model:
Code language: Python (python)Code language: Python (python)
from sklearn.preprocessing import PolynomialFeatures poly_features = PolynomialFeatures(degree=2, include_bias=False) X_poly = poly_features.fit_transform(X) X[0]
array([-0.75275929])
Code language: Python (python)Code language: Python (python)
X_poly[0]
array([-0.75275929, 0.56664654])
Now X_poly contains the original features of X plus the square of the features. Now we can use the Linear Regression algorithm to fit in our new training data:
Code language: Python (python)Code language: Python (python)
from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(X_poly, y) lin_reg.intercept_, lin_reg.coef_
(array([1.78134581]), array([[0.93366893, 0.56456263]]))
Code language: Python (python)Code language: Python (python)
X_new=np.linspace(-3, 3, 100).reshape(100, 1) X_new_poly = poly_features.transform(X_new) y_new = lin_reg.predict(X_new_poly) plt.plot(X, y, "b.") plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.legend(loc="upper left", fontsize=14) plt.axis([-3, 3, 0, 10]) save_fig("quadratic_predictions_plot") plt.show()
You must know that when we have multiple features, the Polynomial Regression is very much capable of finding the relationships between all the features in the data. This is possible because the Polynomial Features adds all the combinations of features up to a provided degree.
Learning Curves in Polynomial Regression
If you use a high-degree Polynomial Regression, you will end up fitting the training data in a much better way than a Linear Regression. Let’s understand this with an example:
Code language: Python (python)Code language: Python (python)
from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)): polybig_features = PolynomialFeatures(degree=degree, include_bias=False) std_scaler = StandardScaler() lin_reg = LinearRegression() polynomial_regression = Pipeline([ ("poly_features", polybig_features), ("std_scaler", std_scaler), ("lin_reg", lin_reg), ]) polynomial_regression.fit(X, y) y_newbig = polynomial_regression.predict(X_new) plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width) plt.plot(X, y, "b.", linewidth=3) plt.legend(loc="upper left") plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", rotation=0, fontsize=18) plt.axis([-3, 3, 0, 10]) save_fig("high_degree_polynomials_plot") plt.show()
The high-degree Polynomial Regression model is overfitting the training data, where a linear model is underfitting it. So the model that will perform the best, in this case, is quadratic because the data is generated using a quadratic equation.
But you never know, what function is used in creating the data. So how you will decide how complex your model should be? How to analyse whether your model is overfitting or underfitting the data?
Also, Read: Gradient Descent Algorithm in Machine Learning.
A right way to generalise the performance of our model is to look at the learning curves. Learning curves are plots of the performance of a model on the training set and the validation set as a function of the size of the training set.
To generate learning curves, train the model several times on different size of subsets of the training data. Now let’s understand this with an example. The code below defines a function that can plot the learning curves of a model by using the training data:
Code language: Python (python)Code language: Python (python)
from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train[:m], y_train_predict)) val_errors.append(mean_squared_error(y_val, y_val_predict)) plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train") plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val") plt.legend(loc="upper right", fontsize=14) plt.xlabel("Training set size", fontsize=14) plt.ylabel("RMSE", fontsize=14)
Now let’s look at the learning curves of our model using the function that I created above:
Code language: Python (python)Code language: Python (python)
lin_reg = LinearRegression() plot_learning_curves(lin_reg, X, y) plt.axis([0, 80, 0, 3]) save_fig("underfitting_learning_curves_plot") plt.show()
So the output resulted in underfitting data. If your data is underfitting the training data, adding more instances will not help. You need to use a more sophisticated machine learning model or come up with some better features. Now let’s look at the learning curves of a 10th-degree polynomial regression model on the same data:
Code language: Python (python)Code language: Python (python)
from sklearn.pipeline import Pipeline polynomial_regression = Pipeline([ ("poly_features", PolynomialFeatures(degree=10, include_bias=False)), ("lin_reg", LinearRegression()), ]) plot_learning_curves(polynomial_regression, X, y) plt.axis([0, 80, 0, 3]) save_fig("learning_curves_plot") plt.show()
The learning rate are looking a bit likely to the previous ones, but there are some major differences here:
- The error on the training data is very much lower than the previous learning curves we explored using a linear regression model.
- There is a gap between the curves, which means that the performance of the model is very much better on the training data than the validation data.
I hope you liked this article on Polynomial Regression and learning curves in Machine Learning. Feel free to ask your valuable questions in the comments section below. You can also follow me on Medium to read more amazing articles. | https://thecleverprogrammer.com/2020/07/27/polynomial-regression-algorithm/ | CC-MAIN-2022-33 | refinedweb | 1,124 | 51.65 |
I am making a SpriteKit game for iOS, OSX, and tvOS. I am trying to use the accelerometer for my iOS target. I have the check for iOS on import of the CMMotionManager, but I can't seem to get the check to work when creating my motion manager property.
#if os(iOS)
import CMMotionManager
#endif
class MainPlayScene: TCScene, SKPhysicsContactDelegate {
//MARK: Properties
//Motion
@available(iOS 9, *) // Does not work, just trying things out....
var motionManager:CMMotionManager {
return motionManager
}
You use the same syntax that you used for your import statement. This is also what apple does in their sample game DemoBots.
#if os(iOS) var motionManager.... #endif #if os(tvOS) ... #endif
You can also do multiple checks and use else/else if
#if os(iOS) || os(tvOS) .... #elseif os(OSX) ... #endif ... // Code for other platforms #endif
How to determine device type from Swift? (OS X or iOS)
Just curious, any particular reason you make the motionManager property computed? | https://codedump.io/share/1M6ZkUx2vmAj/1/declare-property-only-on-ios | CC-MAIN-2017-30 | refinedweb | 158 | 57.77 |
The June CTP of VS 2005 includes Team Foundation Server. However, this is not the June CTP discussed in the Team Foundation MSDN forums and elsewhere. That CTP is now scheduled for July (it was the last week of June, so it didn’t change much). So, anywhere you’ve see a reference to the June CTP for TFS in the forums, think July CTP.
The July CTP of TFS will include quite a few performance improvements, protocol changes (which means the just-released June CTP won’t communicate properly with the upcoming July CTP of TFS), and nearly all of the code names are gone from assemblies, namespaces, and databases. We are upgrading our dogfood system to use what we are going to release as a July CTP of TFS, so it’s getting more time and effort than a normal CTP. The June CTP of VS is just a build from June 1 (50601.01).
So, if you are looking to install the latest Team Foundation Server and see how it has changed since beta 2, wait for the July release.
Can you say whether or not the July CTP will still use AD/AM?
Keith, AD/AM will not be in the July CTP.
Buck
Boy, where there some sore ears in Redmond today!
I’ve been busy with customer stuff these past few…
The CTP release this week to msdn, is that now the june or the july CTP??
The July CTP of TFS hasn’t been released yet. Look for it in the next couple of weeks. | https://blogs.msdn.microsoft.com/buckh/2005/06/16/skip-the-june-ctp-for-team-foundation/ | CC-MAIN-2016-36 | refinedweb | 262 | 78.59 |
af_find, af_cachefind, af_initattrs, af_getkey, af_dropkey, af_dropset, af_dropall - AtFS retrieve interface
#include <atfs.h> int)
af_find and af_cachefind retrieve ASOs by given attributes. af_find operates on source objects and af_cachefind only on derived objects. The keys of all found ASOs are returned in resultset. The keys returned in resultset are randomly ordered. af_find and af_cachefind expect resultset to be a pointer to an empty set structure. Both functions return the number of found ASOs.: empty list (first entry is a nil pointer) matches every ASO. "" (first entry is an empty string) matches every ASO that has no user defined attributes. name[=] matches, if a user defined attribute with the given name is present. name=value matches all ASOs that have a corresponding user defined attribute, that has at least the given value. af_getkey ("", "otto", "c", AF_BUSYVERS, AF_BUSYVERS, key) leads to the key of the file (busy version) named otto.c in the current directory. af_getkey ("", "otto", "c", AF_LASTVERS, AF_LASTVERS, key).
af_find returns the number of found ASOs. Upon error, -1 is returned and af_errno is set to the corresponding error number. | http://huge-man-linux.net/man3/af_retrieve.html | CC-MAIN-2018-13 | refinedweb | 180 | 68.67 |
Query for the next available network view ID number and allocate it (reserve).
This number can then be assigned to the network view of an instantiated object. The example below demonstrates a simple method to do this. Note that for this to work there must be a NetworkView attached to the object which has this script and it must have the script as its observed property. There must be a Cube prefab present also with a NetworkView which watches something (like the Transform of the Cube). The cubePrefab variable in the script must be set to that cube prefab. This is the simplest method of using AllocateViewID intelligently. This get more complicated if there were more than one NetworkView attached to the Cube which is to be instantiated.
using UnityEngine; using System.Collections;
public class ExampleClass : MonoBehaviour { public Transform cubePrefab; public NetworkView nView; void Start() { nView = GetComponent<NetworkView>(); } void OnGUI() { if (GUILayout.Button("SpawnBox")) { NetworkViewID viewID = Network.AllocateViewID(); nView.RPC("SpawnBox", RPCMode.AllBuffered, viewID, transform.position); } } [RPC] void SpawnBox(NetworkViewID viewID, Vector3 location) { Transform clone; clone = Instantiate(cubePrefab, location, Quaternion.identity) as Transform as Transform; NetworkView nView; nView = clone.GetComponent<NetworkView>(); nView.viewID = viewID; } } | https://docs.unity3d.com/es/2017.4/ScriptReference/Network.AllocateViewID.html | CC-MAIN-2021-31 | refinedweb | 194 | 50.02 |
string_splitter 0.1.0+1
string_splitter #
Utility classes for splitting strings and files into parts. Supports streamed parsing for handling long strings and large files.
Usage #
string_splitter has 2 libraries, [string_splitter] for parsing strings, and [string_splitter_io] for parsing files.
Parsing Strings #
import 'package:string_splitter/string_splitter.dart';
[StringSplitter] contains 3 static methods: [split], [stream], and [chunk].
Each method accepts a [String] to split, and [split] and [stream] accept lists of [splitters] and [delimiters] to be used to split the string, while [chunk] splits strings into a set numbers of characters per chunk.
[delimiters], if provided, will instruct the parser to ignore [splitters] contained within the delimiting characters. [delimiters] can be provided as an individual string, in which case the same character(s) will be used as both the opening and closing delimiters, or as a [List] containing 2 [String]s, the first string will be used as the opening delimiter, and the second, the closing delimiter.
// Delimiters must be a [String] or a [List<String>] with 2 children. List<dynamic> delimiters = ['"', ['<', '>']];
[split] and [stream] have 2 other options, [removeSplitters] and [trimParts].
[removeSplitters], if
true, will instruct the parser not to include the
splitting characters in the returned parts, and [trimParts], if
true, will trim the whitespace around each captured part.
[stream] and [chunk] both have a required parameter, [chunkSize], to set the number of characters to split each chunk into.
/// Splits [string] into parts, slicing the string at each occurrence /// of any of the [splitters]. static List<String> split( String string, { @required List<String> splitters, List<dynamic> delimiters, bool removeSplitters = true, bool trimParts = false, }); /// For parsing long strings, [stream] splits [string] into chunks and /// streams the returned parts as each chunk is split. static Stream<List<String>> stream( String string, { @required List<String> splitters, List<dynamic> delimiters, bool removeSplitters = true, bool trimParts = false, @required int chunkSize, }); /// Splits [string] into chunks, [chunkSize] characters in length. static List<String> chunk(String string, int chunkSize);
Streams return each set of parts in chunks, to capture the complete data set, you'll have to add them into a combined list as they're parsed.
Stream<List<String>> stream = StringSplitter.stream( string, splitters: [','], delimiters: ['"'], chunkSize: 5000, ); final List<String> parts = List<String>(); await for (List<String> chunk in stream) { parts.addAll(chunk); }
Parsing Files #
import 'package:string_splitter/string_splitter_io.dart';
[StringSplitterIo] also contains 3 static methods: [split], [splitSync], and [stream].
Rather than a [String] like [StringSplitter]'s methods, [StringSplitterIo]'s accept a [File], the contents of which will be read and parsed.
In addition to the parameters described in the section above, each method also
has a parameter to set the file's encoding, or in stream's case the decoder
itself, which all default to
UTF8.
/// Reads [file] as a string and splits it into parts, slicing the string /// at each occurrence of any of the [splitters]. static Future<List<String>> split( File file, { @required List<String> splitters, List<dynamic> delimiters, bool removeSplitters = true, bool trimParts = false, Encoding encoding = utf8, }); /// Synchronously reads [file] as a string and splits it into parts, /// slicing the string at each occurrence of any of the [splitters]. static List<String> splitSync( File file, { @required List<String> splitters, List<dynamic> delimiters, bool removeSplitters = true, bool trimParts = false, Encoding encoding = utf8, }); /// For parsing large files, [stream] streams the contents of [file] /// and returns the split parts in chunks. static Stream<List<String>> stream( File file, { @required List<String> splitters, List<dynamic> delimiters, bool removeSplitters = true, bool trimParts = false, Converter<List<int>, String> decoder, });
[0.1.0] - September 8, 2019
- Initial release.
Use this package as a library
1. Depend on it
Add this to your package's pubspec.yaml file:
dependencies: string_splitter: :string_splitter/string_splitter.dart';
We analyzed this package on Oct 9, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
- Dart: 2.5.1
- pana: 0.12.21
Platforms
Detected platforms: Flutter, web, other
No platform restriction found in primary library
package:string_splitter/string_splitter.dart.
Maintenance suggestions
Maintain an example. (-10 points)
Create a short demo in the
example/ directory to show how to use this package.
Common filename patterns include
main.dart,
example.dart, and
string_splitter.dart. Packages with multiple examples should provide
example/README.md.
For more information see the pub package layout conventions. | https://pub.dev/packages/string_splitter | CC-MAIN-2019-43 | refinedweb | 711 | 55.13 |
The history of React.js components
Functional components in React
Class components in React.js
The difference between functional and class components
Bonus: Building an App with Flatlogic Platform
When you start coding with React, you may get confused by some of its concepts, like the JSX syntax or the difference between functional and class components. JSX is an interesting syntax extension to JavaScript that was developed to make writing UI elements more comfortable. Development with React doesn't require JSX, and this article isn't about it either. We are going to discuss functional vs class components. Let's start.
The history of React.js components
We should jump into the history of React first to understand how React components evolved. React, created by Facebook software engineer Jordan Walke, is an open-source front-end JavaScript library for building interactive user interfaces. The first release, version 0.3.0, came out in 2013. React has been developed further and updated several times every year, with the React team adding more and more features to give developers better tools for coding. Among the most famous and loved features are the virtual DOM, one-way data binding, JSX, reusable components, declarative programming, stable code, fast rendering of UI elements, and great performance optimization opportunities. The current version at the time of writing this article is 17.0.1.
Along with all these benefits, React also gave developers two types of components they could use to create UI elements. You might suppose that both types of components provide the same capabilities and that the choice depends only on the developer's preferences. Well, that wasn't true. The real situation was that class components were the only viable option for developing complex apps with React. The reason was that class components gave you a large number of capabilities, for example state, while functional components didn't. However, the situation changed when React v16.8 was released in 2019. The new version contained an update meant to take development with functional components to the next level: React offered Hooks for functional components. The introduction of Hooks made it possible to write an entire complex application using only functions as React components. This is a deeply significant event that changed the way React apps are developed. Keeping that in mind, let's go back to the present time and find out what is happening now and what functional and class components are.
Functional components in React
Functional components in React are just JavaScript functions like this:
function Foo(props) {
  return <h1>Who is living young, wild, and free? – {props.name}</h1>;
}

const element = <Foo name="Me!" />;
ReactDOM.render(element, document.getElementById('home'));
In our case, we render an element that represents the user-defined component called Foo. The element passes the JSX attribute name="Me!" as a prop to our function component Foo, which returns a <h1>Who is living young, wild, and free? – Me!</h1> element as the result.
Props are inputs for both types of components. One of the main tasks of props is to pass information from component to component. It's especially necessary if you want to build a dynamic user interface. However, there is one important rule that you shouldn't forget: props are read-only. That means that React components shouldn't change their inputs and that, given the same props, they must return the same result. Components that respect their props are called "pure". That rule works for both class and function components.
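To illustrate the rule, here is a minimal sketch of a pure component next to one that breaks the rule by mutating its props. The component names are hypothetical, and they return plain strings instead of JSX so the snippet runs without a compiler:

```javascript
// A "pure" component: it reads its props but never modifies them,
// so the same props always produce the same result.
function PureGreeting(props) {
  return 'Hello, ' + props.name.toUpperCase() + '!';
}

// An impure component: it writes to its input object, which is
// exactly what React components must never do.
function ImpureGreeting(props) {
  props.name = props.name.toUpperCase(); // mutates the caller's props!
  return 'Hello, ' + props.name + '!';
}

const props = { name: 'Me' };
console.log(PureGreeting(props));   // Hello, ME!
console.log(props.name);            // Me   (untouched)
console.log(ImpureGreeting(props)); // Hello, ME!
console.log(props.name);            // ME   (the props were changed)
```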
JSX is a special extension that allows us to place HTML elements right inside JavaScript code without using additional methods like createElement(). All your HTML tags will be converted into React elements after compilation. JSX may be convenient; however, it is an optional instrument for development. To see how the same blocks of code look with and without JSX, try the online Babel compiler.
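As a sketch of what that compilation produces: a JSX tag becomes a call to createElement, which simply builds a plain object describing the element. The createElement below is a simplified, hypothetical stand-in for React.createElement so the snippet runs on its own:

```javascript
// Simplified stand-in for React.createElement: JSX compiles into calls
// like this, producing a plain object with a type, props and children.
function createElement(type, props, ...children) {
  return { type, props: { ...props, children } };
}

// JSX:      <h1 className="title">Hello</h1>
// compiles to roughly:
const element = createElement('h1', { className: 'title' }, 'Hello');

console.log(element.type);            // h1
console.log(element.props.className); // title
console.log(element.props.children);  // [ 'Hello' ]
```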
Another way of writing function components is by using an arrow function.
An example of an arrow function:
const App = () => { // that is an arrow function
  const greeting = 'Hello Function Component!';
  return <Headline value={greeting} />;
};

const Headline = ({ value }) =>
  <h1>{value}</h1>;

export default App;
Arrow functions have some benefits:
Code written with arrow functions looks compact and is easier to write and read. One of the reasons is the implicit return: simply omit the curly braces and the function returns its expression directly.
Arrow functions don't have their own context and automatically bind this to the surrounding code's context.
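A quick sketch of the implicit return, using hypothetical component names that return plain strings instead of JSX so it runs anywhere:

```javascript
// Explicit return with curly braces:
const GreetingBlock = (props) => {
  return 'Hello, ' + props.name + '!';
};

// Implicit return: omit the braces and the expression is returned directly.
const Greeting = (props) => 'Hello, ' + props.name + '!';

console.log(GreetingBlock({ name: 'Me' })); // Hello, Me!
console.log(Greeting({ name: 'Me' }));      // Hello, Me!
```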
But since arrow functions give us one more way to write code (along with standard functions and classes), you need to set rules for when to use each of them. As an example, you can stick to the following rules:
If you work with global scope and Object.prototype properties use function.
If you work with object constructors use class.
If you face any other situation use arrow function.
The examples above are called stateless function components because they just take props as an argument and return a React element. They don't manage state and don't have a lifecycle, while class components do. However, you can use Hooks with them, which allow you to work with state and lifecycle and add even more features. We will talk about that in the comparison below.
Class components in React.js
Let’s start with an example:
class Foo extends React.Component {
  render() {
    return <h1>Who is living young, wild, and free? – {this.props.name}</h1>;
  }
}
It is a regular ES6 class that extends the Component class from the React library. To return HTML, you have to define a render() method in it.
Class components work with props just as well as functional components do. To pass props to a component, you use a syntax similar to HTML attributes. In our sample case, we only need to replace props.name with this.props.name in the render() body to use props.
Additional benefits class components offer by default are state and lifecycle. That is why class components are also known as “stateful” components.
The state of a component is an observable object that holds some information and controls the behavior of the component. The difference between props and state is that props don’t change over time during the lifetime of a component. The state holds the data that can be changed over time and changes the component rendering as a result.
A component's state is supposed to have an initial value, this.state, which can be assigned in the class constructor. The class constructor is a special JavaScript method that allows you to bind event handlers to the component or to initialize the local state of the component.
If you don't need to handle either of the cases above, implementing a constructor is unnecessary. An example of a constructor:
constructor(props) {
super(props);
this.state = {};
}
The constructor() function inside a React component requires super(props) before any other statement. super(props) is a reference to the parent's constructor() function, which the React.Component base class has. When we define a new constructor() inside a class component, we replace the base constructor() function. However, the base constructor contains some code we still need, so to get access to that code we call super(props). That is why we have to add super(props) every time we define a constructor() inside a class component. The constructor() is called before the React component is mounted. To use state in a class component, we must define its initial state in the constructor. Instead of calling setState(), we assign the initial state with this.state in the constructor; it is the only case where we are allowed to change the state directly by assigning its value. Everywhere else, use setState() instead. The constructor() has other rules you should be aware of; you can read about them in the React documentation.
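Putting these pieces together, here is a sketch of a hypothetical stateful Counter component with a constructor that calls super(props), initializes this.state, and binds an event handler. A minimal stand-in for React.Component is included so the snippet runs outside a React project; it only mimics setState, and render() returns a plain string instead of JSX:

```javascript
// Minimal stand-in for React.Component so the sketch runs without React.
class Component {
  constructor(props) {
    this.props = props;
    this.state = {};
  }
  setState(partial) {
    // The real React merges state and re-renders asynchronously;
    // this stand-in just merges synchronously.
    this.state = { ...this.state, ...partial };
  }
}

class Counter extends Component {
  constructor(props) {
    super(props);                       // required before any other statement
    this.state = { count: 0 };          // direct assignment is allowed only here
    this.handleClick = this.handleClick.bind(this); // bind the event handler
  }
  handleClick() {
    // outside the constructor, always go through setState()
    this.setState({ count: this.state.count + 1 });
  }
  render() {
    return `${this.props.label}: ${this.state.count}`;
  }
}

const counter = new Counter({ label: 'Clicks' });
console.log(counter.render()); // Clicks: 0
counter.handleClick();
console.log(counter.render()); // Clicks: 1
```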
Differentiating Functional vs Class components
1. State and lifecycle
Well, the standard answer to the question about the difference between functional and class components was that class components provide developers with features such as setState() and lifecycle methods componentDidMount(), componentWillUnmount(), etc., while functional components don't. That was true because functional components are plain JavaScript functions that accept props and return React elements, while class components are JavaScript classes that extend React.Component, which has a render method. Both state and lifecycle methods come from React.Component, so they were available only for class components. The widespread advice was something like this: "Go with functional if your component doesn't do much more than take in some props and render". You had no options for building complex UI, and class components dominated React development for a while.
However, that has changed with the introduction of Hooks. To replace the setState method used for state in class components, React offers the useState Hook.
To work with the component lifecycle, classes have methods like componentDidMount, componentWillUnmount, componentWillUpdate, componentDidUpdate, and shouldComponentUpdate. Functional components can cover the same ground with only one Hook: useEffect. You can think of the useEffect Hook as componentDidMount, componentDidUpdate, and componentWillUnmount combined.
Standard class methods work well but do not look very elegant. Functional components offer an elegant and simple alternative: instead of using multiple lifecycle methods, we can replace them with one Hook, useEffect. Here is what React developers write about Hooks:
“Our goal is for Hooks to cover all use cases for classes as soon as possible. There are no Hook equivalents to the uncommon getSnapshotBeforeUpdate, getDerivedStateFromError and componentDidCatch lifecycles yet, but we plan to add them soon. It is an early time for Hooks, and some third-party libraries might not be compatible with Hooks at the moment.”
(the official React documentation)
So Hooks are more of an addition to functional components than a replacement for class components.
2. Syntax
The obvious difference is the syntax. Let’s examine several examples.
How we declare components.
Functional components are JavaScript functions:
function FunctionalComponent() {
return <h1>Hello, world</h1>;
}
Class components are classes that extend React.Component:
class ClassComponent extends React.Component {
render() {
return <h1>Hello, world</h1>;
}
}
To return our h1 we need the render() method inside a class component.
The way we pass props.
Let’s say we have props with the name “First”.
<Component name="First" />
Working with functional components, we pass the props as an argument of our function and access them using the construction "props.name".
function FunctionalComponent(props) {
return <h1>Hello, {props.name}</h1>;
}
With class components, we need to add this. to refer to props.
class ClassComponent extends React.Component {
render() {
return <h1>Hello, {this.props.name}</h1>;
}
}
Handling state.
To handle state, functional components in React offer the useState() Hook. We assign the initial state of count equal to 0 and set the method setCount() that increases it by one every time we click a button. The component returns the number of times we clicked the button and the button itself. The initial state is used only during the first render. The argument can be a number, string, object, or null. To learn more about the useState() Hook, see the official documentation.
const FunctionalComponent = () => {
const [count, setCount] = React.useState(0);
return (
<div>
<p>count: {count}</p>
<button onClick={() => setCount(count + 1)}>Click</button>
</div>
);
};
Class components work a bit differently. They use the setState() function and require a constructor and the this keyword.
class ClassComponent extends React.Component {
constructor(props) {
super(props);
this.state = {
count: 0
};
}
render() {
return (
<div>
<p>count: {this.state.count} times</p>
<button onClick={() => this.setState({ count: this.state.count + 1 })}>
Click
</button>
</div>
);
}
}
The underlying logic is similar to the logic in functional components. In the constructor() we declare a state object with the key count and an initial value of 0. In the render() method we use the setState() function to update the value of our count via this.state.count, and the app renders the number of times the button was clicked along with the button itself. The result is the same, but the same functionality requires more lines of code for class components. That doesn't necessarily mean the class-based code will be more cumbersome than the functional version, but it will definitely be longer.
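To make the "same result, different mechanics" comparison concrete, here is a deliberately tiny, hypothetical sketch of how a useState-like Hook can persist values between renders. It is an illustration only, not React's actual implementation (React stores Hook state per component instance internally, not in a module-level array), and it also hints at why Hooks must be called in the same order on every render:

```javascript
// Toy, hypothetical sketch of how a useState-like Hook can keep values
// between renders. Illustration only: real React stores Hook state per
// component instance on an internal fiber, not in a module-level array.
const hookStates = [];
let hookIndex = 0;

function useState(initial) {
  const i = hookIndex++;
  if (hookStates[i] === undefined) {
    hookStates[i] = initial;          // toy: undefined means "not set yet"
  }
  const setState = (value) => { hookStates[i] = value; };
  return [hookStates[i], setState];
}

function render(component) {
  hookIndex = 0;                      // each render replays Hook calls in order
  return component();
}

// A "component" that returns plain data instead of JSX, to keep it runnable.
function CounterComponent() {
  const [count, setCount] = useState(0);
  return { count, increment: () => setCount(count + 1) };
}

let ui = render(CounterComponent);
ui.increment();
ui = render(CounterComponent);
console.log(ui.count);               // 1
```

Because state is looked up by call order, inserting a Hook call conditionally would shift the indices, which is why React requires Hooks at the top level of the component.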
Lifecycle methods.
Since version 16.8, React allows functional components to work with the component lifecycle. That means developers have better control over functional components and can manipulate their life phases (initialization or setting the initial state, mounting, updating, unmounting). The initialization stage is explained in the paragraph above, so let's look at the next stage.
Mounting.
The useEffect Hook for functional components:
const FunctionalComponent = () => {
React.useEffect(() => {
console.log("Hello");
}, []);
return <h1>Hello, World</h1>;
};
The componentDidMount method for class components:
class ClassComponent extends React.Component {
componentDidMount() {
console.log("Hello");
}
render() {
return <h1>Hello, World</h1>;
}
}
The useEffect Hook takes two parameters: the first is the "effect" function itself, which is called after the component renders. The second parameter is an array of observable states (a so-called dependency list). The effect re-runs only if one of these states changes. Passing an empty array makes useEffect run only once, after the first render.
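A hedged sketch of the dependency-list idea (illustrative only, not React's source code): each entry of the array can be compared with Object.is against the entry from the previous render, and the effect re-runs only if some entry changed. The depsChanged helper below is a hypothetical name:

```javascript
// Illustrative sketch only (not React's source): a dependency list can be
// checked by comparing each entry with Object.is against the previous
// render's entry. The effect re-runs only when some entry changed.
function depsChanged(prevDeps, nextDeps) {
  if (prevDeps === null) {
    return true;                      // first render: the effect always runs
  }
  return nextDeps.some((dep, i) => !Object.is(dep, prevDeps[i]));
}

console.log(depsChanged(null, [0])); // true: runs after the first render
console.log(depsChanged([0], [0]));  // false: dependency unchanged, skipped
console.log(depsChanged([0], [1]));  // true: dependency changed, re-runs
console.log(depsChanged([], []));    // false: empty list never re-runs
```

The last case is exactly why an empty dependency array behaves like a mount-only effect.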
Updating.
The useEffect Hook for functional components:
function BooksList () {
const [books, updateBooks] = React.useState([]);
const [counter, updateCounter] = React.useState(0);
React.useEffect(function effectFunction() {
if (books) {
updateBooks([...books, { name: 'A new Book', id: '…' }]);
}
}, [counter]);
const incrementCounter = () => {
updateCounter(counter + 1);
}
…
}
The componentDidUpdate method for class components:
componentDidUpdate(prevProps) {
// Typical usage (don't forget to compare props):
if (this.props.userID !== prevProps.userID) {
this.fetchData(this.props.userID);
}
}
As we have mentioned, the second parameter of the useEffect Hook is an array of observable states: once counter changes, it triggers effectFunction.
Unmounting.
The useEffect Hook for functional components (yes, again):
const FunctionalComponent = () => {
React.useEffect(() => {
return () => {
console.log("Bye");
};
}, []);
return <h1>Bye, World</h1>;
};
The componentWillUnmount method for class components:
class ClassComponent extends React.Component {
componentWillUnmount() {
console.log("Bye");
}
render() {
return <h1>Bye, World</h1>;
}
}
3. Hoisting works only for functional components
Hoisting is a concept that appears in the ECMAScript® 2015 Language Specification. According to that concept, JavaScript moves variable and function declarations to the top of their scope, which allows you to access a variable or a function first and only then declare it. Actually, JS doesn't move the code; it puts declarations in memory during the compile phase, which allows calling a function before you declare it. That is not true for classes: trying to access a class before its declaration throws a ReferenceError exception.
An example of the code where we call function before its declaration:
catName("Tiger");
function catName(name) {
console.log("My cat's name is " + name);
}
// The result of the code above is: "My cat's name is Tiger"
Even though we call the function before we write it, the code works great. The following code with a class declaration will throw an error:
const p = new MyName(); // ReferenceError
class MyName {}
That is not all. JavaScript hoists only declarations, not initializations. If we declare a variable and call it before initialization, it returns undefined. See the example:
console.log(myName); // Returns undefined, as only the declaration was hoisted
var myName; // Declaration
myName = "John"; // Initialization
Declarations with the keywords let and const are also hoisted, but not initialized. That means that your app is aware of the variable's existence but can't use it until the variable is initialized. The example below will throw a ReferenceError:
myName = "John";
let myName;
That example will not run at all:
myName = "John";
const myName;
Why does it matter? Let's get back to React and create a simple React app in the index.js file, with one component in a separate file, Component.js:
import React from 'react';
import {render} from 'react-dom';
import App from './Component';

render(
<App/>,
document.getElementById("root")
);
And the component itself:
const Component = () => {
return (
<div>Hello, React</div>
);
}
export default Component;
The app renders the text "Hello, React". Since the component is small, it makes sense not to keep it separate but to merge it into the index.js file like this:
import React from ‘react’;
import {render} from ‘react-dom’;
render(
<Component/>,
document.getElementById(“root”)
);
const Component = () => {
return (
<div>Hello, React</div>
);
}
And we get an error about an undefined component, because we try to render a component that was declared with an arrow function before we initialize it. To repair the code, just re-order the declaration and put it before the render() call.
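As a hedged aside, the hoisting rules above also suggest an alternative fix: a function declaration, unlike a const arrow function, is hoisted, so the original order would work if the component were declared this way. This is a sketch; the return value is plain text rather than JSX so it stays runnable on its own:

```javascript
// Sketch: a function declaration is hoisted, so calling it before the
// declaration is legal, unlike a const arrow function.
const result = Component();     // allowed: the declaration below is hoisted

function Component() {
  return 'Hello, React';
}

console.log(result);            // "Hello, React"
```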
4. The way they capture values (props)
One interesting experiment that took place on the Internet (the original article with the full analysis can be found here) is the following React app.
It's a simple app that simulates a social network request to follow someone. The app displays a drop-down list with three profiles to follow, static greeting text, and two buttons that trigger a confirmation alert for following the chosen person. The confirmation alert appears 3 seconds after you click the button. The delay is set with the setTimeout() method.
The steps of the experiment are the following:
Choose a profile to follow
Click a follow button with “function” text in brackets near it
Change a profile to follow in the drop-down list before the confirmation alert appears
Check the name in the confirmation alert
Repeat the same four steps above for the follow button with “class” text in brackets
In the first case, with the functional button, switching the name doesn't affect the confirmation alert. With the class button, switching the name changes the alert message: even though you clicked to follow Dan but then switched to Sophie, the alert message will be "Followed Sophie". The correct behavior is the first one, of course. No one likes to follow the wrong profile on social media.
The reason for such behavior lies in the essence of functional and class components. Let's examine these lines of code:
class ProfilePage extends React.Component {
showMessage = () => {
alert('Followed ' + this.props.user); };
And:
function ProfilePage(props) {
const showMessage = () => {
alert('Followed ' + props.user);
};
As we have said, props are read-only; they are immutable. So once you pass the props to the functional component ProfilePage(props), the only remaining task for React is to render them after the time is up.
On the other hand, this is mutable. And that's okay, because it allows us to use state and lifecycle methods correctly. So if we pass other props while the alert message hasn't appeared yet, this.props.user changes, and the showMessage method displays the latest version of the props. Our showMessage method is not tied to any particular render, and that may become a problem.
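The same difference can be shown in plain JavaScript, without React. In this hypothetical sketch (all names are invented for illustration), the "functional" greeter closes over the value it was given, while the "class-like" greeter reads a mutable field every time it runs:

```javascript
// Hypothetical plain-JS sketch of the same difference, no React required.
// The "functional" greeter closes over the value it received; the
// "class-like" greeter reads a mutable field every time it runs.
function makeFunctionalGreeter(user) {
  return () => 'Followed ' + user;      // the value is captured for good
}

const mutableProps = { user: 'Dan' };
const classLikeGreeter = () => 'Followed ' + mutableProps.user;

const captured = makeFunctionalGreeter(mutableProps.user);
mutableProps.user = 'Sophie';           // simulate switching profiles

console.log(captured());                // "Followed Dan"
console.log(classLikeGreeter());        // "Followed Sophie"
```

The functional version corresponds to props captured at render time; the mutable-field version corresponds to reading this.props later, after it has changed.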
There are several potential solutions that actually work. One of them is to capture the props at the time of render, like this:
class ProfilePage extends React.Component {
render() {
// Capture the props!
const props = this.props;
// Note: we are *inside render*.
// These aren’t class methods.
const showMessage = () => {
alert('Followed ' + props.user); };
const handleClick = () => {
setTimeout(showMessage, 3000);
};
return <button onClick={handleClick}>Follow</button>;
}
}
So we have tied the captured props to a particular render().
5. Running tests
The two most popular instruments for running tests are Enzyme and Jest. Enzyme is a JavaScript testing utility for React that makes it easier to test React components' output. Jest is a JavaScript testing framework for writing tests; in other words, for creating, running, and structuring tests.
These two instruments do a great job on both types of components. There are some specifics to running tests for functional components, like the fact that state Hooks are internal to the component and can't be tested by calling them directly. However, the instruments and methods are similar.
6. Performance difference
There is an opinion that functional components show greater performance than class components. The point is that a React functional element is a simple object with two properties: type (string) and props (object). To render such a component, React needs to call the function and pass props, and that is all.
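To illustrate the "simple object with two properties" claim, here is a hypothetical, stripped-down createElement. The real React.createElement adds more fields (such as key, ref, and $$typeof), but the essence is a plain object:

```javascript
// Hypothetical, stripped-down createElement. The real React.createElement
// adds more fields (key, ref, $$typeof), but the essence is a plain object
// with `type` and `props`.
function createElement(type, props, ...children) {
  return { type, props: { ...props, children } };
}

const el = createElement('h1', { id: 'greeting' }, 'Hello, world');
console.log(el.type);                   // "h1"
console.log(el.props.id);               // "greeting"
console.log(el.props.children[0]);      // "Hello, world"
```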
Class components are more complex: they are instances of React.Component, with a constructor and a complicated system of methods for manipulating state and lifecycle.
Theoretically, calling a function should take less time than creating an instance of a class. One developer ran a test: he rendered 10,000 elements as stateless components and as class components. You can see the result here. As we see from the three experiments, there is no meaningful difference in render time between class and functional components.
To sum up everything above:
Class components used to be the only option for adding state to components and manipulating lifecycle. However, that has changed since the introduction of Hooks, which gave functional components the same capabilities that classes had.
The major difference is the syntax. It shows in the way we declare components, pass props, handle state, and manage lifecycle.
Function components capture props and state at the time of render by default. It is not a bug but a feature of functional components.
Functional components require less code to write an equivalent component. However, that doesn't mean functional components are always more readable and convenient to use. A developer who is used to working with object-oriented programming may find class components much more comfortable, while those used to functional programming tend to prefer functional components.
The two most popular tools for testing functional and class components are Enzyme and Jest. They work great for both types of components.
There is no big difference in render time between class and functional components.
Today you can build a whole app using only functional components; that was impossible until 2019 and became possible thanks to Hooks. Will Hooks replace class components in the coming years? We don't think so, because there are still some features that functional components can't reproduce, and there will always be developers who are more used to working with objects than with functions. However, we expect functional components to keep growing in popularity and the number of features for Hooks to increase. It's likely that the functionality Hooks provide will eventually go beyond what class components can do.
Bonus: Building an App with Flatlogic Platform
Understanding Functional and Class components is an important stepping stone in React development. Crafting Apps by hand requires a thorough understanding of all the intricacies of the library. However, there’s a quicker way for those who aren’t technically adept or lack time to write the whole thing from the ground up. Some people need a unique App with functions not seen elsewhere, but most apps are different combinations of the same parts and features. We used that insight when developing the Flatlogic Platform.
Flatlogic Platform is a constructor-style tool for combining pre-built parts into brand-new applications. It requires a few steps from you. Keep reading to know what those are!
#1: Name the Project
This step is what it sounds like. The only valuable advice we can think of is to pick a name that is easy enough to associate with the project.
#2: Choose stack
Next up, choose the technologies your App’s parts will run on. Those are underlying technologies for back-end, front-end, and database. In this example, we’re picking a combination of React, Node.js, and MySQL. But all other combinations are perfectly compatible, too.
#3: Choose Design
You'll have several design schemes to choose from. Some are transparent and light, others have a heavier feel. Pick the one you like; this part is purely aesthetic.
#4: Create the schema
The schema is the structure of a database. Names of fields, types of data, the way the App processes said data… Every aspect that defines how the database works is a part of the schema. It might be tricky at first, but you’ll get the hang of it. A thorough review of what your App is supposed to do will be helpful. If you’re short on time or unsure, pick one of the pre-built schemas. One of them is bound to suit your needs.
#5: Review and generate
The heavy decision-making is over. It’s time to check if every choice you’ve made is the way you want it to be and (assuming everything’s fine) hit “Finish”.
The compilation takes a couple of minutes on most devices. Upon completion, the Platform will offer you your very own App. Hit “Deploy”, host it locally in one click or push it to GitHub for further use or adjustment.
Flatlogic Platform helps create simple yet functional and smooth Apps for commercial and administrative purposes, make sure you give it a try! Happy developing and see you in the next articles!
You might also like these articles:
React.js vs. React Native. What are the Key Differences and Advantages?
Top 12 Bug Tracking Tools
Angular vs. Bootstrap – 6+ Key Differences, Pros, and Cons
The post Functional Components vs. Class Components in React.js appeared first on Flatlogic Blog. | https://online-code-generator.com/functional-components-vs-class-components-in-react-js/ | CC-MAIN-2022-40 | refinedweb | 4,179 | 57.57 |
ORA-28113 When A Policy Predicate Is Fetched From A Context
(Doc ID 331862.1)
Last updated on FEBRUARY 01, 2022
Applies to: Oracle Database - Enterprise Edition - Version 8.1.7.4 to 11.2.0.1.0 [Release 8.1.7 to 11.
Symptoms
ORA-28113 is returned intermittently from your application. You follow the diagnostic steps in Note 250094.1 and discover from the trace file that the predicate is truncated at about 256 characters. For example, you get the error
ORA-01756: quoted string not properly terminated
Or any other SQL syntax error resulting from the truncation of the predicate string.
Changes
In the design of your policy function, you derive the value of the predicate string directly from a namespace (context) with a construct similar to the following:
Cause
Google Print goes to extraordinary lengths to keep you from downloading images, but you don't need to go to the same extraordinary lengths to get them anyway.
It's long been stated that if you put your images up on the Web, there's no real way of stopping people from downloading them and using them for their own purposes. That's still basically true, although one of the interesting things about the new Google Print service is the unusual lengths it goes to prevent the average web user from doing exactly that.
This hack is based on an article by Gervase Markham, who has graciously allowed me to include it here. The code is mine, but I couldn't have written it without his excellent and original research. You can read his article at, including comments from many other people who were collaboratively hacking Google Print on the day it was announced.
Google Print allows you to search printed books (although Google obviously has the data in electronic form). To see it in action, search Google for Romeo and Juliet and click the link under Book Results titled "Romeo and Juliet by William Shakespeare." You'll see an image of the first page of the book, but the page is specially crafted to prevent you from printing the image or saving it to your local computer.
The first thing that prevents you from saving the image of the printed page is that the right-click context menu is disabled. Google has used the standard JavaScript trick to disable the context menu for the entire page, by returning false from the oncontextmenu handler. This is no problem for those taking back the Web. Go to Tools → Options → Web Features → Advanced JavaScript and uncheck "Disable or replace context menus." Score one for Firefox.
The next obstacle is that selecting the View Image item in the newly enabled context menu seems to show you a blank page. The <img> element for the image of the printed page is actually a transparent GIF; the real book page is defined as a CSS background image on a container <div>. If you select View Image from the context menu, all you end up with is the transparent GIF, not the background image. And since there's a foreground image overlaying the background image, Firefox suppresses the View Background Image item in the context menu. Score one for Google.
OK, let's change tactics. Open the Page Info dialog under the Tools menu, and go to the Media tab. This lists all the media on the page, and it has a Save As… button next to each media file that allows you to save that file to disk—except that it doesn't work for the one image we're interested in. It works for images inserted using <img>, <input>, and <embed>, but not for background images inserted using a CSS background-image rule. Score: Google 2, hackers 1.
My next idea was to copy and paste the URL out of page source. However, Google likes to serve pages without newlines, and there are a lot of similar URLs in them, so it would seem virtually impossible to find the right URL in the View Source window scrolling two and a half miles to the right. Score: Google 3, hackers 1.
Let's change tactics again. Since the transparent GIF is in our way (literally, it's an <img> element that is obscuring the actual image of the printed page), we can try to delete the GIF altogether using DOM Inspector.
DOM Inspector is not installed by default. If you don't see a DOM Inspector item in your Tools menu, you'll need to reinstall Firefox and select Custom Install → Developer Tools. You can safely reinstall over your existing Firefox installation. This will not affect your existing bookmarks, preferences, extensions, or user scripts.
DOM Inspector displays a tree of all the elements on the current page. Changes you make in DOM Inspector are immediately reflected in the original page. So, theoretically, we can locate the GIF in the DOM Inspector tree and just press Delete. Bang! The entire book page image disappears along with it! How did this happen? Well, the transparent GIF <img> element was providing a size for the <div> that contains it. When we removed the transparent GIF, the <div> collapsed and we could no longer see the book page image, since it was now the background image of a 0x0 <div>. Another point for Google.
No problem. In DOM Inspector, we can select the container <div> (the one helpfully declared as class="theimg"), drop down the menu on the right to select CSS Style Rules, and then manually edit the CSS to give the <div> a real width and height. Right-click in the lower pane on the right and select New Property. Enter a property name of width and a value of 400. Repeat and enter a property name of height and a value of 400.
Success! This allows us to see the background image again on the original page, albeit only partially, since the image is larger than 400 x 400. But it's enough, because the transparent GIF is gone, so we can right-click the partial book page image and select View Background Image to display the image in isolation. From there, we can save the image to disk or print it. Final score: Google 4, hackers 8. Game, set, match.
Now that we've suffered through all the gory details of Google's attempts to make your browser less functional, let's automate the process with a 20-line Greasemonkey script.
This user script runs on Google Print pages. Right out of the gate, it reenables the right-click context menu by setting document.oncontextmenu=null. Then, it uses an XPath query to find all the transparent GIFs named cleardot.gif. These are the GIFs obscuring other images. For each one, it replaces the URL of the transparent GIF with the URL of the obscured image. For bonus points, it makes the image clickable by wrapping it in an <a> element that links to the image URL.
Save the following user script as restoregoogleprint.user.js:
// ==UserScript==
// @name Restore Google Print
// @namespace
// @description restore normal browser functionality in Google Print
// @include*
// ==/UserScript==
// restore context menu
unsafeWindow.document.oncontextmenu = null;
// remove clear GIFs that obscure divs with background images
var snapDots = document.evaluate("//img[@src='images/cleardot.gif']",
document, null, XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null);
for(var i = snapDots.snapshotLength - 1; i >= 0; i--) {
var elmDot = snapDots.snapshotItem(i);
var elmWrapper = elmDot.parentNode;
while (elmWrapper.nodeName.toLowerCase() != 'div') {
elmWrapper = elmWrapper.parentNode;
}
var urlImage = getComputedStyle(elmWrapper, '').backgroundImage;
urlImage = urlImage.replace(/url\((.*?)\)/g, '$1');
// make image clickable
var elmClone = elmDot.cloneNode(true);
elmClone.style.border = 'none';
elmClone.src = urlImage;
var elmLink = document.createElement('a');
elmLink.href = urlImage;
elmLink.appendChild(elmClone);
elmDot.parentNode.insertBefore(elmLink, elmDot);
elmDot.parentNode.removeChild(elmDot);
}
After installing the user script (Tools → Install This User Script), go to and search for Romeo and Juliet. Click the link under Book Results titled "Romeo and Juliet by William Shakespeare." You will see the first page of Romeo and Juliet. Thanks to this hack, you can right-click the image of the printed page and do all the things you can normally do with an image (such as saving it to disk), as shown in Figure.
There are actually two protected images on each Google Print page: the image of the printed page and the smaller thumbnail image of the book cover. Google uses the same technique for both images, so this hack works on the cover thumbnail image as well.
07 July 2011 16:12 [Source: ICIS news]
LONDON (ICIS)--A catalytic olefins project based on technology developed by US-based engineering group KBR and
The project will be the first commercial scale plant using the advanced catalytic olefins (ACO) technology, he said. KBR and SK Innovation, formerly SK Energy, started up a demonstration plant at SK Innovation's site in
“We hope to announce the first licensee before the end of this year,” Derbyshire said. “There are several things in the pipeline and we feel strongly that one of them will close quite soon.”
The project would form part of a grassroots petrochemical project which is being planned by a third party in
Derbyshire declined to disclose the size of the plant but said it would be at the lower end of the 300,000-1m tonnes/year capacity range for such a project.
ACO technology, which is used to crack naphtha and condensates to produce light olefins, enables ethylene and propylene yields that are 15-20% higher than those of a traditional steam cracking process, according to KBR. The ACO route also enables a higher propylene/ethylene ratio than traditional steam crackers. | http://www.icis.com/Articles/2011/07/07/9475944/catalytic-olefins-project-under-study-in-china-kbr.html | CC-MAIN-2015-18 | refinedweb | 196 | 52.94 |
C library function - fmod()
Description
The C library function double fmod(double x, double y) returns the remainder of x divided by y.
Declaration
Following is the declaration for fmod() function.
double fmod(double x, double y)
Parameters
x − This is the floating point value with the division numerator i.e. x.
y − This is the floating point value with the division denominator i.e. y.
Return Value
This function returns the remainder of dividing x/y.
Example
The following example shows the usage of fmod() function.
#include <stdio.h> #include <math.h> int main () { float a, b; int c; a = 9.2; b = 3.7; c = 2; printf("Remainder of %f / %d is %lf\n", a, c, fmod(a,c)); printf("Remainder of %f / %f is %lf\n", a, b, fmod(a,b)); return(0); }
Let us compile and run the above program that will produce the following result −
Remainder of 9.200000 / 2 is 1.200000 Remainder of 9.200000 / 3.700000 is 1.800000
available_seats = ['1A', '1B', '2A', '2B', '3A', '3B', '4A', '4B', '5A', '5B',
                   '6A', '6B', '7A', '7B', '8A', '8B', '9A', '9B', '10A', '10B']
user_tickets = {}

def print_tickets():
    """Print the tickets of the user."""
    for user_name, seats in user_tickets.items():
        print(f"\nCustomer, {user_name.title()}, has chosen {len(seats)} seat(s).")
        for seat in seats:
            print(f"\tSeat number: {seat}")

print("Welcome To The Seat Booking Portal!")
start = input("Would you like to book a seat? ")

if start.lower() == 'yes':
    while True:
        seats = []
        wanted_seats = input("How many seats do you need? ")
        wanted_seats = int(wanted_seats)
        if wanted_seats > len(available_seats):
            print(f"\n--I'm sorry, we only have {len(available_seats)} "
                  "seats available--")
            print("--Please try again--")
            continue
        user_name = input("Enter your name: ")
        while True:
            print("\nHere are the available seats:")
            for seat in available_seats:
                print(seat)
            seat = input("Please enter the number of the seat you would like to book: ")
            if seat in available_seats:
                available_seats.remove(seat)
            else:
                print("\n--I'm sorry you have chosen an invalid seat--"
                      "\n-Please try again-")
                continue
            seats.append(seat)
            if wanted_seats > 1:
                print("\nYou can now choose another seat.")
                wanted_seats -= 1
                continue
            else:
                break
        user_tickets[user_name] = seats
        if available_seats:
            go_again = input("Would you like to let someone else book their tickets? (yes/no) ")
            if go_again == 'no':
                break
        else:
            break
    print_tickets()
    print("\nWe will now redirect you to the payment portal."
          "\nThank You for choosing us.")
else:
    print("You can always come by later!")
#include <Unbounded_Queue.h>
List of all members.
Move forward by one element in the set. Returns 0 when all the items in the queue have been seen, else 1.
Returns 1 when all items have been seen, else 0.
Dump the state of an object.
Move to the first element in the queue. Returns 0 if the queue is empty, else 1.
Pass back the next_item that hasn't been seen in the queue. Returns 0 when all items have been seen, else 1.
Declare the dynamic allocation hooks.
[private]
Pointer to the current node in the iteration.
Pointer to the queue we're iterating over. | https://www.dre.vanderbilt.edu/Doxygen/5.4.7/html/ace/classACE__Unbounded__Queue__Const__Iterator.html | CC-MAIN-2022-40 | refinedweb | 110 | 88.53 |
RoadElement
The class RoadElement is a member of com.here.android.mpa.common .
Class Summary
public class RoadElement
extends java.lang.Object
[For complete information, see the section Class Details]
Nested Class Summary
Method Summary
Class Details
Method Details
public boolean equals (Object obj)
Parameters:
obj
public java.util.EnumSet <Attribute> getAttributes ()
Gets the road attributes.
Returns:
Set of roadAttributes
See also:
public int getAverageSpeed ()
Gets the average speed of the road element.
Returns:
the average speed in m/s or 0 if the information is not available.
public FormOfWay getFormOfWay ()
Gets the form of way.
Returns:
the form of way of the road.
See also:
public java.util.List <GeoCoordinate> getGeometry ()
public double getGeometryLength ()
Returns the length of the polyline associated with this RoadElement in meters.
Returns:
length of polyline for this RoadElement in meters.
public int getNumberOfLanes ()
Gets number of lanes in this road element.
Returns:
the number of lanes in this road element.
public PluralType getPluralType ()
Gets the plural type of the road element.
Returns:
The plural type of the road element.
See also:
public String getRoadName ()
Gets the name of the road element. The method returns an empty string if the name is unknown.
Returns:
the name of the road.
public String getRouteName ()
Gets the route name of the road element. The route name is a short label for the road, for example I5 for the Interstate 5 in the US. The method returns an empty int hashCode ()
public boolean isPedestrian ()
Checks, if the road is allowed only for pedestrians.
Returns:
true, if road is allowed only for pedestrians, otherwise false.
public boolean isPlural ()
Tests if the road element is plural.
Returns:
true if the road element is plural. | https://developer.here.com/mobile-sdks/documentation/android-starter/topics_api_nlp/com-here-android-mpa-common-roadelement.html | CC-MAIN-2017-30 | refinedweb | 283 | 60.01 |
Data Augmentation with Masks¶.
Masks¶
Certain Computer Vision tasks (like Object Segmentation) require the use of ‘masks’, and we have to take extra care when using these in conjunction with data augmentation techniques. Given an underlying base image (with 3 channels), a masking channel can be added to provide additional metadata to certain regions of the base image. Masking channels often contain binary values, and these can be used to label a single class, e.g. to label a dog in the foreground. Multi-class segmentation problems could use many binary masking channels (i.e. one binary channel per class), but it is more common to see RGB representations, where each class is a different color. We take an example from the COCO dataset.
Data Augmentation with Masks¶
When we adjust the position of the base image as part of data augmentation, we also need to apply exactly the same operation to the associated masks. An example would be after applying a horizontal flip to the base image, we’d need to also flip the mask, to preserve the corresponsence between the base image and mask.
Color changes to the base image don’t need to be applied to the segmentation masks though; and may even lead to errors with the masks. An example with a RGB mask, would be accidentally converting a region of green for dog to blue for cat.
Custom Dataset¶
With Gluon it’s easy to work with different types of data. You can write custom Datasets and plug them directly into a DataLoader which will handle batching. Segmentation tasks are structured in such a way that the data is the base image and the label is the mask, so we will create a custom Dataset for this. Our Dataset will return base images with their corresponsing masks.
It will be based on the
mx.gluon.data.vision.ImageFolderDataset for simplicity, and will load files from a single folder, containing images of the form
xyz.jpg and their corresponsing mask
xyz_mask.png.
__getitem__ must be implemented, as this will be used by the DataLoader.
%matplotlib inline import collections import mxnet as mx # used version '1.0.0' at time of writing from mxnet.gluon.data import dataset import os import numpy as np from matplotlib.pyplot import imshow import matplotlib.pyplot as plt mx.random.seed(42) # set seed for repeatability class ImageWithMaskDataset(dataset.Dataset): """ A dataset for loading images (with masks) stored as `xyz.jpg` and `xyz_mask.png`. Parameters ---------- root : str Path to root directory. transform : callable, default None A function that takes data and label and transforms them: :: transform = lambda data, label: (data.astype(np.float32)/255, label) """ def __init__(self, root, transform=None): self._root = os.path.expanduser(root) self._transform = transform self._exts = ['.jpg', '.jpeg', '.png'] self._list_images(self._root) def _list_images(self, root): images = collections.defaultdict(dict) for filename in sorted(os.listdir(root)): name, ext = os.path.splitext(filename) mask_flag = name.endswith("_mask") if ext.lower() not in self._exts: continue if not mask_flag: images[name]["base"] = filename else: name = name[:-5] # to remove '_mask' images[name]["mask"] = filename self._image_list = list(images.values()) def __getitem__(self, idx): assert 'base' in self._image_list[idx], "Couldn't find base image for: " + image_list[idx]["mask"] base_filepath = os.path.join(self._root, self._image_list[idx]["base"]) base = mx.image.imread(base_filepath) assert 'mask' in self._image_list[idx], "Couldn't find mask image for: " + image_list[idx]["base"] mask_filepath = os.path.join(self._root, self._image_list[idx]["mask"]) mask = mx.image.imread(mask_filepath) if self._transform is not None: return self._transform(base, mask) else: return base, mask def __len__(self): return len(self._image_list)
Using our Dataset¶
Usually Datasets are used in conjunction with DataLoaders, but we’ll sample a single base image and mask pair for testing purposes. Calling
dataset[0] (which is equivalent to
dataset.__getitem__(0)) returns the first base image and mask pair from the
_image_list. At first download the sample images and then we’ll load them without any augmentation.
!wget -P ./data/images
!wget -P ./data/images
image_dir = "./data/images" dataset = ImageWithMaskDataset(root=image_dir) sample = dataset.__getitem__(0) sample_base = sample[0].astype('float32') sample_mask = sample[1].astype('float32') assert sample_base.shape == (427, 640, 3) assert sample_mask.shape == (427, 640, 3)
def plot_mx_arrays(arrays): """ Array expected to be height x width x 3 (channels), and values are floats between 0 and 255. """ plt.subplots(figsize=(12, 4)) for idx, array in enumerate(arrays): assert array.shape[2] == 3, "RGB Channel should be last" plt.subplot(1, 2, idx+1) imshow((array.clip(0, 255)/255).asnumpy())
plot_mx_arrays([sample_base, sample_mask])
Implementing
transform for Augmentation¶
We now construct our augmentation pipeline by implementing a transform function. Given a data sample and its corresponding label, this function must also return data and a label. In our specific example, our transform function will take the base image and corresponding mask, and return the augmented base image and correctly augmented mask. We will provide this to the
ImageWithMaskDataset via the
transform argument, and it will be applied to each sample (i.e. each data and label pair).
Our approach is to apply positional augmentations to the combined base image and mask, and then apply the color augmentations to the positionally augmented base image only. We concatenate the base image with the mask along the channels dimension. So if we have a 3 channel base image, and a 3 channel mask, the result will be a 6 channel array. After applying positional augmentations on this array, we split out the base image and mask once again. Our last step is to apply the colour augmentation to just the augmented base image.
def positional_augmentation(joint): # Random crop crop_height = 200 crop_width = 200 aug = mx.image.RandomCropAug(size=(crop_width, crop_height)) # Watch out: weight before height in size param! aug_joint = aug(joint) # Deterministic resize resize_size = 100 aug = mx.image.ResizeAug(resize_size) aug_joint = aug(aug_joint) # Add more translation/scale/rotation augmentations here... return aug_joint def color_augmentation(base): # Only applied to the base image, and not the mask layers. aug = mx.image.BrightnessJitterAug(brightness=0.2) aug_base = aug(base) # Add more color augmentations here... return aug_base def joint_transform(base, mask): ### Convert types base = base.astype('float32')/255 mask = mask.astype('float32')/255 ### Join # Concatinate on channels dim, to obtain an 6 channel image # (3 channels for the base image, plus 3 channels for the mask) base_channels = base.shape[2] # so we know where to split later on joint = mx.nd.concat(base, mask, dim=2) ### Augmentation Part 1: positional aug_joint = positional_augmentation(joint) ### Split aug_base = aug_joint[:, :, :base_channels] aug_mask = aug_joint[:, :, base_channels:] ### Augmentation Part 2: color aug_base = color_augmentation(aug_base) return aug_base, aug_mask
Using Augmentation¶
It’s simple to use augmentation now that we have the
joint_transform function defined. Simply set the
tranform argument when defining the Dataset. You’ll notice the alignment between the base image and the mask is preserved, and the mask colors are left unchanged.
image_dir = "./data/images" ds = ImageWithMaskDataset(root=image_dir, transform=joint_transform) sample = ds.__getitem__(0) assert len(sample) == 2 assert sample[0].shape == (100, 100, 3) assert sample[1].shape == (100, 100, 3) plot_mx_arrays([sample[0]*255, sample[1]*255])
Summary¶
We’ve succesfully created a custom Dataset for images and corresponding masks, implemented an augmentation
transform function that correctly handles masks, and applied it to each sample of the Dataset. You’re now ready to train your own object segmentation models!
Appendix (COCO Dataset)¶
COCO dataset is a great resource for image segmentation data. It contains over 200k labelled images, with over 1.5 million object instances across 80 object categories. You can download the data using
gsutil as per the instuctions below (from):
2) Download Images¶
We download the validation data from 2017 from
gs://images.cocodataset.org/val2017 as an example. It’s a much more manageable size (~770MB) compared to the test and training data with are both > 5GB.
mkdir coco_data mkdir coco_data/images gsutil -m rsync gs://images.cocodataset.org/val2017 coco_data/images
3) Download Masks (a.k.a. pixel maps)¶
gsutil -m cp gs://images.cocodataset.org/annotations/stuff_annotations_trainval2017.zip \ coco_data/stuff_annotations_trainval2017.zip unzip coco_data/stuff_annotations_trainval2017.zip rm coco_data/stuff_annotations_trainval2017.zip unzip annotations/stuff_val2017_pixelmaps.zip rm -r annotations mkdir coco_data/masks mv -v stuff_val2017_pixelmaps/* coco_data/masks/ rm -r stuff_val2017_pixelmaps | https://mxnet.apache.org/versions/1.2.1/tutorials/python/data_augmentation_with_masks.html | CC-MAIN-2022-33 | refinedweb | 1,377 | 51.44 |
Web Scraping — Introduction
What is Web Scraping?:
- Social media sentiment analysis — To understand valuable trends and tap into their potential.
- eCommerce pricing — To find the best marketplace where one can earn most profit.
- Machine learning — To gather more data.
- Investment opportunities — It can be regarding stock market or real estate etc.
- Website Ranking — Performed using keyword research, contents from various sites is analyzed and the respective sites are ranked according to a pre-decided criteria.
- News Gathering and Analysis — News regarding an issue or a group of issue is gather from various sources i.e. having various opinions and various location. Once gathered the news can then be analyzed easily.
Scraping tools
Web scraping tools are often software programmed to sift through the websites and extract needed information. Their basic functionality (in order) includes:
- Recognize unique HTML site structures — To find certain information such as images or links or headings etc., they can also be designed to recognize multiple tags.
- Extract and transform content — To pick up only the needed information while ignoring the useless.
- Store scraped data — The data can be stored as per the user convenience, e.g. in a database or even a simple file.
- Extract data from APIs — Some sites provides us with its APIs to easily extract data from it.
Web Scraping with Python
Python provides us with a rich set of web scraping tools and libraries with the help of which we can easily scrape the desired web pages.
- Requests + Beautiful Soup — This combo is often used together. The request library is used to download the web page and the beautiful soup to extracts just the data you require from the web page.
- Scrapy — This is a complete application framework for crawling websites and extracting information. Not only does Scrapy provides the functionality of both Requests and Beautiful Soup libraries but it is also faster and even more powerful.
- Selenium — Although Selenium is majorly used for web automation It can also be used to scrap the web. This library comes quite in handy especially in the case of dynamic sites.
Some background information
HTTP- Hyper Text Transfer Protocol
It is a text based protocol used by browsers to access web content.
HTTP is a client based protocol. The client makes requests and receives responses from the server. Client can be:-
- Any web browser.
- Any mobile application.
- Programs HTTP servers- HTTP clients make requests to HTTP servers.
It hosts web pages and web content, this content can either be static or dynamic.
Static content refers to the HTML(Hyper Text Markup Language) and CSS(Cascading Style Sheets) which is not changed by the client-side scripts. All browsers support a scripting language called JavaScript. If the JavaScript running on your(client) browser updates the HTML and adds interactivity to your web page, that is a dynamic web page.
HTTP Requests
Although there are many requests types, the most frequently used are:-
- GET request to fetch resources from the target URL. This request is made by browser when it is retrieving web content from a web server
- POST request to create/update resources. A good example would be updating your (client’s) activity/status on social networking sites like Facebook, twitter etc.
- PUT request to idempotently create/update. If the same PUT request is made to a server multiple times then the additional requests will have no effect.
- HEAD request to get only HTTP header. It is used to make a request to get only the metadata header information from a target URL.
- DELETE request to delete resources from a website.
Whether these request are implemented on a website is up to the website owner. If a website allows only reading of resources it will only support only the GET request.
HTTP Responses
Web servers which host websites and web applications are standing by to address the requests. On receiving a request they parse it, understand what is requested and send back a HTTP response. Every HTTP response have a few standard fields:
- Status line with code such as 200, 404. All status codes have meaning. 200 means that the response was sent back successfully, 404 says the page wasn’t found
- Response header with metadata information.
- Body of the response, which is typically understood by a web browser, this body is what the browser displays to the screen. The body can contain other content such as JSON that you(client) would create from the server.
Web Scraping
Programmatically extracting the information from the web page that is needed/useful. It is automated extraction of data from the websites; the HTTP requests are made programmatically to fetch the needed content, the content is then downloaded special tools are then used to parse the content and to extract the specific information.
The content has a specific structure that is HTML. HTML has a tree like structure that is navigated and once parsed specific information can be extracted from the webpage.
Fetching and Parsing Content
Two steps are involved in Web Scraping:
- Fetching Content: It involves programmatically accessing the content of a website. It is done by making HTTP requests to that site using a client library.
- Parsing Content: It involves extracting the information from the content that is fetched. It can be done with help of several technologies like HTML parsing, DOM parsing, Computer Vision technologies.
Fetching Content:
- HTTP client libraries are used to make HTTP requests and download content from a URL. There are many such libraries available in python such as Urllib, Urllib2, Requests, Httplib, Httplib2.HTTP client libraries when to use which one:
- Requests: A high-level intuitive API. Easy to use.
- Httplib2 and Httplib: Both of these reference the same HTTP client library. Httplib2 is the enhanced version of Httplib. Httplib 2 gives more granular control over the request that is made, it gives you fine grain control over the HTTP requests that are made. The requests can be configure in a very granular manner.
- Urllib and Urllib2: Non overlapping in python 2.7. But overlapping in python 3. In python 3 Urllib subsumes old Urllib2, Urllib2 is python 2 only. This is part of python standard library, so no need to pip install it separately. There are 4 distinct namespaces for different operations.
- Urllib.request is for opening and reading URls
- Urllib.error contains the exceptions raised by urllib.request
- Url.parse is for parsing URLs
- Urllib.robotparser is for parsing the robot.txt file.
Understanding URLs
URL stands for Uniform Resource Locator. It can be seen as an address with each part conveying different information. Below are the main parts of a URL.
- Scheme — Every URL begins with a scheme, it tells the browser about the type of address. Typically, http or https; though not always shown scheme is an integral part of URL.
- Domain Name — The major part of an URL. Different pages on same site have same domain name e.g..
- File Path — Also known as Path. It tell the browser to load a specific page on the site; given after domain name.
- Parameters — Some URLs have a string of characters after the path which begins with a question mark (?), this is called the parameter string.
- Anchor — Depicted by hash symbol (#) it tells the browser to load a particular part on the web page. Often referred as URL fragment.
As this is an introductory post we will consider the overall process of web scraping without going into much of detail. Also, this tutorial assumes that the reader has basic understanding of Python, HTML, CSS Selectors, XPath and knows how to use Google Developer Tools to find the same.
Example
The output is
History
Techniques
Software
Legal issues
Methods to prevent web scraping
See also
References
Although looking at the above example it may seem to be a stupid task to go to such extent to just fetch the list in the table of content which can be easily done by simple copy and paste, but bear in mind that the same code can be used again and again to find the content in table of content of every Wikipedia page, we just need to change the URL. This in turn saves us a lot of time.
Also, in future we will be dealing with quite some big tasks using web scraping. Being an introductory post I have kept the example easy and concise. | https://archit-sharma.medium.com/web-scraping-introduction-68a55fd988dc?source=post_internal_links---------0---------------------------- | CC-MAIN-2021-21 | refinedweb | 1,394 | 65.12 |
GNOME Bugzilla – Bug 539312
paste of large buffers often fails
Last modified: 2013-09-18 13:34:24 UTC
Please describe the problem:
This is a bug that has been present since forever, I just haven't really gotten around to reporting it. Oddly enough, noone else has either. :)
The problem is that vte seems to mishandle INCR. Pasting large buffers often results in just part of the data getting to the application. Might be a clipboard protocol issue, or a buffer handling problem where it floods stdin of the application. I don't really know how to determine which of those it is. Protocol issue is my best guess though as it stops on the same byte each time it fails.
The problem does not seem to be with the clipboard source as retrying the paste multiple times eventually gets it working. Also, pasting into another application (e.g. gedit) works just fine.
Steps to reproduce:
Actual results:
Expected results:
Does this happen every time?
No, but often (provided the buffer is large enough).
Other information:
Odd... I played around a bit more with this in an effort to provide more info, and I noticed that for at least one test case, the problem is with the display, not the actual paste. I.e. more data gets sent to the application than is echoed to the screen.
Created attachment 113125 [details]
test data
This is one example of the data I've been trying to paste. This command easily provokes it here:
cat - > foo
Reproduced.
Humm, for no apparent reason I can't reproduce this anymore.
Odd, me neither.
It's baaaack!
vte-0.17.2-1.fc10.i386
My suspicion is that this is related to bug 538344. Could you try with HEAD or with the patch applied from that bug?
This is happening again in Fedora 19 (vte 0.28.2, gnome-terminal 3.8.4).
Select more than 4KiB of text from anywhere, attempt to paste it into gnome-terminal and only the first 4KiB shows up.
I've *also* seen the purely cosmetic version of the bug in the past, where you 'cat > file' and paste, and it *looks* like the paste is truncated but the contents of the file are correct. That's not what's happening now. This time the paste really *is* truncated at 4KiB.
Hm, strange.
If I paste a *very* large chunk of a C file from an emacs buffer, it works but with the cosmetic issue as described. Only the first 4KiB shows up, I hit enter and Ctrl-D to terminate my 'cat > file', and the rest of the paste shows up as soon as I hit enter.
If I paste a large chunk of XML from firefox (actually the long line from ) it's truncated at 4KiB.
If I paste that same XML into emacs from firefox it arrives OK. If I paste it into gnome-terminal from emacs and it's also truncated at 4KiB.
And if I format it so that it isn't one big line, it also arrives OK in gnome-terminal.
Ok, we should add printf debugging to vte to see where exactly it's being lost... I'll try to take a look, though if Egmont wants to look, he should go for it ;).
[dwmw2@shinybook src]$ VTE_DEBUG=selection ./vte2_90
Fork succeeded, PID 12982
Pasting PRIMARY.
Requesting clipboard contents.
Pasting 18579 UTF-8 bytes.
Strace looks like this (fd 7 is /dev/ptmx)...
13354 write(2, "Pasting 18579 UTF-8 bytes.\n", 27) = 27
13354 mmap(NULL, 135168, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fee28bfe000
13354 write(7, "> <SOAP-ENV:Envelope xmlns:SOAP-ENV=\"\" xmlns:SOAP-ENC=\"\" xmlns:xsd=\""..., 18579) = 18579
13354 poll([{fd=6, events=POLLIN|POLLOUT}], 1, 4294967295) = 1 ([{fd=6, revents=POLLOUT}])
13354 writev(6, [{"\23\0\3\0N\0\240\2\\\1\0\0", 12}, {NULL, 0}, {"", 0}], 3) = 12
13354 recvfrom(6, "\34\0\231\2N\0\240\2\\\1\0\0Q(&-\1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 4096, 0, NULL, NULL) = 32
13354 recvfrom(6, 0x2150ca4, 4096, 0, 0, 0) = -1 EAGAIN (Resource temporarily unavailable)
13354 poll([{fd=5, events=POLLIN}, {fd=6, events=POLLIN}, {fd=3, events=POLLIN}], 3, 0) = 0 (Timeout)
13354 recvfrom(6, 0x2150ca4, 4096, 0, 0, 0) = -1 EAGAIN (Resource temporarily unavailable)
13354 poll([{fd=5, events=POLLIN}, {fd=6, events=POLLIN}, {fd=3, events=POLLIN}, {fd=7, events=POLLIN}], 4, 293) = 1 ([{fd=7, revents=POLLIN}])
13354 write(5, "\1\0\0\0\0\0\0\0", 8) = 8
13354 read(7, "> <SOAP-ENV:Envelope xmlns:SOAP-ENV=\"\" xmlns:SOAP-ENC=\"\" xmlns:xsd=\""..., 8176) = 4095
13354 read"..., 4081) = 3708
13354 read(7, 0x23b17a7, 373) = -1 EAGAIN (Resource temporarily unavailable)
Now wtf happened there? We *got* the 18579 bytes just fine from the clipboard, and we wrote them to the PTY master. And then we got some of them back. And a lot of BELs?
FWIW if I strace the 'cat' I'm doing within the terminal, it sees this:
read(0, "> <SOAP-ENV:Envelope xmlns:SOAP-"..., 65536) = 4096
write(1, "> <SOAP-ENV:Envelope xmlns:SOAP-"..., 4096) = 4096
Now I'm suspecting a kernel bug...?
Simple test case:
#define _GNU_SOURCE
#define _XOPEN_SOURCE
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <string.h>
char buf[16384];
int main(void)
{
int mfd, cfd;
memset(buf, 0x5a, sizeof(buf));
buf[16383]='\n';
mfd = getpt();
unlockpt(mfd);
grantpt(mfd);
cfd = open(ptsname(mfd), O_RDWR|O_NOCTTY|O_NONBLOCK);
write(mfd, buf, 16384);
read(cfd, buf, 16384);
read(cfd, buf, 16384);
}
open("/dev/pts/5", O_RDWR|O_NOCTTY|O_NONBLOCK) = 4
write(3, "ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ"..., 16384) = 16384
read(4, "ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ"..., 16384) = 4096
read(4, 0x6010a0, 16384) = -1 EAGAIN (Resource temporarily unavailable)
exit_group(-1) = ?
Kernel ate my buffer.
Umm. Can you repro with other terminals too?
Also, you planning to followup somewhere else? A closed bug is no good for tracking. Thanks for debugging BTW.
Behdad, you're reading my mind :) This is the second item on my personal bugfixing todo list (apart from rewrapping on resize). Unfortunately I can't promise any timeline when I'd be able to work on this, nor can I promise that I'd be able to fix it.
It's a kernel bug (surprisingly MacOS kernel being buggy too). I reported it to kernel developers a while ago: , but I failed to convince them it was a kernel bug. Maybe you could help me convince them and get it fixed.
Luckily it's very easily reproducible on usermode linux, so it only takes a few seconds to recompile and test any attempt and get any debug info, without actually rebooting the box. The tough part IMO is to get to understand the kernel's code.
It's having the tty in icanon mode which does it. Canonical mode is line-editing mode, as opposed to non-canonical mode where the input is simply passed through directly.
The kernel has a 4KiB buffer for the line editing, and when it receives more than 4096 characters it'll just beep unhappily at you and drop them.
Hence all those \7 (BEL) characters in what we read back from the pty master, in the strace in comment 11.
Interestingly, fpathconf(cfd, _PC_MAX_CANON) is returning 255, not 4096.
I'd certainly be open to expanding the buffer, but it's non-swappable kernel memory. It's a very non-trivial task to do it in a way that doesn't open the system to rampant denial of service attacks.
Looking at your ptmx3.c test case in that bug, I think we're experiencing something different. My issue is simply that I am overflowing the line buffer with a line that is too big. You are probably experiencing some issues with switching to/from icanon mode, not consistently flushing that buffer when we *exit* icanon mode or something like that?
We're experiencing something little bit different, but the main point is the same.. Or, at least there should be a combination of all those termios flags (and still be able to switch back-n-forth between raw and cooked) where this is the behavior.
It should be vte's responsibility (and IIRC it does it correctly) to continue feeding the paste data into the kernel, while still processing the data printed by the application to make sure there's no deadlock between the two.
(In reply to comment #18)
> We're experiencing something little bit different, but the main point is the
> same.
I don't know about that. Your case does seem to be a kernel bug. I'm not sure about mine.
>.
And then what? If the slave is in canonical (line-editing) mode, and we have a 4KiB buffer, there is *nothing* you can do to send a line that is more than 4KiB in size. It just can't be done. There is nothing that anything in userspace can do (apart from switching the slave to non-canonical mode, of course) to allow a line more than 4KiB to get through.
> It should be vte's responsibility (and IIRC it does it correctly) to continue
> feeding the paste data into the kernel, while still processing the data
> printed by the application to make sure there's no deadlock between the two.
But the application won't *receive* the line until the newline at the end of it is sent. It's not simply a case of vte waiting for the buffer to clear — the contents of the buffer *can't* be sent to the slave until the line is complete, and we can't write extra data to complete the line because there's not enough room in the buffer for it...
Okay, I understand your point, you're right. Indeed, if the slave is in canonical (line-editing) mode (such as a "cat" command), and you paste >4kiB data, and that data has no '\n' character inside, there's not much you can do to prevent dropping bytes.
(Note: if you're typing from keyboard, pressing ^D will cause that partial line to get delivered to the application without a trailing newline. Maybe vte could simulate the same after every 4kB? It'd be a nasty workaround, probably with other side effects, not sure if it's worth it.)
However, this is not what the original bugreport was about. The example data there is a "nice" text file (lines of ~100 characters), and pasting that should work perfectly.
OK, that's your kernel bug for which you have a test case. I'll look into that further when I get home from Linuxcon/Plumbers conf. | https://bugzilla.gnome.org/show_bug.cgi?id=539312 | CC-MAIN-2021-39 | refinedweb | 1,789 | 73.27 |
For code/output blocks: Use ``` (aka backtick or grave accent) in a single line before and after the block. See:
Bracket orders - am I supposed to only have one open at a time?
- booboothefool last edited by
In the documentation:
There is code like:
def next(self): if self.orefs: return # pending orders do nothing
def notify_order(self, order): if order.status == order.Completed: self.holdstart = len(self) if not order.alive() and order.ref in self.orefs: self.orefs.remove(order.ref)
self.orefs = [o.ref for o in os] else: # in the market if (len(self) - self.holdstart) >= self.p.hold: pass # do nothing in this case
I am wondering what the point of all this is? It makes it look like it's not a good idea to have multiple brackets open at a time. I am indeed running into issues with duplicates/bracket orders hanging around. Can someone clarify if I should be using this template of "pending orders do nothing"?
@booboothefool
Indeed with one data there should only be one bracket open. However, when using multiple datas, then you need a dictionary to manage the orders. See the article here and the code at the bottom. | https://community.backtrader.com/topic/2550/bracket-orders-am-i-supposed-to-only-have-one-open-at-a-time/2 | CC-MAIN-2020-24 | refinedweb | 201 | 67.86 |
Hi,
like I already posted on this list on 2006-06-12 I wrote a Perl module, named
SVN::Dumpfilter to read and filter Subversion dumpfiles and was planning to
publish it on CPAN. Because of the lack of time I just put it on my homepage
( ).
In the meantime Philippe 'BooK' Bruhat wrote a similar module called
SVN::Dump, which has a quite different user interface.
Now I planning to put it on CPAN any time soon. I would like to ask if
anything speaks again the proposed name 'SVN::Dumpfiler' and if there is
something I should do before publishing a module in the SVN namespace.
If anyone like to give suggestion what (additional) feature this module
should have then he/she should not hesitate to write me.
Best,
Martin
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org
Received on Tue Jan 2 22:49:39 2007
This is an archived mail posted to the Subversion Dev
mailing list. | https://svn.haxx.se/dev/archive-2007-01/0054.shtml | CC-MAIN-2018-05 | refinedweb | 172 | 62.58 |
Stefano Mazzocchi wrote:
> Because it would not make sense. The XML publishing model is different
> from PDF even if has many parts in common. For example, the use of
> namespaces allows the integration of other capabilities without messing
> up with the language itself, thing that is not possible with PDF. For
> example, FO + SMIL integration opposed to PDF with video support.
I understand but it sounds a bit like science-fiction right now. We only have one XSL:FO
-> PDF converter which produces PDF with no integration with languages like SMIL. I think
there are no tools for the processing model you presented. But there are a lot of tools
for publishing giants like TeX, PDF, Postscript.
Porting XML to them (which is not directly publishing format) could be very economical
(especially with the use of XSLT as a transforming tool).
Just allowing to add a semantic layer to published documents and reusing all existing
tools.
Andrzej | http://mail-archives.apache.org/mod_mbox/xml-general/200004.mbox/%3C37259430.7795B0BE@step.pl%3E | CC-MAIN-2015-27 | refinedweb | 157 | 64.2 |
I have a python script using NLU that works fine in a DSX notebook but when I try to run it in a python IDE on my desktop I get an error.
I have credentials and proxies set up and I can successfully get an object from BlueMix storage. So I dont think the problem is our corporate firewall.
I have recreated the problem with just this code snippet :
import watson_developer_cloud import watson_developer_cloud.natural_language_understanding.features.v1 as features nlu = watson_developer_cloud.NaturalLanguageUnderstandingV1(\ version='2017-08-22', username= 'xxx', password= 'xxx' ) f = [features.SemanticRoles()] response = 'Eliza is talking to Sadie' nlu.analyze(text=response, features=f)
On DSX I get results, on my desktop I get TypeError: is not JSON serializable
seems like I made a typo. Hmm the angle brackets are the problem, I'll use double square brackets instead:
On DSX I get the results I expect, on my desktop I get TypeError: [[watson_developer_cloud.natural_language_understanding.features.v1.SemanticRoles object at 0x000000000C20D2E8]] is not JSON serializable
Answer by NorrisH (58) | Dec 08, 2017 at 02:50 PM
I figured out that for our corporate firewall I needed :
os.environ["REQUESTS_CA_BUNDLE"]="full path to our PEM crt"
Apparently I did not need this in order to pull data off of bluemix object storage, but I did need it to get results from NLU. Perhaps it enables the json results to be passed back properly.
Also, I ended up using the latest watson_developer_cloud, v1.0.1
Answer by @chughts (12485) | Nov 20, 2017 at 07:48 PM
Try running
import watson_developer_cloud print (watson_developer_cloud.__version__)
In both and seeing if they report the same. I suspect that they won't.
Sorry for the delay, juggling too many projects. I just tried this and you are correct. BUT I don't know what to do about it .
To my surprise, the watson_developer_cloud in DSX is OLDER than the one I have installed.
DSX (where things work): 0.26.1 Desktop (where JSON is not serializable): 1.0.0
Any suggestions? In the mean time, I guess I'll see if I can use pip to roll back my version on my desktop.
Is this a Python 2 vs Python 3 thing?
OK, documentation seems to show it is NOT a python 2 vs 3 thing, v1.0.0 is supposed to work on both, as is v1.0.1. I tried to upgrade to v1.0.1 first, being overly optimistic. When that didn't work(same serialization error), I used the -I option for pip to roll back to 0.26.1 and now I have a new error:
SSLError: HTTPSConnectionPool(host=gateway.watsonplatform.net, port=443): Max retries exceeded with url: /natural-language-understanding/api/v1/analyze?version=2017-08-22 (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed
This error is in response to the line in my code: nlu.analyze(text=response, features=f)
Again, I'm trying to use all the keys and certifications that worked on DSX. Could this be some sort of corporate firewall issue? I believe I have all proxy environment variables set correctly
It maybe because you would have upgraded the pre-requisites for watson_developer_cloud. If after upgrading you were still getting the same json error, then perhaps you needed to restart the DSX kernel to pick up the new library.
127 people are following this question.
Issue in using WKS model on NLU instance on a dedicated IBM Cloud account 1 Answer
Watson knowledge studio custom model deploy on NLU 2 Answers
How To use RESTful step to GET from JSON array 0 Answers
What are the differences between the public service and Bluemix Dedicated service when it comes to language support of natural language understanding? 1 Answer
How can I get the feature 'keyword' with the content-language 'ko' in Natural Language Understanding. 1 Answer | https://developer.ibm.com/answers/questions/414191/nlu-python-api-watson-developer-cloudnatural-langu.html?childToView=414192 | CC-MAIN-2019-30 | refinedweb | 646 | 56.76 |
If you're asking, "What's Yii?" check out my earlier tutorial, Introduction to the Yii Framework, which reviews the benefits of Yii and includes an overview of what's new in Yii 2.0, released in October 2014.
In this Programming With Yii2 series, I'm guiding readers..
What's Amazon S3?
Amazon S3 provides easy-to-use, advanced cloud-based storage for objects and files. It offers 99.99% availability and 99.999999999% durability of objects.
It offers a variety of features for simple or advanced usage. It's commonly used as the storage component for Amazon's CDN service CloudFront, but these are distinct and can be used independently of each other.
You can also use S3 to migrate files over time to archive in Amazon Glacier, for added cost savings.
Like most all of AWS, you operate S3 via APIs, and today, I'm going to walk you through browsing, uploading and downloading files from S3 with Yii.
Getting Started
To run the demonstration code, you'll need your own Amazon AWS account and access keys. You can browse your S3 tree from the AWS console shown below:
S3 consists of buckets which hold numerous directories and files within them. Since I used to use AWS as a CDN, my WordPress tree remains in my old bucket. You can browse your bucket as well:
As I traverse the tree of objects, here's a deeper view of my bucket contents:
Programming With S3
Again, I'll build on the hello tree from GitHub for our demonstration code (see the link on this page.) It's derived from Yii2 basic.
Obtaining Your Access Keys
You will need access keys for the AWS S3 API if you don't already have them. If not, you can get them by browsing to Security Credentials and creating a new pair:
For our code demonstration, you'll need to place them in your hello.ini file with other secure keys and codes:
$ more /var/secure/hello.ini mysql_host="localhost" mysql_db="hello" mysql_un="tom_mcfarlin" mysql_pwd="is-never-gonna-give-up-rick-astley" aws_s3_access = "AXXXXXXXXXXXXXXXXXXXXXXXXXXXA" aws_s3_secret = "nXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXb" aws_s3_region = "us-east-1"
Installing the Yii Extension for AWS
For this tutorial, we'll use Federico Motta's AWS extension for Yii2. He's definitely the youngest Yii programmer whose code I've used for an Envato Tuts+ tutorial:
Isn't it amazing how quickly kids are picking up programming these days?
Here's the installation process using composer:
$
Afterwards, I also installed the two libraries it suggests, but did not install all of the next level of suggestions for my local development machine:
$
I also registered the
awssdk component within hello/config/web.php:
' ],
Browsing My S3 Directories
For today's demonstration, I created a hello/controllers/StorageController.php with action methods to run each example, such as to browse directories.
These methods in turn call the Storage.php model I created with their own methods.
Here's the controller code:
public function actionBrowse() { $s = new Storage(); $s->browse('jeff-reifman-wp',"manual"); }
It requests that the Storage model reach up to the clouds in the "S3ky" and browse the manual directory.
Each time the Storage.php model is instantiated, it loads the AWS SDK extension and creates an S3 instance:
<?php namespace app\models; use Yii; use yii\base\Model; class Storage extends Model { private $aws; private $s3; function __construct() { $this->aws = Yii::$app->awssdk->getAwsSdk(); $this->s3 = $this->aws->createS3(); }
In my browse example, I'm just echoing the directories and files, but you can feel free to customize this code as you need: />'; } } }
Here are the results when I browse to:
Uploading Files
To upload a file, you need to specify the local path and the remote destination key. Here's the controller code for upload:
public function actionUpload() { $bucket = 'jeff-reifman-wp'; $keyname = '/manual/upload.txt'; $filepath ='/Users/Jeff/Sites/hello/upload.txt'; $s = new Storage(); $result = $s->upload($bucket,$keyname,$filepath); echo $result['ObjectURL']; }
And here is the Storage model method:
public function upload($bucket,$keyname,$filepath) { $result = $this->s3->putObject(array( 'Bucket' => $bucket, 'Key' => $keyname, 'SourceFile' => $filepath, 'ContentType' => 'text/plain', 'ACL' => 'public-read', 'StorageClass' => 'REDUCED_REDUNDANCY', 'Metadata' => array( 'param1' => 'value 1', 'param2' => 'value 2' ) )); return $result;
Browsing to displays the returning URL from which I can view the uploaded file, because I specified
public-read in my code above:
In turn, browsing to the S3 address above shows the contents of the uploaded file:
This is a test to upload to S3
Downloading Files
Here's the controller code for downloading a file:
public function actionDownload() { $s = new Storage(); $f = $s->download('jeff-reifman-wp','files/2013/01/i103-wedding-cover.jpg'); //download the file header('Content-Type: ' . $f['ContentType']); echo $f['Body']; }
Since the browser responds to the content-type, it should display the appropriate image, which I'm requesting here.
Note: I'm downloading a cover image from my experience marrying a corporation named Corporate Person to a woman (yes, it actually happened). The marriage didn't work out long term.
Here's the Storage model code for downloading:
public function download($bucket='',$key ='') { //get the last object from s3 //$object = end($result['Contents']); // $key = $object['Key']; $file = $this->s3->getObject([ 'Bucket' => $bucket, 'Key' => $key, ]); return $file; // save it to disk }
Here's what you see when the file is streamed to the browser—that's the bride celebrating by waving the actual marriage license to Corporate Person (I'm smiling in the background, mission accomplished).
Certainly, you could just as easily store the results on your server in a file. It's up to you. I encourage you to play with the code and customize it as you need.
What's Next?
I hope this helps you with the basics of using AWS S3 from your Yii application.
If you like the concept of cloud-based object and file storage but want to find other providers, check out Alternatives to Amazon AWS. I've been gradually moving away from AWS for a number of reasons mentioned in the article. One of my next tasks is to migrate my S3 objects that are still partly in use to my own server, which I can mirror with KeyCDN.. The Meeting Planner application in the startup series is now ready for use, and it's all built in Yii.
If you'd like to know when the next Yii2 tutorial arrives, follow me @reifman on Twitter or check my instructor page. | http://esolution-inc.com/blog/programming-with-yii2-using-amazon-s3--cms-26347.html | CC-MAIN-2019-18 | refinedweb | 1,082 | 51.78 |
Multiples Of 5
June 12, 2018
We have today an interview question from Amazon:
Write a program to determine if an integer is a multiple of 5 in O(log n) time complexity. You cannot use the division or modulus operators.
Your task is to solve the Amazon interview question. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
Advertisements
(define (mult5 i)
(let* ((s (number->string i))
(n (string-length s)))
(memv (string-ref s (- n 1)) ‘(#\0 #\5))))
Some out-of-the-box approach to it, using Julia:
code:
function IsMultipleOf5(x::Int64)
X = string(x)
z = parse(Int64, X[end])
return (z == 0)||(z == 5)
end
testing:
IsMultipleOf5(150)
true
IsMultipleOf5(15)
true
IsMultipleOf5(12)
false
I’m not sure about the O() of this code, but it’s definitely faster than brute-forcing the problem.
In Python. I do not think that converting the number to string is a real solution, as it is not easy to write this conversion without div and mod.
Perl version return ! mod($a,5);
def is_multiple_five(n):
return str(n)[-1] in [“0″,”5”]
If we are going to do string manip and assume “integer”s
[sourcecode lang="perl"]
sub mult5{shift=~/[05]$/}
[sourcecode]
Ooops… If we are going to do string manip and assume “integer”s
Build a finite-state machine to process the bits right to left.
bool mult5(unsigned int n) {
unsigned short state = 0;
while (n != 0) {
switch (state) {
case 0:
state = (n & 1) ? 2 : 0;
break;
case 1:
state = (n & 1) ? 0 : 3;
break;
case 2:
state = (n & 1) ? 3 : 1;
break;
case 3:
state = (n & 1) ? 1 : 4;
break;
case 4:
state = (n & 1) ? 4 : 2;
break;
}
n >>= 1;
}
return (state == 0);
}
What the hell? Overcomplicating solutions won’t get you the job, either.
Python.
Define a recursive mod() function and use that to check if n % 5 == 0.
Or, here’s an iterative version that uses a non-restoring binary division algorithm to compute the remainder.
class multiply {
}
I like the finite state machine and recursive mod solutions.
A Python solution based on bisection:
Surprisingly, no-one has looked into the Log(10) option yet:
function imo5(x::Int64)
if x == 0; return true; end
if x < 0; x = -x; end
z = floor(log(10, x))
end
After all, what are logarithms if not a quicker way to do division, multiplication, and powers?
:) If you allow floating point numbers, logarithms and exponentials then you can do
Here’s a silly program, but it is very fast, and it’s specific to Gambit Scheme:
with resulting times
Converting the number to a string and checking the last digit would take a lot longer:
In Ruby. Using built-in Binary Search which has O(log n) charateristics.
Another solution in Ruby using a modified binary search algorithm.
Learning assembly MIPS. I’m using MARS to simulate it:
I made the mistake of trying
which immediately filled my 32GB memory with a list of multiples of 5:
So here’s a pseudo-R6RS version:
that computes
Here’s a solution in C++.
Example:
Two Perl6 solutions:
A solution in Racket.
Since 2^4 = 16 mod 5 is 1, we can just add up the (precalculated) mod 5s of the hex digits of n:
I like @matthew’s solution. That’s a great idea.
Here’s a solution in C, using an approach from section 10-17 of Hacker’s Delight.
Example:
(my recent solution assumes 64-bit integers)
I just realized my recent solution uses division :-(.
Here’s the updated version without division.
Here’s a corresponding Python version (unlike the C version above, this works on arbitrary sized integers).
Here’s a solution in C that uses binary long division.
Example:
@Daniel: nice solution. Actually, @gambiteer seems to have first made the observation about 2^4 = 1 mod 5. This means that n >> 4k = n mod 5 for any n and we can do this:
I initially missed @gambiteer’s solution. That’s a nice approach.
Here’s a similar approach that processes one bit at a time, utilizing 2**i mod 5 = {1,2,4,3}[i%4].
@programmingpraxis:
Your solution is $O((\log n)^2)$ in both space and time.
You compute a list of (approximately) $5^1$, $5^2$, …, $5^{\log n}$, which takes time and space proportional to $1+2+\cdots+\log n$, which is proportional to $(\log n)^2$.
Which is why it filled up the entire memory of my machine so quickly when I tried it on $5^{10,000,000}$.
I haven’t analyzed all the algorithms presented here, but I suspect that several others don’t satisfy the $O(\lot n)$ requirement except under very special circumstances.
When assuming 32-bit integers, for example, all algorithms are $O(1)$.
@Gambiteer: good point. Here’s a Python version of my solution up above that works on arbitrary length integers in log(n) time (ie. linear in the length of the binary representation). The number is repeatedly split into two halves and added (with a lower half of 4n bits, to preserve value mod5). 2^10000000000 is dealt with in a few seconds: | https://programmingpraxis.com/2018/06/12/multiples-of-5/ | CC-MAIN-2019-04 | refinedweb | 883 | 63.19 |
BBC micro:bit
IS31FL3731 Digital Clock
Introduction
This project came about when I was experimenting with the Adafruit Charlieplexed 16x9 matrix. It's based on something I did on an Arduino with the Lolshield. The Adafruit board has a few more LEDs and makes for a nicer way to display the time.
The image shows the time being displayed on the matrix. 4 digits can be displayed, but not with a leading zero under my scheme.
Circuit
This is the simplest part of the idea. Both devices connect to i2c. They have different addresses and can share the same connection. There's nothing fancy.
Connect the VCC of each board to 3V, connect the GNDs to GND. The SDAs go to pin 20 and the SCLs to pin 19. That's it.
Programming
When I started on this, I quickly found how inefficient my code was for the Adafruit display. Instead of writing it better than I had, I just hacked about a bit and swapped function calls for code to do the same job.
The next problem was how to define a 'font' for the numeric digits. I went for the same approach I'd done before and designed them to be 3x5 in size, like this,
These 'characters' could be displayed on the matrix like this,
This spacing means that 1 is the only digit that can be displayed first. In a 12 hr clock, that isn't a problem.
Then I needed a way to represent it. I went for the simplest, but least efficient way to do this by using the denary values of the LEDs that would be on if the digit were in the top left of the matrix. This gave me the list of lists called DIGITS.
from microbit import * class Matrix: DIGITS = [ [0,1,2,16,18,32,34,48,50,64,65,66], [2,18,34,50,66], [0,1,2,18,32,33,34,48,64,65,66], [0,1,2,18,32,33,34,50,64,65,66], [0,2,16,18,32,33,34,50,66], [0,1,2,16,32,33,34,50,64,65,66], [0,1,2,16,32,33,34,48,50,64,65,66], [0,1,2,18,32,33,34,50,64,65,66], [0,1,2,16,18,32,33,34,48,50,64,65,66], [0,1,2,16,18,32,33,34,50,64,65,66] ] def __init__(self): # off an on again self.write_reg8(0x0b, 0x0a,0x0) sleep(10) self.write_reg8(0x0b, 0x0a,0x1) # select picture mode self.write_reg8(0x0b, 0, 0) self.write_reg8(0x0b, 0x01, 0) self.fill(0) for f in range(8): for i in range(18): self.write_reg8(f,i,0xff) # turn off audio sync self.write_reg8(0x0b,0x06, 0x0) def fill(self, value): self.select_bank(0) for i in range(6): d = bytearray([0x24 + i * 24]) + bytearray(([value]*24)) i2c.write(0x74, d, repeat=False) def select_bank(self, bank): self.write_reg(0xfd, bank) def write_reg(self,reg,value): i2c.write(0x74, bytes([reg,value]), repeat=False) def write_reg8(self,bank, reg, value): self.select_bank(bank) self.write_reg(reg, value) def set_led_pwm(self, lednum, frame, value): self.write_reg8(frame, 0x24 + lednum, value) def display_digit(self,d,x): for l in self.DIGITS[d]: self.write_reg8(0,0x24+l+x,255) # RTC functions addr = 0x68 buf = bytearray(7) a = Matrix() while True: hh,mm,ss,YY,MM,DD,wday = get_time() a.fill(0) if hh>12: hh = hh - 12 if hh>9: a.display_digit(1,31) hh = hh % 10 a.display_digit(hh,35) a.set_led_pwm(55,0,255) a.set_led_pwm(87,0,255) q, r = divmod(mm,10) a.display_digit(q,41) a.display_digit(r,45) sleep(60000)
The display_digit method allows us to draw a digit at a particular position on the matrix.
The long sleep could be reduced. The display is being refreshed as it is, you need to think about how much filcker you want and the level of precision you are looking for.
Challenges
- One thing to consider is changing the display code to show whether the time shown is am or pm. There are 2 rows of LEDs that could show this, perhaps with a row at the top for am and one at the bottom for pm.
- Go for a less conventional clock face and you can fit in 4 digits and 24 hour time.
- Since you can make a 5x5 clock face on the micro:bit matrix, you could do 3 side-by-side here to display hours, minutes and seconds.
- The code isn't particularly well optimised here. The class could do with adapting and flattening where possible. The main loop causes an update of all of the digits. If you were wanting to show seconds, why update a digit that will need to be the same for an hour or more? To have time checks every second, you would want to remove the statement that clears the display by filling 0.
- The code for diplaying the digits could be used without the RTC, perhaps to make a stopwatch based on micro:bit time, a scoreboard for a game you are playing, or something else entirely. Shift the colon out of the image and you can easily do 4 plain digits. | http://multiwingspan.co.uk/micro.php?page=mclock | CC-MAIN-2018-22 | refinedweb | 884 | 74.9 |
On Wed, Apr 10, 2002 at 01:26:17AM -0700, Robert Tiberius Johnson wrote: > - I tend to update every day. For people who update every day, the > diff-based scheme only needs to transfer about 8K, but the > checksum-based scheme needs to transfer 45K. So for me, diffs are > better. :) I think you'll find you're also unfairly weighting this against people who do daily updates. If you do an update once a month, it's not as much of a bother waiting a while to download the Packages files -- you're going to have to wait _much_ longer to download the packages themselves. I'd suggest your formula would be better off being: bandwidthcost = sum( x = 1..30, prob(x) * cost(x) / x ) (If you update every day for a month, your cost isn't just one download, it's 30 downloads. If you update once a week for a month, your cost isn't that of a single download, it's four times that. The /x takes that into account) Bandwidth cost, then is something like "the average amount downloaded by a testing/unstable user per day to update main". My results, are something like: 0 days of diffs: 843.7 KiB (the current situation) 1 day of diffs: 335.7 KiB 2 days of diffs: 167.7 KiB 3 days of diffs: 93.7 KiB 4 days of diffs: 56.9 KiB 5 days of diffs: 37.5 KiB 6 days of diffs: 26.8 KiB 7 days of diffs: 20.7 KiB 8 days of diffs: 17.2 KiB 9 days of diffs: 15.1 KiB 10 days of diffs: 13.9 KiB 11 days of diffs: 13.2 KiB 12 days of diffs: 12.7 KiB 13 days of diffs: 12.4 KiB 14 days of diffs: 12.3 KiB 15 days of diffs: 12.2 KiB ...which pretty much matches what I'd expect: at the moment, just to update main, people download around 1.2MB per day; if we let them just download the diff against yesterday, the average would plunge to only a couple of hundred k, and you rapidly reach the point of diminishing returns. 
I used figures of 1.5MiB for the standard gzipped Packages file you download if you can't use diffs, and 12KiB for the size of each daily diff -- if you're three days out of date, you download three diffs and apply them in order to get up to date. 12KiB is the average size of daily bzip2'ed --ed diffs over last month for sid/main/i386. The script I used for the above was (roughly): #!/usr/bin/python def cost_diff(day, ndiffs): if day <= ndiffs: return 12 * 1024 * day else: return 1.5 * 1024 * 1024 def prob(d): return (2.0 / 3.0) ** d / 2.0 def summate(f,p): cost = 0.0 for d in range(1,31): cost += f(d) * p(d) / d return cost for x in range(0,16): print "%s day/s of diffs: %.1f KiB" % \ (x, summate(lambda y: cost_diff(y,x), prob) / 1024) I'd be interested in seeing what the rsync stats look like with the "/ days" factor added in. Cheers, aj -- Anthony Towns <aj@humbug.org.au> <> I don't speak for anyone save myself. GPG signed mail preferred. ``BAM! Science triumphs again!'' --
Attachment:
pgpB7WQ5q7w5S.pgp
Description: PGP signature | https://lists.debian.org/debian-devel/2002/04/msg00863.html | CC-MAIN-2017-09 | refinedweb | 567 | 84.78 |
This article will be a practical rundown of working with strings in Python, made up of things I constantly forget and have to look up how to do. I hope it will serve as a super-quick reference for me as well as for anybody else who stumbles here.
This document is not intended for beginners to Python. Although you can still get something out of it, it's best suited for intermediate Python programmers. I tried to illustrate the concepts in a crisp manner with minimum carry-over context from one section to the next.
Table of Contents
- Defining Strings
- Auto-concatenated Strings
- Raw Strings
- Concatenation
- Splitting
- Substring Check
- Learning About the Contents
- Transformations
- String Formatting
- Docstrings
- Conclusion
Defining Strings

Single and Double Quoted Strings
We'll refer to strings delimited by the `'` character as single quoted strings and those delimited by `"` as double quoted strings.

They are identical in all respects, except that the single quote needs to be escaped in single quoted strings and the double quote needs to be escaped in double quoted strings.
They cannot span multiple lines: a string's ending quote character must appear on the same line as its beginning. This can be worked around by using a `\` character at the end of the line. For example:
```python
text = 'abc\
def'
print(text)
```
This will print:
```
abcdef
```
But it's best to avoid using `\` to break strings into multiple lines. It's not pretty, and there are better ways to do it, especially auto-concatenated strings (discussed below).
Triple Quoted Strings
Triple quoted strings are a syntax for defining multi-line strings. There's no practical difference between defining strings with `'''` and `"""`.
In practice, this syntax is commonly used for one of the following:
- Docstrings (discussed below), for writing documentation for classes/functions.
- Module level constant strings that contain long multi-line content. Can be used for small HTML templates that are stored inline or complex SQL queries, long regular expression patterns etc.
- An approximation for multi-line comments. Python doesn't have multi-line comments (like `/*` and `*/` in C-like languages). Wrapping whole code blocks with triple quotes can turn them into a pseudo-comment. I personally discourage this, but it's nonetheless used in real-world code.
The string created when using triple quoted strings will contain everything between the triple quotes. This includes any indentation present due to Python block-style formatting. For example:
```
 1  def make_story():
 2      text = '''
 3      Once upon a time, there was a planet.
 4      Suddenly, it named itself Earth.
 5      And it hoped to live happily ever after.
 6      '''
 7      return text
 8
 9
10  print(repr(make_story()))
```
This will produce the following output:
'\n Once upon a time, there was a planet.\n Suddenly, it named itself Earth.\n And it hoped to live happily ever after.\n '
There are three things to note in the string defined in this function:

- It starts with a newline character, the one that comes right after the opening `'''` on line 2.
  - This particular point can be easily addressed by adding a `\` right after the opening `'''`.
- Each line, except for the first, starts with four spaces, because of the indentation of the `make_story` function.
  - The `textwrap.dedent` function from the standard library can help deal with this. Details in the next paragraph.
- It ends with a newline character and the four spaces from line 6.
  - Calling `.strip` (or `.rstrip`) on the string can remove this.
Considering the above three points, we rewrite the previous code fragment as:
```python
import textwrap


def make_story():
    text = textwrap.dedent('''\
        Once upon a time, there was a planet.
        Suddenly, it named itself Earth.
        And it hoped to live happily ever after.
        '''.rstrip())
    return text
```
Note that it is important to use `.rstrip` here, and not `.strip`. The reason is that `.strip` will remove the whitespace before the `Once...` line, and so the first line in the string won't have any indentation. Now, the documentation of `textwrap.dedent` says:

> Remove any common leading whitespace from every line in text.

But since our first line doesn't have the indentation anymore, there's no common leading whitespace in `text`. So, this function won't remove the indentation. Another option would be to do `dedent` first, and then call `.strip` on the result of `dedent`.
The output of this program would be:
'Once upon a time, there was a planet.\nSuddenly, it named itself Earth.\nAnd it hoped to live happily ever after.'
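For completeness, here's a sketch of that other ordering mentioned above, `dedent` first and then `.strip` on its result. Without the trailing `\`, the literal begins with an empty first line, so every content line carries the common indentation and `dedent` can find it:

```python
import textwrap


def make_story():
    # dedent first: the blank first line is ignored when computing the
    # common margin, so the 8-space indentation is stripped from every line.
    # strip then removes the leading and trailing newlines/whitespace.
    text = textwrap.dedent('''
        Once upon a time, there was a planet.
        Suddenly, it named itself Earth.
        And it hoped to live happily ever after.
        ''').strip()
    return text


print(repr(make_story()))
```

The result is the same string as the `rstrip`-then-`dedent` version; which ordering you prefer is mostly a matter of taste.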
Escape Characters
Backslash-based escape characters behave exactly the same way in strings defined with any quote type.

Following is a list of commonly used escape characters. This list is not exhaustive:

- `\n`: newline (line feed)
- `\t`: horizontal tab
- `\r`: carriage return
- `\\`: a literal backslash
- `\'`: single quote
- `\"`: double quote
Regarding escaping quote characters:
- Single quotes don't have to be escaped in double quote strings, but it's not an error to do so.
- Double quotes don't have to be escaped in single quote strings, but it's not an error to do so.
- Neither quote has to be escaped in triple quoted strings, but it's not an error to do so.
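A quick sketch of these three rules side by side (the sample strings are my own):

```python
a = 'It\'s "quoted"'     # single quote escaped; double quotes need no escape
b = "It's \"quoted\""    # double quotes escaped; single quote needs no escape
c = '''It's "quoted"'''  # neither needs escaping in triple quoted strings

# All three literals define the same string object value.
assert a == b == c
print(a)  # -> It's "quoted"
```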
In triple quoted strings, an unescaped delimiter sequence cannot be part of the string. For example, a bare `'''` sequence cannot appear inside a string defined with `'''`, because it would terminate the string; it can only be included by escaping at least one of the three quotes, as in `'''a\'''b'''`. It may freely be part of the string when it's defined with `"` or `"""`.
Auto-concatenated Strings
Python has a nice compiler-level feature to auto-concatenate literal strings that are next to each other (or more correctly, that form a single expression). Take a look at an example to illustrate the point:
```
1  query = (
2      'SELECT * FROM employees'
3      ' WHERE name = ?'
4  )
5
6  print(query)
```
The string `query` is defined as two parts, on lines 2 and 3. These two strings will be concatenated automatically at compile time. The output of the above program would be:
SELECT * FROM employees WHERE name = ?
Things to note regarding this behaviour:

- The strings don't have any operator between them, like `+` or `,` or anything else.
- This works only with string literals; it won't work when applied to variables.
- This is a compile-time feature, and so is more performant than string concatenation using the `+` operator.
- The multiple string literals should be part of the same expression. So, if we are writing them on multiple lines, they have to be wrapped in parentheses, or we should use the `\` character to tell Python to treat multiple lines as a single expression.
- It works with ordinary strings, raw strings and format strings, in any combination.
Thanks to this feature, there's almost never a reason to define long string constants by concatenating several strings.
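To illustrate the last point above, here's a small sketch mixing the three literal kinds (the variable names and values are my own):

```python
name = 'Ada'

# Ordinary, raw and formatted literals auto-concatenate when adjacent;
# the literal joining happens at compile time, the f-string substitution
# at run time.
s = 'Hello, ' r'\world\ ' f'{name}!'

# This only works between literals; writing `name '!'` would be a SyntaxError.
print(s)  # -> Hello, \world\ Ada!
```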
Raw Strings
Python's raw string syntax is a small variation that disables the escaping behaviour of the `\` character. A string is treated as a raw string if the starting delimiter quote is prefixed with an `r` (or `R`) character.
The following expressions create equal strings (as defined by the `==` operator):
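For instance (this pair is an illustrative stand-in; any equivalent pair works the same way):

```python
s1 = 'abc\\def'   # ordinary string: the backslash must be escaped
s2 = r'abc\def'   # raw string: the backslash is taken literally

# Both literals denote the same seven-character string.
assert s1 == s2
print(s1)  # -> abc\def
```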
In other words, the special escaping behaviour of the `\` character cannot be used in raw strings. This is useful when you have a lot of `\\` in your unadorned string. Such a string's definition can be much simpler using raw strings.
Points to note regarding raw strings:

- They can be used with single, double or triple quotes.
- The actual string object created is no different from one created with the unadorned string syntax. It is just a syntax-level convenience.
- Delimiter quotes cannot appear in raw strings on their own. A single quote can only be part of a raw single quoted string with a preceding backslash, and that backslash stays in the string. For example, `r'abc\'def'` gives the string `"abc\\'def"`. That is, the string will contain one backslash and one single quote; essentially, it will be exactly as it looks in the definition.
- A raw string cannot be defined to end with a single `\`. The expression `r'abc\'` will raise a `SyntaxError`. The expression `r'abc\\'` will end with two backslash characters.
The limitations above can be worked around by using raw and ordinary strings together.
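Here's one sketch of that workaround, using auto-concatenation to supply the trailing backslash that a raw string can't end with (the path itself is made up):

```python
# A raw string cannot end in a single backslash, but an adjacent
# ordinary string literal can supply it:
path = r'C:\Users\guido' '\\'

print(path)  # -> C:\Users\guido\
```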
Most commonly useful scenarios for raw strings:

- Regular expression patterns, to be used with the `re` module.
- Windows style file paths, where the separator is the backslash character. Note that the `open` function works fine even with forward slashes on Windows, so this is generally not needed.
- SQL queries, especially when defined with triple quotes as module level constants.
Concatenation

The + operator can be used to concatenate two strings. This creates a new string object which is the result of the concatenation (str objects are immutable in Python).

If several strings are being concatenated, using the + operator may not be the best way to do this. For example, consider the following snippet of code:
text = ''
for i in range(4):
    text += 'we have %r\n' % i
print(text)
When run, it produces the following output:
we have 0
we have 1
we have 2
we have 3
However, using the + operator here means that intermediate string objects are created at every concatenation operation. This is needless memory allocation, since these intermediate string objects are never used and are ready for garbage collection rather quickly. For situations like this, there are better options than concatenating strings with the + operator.

One option is to use a list and then pass it to the ''.join method to concatenate the parts all in one go. Using this option in the above code snippet, we get:
fragments = []
for i in range(4):
    fragments.append('we have %r\n' % i)
text = ''.join(fragments)
print(text)
Additionally, in this case, we could've used '\n'.join instead and avoided the trailing newline in text (if that's what is desired; don't do it just because we can).
lines = []
for i in range(4):
    lines.append('we have %r' % i)
text = '\n'.join(lines)
print(text)
Another option is to use io.StringIO, which is a file-like, in-memory string buffer that you can .write string content to and then turn into a single string object when done. Rewriting the above code snippet to use this option:
import io
buffer = io.StringIO()
for i in range(4):
    buffer.write('we have %r\n' % i)
text = buffer.getvalue()
print(text)
Both solutions are better than concatenating strings with the + operator, but if you're just concatenating two or three strings, it's probably simpler to just use + and move on. Premature optimisation is the root of all evil.
Splitting

Python strings have the .split method that can be used to split a string into a list of tokens or parts. There are three things to understand about this method.

First, it takes a separator argument, which can be a string of any length.
print('a,b,c,d'.split(','))
print('a,b;c,d'.split(';'))
print('a b c d'.split(' '))
print('a,,b,,,'.split(','))
This will produce the following output:
['a', 'b', 'c', 'd']
['a,b', 'c,d']
['a', 'b', 'c', 'd']
['a', '', 'b', '', '', '']
Note that adjoining separators will produce empty strings in the returned list.
Second, not passing a value for the separator (or passing None) will split the string over whitespace. Note that this is not the same as splitting with the space character (' '). Consider the following examples:
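For instance (the input string here is illustrative):

```python
s = '  a b\tc\n d  '

# No separator: split on runs of any whitespace, with no empty strings:
print(s.split())      # ['a', 'b', 'c', 'd']

# Explicit ' ' separator: tabs and newlines are not separators, and
# adjoining spaces produce empty strings:
print(s.split(' '))   # ['', '', 'a', 'b\tc\n', 'd', '', '']
```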
If you're familiar with regular expressions, then this splitting over whitespace is similar to splitting over non-overlapping matches of the pattern \s+.
Third, there is a second argument, which is the maximum number of times the string will be cut with the given separator (or whitespace). Thus, if we pass 1 as the second argument, the resulting list will contain at most two elements. Of course, not providing a second argument means the string will be split at all occurrences of the separator.
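For instance:

```python
# Cut at most once, from the left:
print('a,b,c,d'.split(',', 1))         # ['a', 'b,c,d']

# Handy for "key: value" style lines where the value may itself
# contain the separator:
print('when: 12:30:45'.split(':', 1))  # ['when', ' 12:30:45']
```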
The .splitlines Method

The .splitlines method splits a string into a list of lines. This method is a better version of just doing .split('\n'), since it handles many of the nasty end-of-line differences. For example, if your string contains '\r\n' at the end of each line, then doing a .split('\n') will leave dangling '\r' characters at the end of each line. This is handled well by the .splitlines method. The official documentation has a list of separators this method splits by, which I won't repeat here.
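A small illustration of the difference:

```python
text = 'alpha\nbeta\r\ngamma\rdelta'

print(text.splitlines())   # ['alpha', 'beta', 'gamma', 'delta']
print(text.split('\n'))    # ['alpha', 'beta\r', 'gamma\rdelta']
```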
Substring Check

To check if a string is wholly contained in another string, the in operator should be used. Note that this operator is case-sensitive. If case-insensitivity is needed, the easiest option is to call .casefold (which is specially designed for this purpose) on both strings.
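For example, .casefold handles cases that .lower misses, such as the German ß:

```python
needle = 'STRASSE'
haystack = 'Die Straße ist lang.'

print(needle.casefold() in haystack.casefold())  # True: 'ß' casefolds to 'ss'
print(needle.lower() in haystack.lower())        # False: 'ß'.lower() is still 'ß'
```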
needle = 'back'
haystack = 'Going back and forth all the time.'
print(needle in haystack)
This would print True, since the string 'back' occurs in haystack. But pay attention to the intent; consider the following example:
needle = 'back'
haystack = 'Forwards is easier than backwards.'
print(needle in haystack)
This would again print True, but the intent seems to be to look for the word "back". In that case, we'd expect False here and True in the previous example (since "back" is not a separate word in the second example). Here again, a simple solution is to call .split on the haystack string before the in operator check. The idea is that we get a list of words out of haystack and then check if needle occurs in that list.
needle = 'back'
haystack = 'Forwards is easier than backwards.'
print(needle in haystack.split())
This prints False. This isn't anywhere near a foolproof word-searching system, but it does get you a step ahead.
Prefix and Suffix Check

We have the .startswith and .endswith methods on strings if we want to check whether a string is not just in another string but, more specifically, whether it starts/ends with it.
>>> 'the' in 'Hello there'
True
>>> 'Hello there'.startswith('he')
False
>>> 'Hello there'.endswith('ere')
True
>>> 'Hello there'.lower().startswith('he')
True
Additionally, there's a useful twist to these two methods. Instead of a single string as argument, they can accept a tuple of strings, in which case they check whether the original string starts/ends with any of the strings in the tuple. Check out the following examples:
>>> 'Hello there'.startswith(('He', 'he'))
True
>>> 'hello there'.startswith(('garbage from outer space', 'He', 'he'))
True
A less obvious fact here is that the original string may be shorter than the string being passed to .startswith/.endswith. This sounds like a no-brainer, but there's one scenario where it's particularly nice.
Consider a situation where we want to check if the first character of a string is, say, 'A'. One option is haystack[0] == 'A'. But this runs the risk that if haystack = '', then haystack[0] will raise an IndexError, where we just wanted False. If we instead use haystack.startswith('A'), we get False when haystack is empty.
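That is:

```python
haystack = ''
print(haystack.startswith('A'))   # False, no exception raised
# haystack[0] == 'A' would raise IndexError for this empty string
```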
Regular Expressions Check

Regular expressions are a much larger topic than can fit under a third-level header (maybe a future article). So we'll just cover the substring-checking part using regular expressions (in an obviously limited scope).

All regex (regular expression) operations in Python start from the re module. There's no special syntax for defining regex patterns like there is in JavaScript. Patterns are instead written as strings, and the re module knows to interpret them as regex patterns.
For our purpose of substring checking, the re module provides the .search function, which takes a regex pattern, the haystack string and, optionally, any flags for the pattern.
import re
print(re.search('the', 'Hello there'))
print(re.search('he', 'Hello there'))
print(re.search('he', 'Hello there', flags=re.IGNORECASE))
print(re.search('hola', 'Hello there'))
This would produce the following output:
<re.Match object; span=(6, 9), match='the'>
<re.Match object; span=(7, 9), match='he'>
<re.Match object; span=(0, 2), match='He'>
None
A minor point to note here is that the return value is not of boolean type. We get an re.Match object if there is a successful match, else we get None. This is usually a minor concern, because match objects are truthy and None is falsy. So we can pretend it returns a boolean value if we need to.
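So a match result can be used directly in a condition:

```python
import re

if re.search('he', 'Hello there', flags=re.IGNORECASE):
    verdict = 'found'
else:
    verdict = 'not found'

print(verdict)                                 # found
print(bool(re.search('hola', 'Hello there')))  # False: None is falsy
```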
When using the re.search function this way, the re.escape function might also come in handy. This function escapes any special characters in the given string. Special here means having special behaviour in the context of a regex pattern.
For example, if the needle is user input and we want to search our haystack such that the needle is at the end of an English sentence, we'd do something like:
re.search(needle + '[.!?:]', haystack)
But this runs the risk of needle containing regex special characters like .*, which would match almost anything; probably not what we want. In this case, it's best to wrap the needle in re.escape and then concatenate the pattern with the end-of-sentence markers.
re.search(re.escape(needle) + '[.!?:]', haystack)
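Comparing the two, with a hypothetical user-supplied needle:

```python
import re

needle = '.*'   # user input that happens to be regex metacharacters
haystack = 'no literal dot-star in this sentence!'

# Unescaped, '.*' greedily matches almost anything before the '!':
print(bool(re.search(needle + '[.!?:]', haystack)))               # True

# Escaped, we only match a literal '.*' followed by punctuation:
print(bool(re.search(re.escape(needle) + '[.!?:]', haystack)))    # False
print(bool(re.search(re.escape(needle) + '[.!?:]', 'Enter .*!'))) # True
```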
As always, please think twice before using regular expressions to solve a problem, and if you do, and the pattern is longer than five or six characters, please make use of re.VERBOSE and add comments to your pattern. You'll thank yourself later.
Learning About the Contents

Python's strings have some nice methods to quickly check facts about their contents. Here's a rundown of such methods:

Please use the links to the official documentation in the above table to learn more about them. I won't be repeating those details here.
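A few representative checks (the official docs cover the full list):

```python
print('hello123'.isalnum())    # True
print('hello 123'.isalnum())   # False: space is neither letter nor digit
print(' \t\n'.isspace())       # True
print('Hello There'.istitle()) # True
print('HELLO'.isupper())       # True
```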
Numeric Checks

You might've noticed that we have three different methods that all sound awfully similar to each other: isdecimal, isdigit and isnumeric. The official documentation regarding the difference between these three wasn't very helpful for me, so I'll try to explain it here.
Firstly, isdecimal considers as True any character that can be used to build a number in the base-10 decimal system. That means it gives True for the 0 through 9 digits. Additionally, it also gives True for characters that can be used for a similar purpose in other languages. For example, the numbers from Unicode range 3174 to 3183 belong to a south Indian language called Telugu (my mother tongue). The isdecimal method returns True for these characters as well. However, note that it is not True for Roman numerals, since they can't technically be used to construct base-10 decimal numbers.
>>> # Arabic Numbers
>>> ''.join(chr(i) for i in range(48, 58))
'0123456789'
>>> _.isdecimal()
True
>>>
>>> # Telugu Numbers
>>> ''.join(chr(i) for i in range(3174, 3184))
'౦౧౨౩౪౫౬౭౮౯'
>>> _.isdecimal()
True
Secondly, isdigit gives True for any character that looks like a digit, in any language. So this includes every character that is True-ed by isdecimal. Additionally, it includes characters like ¹, ², ³, etc., as well as ①, ②, ③. Notice that fraction characters are not considered digits.
Thirdly, isnumeric gives True for any character that is numeric in nature. So this includes every character that is True-ed by isdigit. Additionally, it gives True for fraction characters such as ¼, ½, ¾, etc., as well as Roman numerals such as Ⅰ, Ⅱ, Ⅲ, Ⅳ, even Ⅹ, Ⅼ, Ⅽ, Ⅾ, Ⅿ (these are not ordinary alphabets; they are Unicode Roman numeral characters).
From this follows a neat fact regarding the character sets True-ed by the three methods: isdecimal ⊂ isdigit ⊂ isnumeric.
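The chain can be verified character by character:

```python
samples = [
    ('7', True, True, True),    # ordinary decimal digit
    ('²', False, True, True),   # superscript two: digit, not decimal
    ('½', False, False, True),  # vulgar fraction: numeric only
    ('Ⅻ', False, False, True),  # Roman numeral twelve: numeric only
]
for ch, dec, dig, num in samples:
    assert ch.isdecimal() == dec
    assert ch.isdigit() == dig
    assert ch.isnumeric() == num
print('isdecimal ⊂ isdigit ⊂ isnumeric holds for these samples')
```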
Transformations

This section is about methods that return a new string which is the result of some transformation applied to the original string. Since strings in Python are immutable, transformations always return a new string object. The original string is, always, obviously, left untouched.

Here are a few commonly used transformation methods (this list is intentionally non-exhaustive):

Please use the links to the official documentation in the above table to learn more about them. I won't be repeating those details here. The official documentation covers more methods on strings that I suggest skimming over. I once happened to reinvent the wheel when transforming strings because I didn't know Python already provided a method for what I needed.
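For instance, a non-exhaustive taste:

```python
s = '  hello, world  '

print(s.strip())                              # 'hello, world'
print(s.strip().title())                      # 'Hello, World'
print(s.strip().replace('world', 'python'))   # 'hello, python'

# The original is untouched; each call returned a new object:
print(s)                                      # '  hello, world  '
```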
String Formatting

String formatting in Python comes in two major flavors. First is the (now old) printf-style formatting that uses typed control characters prefixed with %, similar to the printf (more like sprintf) function in C. Second is the newer format builtin function and the accompanying str.format method, which is more suited to Python's dynamic typing and, arguably, much easier to use.

Python's formatting capabilities are quite vast and powerful, warranting a whole separate article. I intend to write one some time in the coming weeks. Until then, the official documentation on printf-style formatting and the format function should serve you well.
Docstrings

Docstrings are strings that serve as documentation for Python's modules, functions and classes. There's nothing special in the syntax of these strings per se; their uniqueness is due to where they are positioned in a Python program.
Consider the following function, with a docstring on line 2:

1  def triple(n):
2      """Triples the given number and returns the result."""
3      return n * 3
4
5  print(triple(4))
The string defined on line 2 in this program is not assigned to any variable. On the face of it, it appears pointless to create a string and just discard it. However, in this case, the fact that this string literal is the first expression in the function definition, makes it a docstring. What that means is that the contents of this string are understood to be a human readable help text regarding the usage of this function.
It also doesn't have to be a string defined with """. It may use single quotes, double quotes or any other crazy variation we saw above. But don't do that. It's a best practice to write docstrings with """, and I strongly suggest (and even beg) that you stick to using """ for docstrings. Please.
It's also not entirely true that this string is not assigned to a variable. Docstrings are saved to the .__doc__ attribute of the function (or whatever object) they are documenting. In our example above, we can get the docstring from triple.__doc__. But it's usually more practical to call the help function to read a docstring.
For classes, the docstring should be the first expression inside the class body, positioned similarly to that of a function. For modules, the docstring should be the first expression in the module (even before any imports).
A minor note regarding the formatting of docstring content: use reStructuredText (ReST). It is not strictly required, but I suggest you do so; in the event that you choose to generate HTML help pages from your docstrings, you'll be glad you wrote them in ReST.
Conclusion

It's hard to imagine a Python program that doesn't have something to do with strings. As such, the standard distribution provides a lot of utilities for working with strings. Even in an article of this size, I couldn't be exhaustive. As always, Python's official documentation is unreal good. It pays to occasionally open a random page and skim over it.
We have a DFS namespace at \\company.example.org\dfs\ and inside that are entries for half a dozen folders. There is one namespace server, and no replication. The shares are all now on one server. Very basic.
I use Windows 8 Pro as a desktop, with a drive mapped to \\company.example.org\dfs\companyshare\
Suddenly, and arbitrarily, it stops working, with the error "Windows cannot access \\company.example.org\dfs\companyshare\"
I browse to the \dfs\ namespace folder, only one of the endpoint entries is there, the rest are missing. I can ping the namespace server and browse to the share as \server\share.
After a few minutes ... they reappear. Or I reboot, and it works again. This seems to happen on more than one Windows 8 client, but not on Windows 7.
I have run the DFSDiag tests linked, and they all come back fine.
In the properties of the \\company.example.org\dfs folder, on the DFS tab, I see one entry in the referral list, which is active with status "Okay".
There are three domain controllers, 2008 R2 twice and 2008. All service packed and updated.
No event log errors that I can find on server or client.
Summary: Win 8 drops most of the DFS endpoints, and then a few minutes later, refreshes them. Nothing else seems to be wrong.
Where can I look for more information?
[Update: So far I have used the dfsutil diagnostics from TheFiddlerWins' answer, and found absolutely nothing. The only progress I've made is realising that the virtual machine in question still had 1 core and 3GB of RAM assigned, and now I've moved lots of roles onto it (printers, several heavy file shares), that was rather underpowered. Since increasing the available CPU and RAM for the machine last week, I haven't noticed this issue occuring again. Too early to say for sure that it's gone, but it might be that simple].
DFS (for some ungodly reason) uses NetBIOS style names (DOMAIN instead of domain.com) by default. This document from Microsoft tells you how to fix it. I suspect your issue is that at times it's able to resolve these names (via WINS, broadcasts etc) and other times not.
If that is not it try following this doc on DFS troubleshooting.
CSV(Comma Separated Values) is a simple file format used to store tabular data, such as a spreadsheet. This is a minimal example of how to write & read data in Python.
Writing data to a CSV file:
import csv

data = [["Ravi", "9", "550"], ["Joe", "8", "500"], ["Brian", "9", "520"]]

# newline='' lets the csv module handle line endings itself
with open('students.csv', 'w', newline='') as csvfile:
    writer = csv.writer(csvfile, delimiter=',')
    writer.writerows(data)
Reading data from a CSV file:
import csv

with open('students.csv', newline='') as csvfile:
    reader = csv.reader(csvfile, delimiter=',')
    for row in reader:
        print("Name: {} class: {} Marks: {}".format(row[0], row[1], row[2]))

output:

Name: Ravi class: 9 Marks: 550
Name: Joe class: 8 Marks: 500
Name: Brian class: 9 Marks: 520
From: Matt Hurd (matt.hurd_at_[hidden])
Date: 2004-07-14 18:13:15
On Wed, 14 Jul 2004 18:45:00 +0300, Peter Dimov <pdimov_at_[hidden]> wrote:
> Howard Hinnant wrote:
> > On Jul 14, 2004, at 6:27 AM, Alexander Terekhov wrote:
> >
> >> It is a philosophical issue, not technical.
> >
> > Thanks for your comments.
> >
> > Speaking philosophically ... just wondering out loud, not trying to
> > solve anything, I wonder if this doesn't boil down to an object
> > oriented view vs a functional view. Because the few times I've felt
> > the need for a recursive mutex is when I had an object, which contained
> > a single mutex, which had several public member functions that just
> > happened to call each other as an implementation detail. Something
> > like:
> >
> > class A
> > {
> > public:
> > void foo1(); // calls foo2 and foo3 under the covers
> > void foo2();
> > void foo3();
> > private:
> > mutex mut_;
> > };
>
> Yep, classic. ;-)
>
> The solution is, of course:
>
> class A
> {
> public:
>
> void foo1()
> {
> scoped_lock<> lock(mut_);
> foo1_impl();
> }
>
> void foo2()
> {
> scoped_lock<> lock(mut_);
> foo2_impl();
> }
>
> void foo3()
> {
> scoped_lock<> lock(mut_);
> foo3_impl();
> }
>
> private:
>
> void foo1_impl(); // calls foo2_impl & foo3_impl
> void foo2_impl();
> void foo3_impl();
>
> mutex mut_;
> };
Yes, that is the classic solution but it never strikes me as quite what I need.
I've been struggling to find a cute solution I'm happy with. Mainly
because I also typically want to expose the foo1_impl() for the cases
when I wish to lock externally for multiple operations.
One thing I've done is have the standard interface lock and have
template methods that take true and false for locking and non-locking.
do_this(); // locks - safety first
do_this<true>(); // locks
do_this<false>(); // no locking
To do this I inherit a monitor class which exposes a guard() method
that returns a mutex reference which is used for locking.
The messy part is being able to do it without lots of boilerplate code.
Presently I use a macro for getter/setter attributes which implements
the appropriate methods and also uses a compile time check for
attribute size to determine if locking is truly necessary for the
platform (some operations are atomic and need no locking, e.g. 32 bit
ops on IA32 and 64 bit ops on IA64).
This is fine. However this macro/specialisation approach falls apart
when you have an arbitrary method as I don't know a neat way to
encapsulate the passing of the parameters in the macro for
replication.
However, the named_params library does this nicely and I think I could
use that for the arbitary method approach, I just can't quite find the
time to do it.
I've previously posted this stuff but can again if there is any interest.
Regards,
Matt Hurd.
_______________
Here is a simple attribute only example:
namespace te {
template
<
class S = synch::shareable
>
class instrument : public synch::monitor<S>
{
SYNCH_SIMPLE_ATTRIBUTE( susquehanna_code, std::string );
SYNCH_SIMPLE_ATTRIBUTE( exchange_code, std::string );
SYNCH_SIMPLE_ATTRIBUTE( underlying, std::string );
SYNCH_SIMPLE_ATTRIBUTE( bid, double );
SYNCH_SIMPLE_ATTRIBUTE( bid_volume, double );
SYNCH_SIMPLE_ATTRIBUTE( ask, double );
SYNCH_SIMPLE_ATTRIBUTE( ask_volume, double );
SYNCH_SIMPLE_ATTRIBUTE( last, double );
SYNCH_SIMPLE_ATTRIBUTE( last_volume, double );
SYNCH_SIMPLE_ATTRIBUTE( prior_settlement, double );
public:
instrument() : synch::monitor<S>()
{
}
template < class T >
instrument( const T & t) : synch::monitor<S>()
{
T::lock lk( t.guard(), synch::lock_status::shared );
susquehanna_code<false> ( t.susquehanna_code<false>() );
exchange_code<false> ( t.exchange_code<false>() );
underlying<false> ( t.underlying<false>() );
bid<false> ( t.bid<false>() );
bid_volume<false> ( t.bid_volume<false>() );
ask<false> ( t.ask<false>() );
ask_volume<false> ( t.ask_volume<false>() );
last<false> ( t.last<false>() );
last_volume<false> ( t.last_volume<false>() );
prior_settlement<false> ( t.prior_settlement<false>() );
}
template< class T >
instrument< S > & operator=( const T& rhs )
{
if (&rhs != this)
{
lock lk_this( guard(), synch::lock_status::exclusive );
instrument<S> temp(rhs);
swap(temp);
}
return *this;
}
template< class T>
void swap( T& source) throw()
{
std::swap( susquehanna_code_, source.susquehanna_code_);
std::swap( exchange_code_, source.exchange_code_);
std::swap( underlying_, source.underlying_);
std::swap( bid_, source.bid_);
std::swap( bid_volume_, source.bid_volume_);
std::swap( ask_, source.ask_);
std::swap( ask_volume_, source.ask_volume_);
std::swap( last_, source.last_);
std::swap( last_volume_, source.last_volume_);
std::swap( prior_settlement_, source.prior_settlement_);
}
};
} // namespace
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2004/07/68091.php | CC-MAIN-2019-43 | refinedweb | 678 | 50.02 |
I am writing my first program in python (first program ever for that matter). It's a simple program that detects input (on my Raspberry Pi) from a door chime and counts the number of times it goes off and prints the number of times on the screen followed by the date and time the event occurs.
So now I want to refine the program a bit; my first thought was to write the data to a file for later reviewing. I have figured out how to have my program create and open a file, and even write simple strings to it, but getting it to write the strings containing the variable (x) and the value of 'time.strftime' has me stumped...
Here's my code:
# My first program
# version 1.1
# Goal is to write each motion event to a file
import time
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM)
GPIO.setup(24,GPIO.IN)
# input = GPIO.input(24)
#temp code so I don't have to keep walking to the sensor called in the line commented out above.
a = int(raw_input("Enter a number"))
x = 0
while True:
#if (GPIO.input(24)):
#again temp code, just the 'if a>0:'
if a>0:
x += 1
print "There have been %d motion events!" % (x)
print "The last one was on: "
print time.strftime("%m/%d/%y %H:%M:%S")
# Open the file that will hold the history data
#this is where I am stuck...
with open('history.dat', 'a') as file:
file.write('motion event recorded at: %s \n') %time.strftime("%m")
file.close()
#pause the program to prevent multiple counts on a single person triggering the chime - some folks are slow ;)
time.sleep(4)
Python print works in a different fashion.
Try this:
print("There have been " + str(x) + " motion events!")
And this:
file.write('motion event recorded at: '+time.strftime("%m")+'\n')
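Alternatively, your original %-style formatting works too; the % just has to go inside the write() call (sketch below, using the same filename as your script):

```python
import time

with open('history.dat', 'a') as f:
    f.write('motion event recorded at: %s\n'
            % time.strftime("%m/%d/%y %H:%M:%S"))
    # no f.close() needed: the with-block closes the file for you
```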
Try posting what error you're receiving so that it's easier for people to answer.
Also, for a first time code, this is pretty good. | http://www.dlxedu.com/askdetail/3/11a09326289cd359eab443ba655683a7.html | CC-MAIN-2018-30 | refinedweb | 347 | 83.56 |
Thank you both. Once I added -libsupc++ to the link, the test program
builds cleanly and I can run it.
The bad news is that 2 fail in calls to the C++ library and then the test
program itself crashes - ironically, while printing the number of successful
tests :-)
I'd like to check the final crash first because I think it might be the
easiest to isolate. It happens when executing lines like
cout << "Tests passed: ";
This is in fact the first reference to cout in the program, so by way of an
experiment I inserted
cout << "Starting tests";
at the beginning and it crashes there instead. Does anyone recognise this
issue?
gdb prints the following, rather cryptic, backtrace
(gdb) bt
#0 0x00e92cdf in raise () from /lib/tls/libc.so.6
#1 0x00e944e5 in abort () from /lib/tls/libc.so.6
#2 0x00246592 in __rw::__rw_assert_fail (expr=0x2b3af3 "this->_C_is_valid
()",
file=0x2b3ab4
"/home/tuscany/workspace/stdcxx/stdcxx-4.1.3/include/fstream.cc",
line=509,
func=0x2b3e60 "std::basic_streambuf<_CharT, _Traits>*
std::basic_filebuf<_CharT, _Traits>::setbuf(_CharT*, int) [with _CharT =
char, _Traits = std::char_traits<char>]") at
/home/tuscany/workspace/stdcxx/stdcxx-4.1.3/src/assert.cpp:96
#3 0x0026d962 in std::basic_filebuf<char, std::char_traits<char> >::setbuf
(this=0xbfffa4d0, __buf=0x0, __len=512) at fstream.cc:509
#4 0x08073005 in sdotest::testUtil () at fstream:109
#5 0x0808ad51 in main (argc=1, argv=0xbfffa664) at main.cpp:106
In the case where the reference to cout is at the beginning of the program
then the source is ...
#include <stdio.h>
#pragma warning(disable:4786)
#include <iostream>
using namespace std;
#include "sdotest