14.21 VALUATION OF COCA-COLA USING MARKET MULTIPLES. The Coca-Cola Company is a global soft-drink beverage company (ticker symbol = KO) that is a primary and direct competitor with PepsiCo. The data in Chapter 12’s Exhibits 12.13–12.15 include the actual amounts for 2008 and projected amounts for Year +1 to Year +6 for the income statements, balance sheets, and statements of cash flows for Coca-Cola (in millions).
The market equity beta for Coca-Cola at the end of 2008 is 0.61. Assume that the risk-free interest rate is 4.0 percent and the market risk premium is 6.0 percent. Coca-Cola has 2,312 million shares outstanding at the end of 2008, when Coca-Cola’s share price was $44.42.
In this problem, we use these actual and projected financial statement data to apply the techniques in Chapter 14 to compute Coca-Cola’s required rate of return on equity and share value based on the value-to-book valuation model. We also compare our value-to-book ratio
estimate to Coca-Cola’s market-to-book ratio at the end of 2008 to determine an investment recommendation. In addition, we compute the value-earnings and price-earnings ratios and the price differential, and we reverse-engineer Coca-Cola’s share price as of the end of 2008.
a. Use the CAPM to compute the required rate of return on common equity capital for Coca-Cola.
b. Using the projected financial statements in Chapter 12’s Exhibits 12.13–12.15, derive the projected residual ROCE (return on common shareholders’ equity) for Coca-Cola for Years +1 through +5.
c. Assume that the steady-state long-run growth rate will be 3 percent in Year +6 and beyond. Project that the Year +5 income statement and balance sheet amounts will grow by 3 percent in Year +6; then derive the projected residual ROCE for Year +6 for Coca-Cola.
d. Using the required rate of return on common equity from Part a as a discount rate, compute the sum of the present value of residual ROCE for Coca-Cola for Years +1 through +5.
e. Using the required rate of return on common equity from Part a as a discount rate and the long-run growth rate from Part c, compute the continuing value of Coca-Cola as of the start of Year +6 based on Coca-Cola’s continuing residual ROCE in Year +6 and beyond. After computing continuing value as of the start of Year +6, discount it to present value at the start of Year +1.
f. Compute Coca-Cola’s value-to-book ratio as of the end of 2008 with the following three steps: (1) Compute the total sum of the present value of all future residual ROCE (from Parts d and e). (2) To the total from (1), add 1 (representing the book value of equity as of the beginning of the valuation as of the end of 2008). (3) Adjust the total sum from (2) using the midyear discounting adjustment factor.
g. Compute Coca-Cola’s market-to-book ratio as of the end of 2008. Compare the value-to-book ratio to the market-to-book ratio. What investment decision does the comparison suggest? What does the comparison suggest regarding the pricing of Coca-Cola shares in the market: underpriced, overpriced, or fairly priced?
h. Use the value-to-book ratio to project the value of a share of common equity in Coca-Cola.
i. If you computed Coca-Cola’s common equity share value using the free cash flows to common equity valuation approach in Problem 12.16 in Chapter 12 and/or the residual income valuation approach in Problem 13.19 in Chapter 13, compare the value estimate you obtained in those problems with the estimate you obtained in this case. You should obtain the same value estimates under all three approaches. If you have not yet worked those problems, you would benefit from doing so now.
Earnings Ratio, Price Differentials, and Reverse Engineering
j. Use the forecast data for Year +1 to project Year +1 earnings per share. To do so, divide the projection of Coca-Cola’s comprehensive income available for common shareholders in Year +1 by the number of common shares outstanding at the end of 2008. Using this Year +1 earnings-per-share forecast and using the share value computed in Part h, compute Coca-Cola’s value-earnings ratio.
k. Using the Year +1 earnings-per-share forecast from Part j and using the share price at the end of 2008, compute Coca-Cola’s price-earnings ratio. Compare Coca-Cola’s value-earnings ratio with its price-earnings ratio. What investment decision does the comparison suggest? What does the comparison suggest regarding the pricing of Coca-Cola shares in the market: underpriced, overpriced, or fairly priced? Does this comparison lead to the same conclusions you reached when comparing value-to-book ratios with market-to-book ratios in Part g?
l. Compute Coca-Cola’s price differential at the end of 2008. Compute Coca-Cola’s price differential as a percentage of Coca-Cola’s risk-neutral value. What dollar amount and what percentage amount has the market discounted Coca-Cola shares for risk?
m. Reverse-engineer Coca-Cola’s share price at the end of 2008 to solve for the implied expected rate of return. First, assume that value equals price and that the earnings and growth forecasts through Year +6 and beyond are reliable proxies for the market’s expectations for Coca-Cola. Then solve for the implied expected rate of return (the discount rate) the market has impounded in Coca-Cola’s share price. (Hint: Begin with the forecast and valuation spreadsheet you developed to value Coca-Cola shares. Vary the discount rate until you solve for the discount rate that makes your value estimate exactly equal the end of 2008 market price of $44.42 per share.)
n. Reverse-engineer Coca-Cola’s share price at the end of 2008 to solve for the implied expected long-run growth. First, assume that value equals price and that the earnings forecasts through Year +5 are reliable proxies for the market’s expectations for Coca-Cola. Also assume that the discount rate implied by the CAPM (computed in Part a) is a reliable proxy for the market’s expected rate of return. Then solve for the implied expected long-run growth rate the market has impounded in Coca-Cola’s share price. (Hint: Begin with the forecast and valuation spreadsheet you developed to value Coca-Cola shares and use the CAPM discount rate. Set the long-run growth parameter initially to zero. Increase the long-run growth rate until you solve for the growth rate that makes your value estimate exactly equal the end of 2008 market price of $44.42 per share.)
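For Part a, the CAPM arithmetic can be checked with a few lines of code. The sketch below simply plugs in the figures given above (risk-free rate of 4.0 percent, beta of 0.61, market risk premium of 6.0 percent); it is an illustration, not part of the original problem set.

# CAPM: required return on equity = risk-free rate + beta x market risk premium
risk_free = 0.04            # 4.0 percent
beta = 0.61                 # Coca-Cola's market equity beta at the end of 2008
market_risk_premium = 0.06  # 6.0 percent

required_return = risk_free + beta * market_risk_premium
print("Required rate of return on equity: {:.2%}".format(required_return))  # 7.66%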
Did you install it? MariaDB provides Python support through the MySQL Python package, which does not come installed with the default Python installation on most distros.
“The wonderful thing about standards is that there are so many of them to choose from.”
― Grace Murray Hopper
Operating a computer is a bit more complicated than using a toaster.
Code: Select all
import time
I'll take roast goose over any programming language. internsrus wrote (Sat Dec 23, 2017 1:46 pm): "better than roast goose or mince pies at Christmas"
The range of people working under the broad umbrella of the
Semantic Web come from many diverse communities, from the Web-focused
to experienced researchers in the fields of artificial intelligence
and knowledge representation. Ultimately the skills of all those
involved will be required, and it's definitely beyond the scope of any
one group to provide the expertise necessary to build the ultimate
Semantic Web.
For me, the key thing about the Semantic Web is the word
"Web". It's our essential starting point, and the Web at large is the
ecology in which the primordial Semantic Web must grow. I spend most
of my time working with the Web, as a developer and a writer, and also
in involvement with the community of developers and publishers that
use the Web.
So, as I approach the Semantic Web (or "SW" from here on), I'm
always asking the question "how do we get this started?" There are
many interesting and exciting possibilities in the realms of logic and
proofs, but getting them running on the Web must be preceded by
getting more basic machine processible content out there. The evolving
form of the SW has to crawl before it can run.
In this article I introduce the SW vision and explore the practical
steps that we need to be taking to build it.
The essential aim of the SW vision is to make Web information
practically processible by a computer. Underlying this is the goal of
making the Web more effective for its users. This increase in
effectiveness is constituted by the automation or enabling
of things that are currently difficult to do: locating content,
collating and cross-relating content, drawing conclusions from
information found in two or more separate sources.
In the software world we can often get so enthusiastic about the
systems that we're creating that we stray from a focus on the user's
requirements. One of the great things about the Web is that it's
unforgiving when we ignore the user. Create a site that's hard to use
and nobody will come. Create a technology for page markup that's
difficult to grasp and nobody will use it. In fact, you might see the
creation and implementation of the SW as a near impossible task: it's
still difficult to get people to use as little metadata as the
<title> tag in their web pages.
Clearly, to get off the starting blocks, the SW has to offer enough
in reward to make it worth people's time to learn new skills and to
more carefully deploy their content on the Web.
So, that's the vision. A Web that machines can understand to make
our lives easier. If you accept that the end purpose of the SW is to
make your life easier, then the use cases spring from your
frustrations. Some of the common problems we want to solve on the Web
revolve around interoperability of data. Synchronize your Palm
Pilot's schedule with a web page, have some kind of universal view
over your email, documents, and web browsing history. These problems
are currently unsolved because of the fragmentation of our data due to
custom and proprietary data formats. Providing an integration of these
is an obvious use case.
As well as meeting some obvious use cases, there's a degree of
serendipity in the SW work. There's a feeling that says, "if only we
got all these sources of information tied together, then exciting
things would happen!" Building the SW is a research and development
project, not a manufacturing process. There'll be some dead ends, and
there'll be some discoveries of exciting and unforeseen
proportions.
Speaking personally, I have a fundamental excitement at being able
to recover and integrate my data from disparate sources and
proprietary formats. This springs from constraints on my time, the
difficulty of finding information, and the redundancy of having my
data scattered across multiple devices. In what follows I give an
explanation of each layer in Tim Berners-Lee's vision of the SW: each
layer gives progressively more value; each is exciting in its own
right. My current aims for the SW result purely from the
implementation of some of the lower layers.
The World Wide Web Consortium has recently started a specific
Activity to address SW development. Under the leadership of Eric
Miller, its remit is threefold: to develop and address issues with RDF
and RDF Schema; to coordinate with other W3C groups using RDF; and to
undertake and encourage "advanced development" of SW software.
This latter aim is the thing I find most exciting. "Advanced
development" entails the W3C working with developers in an open
fashion to encourage SW-related projects and to give them a
focus. Early projects that might cluster around this mandate include some
work inside the W3, such as RDF wrappers for CVS repositories, and
potentially some existing community-based projects could have a home
there. Essentially, "advanced development" is a recognition of what
has happened to the RDF world in the last year. While it essentially
languished for a while at the W3C in terms of formal activity, a
community has grown up, with some very encouraging results.
The W3C has put forward a very clear architecture for the SW, described
by Berners-Lee at XML 2000 in Washington last year. This
architecture is cleanly layered, starting with the foundation of
URIs and Unicode. On top of that sits syntactic interoperability in
the form of XML, which in turn underlies what I like to think of as
the data interoperability layer, RDF and RDF schemas. Those layers sum
up most of the SW that's presently available in implementation
form. And without looking further up the SW stack, an extraordinary
amount of utility can and has been obtained from just those
layers.
You'll notice that digital signatures run right up the side of the
stack, emphasizing their widespread utility. At each stage they allow
content from a layer to be labeled with an assured provenance. Digital
signatures are critical to both the SW and the growing use of XML in
other message exchanges. From the basic act of signing some RDF
assertion ("I said this!") to signing proofs, they add a level of
assurance to the Web that hasn't existed thus far.
On top of RDF lie ontologies, which allow the further description
of objects and their interrelations, past the basic class-property
descriptions enabled by RDF Schema. The W3C in conjunction with DARPA
and the European Union is pursuing the development of languages in
this area right now. Ontologies provide the ability to say "my world
is like this" and are the foundation that will enable programs to
reason about different worlds and environments and make connections
between them.
The logic layer will provide an interoperable language for
describing the sets of deductions one can make from a collection of
data -- how, given the world we've now neatly described, we can make
connections and derive new facts about it. The proof language will
provide a way of describing the steps taken to reach a conclusion from
the facts. These proofs can then be passed around and verified,
providing short cuts to new facts in the system without having each
node conduct the deductions themselves.
The SW vision is that once all these layers are in place, we will
have a system in which we can place trust that the data we are seeing,
the deductions we are making, and the claims we are receiving have
some value. That's the goal: to make a user's life easier by the
aggregation and creation of new, trusted information over the Web.
Now that we've seen the plan, let's look at how it's going to be
built. Obviously, the technology needs to be invented. But technology
without adoption is dead. What do SW advocates need to do to reach the
critical points along the road to adoption?
Eric Miller, SW Activity Lead, certainly has his job cut out. While
there are encouraging signs of a groundswell in support for RDF, it
mostly has a bad name and reputation at the moment. Take this along
with the confusion that XML namespaces, an underlying layer, generates
(and never mind that many US programs can't even work with European
Latin character sets, much less Unicode) and there are some steep
slopes to climb.
So one of the first aims of SW advocates must be to promote
understanding of what they're doing, at both low and high levels. RDF
is more than an obscure or verbose way to write what you could do
easily in XML. There are reasons for using it. Naming everything with
URIs is in fact very powerful, but the confusion about the use of the
http: prefix for unretrievable resources needs to be
cleared up.
But it would be a mistake to focus on getting all developers (much
less users) to understand fundamentally every layer of this stack. The
fact is that most developers use prepared modules to do their
construction work; only a few are extreme enough to bake their own
bricks. An aid and impetus to getting understanding is to get
implementation. It's very reasonable for people to ask, "what does
this do for me?" about a new technology. Implementations can speak
louder than a thousand specifications.
Implementations fall into two categories: (1) deployment of SW
technologies in a vocabulary or framework and (2) software tools. The
growth in basic RDF tools over the last year has been very
pleasing. These tools are starting to reach the level of maturity at
which I would consider basing an application on one or two of
them. Likewise the deployment of RDF in vocabularies like PRISM and
RSS is encouraging and has reaped particular benefits that straight
XML serializations often miss.
We should be
careful not to restrict SW technologies to just those explicit
layers in Berners-Lee's idealized
diagram. There's obviously a difference between what is on the Web,
and what is in the diagram (HTML is not mentioned, for instance).
The beauty of XML is that it's in the
perfect place to act as a bridge. HTML (or more properly XHTML) can be
semantically decorated by means of things like the class
attribute, and XSLT can be used to extract RDF. Likewise, there are
other semantic applications, such as Topic Maps, that are pure XML
applications. Are these to be excluded from the SW? No, XML provides
a bridge.
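As a tiny, purely illustrative sketch of that bridging (this markup is not from the article), an XHTML fragment can carry semantic hints in its class attributes, which an XSLT stylesheet could then turn into RDF statements:

<div class="review">
  <span class="reviewer">Jane Smith</span> reviewed
  <span class="title">Weaving the Web</span>
</div>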
Picture RDF as providing an interoperable data bus for the SW.
Some data sources may need a converter to connect, but it doesn't stop
them connecting. And once they're patched in, there's a lot of
potential in the resulting integration.
So the W3C has to promote understanding and implementation among
the community. What about money? Surely you can't reach critical mass
without there being money in it?
Yes, there has to be commercial value somewhere down the line;
business is after all about providing services to users. But we ought
to be wary of the effect of premature
intense commercial interest. On the one
hand, look at the W3C's greatest successes: the Web itself was built
while nobody in particular was looking.
XML 1.0 developed similarly: "fast, low and
under the radar," as Tim Bray likes to say. On the other hand, the effect of large-scale
corporate interest on XML Schema has been significant, causing the end
result to be late and obviously overcomplicated, a product of
design-by-committee.
What does getting the SW right entail? There's a lot we can learn
from the existing Web itself, which has been outrageously
successful. As the SW is to be built on top of the Web, many of its
characteristics are there as a base and should be continued. The Web
provides the ecology in which the SW must thrive, not destroy. So what
are these characteristics?
So is this a private W3C party? Judging by the way the new SW
Activity is set up, the W3C has recognized it's not and wishes open
community involvement in the effort. The importance of this community
should not be underestimated. Over the last year there have been at
least two community-driven efforts already building the SW that've
caught my attention. Their use cases in each instance described
practical problems that the developers had to solve to help in their
work.
RDDL,
covered recently in XML.com, allows developers to place a
machine-readable description document at the end of a namespace URI to
allow processors to discover resources related to a namespace. RSS 1.0
is a web content metadata distribution format. Its extensibility
allows it to be used in many situations far beyond original use
cases. Both these projects fill in a little bit of the picture for the
SW and represent chunks of what is to come. In the context of success
for the SW, they're notable because they solved direct needs and
extensibly allow reuse and expansion into areas that the designers
didn't foresee -- a direct reflection of the development of the Web
itself.
To conclude, it's important that the builders of the SW keep their
feet on the ground. The next generation of the Web will be built
cooperatively and in a distributed manner. Rather than pondering grand
unification theories, we should concentrate on doing small things well
and solving achievable and well-defined problems. Good and open
implementation in addition to good design is key. Furthermore, the
longer development can stay "fast, low and under the radar," the
better.
The SW represents an enormous opportunity not just to solve our
problems with information management, but also to solve them in an
interoperable environment, so we can all share solutions and enjoy the
network effect. But always the goal should be to make the Web more
effective for the user, and it is by such that it will be judged.
- Classes Aren't Special
- Types and Pointers
- Defining Classes
- Memory Management
- Summary
Part 1 of this series examined the history and philosophy of Objective-C. This article starts investigating some of the concrete syntax. As you might expect, this involves defining and using classes.
Classes Aren't Special
In Smalltalk, classes are just objects with a few special features. The same is true in Objective-C. A class is an object. It responds to messages just as an object does. Both Objective-C and C++ split object allocation and initialization:
- In C++, object allocation is done via the new operator. In Objective-C, it's done by sending the class an alloc message—which, in turn, calls malloc() or an equivalent.
- Initialization in C++ is done by calling a function with the same name as the class. Objective-C doesn't distinguish between initialization methods and other methods, but by convention the default initialization method is init.
When you declare a method to which instances respond, the declaration starts with -, and + is used for class methods. It's common to use these prefixes for messages in documentation, so you would say +alloc and -init, to indicate that alloc is sent to a class and then init is sent to an instance.
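As a quick illustration of those conventions (the class name here is hypothetical, not from the text), an instance method is declared with a leading -, a class method with a leading +, and creation is the familiar nested +alloc / -init pair:

@interface MyClass : NSObject
- (void)doSomething;          // instance method: sent to an object
+ (MyClass *)sharedInstance;  // class method: sent to the class itself
@end

// Allocation and initialization are separate steps:
MyClass *obj = [[MyClass alloc] init];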
Classes in Objective-C, as in other object-oriented languages, are object factories. Most classes don't implement +alloc themselves; instead, they inherit it from the superclass. In NSObject, the root class in most Objective-C programs, the +alloc method calls +allocWithZone:. This takes an NSZone as an argument, a C structure containing some policy for object allocations. Back in the 1980s, when Objective-C was used in NeXTSTEP to implement device drivers and pretty much all of the GUI on machines with 8MB of RAM and 25 MHz CPUs, NSZone was very important for optimization. At the moment, it's more or less completely ignored by Objective-C programmers. (It has potential to become more relevant as NUMA architectures become more common, however.)
One of the nice features of the fact that object creation semantics are defined by the library and not the language is the idea of a class cluster. When you send an -init message to an object, it returns an initialized object. This may be the object to which you sent the message (and usually is), but it doesn't have to be. The same is true of other initializers. It's possible to have specialized subclasses of a public class that are more efficient on different data.
One common trick for implementing this feature is called isa-swizzling. As I said earlier, Objective-C objects are C structures whose first element is a pointer to the class. This element is accessible, just as any other instance variables are; you can change the class of an object at runtime simply by assigning a new value. Of course, if you set the class of an object to something that has a different layout in memory, things will go horribly wrong. However, you can have a superclass that defines the layout and then a set of subclasses that define the behavior; for example, this technique is used by the standard string class (NSString), which has various instances for different width text encodings, for static strings, and so on.
Because classes are objects, you can do pretty much anything with them that you would with objects. For example, you can put them in collections. I use this format fairly often when I have a set of incoming events that need handling by instances of different classes. You could create a dictionary mapping the event names to the classes, and then instantiate a new object for each incoming event. If you do this in a library, it allows users of the code to register their own handlers easily. | https://www.informit.com/articles/article.aspx?p=1272497&seqNum=2 | CC-MAIN-2020-34 | refinedweb | 639 | 62.07 |
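A rough sketch of that event-to-class registry (all of the names below are hypothetical) might look like this:

// Map incoming event names to the classes that handle them.
NSDictionary *handlers = [NSDictionary dictionaryWithObjectsAndKeys:
    [LoginHandler class],  @"login",
    [LogoutHandler class], @"logout",
    nil];

// When an event arrives, look up its class and instantiate a handler.
Class handlerClass = [handlers objectForKey:eventName];
id handler = [[handlerClass alloc] init];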
I have a directory of 3-column tab-separated files that have the following structures and N-lines:
File 1
- Code: Select all
abandonment-n about-bring-v 32.5890
abandonment-n about-complaint-n 5.5112
abandonment-n about-concern-n 10.6714
abandonment-n among-1-crowd-n 11.4496
File 2
- Code: Select all
aardvark-n about-fact-n 7.4328
aardvark-n about-information-n 6.5145
aardvark-n about-know-v 6.4239
aardvark-n among-1-crowd-n 9.9085
I would like to compare Column 2 of these files, counting and then outputting the number of strings that each of the two files has in common:
In this case it would be 1:
- Code: Select all
aardvark-n among-1-crowd-n 9.9085
abandonment-n among-1-crowd-n 11.4496
However, the output I need is just the count of common items in Column 2 between the files.
My obstacle is that I want to consider all of the unique bi-combinations of files in a directory (in my case it would be 24 - for a total of 300 unique combinations).
I think that using something like this might do the trick:
- Code: Select all
import os, itertools

files = os.listdir("/path/to/files")
for file1, file2 in itertools.combinations(files, 2):
    print file1, file2

###and (this part I am not so sure about as it keeps giving me an error)###

def file_to_dict(filename):
    # keep Column 2 of each tab-separated line as the dictionary key
    result = {}
    for line in open(filename):
        col1, col2, col3 = line.split()
        result[col2] = col3
    return result

dict1, dict2 = file_to_dict('file1.csv'), file_to_dict('file2.csv')
But I am not sure how to integrate this with the itertools.combinations loop above.
The optimal end result would be to compile all of this information in a contingency matrix, although for the time being I am happy just getting the counts necessary. Unless someone has a suggestion there?
freopen
Stream open functions
Description

The fopen function opens the file whose name is the string pointed to by path and associates a stream with it.

The argument mode points to a string beginning with one of the following sequences (additional characters may follow these sequences):

- "r" - Open the file for reading. The stream is positioned at the beginning of the file.
- "r+" - Open the file for reading and writing. The stream is positioned at the beginning of the file.
- "w" - Truncate the file to zero length or create it for writing. The stream is positioned at the beginning of the file.
- "w+" - Open the file for reading and writing; the file is created if it does not exist, otherwise it is truncated. The stream is positioned at the beginning of the file.
- "a" - Open the file for appending (writing at the end of the file); the file is created if it does not exist. The stream is positioned at the end of the file.
- "a+" - Open the file for reading and appending; the file is created if it does not exist. The stream is positioned at the end of the file.

The mode string can also include the letter ``b'' either as a third character or as a character between the characters in any of the two-character strings described above. This is strictly for compatibility with ISO C and has no effect; the ``b'' is ignored.

The fdopen function associates a stream with the existing file descriptor, fildes. The mode of the stream must be compatible with the mode of the file descriptor. When the stream is closed via fclose, fildes is closed also.

The freopen function opens the file whose name is the string pointed to by path and associates the stream pointed to by stream with it. The original stream (if it exists) is closed. The mode argument is used just as in the fopen function. If the path argument is NULL, freopen attempts to re-open the file associated with stream with a new mode. The new mode must be compatible with the mode that the stream was originally opened with:
- Streams originally opened with mode "r" can only be reopened with that same mode.
- Streams originally opened with mode "a" can be reopened with the same mode, or mode w.
- Streams originally opened with mode "w" can be reopened with the same mode, or mode a.
- Streams originally opened with mode "r+", "w+", or "a+" can be reopened with any mode.
Example - Stream open functions

#include <stdio.h>

int main()
{
    FILE *in;

    /* open the file for reading in text mode */
    if ((in = fopen("fred.txt", "rt")))
    {
        /* read one character at a time until end of file */
        for (char c; !feof(in); fscanf(in, "%c", &c));
        fclose(in);
    }
    return 0;
}
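The example above only exercises fopen. A minimal freopen sketch (the file name is a placeholder) would redirect an already-open stream, for instance stdout, to a file:

#include <stdio.h>

int main()
{
    /* redirect stdout: every printf after this call writes to log.txt */
    if (freopen("log.txt", "w", stdout) == NULL)
    {
        perror("freopen");
        return 1;
    }
    printf("This line goes into log.txt\n");
    fclose(stdout);
    return 0;
}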
In this tutorial, we will work through the basics of using a WebView to display images within your app, configuring the automated interaction controls from within your Java code. We will also explore various options for importing images into a WebView, including loading images from Web locations, from the device Gallery, and from within the app's directory structure.
Step 1: Create an Android Project
If you do not already have an app you are working with, start a new Android project in Eclipse. In your app's main Activity class, or whatever Activity you want to display images within, add the following import statements before your class declaration opening line:
import android.app.Activity;
import android.content.Intent;
import android.database.Cursor;
import android.net.Uri;
import android.os.Bundle;
import android.provider.MediaStore;
import android.view.View;
import android.view.View.OnClickListener;
import android.webkit.WebView;
import android.widget.Button;
You may not need all of these depending on how you plan on loading your images. If you plan on loading your images over the Web, you need to add Internet permission to your project Manifest. Open the Manifest and add the following line anywhere inside the parent "manifest" element:
<uses-permission android:name="android.permission.INTERNET" />
Step 2: Create the App Layout
We are going to use a single WebView inside a Linear Layout to explore displaying images. Inside your project's main XML layout file, or whichever one you want to use for the Activity in question, add the following layout outline:
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent" >
</LinearLayout>
Inside this main Linear Layout, first add your WebView as follows:
<WebView android:id="@+id/pic_view"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:layout_weight="1" >
</WebView>
We will use the ID attribute to identify the WebView in Java. Since the layout is going to include other elements, we specify a weight along with the general layout properties. To demonstrate loading the images from three different locations, we are also going to add three buttons. If you only plan on using one of the loading methods, feel free to alter this. After the WebView, still inside the main Linear Layout, add the following additional Linear Layout:
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="horizontal"
    android:layout_width="fill_parent"
    android:layout_height="wrap_content" >
    <Button android:id="@+id/pick_btn"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/pick" />
    <Button android:id="@+id/load_btn"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/load" />
    <Button android:id="@+id/app_btn"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/local" />
</LinearLayout>
Here we include three buttons inside a second Linear Layout, with ID attributes so that we can implement button clicks in Java. You will also need to add the following to your Strings XML file, which you should find in the app's "res/values" directory:
<string name="pick">Gallery</string>
<string name="load">Web</string>
<string name="local">App</string>
Step 3: Prepare for Loading Images
In your app Activity class, alter your opening class declaration line to implement click listeners as follows:
public class PictureViewerActivity extends Activity implements OnClickListener {
Alter the class name to suit your own. Now add the following inside the class declaration, before the "onCreate" method:
private WebView picView;
Your "onCreate" method should already be there, but if not add it as follows:
@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.main);
}
This is standard code to create the Activity. Inside this method, after the existing code, retrieve a reference to your WebView and alter its display color as follows:
picView = (WebView)findViewById(R.id.pic_view);
picView.setBackgroundColor(0);
This will allow us to load images into the WebView while the app runs. The WebView displays with a white background by default, which we are overriding here. After the "onCreate" method, still inside the class declaration, add the outline of your "onClick" method as follows:
public void onClick(View v) { }
We will add code to handle each button click inside this method.
Step 4: Load an Image from the Gallery
Let's start by allowing the user to load an image from the Gallery on their own device. First, add an instance variable inside your class declaration, but before the "onCreate" method:
private final int IMG_PICK = 1;
This will allow us to respond to the user returning from the Gallery after choosing an image. Inside the "onCreate" method, after the existing code, add the following to retrieve a reference to the "pick" button and assign a click listener to it:
Button pickBtn = (Button)findViewById(R.id.pick_btn);
pickBtn.setOnClickListener(this);
Now we can respond to button clicks. Inside the "onClick" method, add the following:
if (v.getId() == R.id.pick_btn) {
    Intent pickIntent = new Intent();
    pickIntent.setType("image/*");
    pickIntent.setAction(Intent.ACTION_GET_CONTENT);
    //we will handle the returned data in onActivityResult
    startActivityForResult(Intent.createChooser(pickIntent, "Select Picture"), IMG_PICK);
}
This will take the user to another application to select an image. Depending on which apps they have installed, they may need to select an app from a list. For example, on my device I receive two choices on pressing the "pick" button:
When the user chooses an image, they will return to the app and the "onActivityResult" method will fire. Add it to your class declaration as follows:
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (resultCode == RESULT_OK) {
    }
}
Inside the "if" statement, add the following to check that the user is returning from the Intent we started for them to choose an image:
if (requestCode == IMG_PICK) { }
Inside this "if" statement, we can retrieve the data returned from the Gallery app, which will be the URI of the image the user picked:
Uri pickedUri = data.getData();
We will build a String representing the path for the image, which we need to load the image into the WebView. We are using the same technique explored in more detail in Displaying Images with an Enhanced Gallery. Add the following code:
String imagePath = "";
String[] imgData = { MediaStore.Images.Media.DATA };
Cursor imgCursor = managedQuery(pickedUri, imgData, null, null, null);
if (imgCursor != null) {
    int index = imgCursor.getColumnIndexOrThrow(MediaStore.Images.Media.DATA);
    imgCursor.moveToFirst();
    imagePath = imgCursor.getString(index);
}
else
    imagePath = pickedUri.getPath();
Now we have a reference to the image location and can load it into the WebView:
picView.loadUrl("file://" + imagePath);
You can run your app now to test it loading the Gallery image - you may need to run it on an actual device as the emulator does not normally have images stored on it.
Next we will handle loading from the Web and the app directory, before exploring configuration options for the WebView.
Step 5: Load an Image from the Web
Now for a simpler option. To load an image from the Web, we simply need the URL. First, back in the "onCreate" method, implement button clicks on your "load" button as follows:
Button loadBtn = (Button)findViewById(R.id.load_btn);
loadBtn.setOnClickListener(this);
In the "onClick" method, after the "if" statement in which we handled the "pick" button, add the following, altering it to suit your own image URL:
else if (v.getId() == R.id.load_btn) {
    picView.loadUrl("");
}
Here we are simply loading one of the Android Google Play image resources for demonstration, but you can of course alter it to reflect an image of your choice. If you want the user to enter their chosen image you can add an editable text-field to capture this. The image will load providing the device has a functioning Internet connection:
Step 6: Load an Image From the App Directory Structure
You may have images within your application package that you wish to display in a WebView. We will explore two possible ways to achieve this. First, back in your "onCreate" method, handle button clicks:
Button appBtn = (Button)findViewById(R.id.app_btn);
appBtn.setOnClickListener(this);
Add another branch to the "if" and "else" statements in your "onClick" method as follows:
else if(v.getId() == R.id.app_btn) { }
To display only an image in the WebView, you can simply specify its URL:
picView.loadUrl("file:///android_asset/mypicture.jpg");
This loads a JPEG image file stored in the app's "assets" folder and named "mypicture.jpg".
The WebView is naturally designed to display HTML content, so you may wish to display the image as an HTML "img" element along with other Web markup. To do so, you can save an HTML file in the app's "assets" folder with an "img" element inside it, for example:
<html>
<head>
</head>
<body>
<img src="mypicture.jpg"/>
</body>
</html>
You may include other HTML content in this file if you want it to display in your WebView along with the image. To load the HTML, alter the "loadURL" line as follows:
picView.loadUrl("file:///android_asset/imagepage.html");
This works for an HTML file saved as "imagepage.html" in the "assets" folder, so alter it to suit the name of your own file. This code is all you need to load the image within the HTML file.
Step 7: Configure WebView Image Interaction
You can set some of the details of how the user interacts with your image inside the WebView from your Java Activity code. In the "onCreate" method, after your button listener code, add the following:
picView.getSettings().setBuiltInZoomControls(true);
picView.getSettings().setUseWideViewPort(true);
This instructs the app to use the standard zoom controls and wide View Port for your WebView. There are other options you can explore here, such as setting the default zoom level. Now when the user interacts with your WebView, they can double-tap and pinch to zoom, as well as using the buttons and sliding to scroll/ pan:
Conclusion
Making use of default Android resources such as the WebView allows you to quickly exploit interaction models your users will already be familiar with, as well as letting you focus on the unique aspects of your applications. The WebView renders HTML pages, so you can also enhance your apps by using Web technologies such as CSS and JavaScript. As you can see from the above example, you can effectively integrate the WebView with other Android UI items.
| https://code.tutsplus.com/tutorials/image-display-and-interaction-with-android-webviews--mobile-11362 | CC-MAIN-2020-40 | refinedweb | 1,623 | 51.48 |
Dec 27, 2010 03:57 PM | GEM1204
I'm learning Ajax and have created some pages and practiced using the Ajax controls. I'm using Visual Studio 2008 on a Windows XP machine with .NET 3.5. The site I'm practicing with is the Ajax sample web site that I've downloaded. I'm just adding pages to this site as practice. I am using a tag prefix of ajaxToolkit:
<%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="ajaxToolkit" %>
I have read on this site and serveral other sites that you can register the assembly and the tag prefix by placing code in the web config, so Between the assemblies Tags of the we congif I entered
<add assembly="AjaxControlToolkit, Version=3.5.40412.0, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e"/>
And between the <controls> tag the web config I added
<add tagPrefix="ajaxToolkit" namespace="AjaxControlToolkit" assembly="AjaxControlToolkit, Version=3.5.40412.0, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e"/>
This didn't work, because when an Ajax control is added from the toolbox I get the asp tag:
<asp:ScriptManager
</asp:ScriptManager>
<asp:ComboBox
</asp:ComboBox>
Can someone help me with this?
Dec 27, 2010 04:47 PM | habdulrauf
Right click in tool box, click choose items, then browse to the folder where ajaxcontroltoolkit.dll is present. select it and press ok then the ajax control toolkit controls will be in the tool box, then you can drag drop controls from toolbox on the form. configuration changes will automatically made in web.config and on the aspx page you don't need to do extra.
If you want to use update panel then in tool box locate the section which says ajax extensions and drop update panel on the form.
Ask again if not clear.
Please mark as answer if it helps you.
Dec 27, 2010 06:42 PM | GEM1204
Thanks for the response. The only reason I care about the tag prefix is because I am going through some of the tutorials on the asp website, and in the tutorials the ajax controls have the tag prefix “ajaxToolkit”. I would like to follow along with the tutorial without constantly having to change the tag prefix from “asp” to “ajaxToolkit”. The tag prefix of ajax controls is confusing – I have seen the prefix “cc1” as well as “ajaxToolkit”. I have no idea what the standard is.
I tried following you instructions and I created a new web site and the first thing I did was add the tag prefix to the web.config
<add tagPrefix="ajaxToolkit" namespace="AjaxControlToolkit" assembly="AjaxControlToolkit, Version=3.5.40412.0, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e"/>
I then deleted the Ajax control tools tab and recreated the ajax tools tab and then selected choose items and went out and selected the AjaxControlToolkit.dll and all of the controls were re-added to the Ajax Controls Tool Tab.
When I created a page and added a script manager and a combo box I got:
<asp:ToolkitScriptManager
</asp:ToolkitScriptManager>
<asp:ComboBox
</asp:ComboBox>
At the top of the page I also got
<%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="asp" %>
If I remove the Register Assembly at the top of the page and change the tag prefix for the script manager and combobox from “asp” to “ajaxToolkit”, it works, so apparently the tag prefix is being picked up from the web.config. But if I add another ajax control, I get the Register Assembly at the top of the page again, trying to register the tag prefix ‘asp’ for the ajax controls.
I guess the only problem I have is the Register assembly for "asp" being added to the top of the page every time I add an ajax control. Any ideas on how to prevent this from happening?
Thanks
Dec 28, 2010 03:29 AM | chetan.sarode
1. Install the Ajax Control Toolkit as described in:
2. Modify the web.config by adding the entries below.
a. <add assembly="AjaxControlToolkit, Version=3.5.40412.0, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e"/>
Goes between <assemblies></assemblies> tag.
b. <add tagPrefix="ajaxToolkit" namespace="AjaxControlToolkit" assembly="AjaxControlToolkit, Version=3.5.40412.0, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e"/>
Goes between <controls></controls> tag.
If you’ve installed some other version of the tool kit then make sure to replace a value of the Version attribute of both entries above.
To get the version just right click on the AjaxControlToolkit.dll and get the value of Assembly Version.
Here is what you’d need to use an ajax control on your page.
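For example, something like this on the page should be enough (a rough sketch; swap in whichever toolkit control and IDs you are actually using):

<%-- No @Register directive is needed once the tagPrefix is mapped in web.config --%>
<ajaxToolkit:ToolkitScriptManager ID="ScriptManager1" runat="server" />
<asp:TextBox ID="TextBox1" runat="server" />
<ajaxToolkit:CalendarExtender ID="CalendarExtender1" runat="server"
    TargetControlID="TextBox1" />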
Many of us have been working with the Java Server Pages technology for quite some time and have become familiar with custom tags. Custom tags have made working with JSPs not only easier, but also more efficient.
If a custom tag has been created, tested, and debugged, then it only is logical that, as a developer, you want to grab the golden ring of OOD: reusable components. Working with custom tags is one avenue to explore in the world of reuse. But wouldn't it be better, more efficient, and easier if there was a set of standard tags that solved common problems? Do we really need ten different ways to iterate, or to do conditional processing?
Enter the JSP Standard Tag Library, also known as JSTL. The base design philosophy of JSTL is to provide page authors with a script-free environment. I say, bottoms up to that.
JSTL is still in early access release, but that doesn't mean that you shouldn't start paying attention to it. JSTL is being developed under the Java Community Process, in the JSR-052 expert group. The purpose of JSTL is to work towards a common and standard set of custom tags. There are currently many tag libraries available for download from various vendors, as well as from open source projects such as the Jakarta Taglibs project. In fact, a JSP 1.2/Servlet 2.3-compliant reference implementation of JSTL is being hosted under the Taglibs project. You can run this reference implementation under Tomcat, but you'll need to use Tomcat 4.
Remember, this is still an early-access release, so there's always the chance that things will change slightly. The tags provided in JSTL will be able to be used within any JSP-compliant container. The advantage of using JSTL over a vendor-specific tag library is, obviously, that you won't be bound to a specific vendor's container.
Ready for a look at the wizard behind the curtain?
A tag library is a set of actions that encapsulate functionality. These tags are then used within JSP pages. JSTL provides a wide variety of functionality that can be broken down into specific functional areas. While JSTL is a single taglib, it is exposed through multiple Tag Library Descriptors (TLDs). This is done primarily for convenience (so that tags are in their appropriate functional area), but also so that each TLD can have its own namespace, or prefix. These four areas are:
Core (with its own URI and a prefix of c): provides the core JSTL tags, such as those that perform iteration, conditional processing, and expression language support. Access tags provided in this TLD by placing <%@ taglib prefix="c" uri="" %> at the top of your JSP. The remaining three TLDs follow the same pattern and cover XML processing (prefix x), internationalization-capable formatting (prefix fmt), and database access (prefix sql), each with its own URI.
<%@ taglib prefix="c" uri="" %>
<%@ taglib prefix="c" uri="" %>
Let's take an overview look at the various tags in the functional areas outlined above.
One of the core tags is <forEach>, which provides iteration functionality: it iterates over most collections. It can be used with ranges or primitives, and it also can provide a detailed status of the iteration. <forTokens> works like <forEach>, except that it is applied to strings of tokens and takes an extra attribute called delims that allows for the delimiter to be specified. You can have more than one delimiter specified.
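To give a feel for the iteration tags, here is a small, illustrative fragment (the attribute names are those of the JSTL tags described above; the exact expression syntax differed slightly between the early-access and final releases):

<%-- visit each element of a scoped collection attribute --%>
<c:forEach var="item" items="$items">
  <c:expr value="$item"/><br/>
</c:forEach>

<%-- visit each token of a delimited string --%>
<c:forTokens var="part" items="red,green|blue" delims=",|">
  <c:expr value="$part"/>
</c:forTokens>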
Also in the core functional area are the conditional tags. JSTL supports a simple conditional <if> tag along with a collection of tags such as <choose>, <when>, and <otherwise>. These tags support mutually exclusive conditionals. By using these tags, you can implement if/else structures. In its API, JSTL also exposes the abstract class ConditionalTagSupport, to facilitate the implementation of custom conditional tags that leverage the standard conditional behavior defined in JSTL.
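A conditional fragment, again purely for illustration, reads much like an if/else chain:

<c:if test="$user.loggedIn">
  Welcome back!
</c:if>

<c:choose>
  <c:when test="$cart.empty">
    Your cart is empty.
  </c:when>
  <c:otherwise>
    You have items waiting in your cart.
  </c:otherwise>
</c:choose>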
Expression language (EL) support is an important feature of JSTL that we'll talk about in more detail in the next section. It is also part of the core functionality. JSTL provides a few tags to facilitate the use of expression language. <c:expr> prints out the value of a particular expression in the current EL, similar to the way that the scriptlet expression (<%= ... %>) syntax prints out the value of an expression in the scripting language (usually Java). <c:set> lets you set a scoped attribute (e.g., a value in the request, page, session, or application scopes) with the value of an expression. There is also the <c:declare> tag. In order for JSTL tags to collaborate with custom tags that only accept rtexprvalues, the <c:declare> tag must be used to create a scripting variable.
While JSP supports the jsp:include tag, this standard action is limited because it only supports relative URLs. JSTL introduces the c:import tag, which lets you retrieve absolute URLs. You can use c:import to retrieve information from the Web using HTTP URLs, or from a file server using an FTP URL. You are even able to specify a foreign context, and that can come in handy sometimes.
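For example (the URL is illustrative), a page fragment living on another server can be pulled in with a single tag, something jsp:include cannot do:

<c:import url="http://www.example.com/common/header.html"/>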
Formatting data is one of the key tasks in many JSP pages. JSTL introduces tags to support data formatting and parsing for such things as dates and numbers. Using <fmt:locale>, <fmt:bundle>, and <fmt:message>, you can easily specify locale information for determining which resource bundles to use. These tags can also be used to support parameterized content using the <fmt:messageArg> tag. It's possible to use these tags very easily in your internationalized applications.
More and more employers are looking for people experienced in building and running Kubernetes-based systems, so it’s a great time to start learning how to take advantage of the new technology. Elasticsearch consists of multiple nodes working together, and Kubernetes can automate the process of creating these nodes and taking care of the infrastructure for us, so running ELK on Kubernetes can be a good option in many scenarios.
We’ll start this with an overview of Kubernetes and how it works behind the scenes. Then, armed with that knowledge, we’ll try some practical hands-on exercises to get our hands dirty and see how we can build and run Elastic Cloud on Kubernetes, or ECK for short.
What we’ll cover:
- Fundamental Kubernetes concepts
- Use Vagrant to create a Kubernetes cluster with one master node and two worker nodes
- Create Elasticsearch clusters on Kubernetes
- Extract a password from Kubernetes secrets
- Publicly expose services running on Kubernetes Pods to the Internet, when needed.
- How to install Kibana
- Inspect Pod logs
- Install the Kubernetes Web UI (i.e. Dashboard)
- Install plugins on an Elasticsearch node running in a Kubernetes container
There’s a trend, lately, to run everything in isolated little boxes, either virtual machines or containers. There are many reasons for doing this so we won’t get into it here, but if you’re interested, you can read Google’s motivation for using containers.
Let’s just say that containers make some aspects easier for us, especially in large-scale operations.
Managing one, two, or three containers is no big deal and we can usually do it manually. But when we have to deal with tens or hundreds of them, we need some help.
This is where Kubernetes comes in.
What is Kubernetes?
By way of analogy, if containers are the workers in a company, then Kubernetes would be the manager, supervising everything that’s happening and taking appropriate measures to keep everything running smoothly.
After we define a plan of action, Kubernetes does the heavy lifting to fulfill our requirements.
Examples of what you can do with K8s:
- Launch hundreds of containers, or whatever number needed with much less effort
- Set up ways that containers can communicate with each other (i.e. networking)
- Automatically scale up or down. When demand is high, create more containers, even on multiple physical servers, so that the stress of the high demand is distributed across multiple machines, making it easier to process. As soon as demand goes down, it can remove unneeded containers, as well as the nodes that were hosting them (if they’re sitting idle).
- If there are a ton of requests coming in, Kubernetes can load balance and evenly distribute the workload to multiple containers and nodes.
- Containers are carefully monitored with health checks, according to user-defined specifications. If one stops working, Kubernetes can restart it, create a new one as a replacement, or kill it entirely. If a physical machine running containers fails, those containers can be moved to another physical machine that’s still working correctly.
Kubernetes Cluster Structure
Let’s analyze the structure from the top down to get a good handle on things before diving into the hands-on section.
First, Kubernetes must run on computers of some kind. It might end up being on dedicated servers, virtual private servers, or virtual machines hosted by a capable server.
Multiple such machines running Kubernetes components form a Kubernetes cluster, which is considered the whole universe of Kubernetes, because everything, from containers to data, to monitoring systems and networking exists here.
In this little universe, there has to be a central point of command, like the “brains” of Kubernetes. We call this the master node. This node assumes control of the other nodes, sometimes also called worker nodes. The master node manages the worker nodes, while these, in turn, run the containers and do the actual work of hosting our applications, services, processing data, and so on.
Master Node
Basically, we’re the master of our master node, and it, in turn, is the master of every other node.
We instruct our master node about what state we want to achieve, and it then proceeds to take the necessary steps to fulfill our demands.
Simply put, it automates our plan of action and tries to keep the system state within set parameters, at all times.
Nodes (or Worker Nodes)
The Nodes are like the “worker bees” of a Kubernetes cluster and provide the physical resources, such as CPU, storage space, memory, to run our containers.
Basic Kubernetes Concepts
Up until this point, we kept things simple and just peaked at the high-level structure of a Kubernetes cluster. So now let’s zoom in and take a closer look at the internal structure so we better understand what we’re about to get our hands dirty with.
Pods
Pods are like the worker ants of Kubernetes – the smallest units of execution. They are where applications run and do their actual work, processing data. A Pod has its own storage resources, and its own IP address and runs a container, or sometimes, multiple containers grouped together as a single entity.
Services
Pods can appear and disappear at any moment, each time with a different IP address. It would be quite hard to send requests to Pods since they’re basically a moving target. To get around this, we use Kubernetes Services.
A K8s Service is like a front door to a group of Pods. The service gets its own IP address. When a request is sent to this IP address, the service then intelligently redirects it to the appropriate Pod. We can see how this approach provides a fixed location that we can reach. It can also be used as a mechanism for things like load balancing. The service can decide how to evenly distribute all incoming requests to appropriate Pods.
Namespaces
Physical clusters can be divided into multiple virtual clusters, called namespaces. We might use these for a scenario in which two different development teams need access to one Kubernetes cluster.
With separate namespaces, we don’t need to worry if one team screws up the other team’s namespace since they’re logically isolated from one another.
Deployments
In deployments, we describe a state that we want to achieve. Kubernetes then proceeds to work its magic to achieve that state.
Deployments enable:
- Quick updates – all Pods can gradually be updated, one-by-one, by the Deployment Controller. This gets rid of having to manually update each Pod. A tedious process no one enjoys.
- Maintain the health of our structure – if a Pod crashes or misbehaves, the controller can replace it with a new one that works.
- Recover Pods from failing nodes – if a node should go down, the controller can quickly launch working Pods in another, functioning node.
- Automatically scale up and down based on the CPU utilization of Pods.
- Rollback changes that created issues. We’ve all been there 🙂
Labels and Selectors
First, things like Pods, services, namespaces, volumes, and the like, are called “objects”. We can apply labels to objects. Labels help us by grouping and organizing subsets of these objects that we need to work with.
The way Labels are constructed is with key/value pairs. Consider these examples:
app:nginx
site:example.com
Applied to specific Pods, it can easily help us identify and select those that are running the Nginx web server and are hosting a specific website.
And finally, with a selector, we can match the subset of objects we intend to work with. For example, a selector like
app = nginx
site = example.com
This would match all the Pods running Nginx and hosting “example.com”.
Ingress
In a similar way that Kubernetes Services sit in front of Pods to redirect requests, Ingress sits in front of Services to load balance between different Services using SSL/TLS to encrypt web traffic or using name-based hosting.
Let’s take an example to explain name-based hosting. Say there are two different domain names, for example, “a.example.com” and “b.example.com” pointing to the same ingress IP address. Ingress can be made to route requests coming from “a.example.com” to service A and requests from “b.example.com” to service B.
Stateful Sets
Deployments assume that applications in Kubernetes are stateless, that is, they start and finish their job and can then be terminated at any time – with no state being preserved.
However, we’ll need to deal with Elasticsearch, which needs a stateful approach.
Kubernetes has a mechanism for this called StatefulSets. Pods are assigned persistent identifiers, which makes it possible to do things like:
- Preserve access to the same volume, even if the Pod is restarted or moved to another node.
- Assign persistent network identifiers, even if Pods are moved to other nodes.
- Start Pods in a certain order, which is useful in scenarios where Pod2 depends on Pod1 so, obviously, Pod1 would need to start first, every time.
- Rolling updates in a specific order.
Persistent Volumes
A persistent volume is simply storage space that has been made available to the Kubernetes cluster. This storage space can be provided from the local hardware, or from cloud storage solutions.
When a Pod is deleted, its associated volume data is also deleted. As the name suggests, persistent volumes preserve their data, even after a Pod that was using it disappears. Besides keeping data around, it also allows multiple Pods to share the same data.
Before a Pod can use a persistent volume, though, it needs to make a Persistent Volume Claim on it.
Headless Service
We previously saw how a Service sits in front of a group of Pods, acting as a middleman, redirecting incoming requests to a dynamically chosen Pod. But this also hides the Pods from the requester, since it can only “talk” with the Service’s IP address.
If we remove this IP, however, we get what’s called a Headless Service. At that point, the requester could bypass the middle man and communicate directly with one of the Pods. That’s because their IP addresses are now made available to the outside world.
This type of service is often used with Stateful Sets.
Kubectl
Now, we need a way to interact with our entire Kubernetes cluster. The kubectl command allows us to enter commands to get kubectl to do what we need. It then interacts with the Kubernetes API, and all of the other components, to execute our desired actions.
Let’s look at a few simple commands.
For example, to check the cluster information, we’d would enter:
kubectl cluster-info
If we wanted to list all nodes in the cluster, we’d enter:
kubectl get nodes
We’ll take a look at many more examples in our hands-on exercises.
Operators
Some operations can be complex. For example, upgrading an application might require a large number of steps, verifications, and decisions on how to act if something goes wrong. This might be easy to with one installation, but what if we have 1000 to worry about?
In Kubernetes, hundreds, thousands, or more containers might be running at any given point. If we would have to manually do a similar operation on all of them, it’s why we’d want to automate that.
Enter Operators. We can think of them as a sort of “software operators,” replacing the need for human operators. These are written specifically for an application, to help us, as service owners, to automate tasks.
Operators can deploy and run the many containers and applications we need, react to failures and try to recover from them, automatically backup data, and so on. This essentially lets us extend Kubernetes beyond its out-of-the-box capabilities without modifying the actual Kubernetes code.
Custom Resources
Since Kubernetes is modular by design, we can extend the API’s basic functionality. For example, the default installation might not have appropriate mechanisms to deal efficiently with our specific application and needs. By registering a new Custom Resource Definition, we can add the functionality we need, custom-tailored for our specific application. In our exercises, we’ll explore how to add Custom Resource Definitions for various Elasticsearch applications.
Hands-On Exercises
Basic Setup
Ok, now the fun begins. We’ll start by creating virtual machines that will be added as nodes to our Cluster. We will use VirtualBox to make it simpler.
1. Installing VirtualBox
1.1 Installing VirtualBox on Windows
Let’s go to the download page: and click on “Windows Hosts”.
We can then open the setup file we just downloaded and click “Next” in the installation wizard, keeping the default options selected.
After finishing with the installation, it’s a good idea to check if everything works correctly by opening up VirtualBox, either from the shortcut added to the desktop, or the Start Menu.
If everything seems to be in order, we can close the program and continue with the Vagrant setup.
1.2 Installing VirtualBox on Ubuntu
First, we need to make sure that the Ubuntu Multiverse repository is enabled.
Afterward, we install VirtualBox with the next command:
sudo apt-get update && sudo apt-get install virtualbox-qt
Let’s try to run VirtualBox to ensure the install was successful:
virtualbox
Once the app opens up, we can close it and continue with Vagrant.
1.3 Installing VirtualBox on macOS
Let’s download the setup file from and click on “OS X hosts.”
We can now open the DMG file, execute the PKG inside and run the installer. We keep the default options selected and continue with the steps in the install wizard.
Let’s open up the terminal and check if the install was successful.
virtualbox
If the application opens up and everything seems to be in order, we can continue with the Vagrant setup.
2. Installing Vagrant
It would be pretty time-consuming to set up each virtual machine for use with Kubernetes. But we will use Vagrant, a tool that automates this process, making our work much easier.
2.1 Installing Vagrant on Windows
Installing on Windows is easy. We just need to visit the following address,, and click on the appropriate link for the Windows platform. Nowadays, it’s almost guaranteed that everyone would need the 64-bit executable. Only download the 32-bit program if you’re certain your machine has an older, 32-bit processor.
Now we just need to follow the steps in the install wizard, keeping the default options selected.
If at the end of the setup you’re prompted to restart your computer, please do so, to make sure all components are configured correctly.
Let’s see if the “vagrant” command is available. Click on the Start Menu, type “cmd” and open up “Command Prompt”. Next, type:
vagrant --version
If the program version is displayed, we can move on to the next section and provision our Kubernetes cluster.
2.2 Installing Vagrant on Ubuntu
First, we need to make sure that the Ubuntu Universe repository is enabled.
If that’s enabled, installing Vagrant is as simple as running the following command:
sudo apt-get update && sudo apt-get install vagrant
Finally, let’s double-check that the program was successfully installed, with:
vagrant --version
2.3 Installing Vagrant on macOS
Let’s first download the setup file from, which, at the time of this writing, would be found at the bottom of the page, next to the macOS icon.
Once the download is finished, let’s open up the DMG file, execute the PKG inside, and go through the steps of the install wizard, leaving the default selections as they are.
Once the install is complete, we will be presented with this window.
But we can double-check if Vagrant is fully set up by opening up the terminal and typing the next command:
vagrant --version
Provisioning the Kubernetes Cluster
Vagrant will interact with the VirtualBox API to create and set up the required virtual machines for our cluster. Here’s a quick overview of the workflow.
Once Vagrant finishes the job, we will end up with three virtual machines. One machine will be the master node and the other two will be worker nodes.
Let’s first download the files that we will use with Vagrant, from
Credit for files:
Next, we have to extract the directory “k8s_ubuntu” from this ZIP file.
Now let’s continue, by entering the directory we just unzipped. You’ll need to adapt the next command to point to the location where you extracted your files.
For example, on Windows, if you extracted the directory to your Desktop, the next command would be “cd Desktop\k8s_ubuntu”.
On Linux, if you extracted to your Downloads directory, the command would be “cd Downloads/k8s_ubuntu”.
cd k8s_ubuntu
We’ll need to be “inside” this directory when we run a subsequent “vagrant up” command.
Let’s take a look at the files within. On Windows, enter:
dir
On Linux/macOS, enter:
ls -lh
The output will look something like this:
We can see a file named “Vagrantfile”. This is where the main instructions exist, telling Vagrant how it should provision our virtual machines.
Let’s open the file, since we need to edit it:
Note: In case you’re running an older version of Windows, we recommend you edit in WordPad instead of Notepad. Older versions of Notepad have trouble interpreting EOL (end of line) characters in this file, making the text hard to read since lines wouldn’t properly be separated.
Look for the text “v.memory” found under the “Kubernetes Worker Nodes” section. We’ll assign this variable a value of 4096, to ensure that each Worker Node gets 4 GB of RAM because Elasticsearch requires at least this amount to function properly with the 4 nodes we will add later on. We’ll also change “v.cpus” and assign it a value of 2 instead of 1.
After we save our edited file, we can finally run Vagrant:
vagrant up
Now, this might take a while since there’re quite a few things that need to be downloaded and set up. We’ll be able to follow its progress in the output and we may get a few prompts to accept some changes.
When the job is done, we can SSH into the master node by typing:
vagrant ssh kmaster
Let’s check if Kubernetes is up and running:
kubectl get nodes
This will list the nodes that make up this cluster:
Pretty awesome! We are well on our way to implementing the ELK stack on Kubernetes. So far, we’ve created our Kubernetes cluster and just barely scratched the surface of what we can do with such automation tools.
Stay tuned for more about Running ELK on Kubernetes with the rest of the series!
Part 2 – Coming December 22nd, 2020
Part 3 – Coming December 29th, 2020 | https://coralogix.com/blog/running-elk-on-kubernetes-with-eck-part-1/ | CC-MAIN-2021-25 | refinedweb | 3,163 | 62.07 |
#5681: par# and spark# call newSpark differently, confuses LLVM backend ---------------------------------+------------------------------------------ Reporter: scpmw | Owner: dterei Type: bug | Status: new Priority: normal | Component: Compiler (LLVM) Version: 7.2.1 | Keywords: Testcase: | Blockedby: Os: Unknown/Multiple | Blocking: Architecture: Unknown/Multiple | Failure: GHC rejects valid program ---------------------------------+------------------------------------------ If `par#` and `spark#` are being used in the same compilation unit, say like {{{ case spark# (work 2) realWorld# of (# _, _ #) -> case par# (work 1) of _ -> return () }}} The LLVM backend generates code like follows: {{{ call ccc void (i8*,i8*)* @newSpark( i8* %lnnX, i8* %lnnZ ) nounwind [...] %lno7 = call ccc void (i8*,i8*)* @newSpark( i8* %lno4, i8* %lno6 ) nounwind }}} So both call `newSpark` under the hood - but inconsistently. The first call expects no returned value, while the second does. As the first seen call doesn't have a returned value, the backend concludes that the function's type must be `void (i8*,i8*) *`, which make `opt` choke on the second usage: {{{ opt: /tmp/ghc23150_0/ghc23150_0.ll:845:1: error: instructions returning void cannot have a name %lno7 = call ccc void (i8*,i8*)* @newSpark( i8* %lno4, i8* %lno6 ) nounwind ^ }}} The underlying reason is in code generation: It doesn't always ask for the return value of `newSpark`. So one way to fix this is to just always get the return value - but simply discard it if it's not needed. Patch attached. -- Ticket URL: <> GHC <> The Glasgow Haskell Compiler | http://www.haskell.org/pipermail/glasgow-haskell-bugs/2011-December/033876.html | CC-MAIN-2013-20 | refinedweb | 230 | 52.94 |
Opened 5 years ago
Closed 5 years ago
Last modified 5 years ago
#11779 closed (duplicate)
Validation doesn't detect import errors
Description (last modified by Alex)
I have a model with a bug in it(I imported user instead of User from django.contrib.auth.models), here is the code:
from django.db import models from django.contrib.auth.models import user class myprofile(models.Model): user = models.ForeignKey(user, unique=True)
When running manage.py validate I get no errors, but when asking for the app's sql I am told that I probably didn't install it. What should happen is that the model shouldn't validate(or at least be told that the app is found but contains errors)
$ python manage.py validate 0 errors found $ python manage.py sql myprofile Error: App with label myprofile could not be found. Are you sure your INSTALLED_APPS setting is correct?
Attachments (0)
Change History (3)
comment:1 Changed 5 years ago by Alex
comment:2 follow-up: ↓ 3 Changed 5 years ago by kmtracey
- Resolution set to duplicate
- Status changed from new to closed
comment:3 in reply to: ↑ 2 Changed 5 years ago by kmtracey
Note: See TracTickets for help on using tickets.
Please use preview. | https://code.djangoproject.com/ticket/11779 | CC-MAIN-2014-15 | refinedweb | 209 | 57.06 |
Hello. Sorry for the basic question, but what is the easiest way to stream data from the Edison (i.e. real-time sensor values or analogIn values) to a Mac, using Python? EDIT: I would need to be able to access this sent data in either Matlab or Python running on a Mac
I want to perform mathematical operations on sensor data but on my Mac and not on the Edison (for several reasons).
Thanks!
Hello menonv,
You may be able to do it by sending serial data throught the Edison's Serial0 port (the one used for the Arduino IDE) which is directly connected to the micro-USB port next to the micro-switch. You can do this by making system calls that send data to ttyGS0. On your computer's side you will have to read this data from the serial port in order to use it as you wish. Here's a quick example on how to send data to Serial0:
from subprocess import call import time while True: call('echo Your_Message > /dev/ttyGS0', shell=True) time.sleep(0.2) call('echo Another_Message > /dev/ttyGS0', shell=True) time.sleep(0.2)
Peter.
If you want to avoid cabling, and use the Wifi network, consider MQTT. Just host a mosquitto broker on your MAC, and run a subscriber.
You can publish data from the Edison. Here is an untested snippet... This is basically how it works...
import paho.mqtt.client as mqtt import time def on_connect(client, userdata, flags, rc): print("Connected with result code "+str(rc)) print("Connected to " + BROKER_ADDR) # BROKER is your MAC def main(): client.connect(BROKER_ADDR) client.loop_start() while True: client.publish("edison/test", "Hello Edison!!") time.sleep(.5) | https://communities.intel.com/thread/87913 | CC-MAIN-2018-13 | refinedweb | 285 | 63.9 |
I am trying to make a 3 star rating system for a project like angry birds. In my case instead of stars as rewards I use cars.
I have 2 buttons on the scene Level1 and Level2
and four images for the star(car) rewards OneCarSprite TwoCarSprite ThreeCarSprite NoCarSprite
I am using Playmaker to pass the following variables to a script.
LevelToReward ( which holds a string with the name of the level to be rewarded Level1 or Level2 )
and carscore (which is the number of stars(cars) the level will be awarded.
I attach the script to each Buttons (Level1 and Level2)
My problem is that the script works for button1 and when i click on button2 it works but I loose the reward that was assigned to button1
using UnityEngine;
using System.Collections;
using UnityEngine.UI;
using HutongGames.PlayMaker;
public class CarsRewardSystem : MonoBehaviour {
public Sprite OneCarSprite;
public Sprite TwoCarSprite;
public Sprite ThreeCarSprite;
public Sprite NoCarSprite;
public string LevelToReward;
Image images; // declare of Image type
public Button l1; // declare of Button type
public Button l2;
// Use this for initialization
void Start () {
LevelToReward = FsmVariables.GlobalVariables.GetFsmString("levelCompleted").Value; // get variable from Playmaker FSM
}
// Update is called once per frame
void Update () {
}
void Cars(int carscore) {
Debug.Log (LevelToReward);
Debug.Log (carscore);
// changes the sprites to allow rewards of 1 , 2, 3 cars according to score
images = gameObject.GetComponent<Image>(); // get the component of Image method
l1 = gameObject.GetComponent<Button>(); // get the component of Button method.
l2 = gameObject.GetComponent<Button>(); // get the component of Button method.
if (carscore == 1 && LevelToReward == "Level1") {
//images.sprite = OneCarSprite;
l1.image.sprite = OneCarSprite;
} else if(carscore == 2 && LevelToReward == "Level1") {
//images.sprite = TwoCarSprite;
l1.image.sprite = TwoCarSprite;
} else if(carscore == 3 && LevelToReward == "Level1") {
//images.sprite = ThreeCarSprite;
l1.image.sprite = ThreeCarSprite;
} else if (carscore >3 || carscore<1){
l1.image.sprite = NoCarSprite;
}
if (carscore == 1 && LevelToReward == "Level2") {
//images.sprite = OneCarSprite;
l2.image.sprite = OneCarSprite;
} else if(carscore == 2 && LevelToReward == "Level2") {
//images.sprite = TwoCarSprite;
l2.image.sprite = TwoCarSprite;
} else if(carscore == 3 && LevelToReward == "Level2") {
//images.sprite = ThreeCarSprite;
l2.image.sprite = ThreeCarSprite;
} else if (carscore >3 || carscore<1){
l2.image.sprite = NoCarSprite;
}
}
}
Answer by GluedBrain
·
Nov 01, 2014 at 04:09 AM
Check this article which covers exactly how to do this from scratch..
Star reward system - Unity
Answer by thelime
·
Oct 17, 2014 at 10:56 AM
Lets see if i understand you. You play level 1 with button 1 and when done you get the rewards? And after you play level 2 with button 2 and you are done you get the rewards but the result from level 1 is missing? If it is like that you have the problem here
} else if (carscore >3 || carscore<1){
l1.image.sprite = NoCarSprite;
// you dont checking if it is level1 you have played so it will always change it to noCarSprite
add this to
&& LevelToReward == "Level1"
and it should work :)
That's my problem yes. I will check your solution in a couple of hours. Thanks for your time :)
okey if it still is problem post it here and i will try to help you!
no luck it is not working. At other parts of my code I am using PlayerPrefs to save and load the rewards of each level. Maybe there is the problem. If this part of code is correct.
yes maybe. if you whant more help you need to post more kode and show how you save and loade draw lines/images on 4.6 uGUI Image for a scrollable map?
0
Answers
Sprite Becoming Blurry
2
Answers
[4.6 UI] Image fading - How to?
2
Answers
Unity 4.6 UI - Image Order
4
Answers
Image with transparency cut-offs as button
2
Answers | https://answers.unity.com/questions/811028/help-with-3-star-reward-system.html | CC-MAIN-2020-50 | refinedweb | 615 | 67.65 |
B3 is a binary serializer which is easy like json, compact like msgpack, powerful like protobuf, and handles datetimes in python
Project description
B3 = Better Binary Buffers
B3 is a data serializer, it packs data structures to bytes & vice versa. It has:
- The schema power of protobuf, without the setup/compiler pain,
- The quick-start ease of json.dumps, but with support for datetimes,
- The compactness of msgpack, but without a large zoo of data types.
With B3 you can fast-start with schema-less data (like json), and move to schemas (like protobuf) later & stay compatible. Or have ad-hoc json-like clients talk to rigorous protobuf-like servers without pain & suffering.
The small number of lovingly-handcrafted data types means often the only choice you need make is between Fast or Compact.
This version is pure python, no dependencies apart from Six (and pytest for the tests). Tested working in python 3.8 & 2.7 on windows & linux.
Version
B3 is now version 1.x, it is out of beta.
The wire format and existing core data types are now frozen and will not change.
- Except for the unused core types 10,11,12 which may have a type assigned in future, and
- Except for SCHEDs unfinished named-timezone support, which needs py3.10+)
(v1.x is not backward compatible with beta 0.9.x versions)
Installing
pip install b3buf >>> import b3
Getting Started
You can pack lists of things (like json.dumps):
import b3 list_data = [ None, b"foo", u"bar", True, -69, 2.318, 46j, [1,2,3], {4:5, 6:7}, decimal.Decimal("13.37"), datetime.datetime.now() ] list_buf = b3.pack(list_data) out_list = b3.unpack(list_buf)
Complex numbers, decimal numbers, and dates and times all work.
You can pack dicts of things:
dict_data = { 1:1, u"2":u"2", b"3":b"3" } dict_buf = b3.pack(dict_data) out_dict = b3.unpack(dict_buf, 0)
Byte keys are supported as well as string and number keys
You can save on slicing when unpacking by giving unpack a start index
Schema Packing
You can make messages using a "type, name, tag_number" schema (like protobuf)
SCHEMA = ( (b3.B3_BYTES, "bytes1", 1), (b3.B3_UVARINT, "number1", 2), )
Schema packing/unpacking is to and from python Dicts.
sch_data = dict(bytes1=b"foo", number1=69) sch_buf = b3.schema_pack(SCHEMA, sch_data) out_sch = b3.schema_unpack(SCHEMA, sch_buf)
Tests
B3 ships with an extensive test suite, using pytest.
pip install pytest cd /your/site-packages/b3 pytest
More Info
See the tests, and examples.py in the tests folder for more examples (including how to nest schemas)
See datatypes.py for the available data types.
See wire_format.md for an overview of the wire format.
Licensing
The code in this project is licensed under MIT license. See LICENSE.txt.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/b3buf/1.0.8/ | CC-MAIN-2022-40 | refinedweb | 483 | 67.86 |
Builds fails to create js, libjs
.so - version 0.9.6
RESOLVED INCOMPLETE
Status
()
▸
JavaScript Engine
People
(Reporter: Stew Benedict, Unassigned)
Tracking
Firefox Tracking Flags
(Not tracked)
Details
Attachments
(1 attachment)
Building on Mandrake PPC, the build porcess fails to create the above files, due to an error with VARARGS. The following patch forces it to build: --- mozilla/js/src/jsprf.c.ppc Thu Nov 29 13:22:02 2001 +++ mozilla/js/src/jsprf.c Thu Nov 29 13:33:46 2001 @@ -55,6 +55,9 @@ #else #define VARARGS_ASSIGN(foo, bar) (foo) = (bar) #endif +#if defined(__powerpc__) +#define VARARGS_ASSIGN(foo, bar) foo[0] = bar[0] +#endif /* ** WARNING: This code may *NOT* call JS_LOG (because JS_LOG calls it) I tried various mods to configure.in, but was not successful in getting to work at that level.
What version of gcc/glibc does that version of Mandrake PPC use? The LinuxPPC (2000?) tinderbox that we have does not see that problem so I don't think hardcoding that ifdef is the proper fix.
Assignee: seawood → rogerl
Component: Build Config → Javascript Engine
QA Contact: granrose → pschwartau
Formally confirming and reassigning to khanson; cc'ing rginda, jband as additional reviewers for this patch -
Assignee: rogerl → khanson
Status: UNCONFIRMED → NEW
Ever confirmed: true
Created attachment 60476 [details] [diff] [review] Patch to fix build on Linux PPC platform This patch fixes the build system, since there is no need to modify source
The problem described can only be seen when building Javascript engin as a standalone library (using Makefile.ref), not when building in Mozilla seamonkey.. I've attached a clean patch which fix the standalone build rules for Linux PPC
Just need r=, sr= to get this patch in -
Comment on attachment 60476 [details] [diff] [review] Patch to fix build on Linux PPC platform r=rginda
Attachment #60476 - Flags: review+
I'm the wrong person to review this. But, like cls suggested, it is not obvious if this flag is required for all Linux/PPC or if the detection should be more fine grained to only switch on some particular combination of gcc/glibc versions. Without additional information there is no way to be sure that this patch won't help some while hurting others.
Can this patch make things worse? I'm willing to sr= it if someone can do a little investigation of under what conditions mozilla/configure.in defines HAVE_VA_LIST_AS_ARRAY, and then ensure that the same conditions are respected by the patch. /be
Sorry I took so long to get back to you. Our in-house maintainer has come up with a patch, which I see he forwarded, which he feels is a better solutions: In response to the other questions: [cooker@grapefruit ~/rpm-rebuilder/failure]$ gcc --version 2.95.3 [cooker@grapefruit ~/rpm-rebuilder/failure]$ rpm -q glibc glibc-2.2.4-15mdk thx Stew Benedict
Targeting for 9.9. Appears that Robert Ginda r= (see comment #6) and Brendan is willing to sr= (see comment #8).
Target Milestone: --- → mozilla0.9.9
Moved to Mozilla 1.2 as a temporary holding place, as bugs remaining on 0.9.9 are being scrutinized, now that the milestone has passed. If this must be fixed for MachV, then it needs to be assigned the "nsbeta1" keyword.
Target Milestone: mozilla0.9.9 → mozilla1.2
Any hope to have r= and sr= (post mozilla1.0 of course) ? As a reminder, this patch doesn't affect mozilla build, only standalone libjs build.. BTW, could someone with enough privilege add patch keyword to this bug ?
Adding patch keyword as requested. Have r=, but sr= will require an answer to Brendan's question above: > Can this patch make things worse? I'm willing to sr= it if someone can > do a little investigation of under what conditions mozilla/configure.in > defines HAVE_VA_LIST_AS_ARRAY, and then ensure that the same conditions > are respected by the patch.
Keywords: patch
Oops, I forgot we already had r=.. I've checked mozilla configure.in. HAVE_VA_LIST_AS_ARRAY is defined if ${target_cpu} is ppc.. And that was patch with r= fix (test for CPU_ARCH == ppc) for libjs standalone build...
The hardcoded HAVE_VA_LIST_AS_ARRAY was for NTO ppc builds. Further down on line 1910, there's actually a runtime test. I don't know why the define is hardcoded for NTO as the test was explicitly added for NTO (and other platforms) per bug 20882.
Oops, you're right, I checked configure.in too fast.. So, now we are stuck in a bigger problem : there is a runtime test done by configure but standalone js engine build doesn't use configure stuff at all :(
Stew, Is this still an issue?
Assignee: khanson → general
QA Contact: pschwartau → general
(In reply to comment #17) > Stew, Is this still an issue? Closing this bug after 9 years. I don't have a PPC machine though so please reopen or file a new bug if you're still unable to build the shell.
Status: NEW → RESOLVED
Last Resolved: 7 years ago
Resolution: --- → INCOMPLETE | https://bugzilla.mozilla.org/show_bug.cgi?id=113433 | CC-MAIN-2018-22 | refinedweb | 831 | 64.81 |
online using Wakari. Both methods worked fine for simple examples.
In chapter 1, we are given a quick tour of what the IPython notebook can do using the modeling of a cup of coffee as an example. The idea, on its own works well. [As a scientist, I found the use of degree Farenheit a bit odd and reminded me of books I read that were written 50 or more years ago.] However, while the authors used variables that follow the normal Python convention, separating names by underscore as in temp_cream, the code formatting is at times atrocious as variable names are sometimes split up after an underscore character as in the following two lines of code:
initial_temp_mix_at_shop = temp_mixture(temp_coffee, vol_coffee, temp_
cream, vol_cream)
which are, ironically enough, in bold in the text as the author wants us to focus our attention on their meaning (but no mention of the error in formatting).
While I usually prefer holding a paper book over reading an ebook on my screen, I must admit that being able to copy-paste code samples beats having to retype everything. So, it was possible to quickly reproduce the examples by copy-pasting and fixing the typos to run the examples.
However, a much better way exists which is often used by Packt books: making code samples available for download. The IPython notebooks have been designed for easy sharing. Here, the author has chosen not to make use of this. This, in my opinion, is something that is a major flaw in this case.
Chapter 2 covers in more details the notebook interface. This is undoubtably the most important chapter of the book for anyone who wishes to learn about the IPython notebook. It covers a lot of grounds.
However, I encountered many problems, some more serious than others. The first, which is perhaps more an annoyance than a real problem, is that one example intended to show code completion using the tab key is given as follows:
print am
Python programmers will immediately recognize this as being a Python 2 example. However, as far as I could tell (using ctrl-f "python 2") there isn't a single mention anywhere that Python 2 is used.
I happened to have the Anaconda 3.4 distribution installed (as recommended) but with Python 3.4 and not Python 2. Python 3 is now 6 years old and there is no excuse to focus on an old, and soon to be obsolete, version of Python without at least mentioning why that choice was made. Minor syntax difference, like adding parentheses for print statements, are easily fixed; some more subtle ones are not. Furthermore, while I had the Anaconda distribution installed, I was still using the online Wakari site to work through the examples, so that was not a major problem at that point.
While still in chapter 2, we are invited to replace "%pylab inline" by "%pylab" and run an example again to see a plot appear in a window (I first assumed a separate browser window) instead of being shown in the document. This does not work using the online Wakari site: the window is actually a Qt window, not something that works in a browser. So, to try this example, I had to start my local version and recopy the code, which worked correctly.
Shortly thereafter, we are introduced to more "magic" methods and invited to try running a cython example, loading from a local file. This did not work. The recommended "%%cython" magic method is no longer working in the latest IPython version included with the Python 3.4 Anaconda 3.4 distribution. After a while, I found the proper way to run cython code with the latest version BUT the example provided raised a (numpy-related?) syntax error. I copy-pasted the code from my browser to the Wakari online version and it worked correctly, confirming that I had not made an error in reproducing the code given by the author. However, I was not able to figure out the source of the error using the local version.
After finishing Chapter 2, I stopped trying to run every single examples and simply skimmed the book.
Chapter 3 focuses on producing plots with matplotlib, including animations. While not specific to the IPython notebook, this topic felt like an appropriate one to cover.
In Chapter 4, we learn about the pandas library which has very little to do with the IPython notebook. The situation is similar with Chapter 5 which covers SciPy, Numba and NumbaPro, again three libraries that have very little to do with the notebook as such. The choice of NumbaPro struck me as a bit odd since it is a commercial product. True enough, it can be tried for free - however, it is not something that I would consider to be an "essential" for the IPython notebook.
I know very little more about the IPython notebook than what I have learned from this book. However, I do know that it is possible to write extensions for the IPython notebook which is something that I would have thought should be included in any book titled "IPython Notebook Essentials", well before discussing specialized libraries such as Pandas, SciPy, Numba and NumbaPro.
There might very well be other topics more notebook specific that should be included, but I have no way to know from this book.
The book includes three appendices: a brief IPython notebook reference card, a brief review of Python, and an appendix on Numpy arrays. Both the Reference card and the Numpy arrays appendices seem like worthwhile additions. However, the brief review of Python seems a bit out of place. By including code like:
def make_lorenz(sigma, r, b):
def func(statevec, t):
x, y, z = statevec
return [ sigma * (y - x),
r * x - y - x * z,
x * y - b * z ]
return func
in Chapter 2, the author seems to assume, and rightly so in my opinion, that the reader will be familiar with Python. However, the appendix only covers the standard Python construct that one may find in a beginner's book intended for programmers that are familiar with other languages. As such, the Python review appendix seems just like a filler, increasing the page count while adding nothing of value to the book. Thankfully, it is relegated to an appendix instead of being inserted in an early chapter.
In summary, about half of the book contains information of value for someone who wants to learn about the IPython notebook itself; the choice of Python 2 over Python 3 is odd, and almost inexcusable given that it is not even mentioned anywhere; the lack of downloable code samples" (mostly IPython notebooks in this case) greatly reduces the value of this book and is something that could be remedied by the author. In fact, given the typos mentioned (where variable names are split over two lines), downloadable copies of notebooks should be made available.
As I write this review, Packt is having a sale during which ebooks are available for $5. At that price, I would say that IPython Notebook Essentials is worth it if one wants to learn about the IPython Notebook; however, based on a quick scan of other Packt books covering the IPython notebook, I believe that better choices exist from the same editor.
1 comment:
Glad to see I'm not the only one that thinks Pakt has some serious tech formatting/editing problems. | https://aroberge.blogspot.com/2014/12/review-of-ipython-notebook-essentials.html | CC-MAIN-2018-13 | refinedweb | 1,242 | 66.17 |
Modern C++: The Good Parts/Switching things up
In the previous chapter, you received your first assignment: to write a simple calculator. At the time, if statements were your only way to make a decision; this chapter introduces the switch statement, which works similarly to the if statement but is more suited to problems like the calculator.
Code[edit | edit source]
Here's a calculator built around a switch statement:
#include <iostream> #include <string> int main() { std::string input; float a, b; char oper; float result; std::cout << "Enter two numbers and an operator (+ - * /).\n"; // Take input. std::cin >> input; // Parse it as a float. a = std::stof(input); // Take input. std::cin >> input; // Parse it as a float. b = std::stof(input); // Take input. std::cin >> input; // Get the first character. oper = input[0]; switch (oper) { case '+': result = a + b; break; // DON'T case '-': result = a - b; break; // FORGET case '*': result = a * b; break; // THESE case '/': result = a / b; break; // !!! } std::cout << a << " " << oper << " " << b << " = " << result << "\n"; }
Explanation[edit | edit source]
std::stof is similar to
std::stoi, except it converts to float.
input[0] is the first character in the string
input. Anytime you see square brackets with a number (or variable) between them, the number is zero-based. This means that what people normally call the "first" character is at index 0, the "second" is at index 1, and so forth.
The switch statement has exactly the same effect as the if-else chain you wrote for your own calculator. Notice that every
case label has a matching
break statement; be careful that this is true of any switch statement you write, because otherwise you might get some very surprising behavior. Specifically, without a break statement, control will flow right past any labels, causing the logic under them to be executed in multiple cases.
Exercises[edit | edit source]
- Explore what happens when you divide one int by another, and how that differs from doing the same with two floats, and an int and a float. Discuss your findings with your instructor.
Vocabulary[edit | edit source]
- switch statement
- uses its expression to select which
caselabel to jump to. | https://en.wikibooks.org/wiki/Modern_C%2B%2B:_The_Good_Parts/Switching_things_up | CC-MAIN-2021-21 | refinedweb | 360 | 62.78 |
Hello world not compiling
So I followed through, and I've successfully installed haxeFlixel on my windows 10 machine. I used notepad++ to edit the Playstate.hx file as instructed, but it throws an error that there is unexpected text.
Specifically, it says unexpected characters on line 10 -- which is where the text variable that was created earlier. I've tried simply copy pasting the example in, but unfortunately it doesn't seem to matter--for whatever reason, it's not recognizing that 'text' is a variable.
Here's the code:
class PlayState extends FlxState
{
override public function create():Void
{
super.create(
var text = new flixel.text.FlxText(0, 0, 0, "Hello World", 64);
text.screenCenter();
add(text);
);
}
and the error
HelloWorld>lime test html5
source/PlayState.hx:11: characters 0-4 : Unexpected text
Consider I've literally just copy and pasted, and I had no errors while running all the installs earlier, I'm really not sure what the problem is.
Thanks for your help!
perhaps you forgot to import the Flxtext.
package; import flixel.FlxState; import flixel.text.FlxText; import flixel.util.FlxColor; class PlayState extends FlxState { var text:FlxText; override public function create():Void { super.create(); text = new FlxText(0, 0, 0, "Hello World", 64); text.screenCenter(); text.scrollFactor.set(); add(text); } override public function update(elapsed:Float):Void { super.update(elapsed); } }
@felgs said in Hello world not compiling:
super.create(
var text = new flixel.text.FlxText(0, 0, 0, "Hello World", 64);
text.screenCenter();
add(text);
);
the problem is that you have a misplaced
). It should be:
super.create(); var text = new flixel.text.FlxText(0, 0, 0, "Hello World", 64); text.screenCenter(); add(text);
@felgs, notice part of the code
flixel.text.FlxTextthat you copied from that lesson. If you are using flash develop then you can replace that code with
FlxText. in that IDE, mouse click
FlxText, press
CTRL SHIFT 1and the import will be created. Then the word
FlxTextis all you need to create text. so always use the shortcut, for example
FlxColorclick it. do
CTRL SHIFT 1and your finished. no need to remember all the imported stuff. if you are not using that IDE then copy
import flixel.text.FlxText;and paste below the top line that has the word
package;then in a function just use
new FlxText
Thanks everyone for the responses! The main culprit was indeed not importing the flxText. This is pretty amusing, since the current Hello World tutorial on haxeFlixel says nothing about importing it to make this file work.
However, I'm still not able to get the code compile, though for a different reason now. This hello world guide for just testing it with the command line is the one that I'm following.
My new error, after much fussing and trying to figure out if maybe I missed something else, is twofold.
- it has no idea what to do with text.screenCenter(); -- it throws an error every time about unexpected text.
- If I get rid of that, I also get the fun error "unexpected add" when I try to compile it :)
At this point, I'm really debating if this is at all worth all the effort; I'm just trying to compile the Hello World that's on the site, and it seems to be missing several bits of information I would need.
Here's everything I have written; as mentioned before, I'm just editing it in Notepad++ for now until I'm sure I actually want to work with the language.
package; import flixel.FlxState; import flixel.text.FlxText; class PlayState extends FlxState { var text:FlxText; override public function create():Void { super.create( text = new FlxText(0, 0, 0, "Hello World", 64); text.screenCenter(); add(text); } override public function update(elapsed:Float):Void { super.update(elapsed); } }
Thank you again to everyone offering solutions and help! If there's something I'm missing, please explain why and what it does if you can; I also think this stuff should probably be updated in the hello world on the actual site, because, again, there is nothing about importing the text libraries, and if you take the code as written in the tutorial, it simply doesn't work.
@felgs said in Hello world not compiling:
you need
super.create();not
super.create(
flash develop is a nice IDE. you can learn the API much faster with it. for example, typing
text.will then display a popup box with all the functions and variables syntax to use for
text. that popup will happen after you type
., such as
this.or
text.
also, this is a good tutorial
That did it! Thank you so much for your help, and the suggestion about an IDE. I feel like an idiot missing that little detail.
I am glad i could help.
here is something to remember. I am not sure if this applies to the newer version of haxeflixel, a class name such as Reg will not give an error but will not give the correct result either when written RegReg.variable. so be careful with that. took me weeks to find that bug. :) | http://forum.haxeflixel.com/topic/832/hello-world-not-compiling | CC-MAIN-2020-40 | refinedweb | 856 | 68.16 |
ReferenceClasses
Percentile
Objects With Fields Treated by Reference (OOP-style)
The software described here allows packages to define reference classes that behave in the style of “OOP” languages such as Java and C++. This model for OOP differs from the functional model implemented by S4 (and S3) classes and methods, in which methods are defined for generic functions. Methods for reference classes are “encapsulated” in the class definition.
Computations with objects from reference classes invoke methods on them and
extract or set their fields, using the
`$` operator in R.
The field and method computations potentially modify the object.
All computations referring to the objects see the modifications, in contrast to
the usual functional programming model in R.
A call to
setRefClass in the source code for a package defines the class and returns a generator object.
Subsequent calls to the
$methods()
method of the generator will define methods for the class.
As with functional classes, if the class is exported from the package,
it will be available when the package is loaded.
Methods are R functions. In their usual implementation, they refer to fields and other methods of the class directly by name. See the section on “Writing Reference Methods”.
As with functional classes, reference classes can inherit from other
reference classes via a
contains= argument to
setRefClass. Fields and methods will be inherited, except where the
new class overrides method definitions. See the section on “Inheritance”.
- Keywords
- programming , classes
Usage
setRefClass(Class, fields = , contains = , methods =, where =, inheritPackage =, ...)
getRefClass(Class, where =)
Arguments
- Class
character string name for the class.
In the call to
getRefClass()this argument can also be any object from the relevant class.
- fields
either a character vector of field names or a named list of the fields. The resulting fields will be accessed with reference semantics (see the section on “Reference Objects”). If the argument is a list, each element of the list should usually be the character string name of a class, in which case the object in the field must be from that class or a subclass. An alternative, but not generally recommended, is to supply an accessor function; see the section on “Implementation” for accessor functions and the related internal mechanism.
Note that fields are distinct from slots. Reference classes should not define class-specific slots. See the note on slots.
- where
for
setRefClass, the environment in which to store the class definition. Should be omitted in calls from a package's source code.
For
getRefClass, the environment from which to search for the definition. If the package is not loaded or you need to be specific, use
asNamespacewith the package name.
- inheritPackage
Should objects from the new class inherit the package environment of a contained superclass? Default
FALSE. See the Section “Inter-Package Superclasses and External Methods”.
- …
other arguments to be passed to
setClass.
Value are accessed by reference. In particular, invoking a method may modify the content of the fields.
Programming for such classes involves writing new methods for a
particular class.
In the R implementation, these methods are R functions, with zero or
more formal arguments.
For standard reference methods, the object itself is not an explicit
argument to the method.
Instead, fields and methods for the class can be referred to by name
in the method definition.
The implementation uses R environments to make fields and other methods
available by name within the method.
Specifically, the parent environment of the method is the object itself.
See the section on “Writing
Reference Methods”.
This special use of environments is optional. If a method is defined
with an initial formal argument
.self, that will be passed in
as the whole object, and the method follows the standard rules for any
function in a package. See the section on “External
$methods()
on a generator object
g or as
the argument
methods in a call to
setRefClass.
The two mechanisms have the same effect, but the first makes the code more readable.
Methods are written as ordinary R functions but have some special features and restrictions in their usual form. In contrast to some other languages (e.g., Python), the object itself does not need to be an argument in the method definition. The body of the function can contain calls to any other reference method, including those inherited from other reference classes and may refer to methods and to fields in the object by name.
Alternatively, a method may be an external method.
This is signalled by
.self being the first formal argument to the method.
The body of the method then works like any ordinary function.
The methods are called like other methods (without the
.self
argument, which is supplied internally and always refers to the object
itself).
Inside the method, fields and other methods are accessed in the form
.self$x.
External methods exist so that reference classes can inherit the
package environment of superclasses
in other packages; see the section on “External Methods”..
Two method names are interpreted specially,
initialize
and
finalize. If an
initialize method is defined, it
will be invoked when an object is generated from the class. See the
discussion of method
$new(...)
in the section “Initialization Methods”.
If a
finalize method is defined, a function will be
registered to invoke it before the environment in
the object is discarded by the garbage collector; finalizers are
registered with
atexit=TRUE, and so are also run at the end of
R sessions. See the matrix viewer example for both initialize and
finalize methods.
Reference methods can not themselves be generic functions; if you want additional function-based method dispatch, write a separate generic function and call that from the method.
Two special object names are available.
The entire object can be referred to in a method by the reserved
name
.self.
The object
.refClassDef contains the definition of the
class of the object.
These are accessed as fields but are read-only, with one exception.
In principal, the
.self field can be modified in the
$initialize
method, because the object is still being created at this stage.
This is not recommended, as it can invalidate the object with respect
to its class.
The methods available include methods inherited from superclasses, as discussed in the section “Inheritance”.
method for the generator object.
Methods for classes are not documented in the
Rd format used
for R functions.
Instead, the.
Initialization Methods...
Methods Provided for all Objects
All reference classes inherit from the class
"envRefClass".
All reference objects can use.
External Methods; Inter-Package Superclasses
The environment of a method in a reference class is the object itself,
as an environment.
This allows the method to refer directly to fields and other methods,
without using the whole object and the
"$"
operator.
The parent of that environment is the namespace of the package in
which the reference class is defined.
Computations in the method have access to all the objects in the
package's namespace, exported or not.
When defining a class that contains a reference superclass in another
package, there is an ambiguity about which package namespace should
have that role.
The argument
inheritPackage to
setRefClass() controls
whether the environment of new objects should inherit from an
inherited class in another package or continue to inherit from the
current package's namespace.
If the superclass is “lean”, with few methods, or exists primarily to support a family of subclasses, then it may be better to continue to use the new package's environment. On the other hand, if the superclass was originally written as a standalone, this choice may invalidate existing superclass methods. For the superclass methods to continue to work, they must use only exported functions in their package and the new package must import these.
Either way, some methods may need to be written that do not assume the standard model for reference class methods, but behave essentially as ordinary functions would in dealing with reference class objects.
The mechanism is to recognize external methods.
An external method is
written as a function in which the first argument, named
.self,
stands for the reference class object.
This function is supplied as the definition for a reference class method.
The method will be called, automatically, with the first argument
being the current object and the other arguments, if any, passed along
from the actual call.
Since an external method is an ordinary function in the source code
for its package, it has access to all the objects in the namespace.
Fields and methods in the reference class must be referred to in the
form
.self$name.
If for some reason you do not want to use
.self as the first
argument, a function
f() can be converted explicitly as
externalRefMethod(f), which returns an object of class
"externalRefMethod" that can be supplied as a method for the
class.
The first argument will still correspond to the whole object.
External methods can be supplied for any reference class, but there is no obvious advantage unless they are needed. They are more work to write, harder to read and (slightly) slower to execute.
NOTE: If you are the author of a package whose reference classes are likely to be subclassed in other packages, you can avoid these questions entirely by writing methods that only use exported functions from your package, so that all the methods will work from another package that imports yours.(...)
With no arguments, returns the names of the reference methods for this class. With one character string argument, returns the method of that name.
Named arguments are method definitions, which will be installed in the class, as if they had been supplied in the
methodsargument to
setRefClass(). Supplying methods in this way, rather than in the call to
setRefClass(), is recommended for the sake of clearer source code. See the section on “Writing Reference Methods” for details.)
Establish a traced version of methodallows tracing on the methods of the generator object itself. By default,
what as S4 Classes.
As a related feature, the element in the
fields= list supplied
to
setRefClass can be an accessor
function, a function of one argument that returns
the field if called with no argument or sets it to the value of the
argument otherwise.
Accessor functions are used internally and for inter-system interface
applications, but not generally recommended as they blur the concept
of fields as data within the object.
A field, say
data, can be accessed generally by an expression
of the form
x$data
for any object from the relevant class.
In an internal method for this class, the field can be accessed by the name
data.
A field that is not locked can be set by an expression of the form
x$data <- value.
Inside an internal method, a field can be assigned by an expression.
Reference classes can have validity methods in the same sense as any
S4 class (see
setValidity).
Such methods are often a good idea; they will be called by calling
validObject and a validity method, if one is defined,
will be called when a reference object is created (from version 3.4 of
R on).
Just remember that these are S4 methods. The function will be called
with the
object as its argument. Fields and methods must be
accessed using
$.
Note: Slots. Because of the implementation, new reference classes can inherit from non-reference S4 classes as well as reference classes, and can include class-specific slots in the definition. This is usually a bad idea, if the slots from the non-reference class are thought of as alternatives to fields. Slots will as always be treated functionally. Therefore, changes to the slots and the fields will behave inconsistently, mixing the functional and reference paradigms for properties of the same object, conceptually unclear and prone to errors. In addition, the initialization method for the class will have to sort out fields from slots, with a good chance of creating anomalous behavior for subclasses of this class.
Inheriting from a class union, however, is a reasonable strategy (with all members of the union likely to be reference classes)..
References
Chambers, John M. (2016) Extending R, Chapman & Hall. (Chapters 9 and 11.)
Aliases
- ReferenceClasses
- setRefClass
- getRefClass
- initFieldArgs
- initRefFields
- activeBindingFunction-class
- defaultBindingFunction-class
- uninitializedField-class
- refClassRepresentation-class
- refObjectGenerator-class
- refGeneratorSlot-class
- refClass-class
- refObject-class
- refMethodDef-class
- refMethodDefWithTrace-class
- SuperClassMethod-class
- show,envRefClass-method
- show,refMethodDef-method
- show,externalRefMethod-method
- show,refClassRepresentation-method
- externalRefMethod
- externalRefMethod-class
Examples
library(methods)
## a simple editor for matrix objects. Method $edit() changes some ## range of values; method $undo() undoes the last edit. mEdit <- setRefClass("mEdit", fields = list( data = "matrix", edits = "list")) ## The basic edit, undo methods mEdit$methods() }) ## A method to automatically print objects mEdit$methods( methods ## A method to save the object mEdit$methods( save = function(file) { 'Save the current object on the file in R external object format. ' base::save(.self, file = file) } ) tf <- tempfile() xx$save(tf) ##) <!-- %$ --> | https://www.rdocumentation.org/packages/methods/versions/3.4.1/topics/ReferenceClasses?tap_a=5644-dce66f&tap_s=10907-287229 | CC-MAIN-2020-50 | refinedweb | 2,149 | 55.34 |
Meetings:Telecon20151023
Contents
Agenda for DWBP teleconference 23-October 2015.
- Chair: Deirdre
- Scribe:
- Regrets: Phil
Preliminaries
- Check bots are running (see #The IRC Bots)
- Matching everyone on IRC and webex, by typing 'present+ name'. (It helps if you use the same name on IRC and WebEx.)
- Appoint Scribe
- Approving last call's [1]
Main agenda
Data usage:
- Brief update (Bernadette, Eric Stephan, Sumit)
Best Practices (Bernadette, Caroline and Newton)
- Glosary
- The open issues that remained from F2F.
- Erik Wilde's email about DWBP feeback.
Data Quality
- Should/can Christophe stay as editor?
- Proposed: DQ should occupy a separate namespace to DCAT. (Discuss/vote) (Fadi)
- added an early draft example of linkset quality: there are some technical issues come up from this example, but Riccardo guess we have to postpone such technical discussion after the editor draft is out (and I agree)
- consider an (DQV or DUV) issue about?
- problems of having dimensions, categories (and properties between them) as "abstract". Proposal to officially open the two following issues:
- discussion on range, domain and inverse of dqv:computedOn, so that it is the exact inverse property of dqv:hasQualityMeasure and a subclass of daq:computedOn
- Issue 200: Can we align the quality dimension hints in DQV with the ones in ISO 25012? (Nandana)
- Issues & Actions
- 2015-10-23
-
- Filip | https://www.w3.org/2013/dwbp/wiki/Meetings:Telecon20151023 | CC-MAIN-2017-39 | refinedweb | 217 | 52.49 |
Connect the Billing Form
Now all we have left to do is to connect our billing form to our billing API.
Let’s start by including Stripe.js in our HTML.
Append the following the the
<head> block in our
public/index.html.
<script src=""></script>
Replace our
render method in
src/containers/Settings.js with this.
handleFormSubmit = async (storage, { token, error }) => { if (error) { alert(error); return; } this.setState({ isLoading: true }); try { await this.billUser({ storage, source: token.id }); alert("Your card has been charged successfully!"); this.props.history.push("/"); } catch (e) { alert(e); this.setState({ isLoading: false }); } } render() { return ( <div className="Settings"> <StripeProvider apiKey={config.STRIPE_KEY}> <Elements> <BillingForm loading={this.state.isLoading} onSubmit={this.handleFormSubmit} /> </Elements> </StripeProvider> </div> ); }
And add the following to the header.
import { Elements, StripeProvider } from "react-stripe-elements"; import BillingForm from "../components/BillingForm"; import config from "../config"; import "./Settings.css";
We are adding the
BillingForm component that we previously created here and passing in the
onSubmit prop that we referenced in the last chapter. In the
handleFormSubmit method, we are checking if the Stripe method from the last chapter returned an error. And if things looked okay then we call our billing API and redirect to the home page after letting the user know.
An important detail here is about the
StripeProvider and the
Elements component that we are using. The
StripeProvider component let’s the Stripe SDK know that we want to call the Stripe methods using
config.STRIPE_KEY. And it needs to wrap around at the top level of our billing form. Similarly, the
Elements component needs to wrap around any component that is going to be using the
CardElement Stripe component.
Finally, let’s handle some styles for our settings page as a whole.
Add the following to
src/containers/Settings.css.
@media all and (min-width: 480px) { .Settings { padding: 60px 0; } .Settings form { margin: 0 auto; max-width: 480px; } }
This ensures that our form displays properly for larger screens.
And that’s it. We are ready to test our Stripe form. Head over to your browser and try picking the number of notes you want to store and use the following for your card details:
- A Stripe test card number is
4242 4242 4242 4242.
- You can use any valid expiry date, security code, and zip code.
- And set any name.
You can read more about the Stripe test cards in the Stripe API Docs here.
If everything is set correctly, you should see the success message and you’ll be redirected to the homepage.
Commit the Changes
Let’s quickly commit these to Git.
$ git add . $ git commit -m "Connecting the billing form"
Next, we’ll set up automatic deployments for our React app using a service called Netlify. This will be fairly similar to what we did for our serverless backend API.
For help and discussionComments on this chapter
If you liked this post, please subscribe to our newsletter, give us a star on GitHub, and follow us on Twitter. | https://serverless-stack.com/chapters/connect-the-billing-form.html | CC-MAIN-2019-04 | refinedweb | 501 | 67.96 |
Details
Description
I defined a method like:
Integer 'int'(String name) { def o = get(name) if(o instanceof Number) { return o.intValue() } else if(o != null) { try { return Integer.parseInt(o.toString()) } catch (NumberFormatException e) { } } }
This compiles fine in Groovy but blows up with the joint compiler with an error like:
[groovyc] public java.lang.Integer byte(java.lang.String name) { return (java.lang.Integer)null;}
Activity
- All
- Work Log
- History
- Activity
- Transitions
The fix IMO should be just to exclude them from the generated Java classes since it would be impossible for Java to call these methods anyway
With these instructions to try and reproduce the problem:
git clone git://github.com/grails/grails.git cd grails/grails git checkout eb82261fa94beca22c422cc5968779fc5cdaa68d ant clean jar
With Groovy 1.6.x, I do see some warnings in the output, but I don't see the joint compiler error with the byte / int / etc. methods.
The build also fails with OOME Java heap space, so I hope it's not the tree hiding the forest.
But so far, I've not been able to see that problem.
I already fixed this a few weeks back. Do you still see this with 1.6.x?
IntelliJ joint compiler also compiles it fine | https://issues.apache.org/jira/browse/GROOVY-3895 | CC-MAIN-2017-26 | refinedweb | 209 | 65.93 |
In this Program, you’ll learn how to find Prime number entered by User.
What is the prime number?
To understand this example, you should have the knowledge of C programming :
A number is considered as the prime number when it satisfies the below conditions.
- It should be the whole number
- It should be greater 2 factors only. They are 1 and the number itself. But, number 4 has 2*2 also. Like this, all remaining numbers 6, 8, 9, 10, 12, 14, 15, 16 have factors other than 1 and the number itself. So, these are not called as prime numbers.
Program to find Prime Numbers Entered by User
#include <stdio.h> int main() { int n, i, flag = 0; printf("Enter a positive integer: "); scanf("%d",&n); for(i=2; i<=n/2; ++i) { // condition for nonprime number if(n%i==0) { flag=1; break; } } if (flag==0) printf("%d is a prime number.",n); else printf("%d is not a prime number.",n); return 0; }
Output:
Please enter a number: 13 Entered number is 13 and it is a prime number.
Related C Programs
- C program for factorial
- C program for Fibonacci series
- C Program to Print an Integer (Entered by the User)
- C Program to Add Two Integers
Ask your questions and clarify your/others doubts on C Prime Numbers by commenting. Documentation | https://coderforevers.com/c/c-program/prime-number/ | CC-MAIN-2019-39 | refinedweb | 226 | 72.36 |
DS3232 Real Time Clock Module Setup, Arduino Library Use Shown With an Example Project - Tutorial
Introduction
This is a tutorial of an open source Arduino library, which lets you turn your Arduino to a “Clock”, with a Maxim Integrated DS3232 Real Time Clock module. I will show you how to install the library to Arduino IDE and make an example project with the library, throughout this tutorial while giving information digital clocks and their working principle.
What Will I Learn?
In this tutorial you will learn:
- General knowledge about digital clocks.
- How to install DS3232RTC library to Arduino IDE.
- How to use DS3232RTC library and how to use It’s functions.
- How to apply Arduino real time clock project using the library functions.
Requirements
This tutorials requirements are:
- Arduino UNO or equivalent microprocessor.
- Arduino IDE (Download here)
- DS3232RTC Arduino library (Download here
- Maxim Integrated DS3232 or DS3231 Real-Time Clock module, Breadboard, 4 Jumper cables.
Difficulty
This tutorials difficulty is:
- Basic
Tutorial Contents
What is a Digital Clock and How It Works
A digital clock is a clock consisting of a power supply, a circuit consisting of an crystal oscillator and a display to show the time, which shows the time digitally unlike traditional analog clocks. First digital clock is made by an Austrian engineer named Josef Pallweber using a jump hour mechanism. Digital clocks have the same fundamental working principle as the analog clocks. They need a source of power to run the clock, which is the power supply, a battery or AC power, a display which can be a LED, LCD display or a seven segment display, and a time base that keeps track of the time which is the time circuit consisting of an crystal oscillator and a counter. The crystal oscillator creates a steady 60 or 50 Hertz signal. Then created signal is divided down using a counter circuit in order to create a binary number. Then this number is converted to the time format desired (12 hour or 24 hour format) and sent to the display. Digital clocks are used in nearly everywhere in our lives, such as ovens, cars, phones, televisions, computers, radios, industrial timers.
A digital clock. Image Source (Royalty free)
How to Install DS3232RTC Library to Arduino IDE
1. Download the library from.
2. Open Arduino IDE. Then go to and select Sketch->Include Library->Add .ZIP Library.
3. Choose DS3232RTC-master.zip file and click to Open.
An Example Project Using DS3232RTC Arduino Library
I'll show you step by step how to use the DS3232RTC Arduino library with a simple example. In this example project, we will make a real time digital clock with an Arduino microcontroller, and a Maxim Integrated DS3232 or DS3231 Real-Time Clock module. In order to make a real time digital clock with an Arduino, first we’ll have to get the time from the real time clock module. Then we need to print the time in certain intervals to the serial monitor screen. Connections for DS3232 real time clock module to Arduino is shown below.
If you use this library in your other projects please tell in the comments.
Connection diagram for DS3232 real time clock module to Arduino. Made with Fritzing.
1. Open a new sketch and save it as "Real_Time_Clock”.
2. To add our library to our sketch, type #include <DS3232RTC.h> at the beginning of the code.
3. First we need to add the setup commands. In the void setup() function, add the code that starts serial communication in 9600 baud rate. Then type the library function to get the time from the RTC module. Add an if statement afterwards, which will trigger if the Arduino cannot sync up with the real time clock module and prints the situation to serial monitor. Add an else statement and make it print that the system time is set.
4. After setup is done, we need to create a function that returns the time and prints the time to serial monitor. Name the void returning function “displayTime”. Add the codes below which prints the time to serial monitor.
5. Then we need to add the digit function that that prints preceding colon and leading 0, as used in the “displayTime” function. Add the codes below.
6. In the void loop() function, add the function that we created in order to print the real world time to serial monitor. Then add an 1 second interval between the time printings using delay command. You can change the interval as desired.
7. Click “Verify” and then “Upload” in order to compile and execute your codes. You should get a readings screen like this. Make sure your COM port and board setting is set right.
Serial monitor screen showing real time.
Conclusion
In this tutorial I’ve shown how to install “DS3232RTC” Arduino library, written by GitHub user “JChristensen” to Arduino IDE, showing how to use the library functions with an example, while giving information about what digital clocks are and their working principle.
I hope that you enjoyed this tutorial and the information that I’ve given. Thank you for reading.
If you want more information about the library and the source use the link below.
Github:
Code
#include <DS3232RTC.h> void setup() { Serial.begin(9600); // Starts serial communication in 9600 baud rate. setSyncProvider(RTC.get); // Library function to get the time from the RTC module. if(timeStatus() != timeSet) Serial.println("System Time Cannot be Set. Check Connections."); else Serial.println("System Time is Set."); } void loop() { displayTime(); // Prints the time to serial monitor. delay(1000); // 1 second interval between prints. } void displayTime() // Function that prints the time to serial monitor. { Serial.print(hour()); displayDigits(minute()); displayDigits(second()); Serial.print('/'); Serial.print(day()); Serial.print('/'); Serial.print(month()); Serial.print('/'); Serial.print(year()); Serial.println(); } void displayDigits(int digits) // Function that prints preceding colon and leading 0. { Serial.print(':'); if(digits < 10) Serial.print('0'); Serial.print(digits); }
Posted on Utopian.io - Rewarding Open Source Contributors
@drencolha, Approve is not my ability, but I can upvote you.
It's a powerfull.. | https://steemit.com/utopian-io/@drencolha/ds3232-real-time-clock-module-setup-arduino-library-use-shown-with-an-example-project-tutorial | CC-MAIN-2020-45 | refinedweb | 1,014 | 57.47 |
Hi,
I am trying to build a small app that has video in it and need to be able to get current video time real time. Value would be used to update chart, i.e. I want to move vertical
shapes line based off current time.
I found this thread which should give me what I need but
Events have been removed from latest Dash version so it doesn’t work.
import dash import dash_core_components as dcc import dash_html_components as html import plotly.graph_objs as go external_stylesheets = [''] app = dash.Dash(__name__, external_stylesheets=external_stylesheets) app.layout = html.Div(children = [ html.Div(children = [ html.Video( controls = True, id = 'movie_player', ), ]), html.Div(children = [ dcc.Graph( id = 'mean_chart', figure = { 'data': [go.Scatter( x = [0,1,2,3,4,5,6,7,8,9,10], y = [0,1,2,4,5,4,3,5,3,2,1], )], 'layout': go.Layout( xaxis = { 'title': 'Second', 'type': 'linear', }, yaxis = { 'title': 'Average score', }, shapes = [{ 'type': 'line', 'x0': 3, 'y0': -1, 'x1': 3, 'y1': 6, },] ), }, ), ]), ]) if __name__ == '__main__': app.run_server(debug = True)
Any suggestions? | https://community.plotly.com/t/getting-video-current-time-in-dash-app/23921 | CC-MAIN-2020-45 | refinedweb | 174 | 68.47 |
Many of you will be familiar with uWSGI and typically use it as an application or web server for your Python apps, à la Flask or Django.
But did you know that uWSGI has WAY more in store?
After spending more time with uWSGI and digging through the documentation, I've come to understand why it's called the uWSGI project...
Task queues, cron jobs, file/directory monitoring, threads, spools, locks, mules, timers & more.. All with a simple Python decorator!
The uWSGI functionality is vast and ranges from extremely low to high level, however in this guide I'm going to give you an introduction to some of the awesome decorators available in this package using Flask.
Installing uWSGI
Before installing uWSGI, I highly recommend you create a virtual environment in a new directory and activate it:
python -m venv env --prompt UWSGI source env/bin/activate
Install Flask and uWSGI with
pip:
pip install flask uwsgi
Flask skeleton
With our dependencies installed, we can build our Flask application skeleton in
run.py:
from flask import Flask, request app = Flask(__name__) @app.route("/") def index(): return "Hello world" if __name__ == "__main__": app.run()
uWSGI config file
We can start our app on the commend line with the
uwsgi command as pass it some arguments, but for simplicity, we'll create a configuration file called
app.ini:
[uwsgi] strict = true http = :8080 wsgi-file = run.py callable = app master = true processes = 4 threads = 8
We're going to be running our app on localhost, port
8080.
Feel free to change the number of processes/threads to better match your machine specs. We're going to come back to this file shortly.
Running the app with uWSGI
To run the application with uWSGI, simply call the
uwsgi command and pass it the name of the configuration file:
uwsgi app.ini
You should be able to go to and see
Hello world in the browser.
To stop uWSGI, simply hit
Ctrl + c in the terminal.
uWSGI decorators
uWSGI comes with a range of useful decorators which become available ONLY when running with uWSGI.
This means you're unable to use these decorators when running your application with the Flask development server.
So that we don't have to keep on stopping and starting our app from the command line, we can add something to our
app.ini file to reload our app as we change it.
Note - This should only be used in development
Open up
app.ini and add the following:
py-autoreload = 2
This will watch our application for changes every 2 seconds.
To access the uWSGI decorators, we need to import them. We'll import everything for now but feel free to only import the individual functions:
from uwsgidecorators import *
We'll also import
time:
import time
@timer
The first decorator we're going to look at is
timer.
This decorator allows us to execute a function at regular intervals:
@timer(3) def three_seconds(num): print("3 seconds elapsed")
We set the interval in seconds by passing the number of seconds to the
@timer decorator.
After running the app and waiting for 10 seconds or so, you'll see the following output in the terminal:
3 seconds elapsed 3 seconds elapsed 3 seconds elapsed 3 seconds elapsed 3 seconds elapsed 3 seconds elapsed
This function will keep on running at regular 3 second intervals for as long as your application is running.
@filemon
The
filemon decorator will execute a function every time a file or directory is modified.
We're going to create a directory named
log containing a file called
test.log in the same directory as
run.py and
app.ini:
mkdir log cd log touch test.log
We'll create 2 decorated functions using the
filemon decorator. One to watch a file and one to watch a directory:
@filemon("log/test.log") def file_has_been_modified(name): print("test.log has been modified") @filemon("log") def directory_has_been_modified(name): print("The log directory has been modified")
With the app running, go ahead and edit
test.log. You'll see output in the terminal:
test.log has been modified
Adding or removing a file/directory in the
log directory will trigger the
directory_has_been_modified function:
The log directory has been modified
@cron
The
cron decorator allows us to easily register cron jobs.
We'll create a cron job to run every minute:
@cron(-1, -1, -1, -1, -1) def cron_every_minute(num): print("Running cron on the minute")
And another cron job to run at 5:30 every day:
@cron(30, 17, -1, -1, -1) def cron_at_five_thirty(num): print("it's 17:30pm. Time to go home!")
Fortunately for us, it's just turned 17:30pm and our cron job has just ran:
Running cron on the minute! Running cron on the minute! it's 17:30pm. Time to go home! Running cron on the minute!
@mulefunc
Mules can be considered as a primitive task queue, allowing us to offload tasks to a mule to be executed in the background, allowing our application to return a response and have the mule handle the task.
Before we can use the
mulefunc decorator, we need to declare it in
app.ini:
mule = true
There's lots of interesting things we can do with mules, however in this example we're just going to create the one. Read more about mules here
To create a mule function, decorate a function with the
@mulefunc decorator, passing any arguments into the function itself.
We'll create a simple
mulefunc that takes an integer as an argument:
@mulefunc def mule(num): for i in range(num): print(i) time.sleep(1)
We'll also create a new route in our app to trigger the mule:
@app.route("/mule") def add_mule(): num = request.args.get("n", None) if num: mule(int(num)) return "Mule"
We can trigger the
mulefunc by sending a query string in the URL with a value for
n:
Sending a request to this url will return the text
Mule immidiately, whilst the fucntion is executed in the background.
In the terminal, you'll see:
0 [pid: 7750|app: 0|req: 1/4] 127.0.0.1 () {40 vars in 913 bytes} [Wed Apr 10 17:45:41 2019] GET /mule?n=8 => generated 4 bytes in 2 msecs (HTTP/1.1 200) 2 headers in 78 bytes (1 switches on core 0) 1 2 3 4 5 6 7
@spool
The uWSGI spooler is a task queue/manager that works like many other popular task queue systems, allowing us to return a response whilst a task is offloaded to be processed in the background.
A spooler works by defining a directory that "spool files" are written to. Spool functions are then ran when the spooler finds a file in the directory.
As with mules, there's lots of advanced things you can do with spoolers and we're only going to cover the basics. To learn more, read the uWSGI spooler docs.
Spooling has a fev advantages over mules including:
- Spooled tasks will be restarted/retried if uWSGI crashes or is stopped as task information is stored in files
- Spooled tasks are not limited to a 64 kb parameter size
- Spoolers offer generally more flexibility and configuration
To work with the spooler, we first need to create the "spool file". We'll call it
tasks:
mkdir tasks
We then need to tell uWSGI about our spool file. We can do so in our
app.ini file:
spooler = tasks
With the directory created and configuration file updated, we can use the
@spool decorator.
We'll start by creating a basic spool that doesn't require any arguments when called:
@spool def a_basic_spool_function(args): print(args) for _ in range(5): print("Spool task triggered with no args") time.sleep(0.5)
You'll notice we've passed
args to the function, we'll cover that shortly.
We'll create a new route to trigger the spooler:
@app.route("/spool") def add_spool(): a_basic_spool_function.spool() return "Spooled"
You'll notice we're calling
a_basic_spool_function.spool() without passing in any arguments.
Go to to trigger the spooler and keep an eye on the terminal.
The value for
args:
{'spooler_task_name': 'uwsgi_spoolfile_on_jnwt_8029_1_844293360_1554916124_156079', 'ud_spool_func': 'a_basic_spool_function', 'ud_spool_ret': '-2'}
Information about the request:
[pid: 8029|app: 0|req: 1/1] 127.0.0.1 () {40 vars in 908 bytes} [Wed Apr 10 18:08:44 2019] GET /spool => generated 7 bytes in 4 msecs (HTTP/1.1 200) 2 headers in 78 bytes (1 switches on core 0)
The spool function output:
Spool task triggered with no args Spool task triggered with no args Spool task triggered with no args Spool task triggered with no args Spool task triggered with no args
To pass arguments to a
@spool function, we can add
pass_arguments=True and pass in any values supported by the
pickle module.
Let's create another function that takes an
int as an argument. We'll use the
/spool route to trigger it:
@spool(pass_arguments=True) def background_task(num): print("Background task triggered with args") for i in range(num): print(i) time.sleep(1)
@app.route("/spool") def add_spool(): background_task.spool(5) return "Spooled"
Trigger the function by heading to
/spool. You'll see in the terminal:
Background task triggered 0 1 2 3 4 [spooler /mnt/c/wsl/projects/pythonise/tutorials/flask_series/ep_26_uwsgi_decorators/decorators/tasks pid: 8555] done with task uwsgi_spoolfile_on_jnwt_8574_1_490298766_1554918176_882267 after 5 seconds
The route returned an immidiate response whilst our function was executed in the background.
We can in fact pass any kind of Python object to a spool function, providing they can be pickled:
@spool(pass_arguments=True) def spool_with_args(*args, **kwargs): print(args) print(kwargs) print("Background task triggered") for i in range(5): print(i) time.sleep(1)
We can trigger the spooled function with:
@app.route("/spool") def add_spool(): spool_with_args.spool(name="uwsgi", data=["a", "b", "c"], dt=datetime.datetime.utcnow()) return "Spooled"
Accessing the route will print the following:
() {'name': 'uwsgi', 'data': ['a', 'b', 'c'], 'dt': datetime.datetime(2019, 4, 10, 20, 12, 54, 757350)} Background task triggered 0 1 2 3 4
When passing arguments to a spooled function, some arguments have a special meaning and must be bytes:
spooler: specify the absolute path of the spooler that has to manage this task
at: unix time at which the task must be executed (the task will not be run until the
attime is passed)
priority: this will be the subdirectory in the spooler directory in which the task will be placed, you can use that trick to give a good-enough prioritization to tasks (for better approach use multiple spoolers)
Spooler priority
One of the nice things about spoolers is the ability to set a simple priority queue, using numbers to indicate the priority.
Providing a
priority argument will give order to the spooler parsing, creating numbered directories in your "spool file", each containing their respective tasks.
To setup a priority queue, we need to add a couple more options to our uWSGI
ini config:
spooler-ordered = true spooler-frequency = 3
Priority queues only work when
spooler-ordered is enabled, allowing the spooler to scan the directories in alphabetical order (The spooler will do its best to maintain the priority order)
spooler-frequency isn't required, but will activate the spooler after
n seconds if any tasks aren't executed.
For now, we'll just create a simple
spool fucntion and call it from the
/spool route:
@spool(pass_arguments=True) def some_task(*args, **kwargs): print(args) print(kwargs) time.sleep(2) print(kwargs.get("name"), "done!")
In our route, we'll call the
spool function 4 times, setting a priority for each call:
@app.route("/spool") def add_spool(): some_task.spool(name="No priority", data=[9, 7, 8]) some_task.spool(priority=b"3", name="Priority 3", data={"foo": "bar"}) some_task.spool(priority=b"2", name="Priority 2", data=["a", "b", "c"]) some_task.spool(priority=b"1", name="Priority 1", data=[1, 2, 3]) return "Spooled"
You'll notice we've provided the special
priority parameter, with a binary version of the priority we wish to assign to the task, with descending priority.
When we request this route, the
spool functions will be called and a directory will be created for each level of priority within the
tasks directory (the "spool directory" we created earlier).
The spooler will do its best to run the spooled functions in order of priority, but it can't be guaranteed (from my initial testing)
Accessing this route, we see the following output:
{'name': 'Priority 3', 'data': {'foo': 'bar'}} Priority 3 done! {'name': 'No priority', 'data': [9, 7, 8]} No priority done! {'name': 'Priority 1', 'data': [1, 2, 3]} Priority 1 done! {'name': 'Priority 2', 'data': ['a', 'b', 'c']} Priority 2 done!
Not quite in the order or priority, but I'm sure there's something I'm missing (this was just after some initial testing)
Another area I'm having mixed results is with the spool function return values.
Looking through the documentation, we have an option to return 3
My initial testing and thoughts:
import uwsgi @spool(pass_arguments=True) def spool_ok(*args, **kwargs): time.sleep(2) print("Spool OK") return uwsgi.SPOOL_OK @spool(pass_arguments=True) def spool_retry(*args, **kwargs): time.sleep(2) print("Spool retry") return uwsgi.SPOOL_RETRY @spool(pass_arguments=True) def spool_ignore(*args, **kwargs): time.sleep(2) print("Spool ignored") return uwsgi.SPOOL_IGNORE
My idea was to call each spool function, expecting the spool file for
spool_retry to remain in the spool file:
@app.route("/spool") def add_spool(): spool_ok.spool(id="1") spool_retry.spool(id="2") spool_ignore.spool(id="3") return "Spooled"
However after missing something in the documentation, I found out that we can use the
@spoolraw decorator to control the return values of a spool!
@spoolraw
To control the return value of a spool, we can use the
spoolraw decorator, returning 3 possible - If multiple languages are loaded in the instance all of them will fight for managing the task. This return values allows you to skip a task in specific languages
Let's re-run the same tests as above using the
spoolraw decorator:
@spoolraw(pass_arguments=True) def spool_ok(*args, **kwargs): time.sleep(2) print("Spool OK") return uwsgi.SPOOL_OK @spoolraw(pass_arguments=True) def spool_retry(*args, **kwargs): time.sleep(2) print("Spool retry") return uwsgi.SPOOL_RETRY @spoolraw(pass_arguments=True) def spool_ignore(*args, **kwargs): time.sleep(2) print("Spool ignored") return uwsgi.SPOOL_IGNORE
Calling the functions:
@app.route("/spool") def add_spool(): spool_ok.spool(id="1") spool_retry.spool(id="2") spool_ignore.spool(id="3") return "Spooled"
And now, as expected:
spool_ok- Ran succesfully and the spool file was removed
spool_retry- Ran but returned the retry signal. The spool file was kept and retried every 3 seconds (the
spooler-frequencywe set in the
inifile)
spool_ignore- Was ignored and the spool file remained, producing the following output every 3 seconds
Spool ignored unable to find the spooler function, have you loaded it into the spooler process ?
Which makes sense as we told uWSGI to ignore it.
These options make it easy for us to retry a task if a condition isn't met or there's an exception in the function, for example:
@spoolraw def critical_function(path): try: compress_images(path) except Exception as e: log_error(e) return uwsgi.SPOOL_RETRY return uwsgi.SPOOL_OK
@spoolforever
Need a function to run forever? use the
@spoolforever decorator.
@spoolforever def forever_and_ever(args): print(args) for i in range(10): print(i) time.sleep(0.5)
Calling it from our route:
@app.route("/spool") def add_spool(): forever_and_ever.spool() return "Spooled"
The
forever_and_ever function will now run forever, even after stopping and starting the application.
If you need to remove a
spoolforever task, you'll have to delete the spool file found in the spool folder.
@thread
The
thread decorator can be used to execute a function in a separate thread.
To enable threading, you must add it as an option in your
ini file or pass it to
uwsgi as an argument on the cli:
enable-threads = true # or threads = <n>
If you're following along, we already set a value for
threads in
app.ini.
Let's decorate 3 functions with the
@thread decorator and call them from the
index route:
@thread def thread_func_a(): for i in range(5): print("Thread a running") @thread def thread_func_b(): for i in range(5): print("Thread b running") @thread def thread_func_c(adj): for i in range(5): print("Thread c running with args ", adj)
We'll call the functions from the
index route:
@app.route("/") def index(): thread_func_a() thread_func_b() thread_func_c("AWESOME!") return "Hello world"
Upon requesting the route, we see the following output:
Thread a running Thread a running Thread b running Thread c running with args AWESOME! Thread c running with args AWESOME! Thread c running with args AWESOME! Thread b running Thread c running with args AWESOME! Thread c running with args AWESOME! Thread b running Thread b running
@postfork
The
postfork decorator allows us to decorate functions that will be executed when uWSGI forks the application.
From the uWSGI docs:
"uWSGI is a preforking (or “fork-abusing”) server, so you might need to execute a fixup task after each fork(). The postfork decorator is just the ticket. You can declare multiple postfork tasks. Each decorated function will be executed in sequence after each fork()."
For example, you may want to reconnect to a database after forking:
@postfork def connect_to_db(): # Connect to the database after fork db.connect() print("Connected to db after fork")
Any functions decorated with
@postfork will be executed sequentially. Let's add another one:
@postfork def second_postfork_func(): print("Running second postfork function")
When we first startup our app, uWSGI will fork based on how many
processes we set in the
ini file:
Connected to db after fork Running second postfork function spawned uWSGI worker 1 (pid: 11087, cores: 8) Connected to db after fork Running second postfork function spawned uWSGI worker 2 (pid: 11096, cores: 8) Connected to db after fork Running second postfork function spawned uWSGI worker 3 (pid: 11105, cores: 8) Connected to db after fork Running second postfork function spawned uWSGI worker 4 (pid: 11114, cores: 8)
@lock
The
lock decorator will execute a function in a fully locked environment.
From the uWSGI docs:
"This decorator will execute a function in fully locked environment, making it impossible for other workers or threads (or the master, if you’re foolish or brave enough) to run it simultaneously."
To create a
lock function, simply decorate it with
@lock:
@lock def locked_function(): print("Concurrency is for fools!")
We'll call it from the
index route:
@app.route("/") def index(): locked_function() return "Hello world"
Requesting the
index route, we see:
Concurrency is for fools!
To better illustrate the
@lock decorator, we can combine it with the
@timer decorator:
@timer(2) @lock def locked_function(num): print("Concurrency is for fools!", time.time())
locked_function as expected will run every 2 seconds and print
Concurrency is for fools! to the terminal:
Concurrency is for fools! 1554936572.0891757 Concurrency is for fools! 1554936574.0893521 Concurrency is for fools! 1554936576.08939 Concurrency is for fools! 1554936578.0899732 Concurrency is for fools! 1554936580.0901859
However, if we modify the function to include a delay:
@timer(2) @lock def locked_function(num): time.sleep(4) print("Concurrency is for fools!", time.time())
The
timer will attemp to run
locked_function every 2 seconds, but due to the
@lock decorator and addition of adding a 4 second delay, the funcion is not ran and instead has to wait for the delay to finish.
We can see this is the terminal output:
Concurrency is for fools! 1554936725.4628816 Concurrency is for fools! 1554936729.4640946 Concurrency is for fools! 1554936733.4651072 Concurrency is for fools! 1554936737.4664443 Concurrency is for fools! 1554936741.4673338
If you have a function that must not be called by any other process,
@lock is your friend.
Other decorators
Some other interesting decorators, out of the scope of this guide include:
@hakari(n)- kill a worker if the given call is taking too long
@rpc('x')- Used for remotely calling functions using the uWSGI RPC stack
@signal(n)- Registers signals for the uWSGI signal framework
Be sure to read the uWSGI decorator docs here
Wrapping up
This guide was just to introduce you to some of the useful decorators available in uWSGI and I highly recommend you check out the documentation, have a play around and do some testing for yourself.
Also, you may want to check out this awesome package/repo for working with many of the uWSGI tasks: | https://pythonise.com/series/learning-flask/exploring-uwsgi-decorators | CC-MAIN-2021-17 | refinedweb | 3,430 | 53 |
Overview of the tasks
The process involves a number of steps:
- Configure and enable cookie-based authentication
- Configure Protected resources
- Secure your credentials
- Store the credentials
- Create a login form
Configure Cookie-based Authentication
The next step is to enable the correct middleware in the request pipeline
- Create a new Razor Pages application named AuthenticationSample (
dotnet new webappfrom the command line). If you are using Visual Studio to create the application, ensure that Authentication is left at "No Authentication".
- Add a new folder to the Pages directory, named Admin.
- Add a new Razor Page to the Admin folder named Index. If you are using VS Code, you can do this by executing
dotnet new page -o Pages/Admin -n Index -na AuthenticationSample.Pages.Adminfrom the terminal.
- Change the code in Index.cshtml to read as follows:
- Run the application and navigate to(where
xxxxrepresents the port number the application is running on). You should be able to reach the page you just created without any issues.
- In Startup.cs, add
using Microsoft.AspNetCore.Authentication.Cookies;to the top of the file.
- Change the
ConfigureServicesmethod so that it looks like this:
In this step, you configured Cookie-based authentication, setting the login page to the home page. Then you configured the Admin folder to prevent unauthorised users being able to access anything in it.
- Add
app.UseAuthentication();to the
Configuremethod, just before
app.UseMvc();. This step adds authentication middleware to the pipeline so that it is made available to the application. Without this, log in attempts will fail.
Now if you re-run the application and try to navigate to
/admin, you should be redirected to the home page, with an extra query string value in the
URL:
You have successfully enabled cookie-based authentication and protected a folder with it.
Securing Credentials
Instead of storing
credentials in a database, you will store them in the standard appSetting.json
file. However, just as you wouldn't store your password in a database in plain
text, you don't want to store it in a text file in plain sight either. So you
will use hashing to protect the password. That way, if anyone can access your appSettings
file, your password will be safe. You want the hash to be cryptographically
robust. Rather than concocting your own hashing algorithm, you should use one
that is written by experts who know what they are doing. There is a
PasswordHasher class in ASP.NET Identity that does the job perfectly. So you will create a console application that leverages it
to hash the password that will be stored in appSettings .
- Create a new .NET Core console application using VS, VS Code or the command line. It doesn't matter what you call it. This is a disposable utility.
- Add the Microsoft.AspNetCore.Identity package to it -
dotnet add package Microsoft.AspNetCore.Identityfrom the command line, or
install-package Microsoft.AspNetCore.Identityfrom the Package Manager Console in VS.
- Change the code in Program.cs to look like this, supplying your own password where applicable. Note that subsequent attempt to match it will be case-sensitive:
- Run the application to check that a hashed value was generated. Keep the application for later use.
You may notice if you run the application multiple times that it generates a different value each time for the same password. This is because the algorithm appends a "salt" to the password for hashing, ensuring that the hash will differ each time for the same password. This makes it infeasible for hackers to crack the actual password.
The
PasswordHasher constructor and the
HashPassword methods both take a generic parameter, representing the current user. The type parameter is not used by the default implementation. It has been made available for custom implementations. I have instantiated the
PasswordHasher with a
string type, and then passed
null into the
HashPassword method.
Storing Credentials
In this section, you will store your user credentials in a custom section in the appSettings.json file. You will also create a class representing the user so that you can work with the configuration values in a strongly typed manner.
- Create a folder named Models in the root of the application.
- Add a new C# class file to the Models folder and name it SiteUser.cs.
- Replace the existing content with the following:
- Add the
SiteUsersection to the appSettings.json file with the user name that you want to use, and the hashed password that you generated earlier:
Creating The Login Form
- Change the Index.cshtml.cs file in the Pages folder (not the one you created in the Admin folder) so that the content looks like this:
You have added bound properties for the user name and password, which you have decorated with the
DataTypedata annotation attribute, setting its value to
Password. This is so that the input tag helper generates to correct type of input.
You have injected
IConfigurationinto the
PageModelso that you can use it to resolve the user credentials from appSettings. Having checked that the user name matches whatever was posted, you then instantiate an instance of the
PasswordHasherthat you used in the console application, and passed the submitted password along with the hashed version obtained from the configuration file to its
VerifyHashedPasswordmethod.
Internally, this method unwraps the hashed value, extracting the salt, which is used to hash the submitted password. If the resulting hash matches the stored hash, the method returns a
PasswordVerificationResultenumeration set to
Success.
- Change the Index.cshtml file in the Pages folder to include the following log in form:
This is a standard form that makes use of Bootstrap 4. It has its method set to
postso that credentials are not passed in the query string.
Now when you run the application and log in with the correct credentials, you should automatically be redirected to the admin page.
Summary
This walkthrough showed how to secure a simple application robustly without needing to store credentials in a database or use the data access side of the Identity framework. You have still used some extremely useful APIs provided by Identity to secure your credentials using code written by experts. You have used the same API to match submitted values to what has been stored. You have also seen how easy it is to include and configure cookie-based authentication, and to secure areas of the application from non-authenticated users. | https://www.mikesdotnetting.com/article/335/simple-authentication-in-razor-pages-without-a-database | CC-MAIN-2019-47 | refinedweb | 1,065 | 55.44 |
0
hi, have a little problem using fstream. first here is the code...
#include<iostream> #include<fstream> using namespace std; int main() { string word; ofstream outl("biscuit.txt", ios::out); cin>>word; outl<<word; system("pause"); return 0; }
using this, i can type a word into the input screen and it saves whatever word i typed into the biscuit.txt file. now my problem is... how do i make it save a whole sentence (as many words as i wish... i think it has to do with the
string ). secondly, how do i write a program that can read from the file and print it on the screen? and also how do i add data into the file without overwriting the previous data...( the book i'm using isnt too clear on that)
thanks
p.s.
is using
ifstream and
ofstream the same as using
fin and
fout ? cuz i cant find enough info about
fin or
fout anywhere | https://www.daniweb.com/programming/software-development/threads/47224/fstream-help | CC-MAIN-2017-39 | refinedweb | 159 | 84.37 |
Hi I'm trying to create RFT scripts with their own objects maps and call custom methods.
I can do this if I create them in the same folder, i.e. Script1.vbs contains method I wrote testLogin which Script2.vbs instantiates the class Script1 that contains the public method testLogin
Directory structure:
<PROJECT>
|-Script1
|-Script2
|-wrappers
|-Script1
code below: works
Public Class Script2
Inherits Script2Helper
Public Function TestMain(ByVal args() As Object) As Object
Dim script2 As Script1 = New Script1
script2.testLogin()
Return Nothing
End Function
End Class
However I tried and created same Script1.vbs under a "wrappers" folder and referenced the correct Namespace
but it gives the following error during compilation:
Error message:
[
C:\Users\sean.lee-loy\Documents\Visual Studio 2005\Projects\OLISTEST\OLISTEST\Script2.vb(29,0): Type 'wrappers.Script1' is not defined.
]
Public Class Script2
Inherits Script2Helper
Public Function TestMain(ByVal args() As Object) As Object
Dim script2 As wrappers.Script1 = New wrappers.Script1
script2.testLogin()
Return Nothing
End Function
End Class
I even tried importing Namespace and still gave the same error.
In Java, this referencing of methods in other RFT scripts contained in different folders (packages) works.
code for Script1 in "wrappers" folder: which is essentially the same as the one in the root folder just without Namespace declaration.
Namespace wrappers
Public Class Script1
Inherits Script1Helper
'Script Name : Script1
'Generated : Jul 12, 2012 10:36:08 AM
'Description : Functional Test Script
'Original Host : Windows 7 x86 6.1 build 7601 Service Pack 1
'since 2012/07/12
'author sean.lee-loy
Public Function TestMain(ByVal args() As Object) As Object
' TODO Insert code here
Return Nothing
End Function
Public Function testLogin() As Object
StartApp("iexplore")
Link_Facilities().WaitForExistence()
Link_Facilities().Click()
Return Nothing
End Function
End Class
End Namespace
Topic
Pinned topic Accessing methods from one VB .NET RFT script diff folder
2012-07-12T14:43:42Z |
Updated on 2012-11-14T15:05:30Z at 2012-11-14T15:05:30Z by Sean_lee_loy
Re: Accessing methods from one VB .NET RFT script diff folder2012-07-12T15:08:01Z
This is the accepted answer. This is the accepted answer.I found an old thread in 2008: which says RFT has problems resolving Namespaces is this still an issue in RFT 8.2.2?
4. Namespaces defined in your RFT project don't quite work right. I found this is related to number 3 above. The problem is that the IDE behaves as if the project is developed under a root namespace with the same name as the project. Therefore, any new namespaces defined within the project are treated as children of that root namespace. The compiler, on the other hand, does not actually use the /rootnamespace switch (and there is no way within the IDE to specify that it should) and therefore all namespaces defined in the project are compiled as their own root. This causes problems when trying to use "Imports" lines for namespaces defined within the project. For example, say I have an RFT project named "Bob", and within it I have a utility class that defines a the new namespace "Frank". If I want my script to access that utility class I would put the following line in the header of the script:
Imports Bob.Frank
As far as the IDE is conserned, this is correct and no errors are detected at design-time. However when you compile this script, you get errors because as far as the compiler is concerned there is no "Bob" namespace. If you change the line to this:
Imports Frank
It will compile with no problems, however the IDE will now throw a fit and report a bunch of design-time errors, and you will get no intellisense support for that namespace.
- SystemAdmin 110000D4XK16727 Posts
Re: Accessing methods from one VB .NET RFT script diff folder2012-10-21T13:37:21Z
This is the accepted answer. This is the accepted answer.
- Sean_lee_loy 270005CHTN
- 2012-07-12T15:08:01Z
Re: Accessing methods from one VB .NET RFT script diff folder2012-11-14T15:05:30Z
This is the accepted answer. This is the accepted answer.
- SystemAdmin 110000D4XK
- 2012-10-21T13:37:21Z | https://www.ibm.com/developerworks/community/forums/html/topic?id=77777777-0000-0000-0000-000014855765 | CC-MAIN-2017-13 | refinedweb | 691 | 64.3 |
These are chat archives for dry-rb/chat
@solnic, @AMHOL Hi!
@marshall-lee and me have recently written
dry-initializer gem with a simple DSL for fast definition:
require 'dry-initializer' class User extend Dry::Initializer parameter :name, default: 'Unknown', type: String # dry-types will be supported as well option :type, default: -> { 'admin' }, type: String end user = User.new 'Joe', type: 'customer'
which is the same as:
class User attr_reader :name, :type def initialize(name = 'Unknown', type: 'admin') @name = name @type = type # ... type constraints follows here end end
Could you please create a
dry-initializer repo for the gem and give us corresponding rights?
I'd prefer to push it on place from the very beginning.
dry-constructorbecause the scope of changes is big
dry-initializerwas extracted from my work in which should be rewritten to use
dry-initialize. may I import it as a
dry-function?
dryrb
Hi guys!
Having such piece of code:
require 'i18n' require 'dry-validation' class UserSchema < Dry::Validation::Schema configure { config.messages = :i18n } key(:email) {|email| email.filled? > email.password_valid? } def password_valid?(email) false end end schema = UserSchema.new p schema.call(email: '', password: 19).messages # {}
it returns valid, whereas as I understand the docs, it should at least tell me that the email must be filled. Is it correct behaviour?
@blelump as far as I can tell, you need to change
> to
&:
key(:email) {|email| email.filled? & email.password_valid? }
>, I beleive, is for high-level rules (which depend on values of different keys)
rule(:email_absence) do value(:login).false? > rule(:email).none? end
uhm, perhaps such piece of code would explain what I'm trying to achieve:
key(:email) {|email| email.filled? } key(:password, &:filled?) rule(:user_active) do value(:email).account_active? end rule(:valid_password) do value(:email).password_valid? end
Basically the idea is to perform a few checks, where each check depends on the previous one. I might follow the @achernik way and put it in such way:
email.filled? & email.password_valid? & email.account_active?, however in such case the validation result always concerns the
Perhaps you guys have a better approach for that?
value(:email).none?and
rule(:email).none?
rule(:foo).bar?checks if
bar?predicate was successfuly applied, w/o applying it again
valuesyntax
rulesyntax is tricky so you might have issues with getting it right
i guess this ?
# pseudocode, no idea if this is right rule(:credentials_match) do value(:email).filled? & value(:email).email? & value(:password).filled? & value(:password).valid_password?(value(:email)) end
btw, is this the right way? :)
key(:email, &:filled?)and
key(:password, &:filled?)are valid. Then (but only if these params are filled) I'd like to perform another validation logic, with some checking , e.g.
rule(password: :valid) do (rule(:email).email? & rule(:password).filled?) > value(:password).valid? end
errors.password.valid
>is an alias to
then
rule(:email).email?.and(rule(:password).filled?)).then(…)
@solnic ,
key(:password) {|pass| pass.filled?} key(:email) {|email| email.filled?} rule(:user_active) do (rule(:email).filled? & rule(:password).filled?) > value(:email).user_active? end def user_active?(email) false end
gives:
dry-logic-0.1.2/lib/dry/logic/rule/result.rb:5:in `[]': no implicit conversion of Symbol into Integer (TypeError)
@solnic , I've tried various chains, e.g:
rule(:user_active) do rule(:email).filled?.and(rule(:password).filled?).then(rule(:email).user_active?).then(rule(:email).user_valid?) end
or
rule(:user_active) do rule(:email).filled?.and(rule(:password).filled?).then(rule(:email).user_active?) end rule(:user_valid) do rule(:user_active).then(rule(:email).user_valid?) end
but it doesnt work. I'm lost now
rulewill only be used to define a high-level rule
value
OK , great !
I'll try then to not complicate my validations until next release is out
def password_valid?(email); end– can I access password as well somehow ?
rule(:foo, password_valid?: [:email, :password])
password_valid?: [:email, :password], it fires | https://gitter.im/dry-rb/chat/archives/2016/02/19?at=56c7415d7a66b5965f68e66f | CC-MAIN-2019-39 | refinedweb | 639 | 52.36 |
My language of choice is Python. I don’t evangelize it a lot, and not just because Python in general doesn’t market itself very much, or because I feel that language cheerleading isn’t a particularly productive use of my time. I also don’t evangelize it much because I feel like I’m not the sort of person to turn to for advice on choosing a programming language; I’m just a jumped-up liberal-arts kid who never took an algorithms class in college and is still playing catch-up on that front.
But every once in a while I feel like I should explain how I got here, and why I’m writing in Python, instead of PHP or Perl or Ruby or Java or C# or any other language that’s applicable to the domain I work in. So here goes.
Entry point
I didn’t grow up hacking. I learned BASIC as a kid (in fact, I learned a few different flavors of it at various points), and when I was in the second grade I learned Logo and used it to draw amusing pictures. But outside of brief encounters, programming wasn’t a part of my formative experiences. A lot of this had to do with the economic situation I grew up in; aside from a third-hand Apple II, neither I nor anyone else in my family had owned a computer until I was in my second year of college. Now, sitting on my couch with one computer in my lap and four more strewn around my apartment, I can’t help feeling that I’m living in unbelievable luxury; I could, if I wanted to, throw one of them away and buy a new one. What affluence!
So it wasn’t until I was in college that I had a chance to really learn much about computers. I started, ignominiously, with HTML, which I learned almost on a whim. I had to produce a web page for a class I was taking, and HTML didn’t look too difficult, so I dove in.
Naturally, I didn’t stop with the web page I was supposed to produce for the class; now that I knew how to write HTML, I took advantage of the free web space my college provided to all its students, and set up my very first personal web site. I had an irregularly-updated journal which I managed by hand, and I was proud of myself.
The next step, logically, was to learn a programming language I could use to automate the management of that site.
In those days, there were two choices for an amateur who wanted to dabble in web programming: Perl/CGI and PHP. I spent a little time looking at examples of both before I made my choice, but it wasn’t a particularly difficult choice to make.
Why PHP won
Most of the “serious” web programmers I came into contact with back then wrote Perl, and tended to look down their noses at PHP. Looking at it now I can sort of understand why, but I also understand why they all kept a working knowledge of PHP on their résumés: PHP was winning over the Web. Perl was a more mature language and certainly a more robust language, but Perl was losing. It had been the language of web programming for quite a while, but still it was losing.
The reason is simple: PHP was easy. You just wrote your code, threw in a little bit of HTML, tacked a
.php on the end of the filename and uploaded it. Whereas Perl, even at its best, required you to learn a few things about CGI and web servers before it would work on most hosting accounts.
And that’s putting it nicely. In more accurate terms, PHP stomped the ever-lovin’ crap out of Perl for a first-time web programmer. Perl was originally designed as a glue language to do text processing on Unix systems, and only later grew CGI functionality and server integration. PHP was originally designed to do web programming, and integrated into Apache in a way that even a mentally-impaired chimpanzee could understand. In the hearts and minds of a generation of new web developers, one of whom was me, Perl didn’t stand a chance.
My web hobby continued to be mostly a hobby for the remainder of my college years. I learned CSS, and I learned SQL and I learned JavaScript for the first time (and hated every second of it), and my little personal site grew lots of interesting new capabilities. Most of my disposable income was being spent on ways to impair the computer inside my head, so I never bought a real domain or real web hosting; when I graduated and didn’t have free hosting from the college anymore, I set up Apache on my personal computer (I’d been converted to Linux by that point), got a
dyndns.org subdomain and kept on chugging. Once or twice, people paid me to build sites for them, but they never involved any serious programming.
Broadening horizons
I did eventually learn Perl, but I never really liked it. I usually attributed that to the fact that PHP had been so much easier, but as time went by I came to understand the real reason why I didn’t like it, a reason which was also behind my eventual break with PHP.
Right around the time I graduated, I started learning Python. Given that my first foray into JavaScript had been fairly shallow and I’d been using PHP 4 for web stuff, this was my first experience with real object-oriented programming. OOP was easy for me to pick up; it was an abstraction, and a useful, sensible abstraction to boot. Just for kicks I did learn some Java on the side, to get a feel for a “pure” object-oriented language, but I didn’t really enjoy that and don’t like to admit to it now.
Python was also my first experience with a language that allowed you to do functional programming, and my first exposure to the concept of functional programming in general. So I read up on it. I learned about Alonzo Church and the lambda calculus; I learned about Haskell Curry and the various things named after him; and I worked through a few basic Lisp tutorials.
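For readers who haven’t bumped into it, here’s a minimal, illustrative sketch of the functional constructs Python exposes out of the box — anonymous functions with `lambda`, higher-order functions like `map` and `filter`, and a fold via `reduce`. The data and names are my own example, not anything from the original post.

```python
# Functions in Python are ordinary values: they can be built anonymously,
# passed to other functions, and combined.
from functools import reduce

numbers = [1, 2, 3, 4, 5, 6]

# map and filter take a function as their first argument.
squares = list(map(lambda n: n * n, numbers))        # [1, 4, 9, 16, 25, 36]
evens = list(filter(lambda n: n % 2 == 0, numbers))  # [2, 4, 6]

# reduce folds the sequence down to a single value, here a sum of squares.
sum_of_squares = reduce(lambda acc, n: acc + n * n, numbers, 0)

print(squares, evens, sum_of_squares)
```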
The more I learned, the more I wanted to learn. I read as much as I could understand of the fundamentals of computer science (my degree in philosophy ended up doing a lot of good there; a philosophy major doesn’t have very many direct applications to the real world, but a solid grounding in logic and formal systems really paid off in this particular case), and particularly in the philosophies behind different programming languages. The guiding principles behind the various languages I’d learned started to stand out more clearly, and even though I started recognizing the flaws in Python I kept right on liking it.
And I kept wishing I could use it for the web stuff I was doing (by this point I’d quit my customer-service job and was freelancing full-time). Some guys in the local LUG were making good money building stuff with Zope, but it didn’t feel right to me, and I didn’t have the time or the energy to invest in learning it; Zope was such a mind-bogglingly huge thing that the only way to get a handle on it was to start memorizing pieces and hope they’d eventually come together and make sense.
Finally, something clicked.
Memorize this
One day a little light went off in my head, and I understood why I didn’t like Perl or PHP, and part of why I liked Python and a few of the other languages I’d dabbled with. Much later I was reading Steve Yegge’s rant about Perl and got to this bit, which explained it far better than I could ever hope to:
Every operator in Perl (not just the Range operator) has six different behaviors depending on the invisible context in which its surrounding expression is being evaluated. When you evaluate a hash in a string context, for instance, you get back a string that’s a fraction, such as “7/10”, the numerator of which represents the current number of keys in the hash, and the denominator of which represents of total number of buckets allocated. And so on, and so forth. There are no rules, no heuristics, no patterns. It’s pure memorization.
He’s right that a huge amount of learning Perl is memorization: committing all the operators and contexts and special global dollar-sign-plus-Chinese-character variables to memory and learning how to make them work together. And even though I’m fairly good at it, I hate learning things by memorization. I like to understand things, to know what makes them work and how their parts fit together; I don’t like hearing “oh, just assign this value to the the magic global
$make_it_work variable, and remember that for next time”.
This also explained a large part of why I didn’t like PHP, because PHP is almost as bad. A lot of its problems on the memorization front could be solved by either A) implementing namespaces or B) choosing a consistent naming scheme for built-in functions or C) preferably both, but that’s probably never going to happen because the existing system has way too much inertia.
All of this is part of why I use Python.
Why Python, part 1
Python asks for very little in the way of memorization. The core language is small and the built-ins have sensible, consistent names. The standard library is large, but everything is in module namespaces, and you can be an advanced and very productive Python programmer without having to memorize every symbol exported from every standard module — knowing a few important modules well, and having a good general understanding of what the rest of them do, is enough to take you a long, long way.
Yearning to be free
I’d been hearing rumblings for a little while about something called “Ruby on Rails”, and how it basically kicked the crap out of everything else in the web-development world the way PHP used to, and I was intrigued. So I started learning Ruby and picking up Rails.
This was a pretty interesting experience, because it organized and cemented a lot of things I hadn’t really understood previously, like metaprogramming. Rails really isn’t “Ruby” so much as it’s a domain-specific language for web programming which happens to be implemented in Ruby. And Ruby as a language is exceptionally well-suited to writing domain-specific languages. It was the first time I’d seen that sort of “bottom-up” approach in the real world, and it was an eye-opener.
Unfortunately, from a professional perspective I was still stuck firmly with PHP and Perl, and getting more and more stuck every day. I’d built a reputation on being handy with a couple of niche CMS products, and business generated from that reputation was paying my bills and putting food on my table, so I couldn’t just dive headlong into something else. And, of course, the Rails hype machine and the associated bevy of companies willing to throw money at people with any sort of Rails knowledge hadn’t yet kicked into full gear; if the timing had been different, I would now be writing Ruby by day and snorting coke off the bellies of hookers by night.
And even though Ruby and Rails were unarguably cool, I still really liked Python.
And all that jazz
In the summer of 2005, a new Python web framework popped up, and immediately people started comparing it to Rails. I set it up on my testing machine (by this point I was rich enough to afford two computers!) and started fiddling with it, and I was almost immediately hooked. There were some un-Pythonic things about it, and there was a fair amount of “magic” in it (both have been dealt with since), but I realized I’d finally found what I was looking for. I could finally use Python on the Web.
The rest, of course, is history. I got involved in the Django community, one thing led to another, and now here I am, working at the Journal-World, writing Python full-time, maintaining a couple branches of Django and loving every minute of it.
But playing with Django was the final little nudge that made me understand the other thing I’d always liked about Python, even though I’d never been completely conscious of it before. It tends to get glossed over a lot, but it’s probably the single most important feature of Python.
It makes sense
When I worked in PHP and Perl, I was mostly working with a couple of off-the-shelf open-source CMS products, which I learned to adapt to various tasks. In PHP I was using Textpattern, and in Perl I was using Scoop.
Of the two, Scoop is by far the more massive, and I was extremely fortunate in mostly being a subcontractor for a couple of the people who develop it; when a client needed Scoop to be hacked up, I didn’t have to do the heavy lifting. And that’s good because, even though Scoop’s Perl is pretty clear and straightforward, it’s still a massive system written in Perl, and massive systems written in Perl are not things that you can just dive into the source of and start understanding.
Textpattern is smaller and simpler, and is written in PHP. PHP can be written in an obscure and confusing manner, but Textpattern isn’t, or wasn’t the last time I looked at it; most of it is clean, well-written code. But figuring out what was going on inside Textpattern still took me a little while, and wasn’t as easy as I’d have liked it to be.
When I was learning Rails, I took a few peeks under the hood; the face it presented to an end user was so tightly organized that I figured its internals would be pretty easy to work out once I got over a few speed bumps related to my inexperience to Ruby. I figured wrong. As it turns out, even some of Rails’ biggest evangelists and core people will tell you that its source is not for the faint of heart. There’s so much metaprogramming and, worse, monkeypatching of monkeypatching, that just figuring out where you are in the code can take a good long while.
A little while back I wrote an entry which walks through all the steps Django takes in processing a request; I wrote that mostly because someone asked for it, and until I started writing I had only a rough, general idea of what went on inside Django (if that sounds bad, keep in mind that Django was, at the time, nearing the end of a major rewrite, and huge swathes of code had changed).
That entry took about four hours to write. That’s including the time I spent writing, and the time I spent poking around in the code to see what was going on at every step. Given all the stuff it does, Django’s internals are amazingly simple and clear, and you can easily carry the whole thing around in your head.
Why Python, part 2
Get ten programmers in a room and ask them to rank the things they value most in a programming language, and you’ll get ten different sets of rankings. Some value conciseness above everything else. Others value expressiveness (which isn’t the same thing) more. Others look for strong standard libraries. Still others will go on about the merits of different philosophies of type handling, or availability of multiple implementations or all sorts of other things.
If you’ve got a Python programmer in that hypothetical room, odds are that at or near the top of her list will be “readability”.
Pretty much from the beginning, Guido has stressed that Python should be a language that’s easy to read (and, by extension, easy to understand), and several features of the language itself work to enforce that. On top of that, the official style guide seems to be fairly widely used, and there is a general sense in the community that, outside of things like “obfuscated Python” contests, code that’s clever at the expense of readability is bad.
And with the sole exception of Zope, pretty much every significant piece of Python code I’ve ever looked at has been clear, consistent, and relatively easy to follow along with. When I’ve had to stop and take a while to think something through, the language has rarely been the cause; it’s almost always been the fact that whatever I was looking at was dealing with some genuinely hard and complex problems, and so understanding the code required a corresponding understanding of those problems.
It’s an interesting variation on one of the Perl slogans: in Python the easy things are easy to read, and the language doesn’t make the hard things unreadable. To someone who’s constantly trying to learn and understand newer and harder things, that’s enough to forgive almost any other sin of language design (and Python does commit several of those, but on the whole it’s surprising how little there is about it that needs forgiveness, compared to other languages I’ve tried).
It may not be right for everyone, and I won’t ever try to argue that it is, but that’s why Python is right for me. That’s why I’m writing Python instead of PHP or Perl or Ruby or Java or C# or any other language that’s applicable to the domain I work in. It took a long time, and it took some radical changes in how people think about web development, but I’ve finally gotten to where I can do this, and it makes me happy. | https://www.b-list.org/weblog/2006/oct/16/how-i-got-here/ | CC-MAIN-2019-04 | refinedweb | 3,068 | 62.31 |
.
Alexa coding works on "intents" - the following is a simple intent. That is, you can only ask the skill one thing. No state is maintained, no multiple commands to get right, no complexity. This gets information from a single source and speaks it.
Code
The basic Python is pretty simple and can be adapted to query almost any basic JSON API. Let me walk you through it.
All the boilerplate needed to set up the skill:
import logging from operator import itemgetter import requests from flask import Flask from flask_ask import Ask, statement app = Flask(__name__) ask = Ask(app, '/') logger = logging.getLogger() @ask.launch def launch(): return stats()
The API that you want to call. This is a basic JSON API which doesn't require authentication.
ENDPOINT = ""
Here's the main part of the Skill. When the intent is triggered, call the API. Get the data and format it for speech. If there was an error, tell the user.
Make sure that the name of the intent is identical to the one you set up in the Alexa Developer console.
@ask.intent("BatteryIntent") def stats(): r = requests.get(ENDPOINT) data = r.json() if r.status_code == 200: percent = data['battery/amphours'] speech = "Your Moixa battery is at " + str(int(round(percent))) + " percent capacity right now." else: speech = "There was a problem connecting to the battery." logger.info('speech = {}'.format(speech)) return statement(speech)
Short and easy.
Deploying
Again, Amazon don't make it easy to deploy Alexa skills - here's a guide to getting started using Zappa.
Remember, Alexa is not AI. You must painstakingly type in all the "utterances" that you think your users might say to activate the skill.
Problems
There is one major problem with retrofitting Alexa skills.
Lots of Internet connected devices have no ability to log in remotely - and certainly don't have the OAuth systems that Amazon demands. Alexa has no ability to directly connect to IP addresses on its own subnets.
This means most skills will need hard-coded credentials - and a way to traverse into your network.
In short, this means means they can't be shared on the Amazon store. | https://shkspr.mobi/blog/2017/11/solar-battery-and-alexa-in-30-lines-of-code/ | CC-MAIN-2019-09 | refinedweb | 358 | 67.86 |
On Wednesday 05 June 2002 18:45, you wrote:
> getattr(file, name)
is this possible on windows? i thought you could only do that on UNIX
systems... well, that's the case in php at least. please correct me if i'm
wrong.
could you maybe show the python code for handling the upload? i've written a
few upload scripts as well, maybe we can share them ;-)
i usually use the req object too. like:
def insertFile(req):
form = req.form
fileObject = form['fileInputField']
fileData = fileObject.file.read()
open('/some/path/name/file','wb').write(fileData)
well, this is the very simple fileUploadThing | | https://modpython.org/pipermail/mod_python/2002-June/012651.html | CC-MAIN-2022-05 | refinedweb | 104 | 70.09 |
Container Classes#
Qt’s template-based container classes.
Introduction#
The Qt library provides a set of general purpose template-based container classes. These classes can be used to store items of a specified type. For example, if you need a resizable array of
QString s, use
QList <.
The containers provide iterators for traversal. STL-style iterators are the most efficient ones and can be used together with Qt’s and STL’s generic algorithms . Java-style Iterators are provided for backwards compatibility.
Note
Since Qt 5.14, range constructors are available for most of the container classes.
QMultiMap is a notable exception. Their use is encouraged in place of the various from/to methods. For example:
list = { 1, 2, 3, 4, 4, 5 } set = QSet(list.begin(), list.end()) /* generate = Will() */
The Container Classes#
Qt provides the following sequential containers:
QList ,
QStack , and
QQueue . For most applications,
QList is the best type to use. It provides very fast appends. If you really need a linked-list, use std::list. containers are defined in individual header files with the same name as the container (e.g.,
<QList>). For convenience, the containers are forward declared in
<QtContainerFwd>.
The values stored in the various containers can be of any assignable data type. To qualify, a type must provide a copy constructor, and an assignment operator. For some operations a default constructor is also required. ‘s(Employee other) Employee operator=(Employee other) # private myName = QString() myDateOfBirth = QDate():
class Movie(): id = int() title = QString() releaseDate = QDate():
operator<< = QDataStream(QDataStream out, Movie movie) out << (quint32)movie.id << movie.title << movie.releaseDate return out operator>> = QDataStream(QDataStream in, Movie movie) id = quint32() date = QDate() in >> id >> movie.title >> date movie.id = (int)id movie.releaseDate = date return in
The documentation of certain container class functions refer to default-constructed values; for example,
QList automatically initializes its items with default-constructed values, and.
The Iterator Classes#
Iterators provide a uniform means to access items in a container. Qt’s container classes provide two types of iterators: STL-style iterators and Java-style iterators. Iterators of both types are invalidated when the data in the container is modified or detached from implicitly shared copies due to a call to a non-const member function.
STL-Style Iterators#
QList and
QStack , which store their items at adjacent memory positions, the
iterator type is just a typedef for
T *, and the
const_iterator type is just a typedef for
const T *.
++operator advances the iterator to the next item, and the
*operator returns the item that the iterator points to. In fact, for
In this discussion, we will concentrate on
QList and
QMap . The iterator types for
QSet have exactly the same interface as
QList ‘s iterators; similarly, the iterator types for
QHash have the same interface as
QMap ‘s iterators.
Here’s a typical loop for iterating through all the elements of a
QList <
QString > in order and converting them to lowercase:
list = QList() list << "A" << "B" << "C" << "D" QList<QString>::iterator i for i in list: i = (i).toLower() list containing four items:
Iterating backward with an STL-style iterator is done with reverse iterators:
list = QList() list << "A" << "B" << "C" << "D" QList<QString>::reverse_iterator i for i in list: i = i.toLower()
In the code snippets so far, we used the unary
QString ) stored at a certain iterator position, and we then called
toLower() on it. Most C++ compilers also allow us to write
i->toLower(), but some don’t.
*operator to retrieve the item (of type
For read-only access, you can use const_iterator,
constBegin() , and
constEnd() . For example:
QList<QString>::const_iterator i for i in list: print(i)
The following table summarizes the STL-style iterators’ API:
The
+.
++and
--operators are available both as prefix (
For non-const iterator types, the return value of the unary
*operator can be used on the left side of the assignment operator.
For
QMap and
QHash , the
QMap to the console:
*operator returns the value component of an item. If you want to retrieve the key, call key() on the iterator. For symmetry, the iterator types also provide a value() function to retrieve the value. For example, here’s how we would print all items in a
int> = QMap<int,() ... QMap<int, int>::const_iterator i for i in map: print(i.key(), ':', i.value())
Thanks to implicit sharing , it is very inexpensive for a function to return a container per value. The Qt API contains dozens of functions that return a
QList or
QStringList per value (e.g.,
sizes() ). If you want to iterate over these using an STL iterator, you should always take a copy of the container and iterate over the copy. For example:
# RIGHT sizes = splitter.sizes() QList<int>::const_iterator i for i in sizes: ... #:
a, = QList() a.resize(100000) # make a big list filled with 0. QList<int>::iterator i = a.begin() # WRONG way of using the iterator i: b = a /* we = Now() If we do i = 4 then we would change the shared instance (both vectors) behavior = The() */ a[0] = 5 /* a = Container() and even though i was an iterator from the container a, it now works as an iterator in b. Here the situation is that (i) == 0. */ b.clear() # Now the iterator i is completely invalid. j = i # Undefined behavior! /* data = The(which i pointed to) This would be well-defined with STL containers (and (i) == 5), with = but() */
The above example only shows a problem with
QList , but the problem exists for all the implicitly shared Qt containers.
Java-Style Iterators#
Java-Style iterators were introduced in Qt 4. Their API is modelled on Java’s iterator classes. New code should should prefer STL-Style Iterators .
Container keywords#
The foreach Keyword#
The foreach keyword is discouraged, new code should prefer C++11 range-based loops.
The forever keyword.#
In addition to
foreach, Qt also provides a
forever pseudo-keyword for infinite loops:
forever { ...
If you’re worried about namespace pollution, you can disable these macros by adding the following line to your
.pro file:
CONFIG += no_keywords
Note
The alternative macros
Q_FOREACH and
Q_FOREVER remain defined regardless.
Qt containers compared with std containers#
Qt containers and std algorithms#
You can used Qt containers with functions from
#include <algorithm>.
list = { 2, 3, 1 } std::sort(list.begin(), list.end()) /* Sort the list, now contains { 1, 2, 3 } */ std::reverse(list.begin(), list.end()) /* Reverse the list, now contains { 3, 2, 1 } */ even_elements = std::count_if(list.begin(), list.end(), [](int element) { return (element % 2 == 0); }) /* how = Count() */
Other Container-Like Classes#
Qt includes other template classes that resemble containers in some respects. These classes don’t provide iterators and cannot be used with the
foreach keyword.
-
QCache<Key, T> provides a cache to store objects of a certain type T associated with keys of type Key.
-
QContiguousCache<T> provides an efficient way of caching data that is typically accessed in a contiguous way. std::list is an extremely fast operation, irrespective of the number of items stored in the list. On the other hand, inserting an item in the middle of a
QList is potentially very expensive if the
QList
push_back().
-
Logarithmic time: O(log n). A function that runs in logarithmic time is a function whose running time is proportional to the logarithm of the number of items in the container. One example is the binary search algorithm.
-
Linear time: O(n). A function that runs in linear time will execute in a time directly proportional to the number of items stored in the container. One example is the sequential container
QList <T>:List ,
QHash , and
QSet , the performance of appending items is amortized O(log n). It can be brought down to O(1) by calling
reserve() ,
reserve() , or
reserve() with the expected number of items before you insert the items. The next section discusses this topic in more depth.
Optimizations for Primitive and Relocatable Types#
Qt containers can use optimized code paths if the stored elements are relocatable or even primitive. However, whether types are primitive or relocatable cannot be detected in all cases. You can declare your types to be primitive or relocatable by using the
Q_DECLARE_TYPEINFO macro with the flag or the flag. See the documentation of
Q_DECLARE_TYPEINFO for further details and usage examples.
If you do not use
Q_DECLARE_TYPEINFO , Qt will use std::is_trivial_v<T> to identify primitive types and it will require both std::is_trivially_copyable_v<T> and std::is_trivially_destructible_v<T> to identify relocatable types. This is always a safe choice, albeit of maybe suboptimal performance.
Growth Strategies#
QList <T>,
QString , and
QByteArray store their items contiguously in memory; :
onlyLetters = QString(QString in) out = QString() for j in range(0, in.size()): if (in[j].isLetter()) out += in[j] return out
We build the string
out dynamically by appending one character to it at a time. Let’s assume that we append 15000 characters to the
QString string. Then the following 11 reallocations (out of a possible 15000) occur when
QString runs out of space: 8, 24, 56, 120, 248, 504, 1016, 2040, 4088, 8184, 16376. At the end, the
QString has 16376 Unicode characters allocated, 15000 of which are occupied.
The values above may seem a bit strange, but there is a guiding principle. It advances by doubling the size each time. More precisely, it advances to the next power of two, minus 16 bytes. 16 bytes corresponds to eight characters, as
QString uses UTF-16 internally.
QByteArray uses the same algorithm as
QString , but 16 bytes correspond to 16 characters.
QList <T> also uses that algorithm, but 16 bytes correspond to 16/sizeof(T) elements.
QHash <Key, T> is a totally different case.
QHash ‘s internal hash table grows by powers of two, and each time it grows, the items are relocated in a new bucket, computed as
qHash (key) %
capacity() (the number of buckets). This remark applies to
QSet <T> and
QCache <Key, T> as well.
For most applications, the default growing algorithm provided by Qt does the trick. If you need more control,
QList memory. | https://doc-snapshots.qt.io/qtforpython-dev/overviews/containers.html | CC-MAIN-2022-21 | refinedweb | 1,679 | 56.66 |
Red Hat Bugzilla – Bug 120690
Printconf doesn't handle IPP URIs
Last modified: 2008-08-02 19:40:32 EDT
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.4) Gecko/20040116
Description of problem:
This also happens on Redhat 9.
If an IPP queue was set up in CUPS outside of the Printconf framework,
the whole printconf subsystem fails catastrophically when trying to
import the configuration. It looks like the code in cups_import.py is
unable to recognize URIs of the form ipp://... even though these are
very common in CUPS environment.
Version-Release number of selected component (if applicable):
redhat-config-printer-0.6.47-1
How reproducible:
Always
Steps to Reproduce:
1. Create an IPP queue manually
2. Run printconf
3. Get python errors
Actual Results: Crash
Expected Results: Successful import of IPP queues and printconf runs
normally.
Additional info:
Apparently happens on FC2 as well.
It looks like a GUI version of redhat-config-printer -
redhat-config-printer-gui supports IPP printers. So a workaroud would
be to use it instead of TUI version. But not every time it is possible
to use X.
This bug is actually mostly pertaining to the case when you are NOT
using any of the redhat tools to create the queue. Printconf then
doesn't do its error handling properly.
Note that the crash happens outside of any GUI (in the printconf backend).
*** Bug 82892 has been marked as a duplicate of this bug. ***
redhat-config-printer-tui does not allow you to create CUPS IPP print queues. I
believe this is a big since many print servers run without, this will be fixed in FC6:
1. arbitrary URIs are allowed in the graphical interface
2. remote configuration over IPP is supported | https://bugzilla.redhat.com/show_bug.cgi?id=120690 | CC-MAIN-2017-17 | refinedweb | 299 | 60.01 |
Written By Adam Keogh,
Edited By Mae Semakula-Buuza
Tue 10 July 2018, in category Aws
In this article, we’ll cover how to use AWS Lambda and Amazon CloudWatch to automatically backup your EC2 servers. Snapshots are a cheap way to back up your servers and contain all the information required to restore data to a new EBS volume.
AWS Lambda offers us the ability to execute code written in a language of our choice, so for this we will use Python to write a script which takes snapshots (as well as deleting older ones).
We will then make a rule on CloudWatch which uses a cron schedule to execute this function every night.
Before making a lambda function, it’s necessary to create an IAM (Identity and Access Management) role. This will define what)
Create and Delete Snapshots
Create and Delete Tags:CreateSnapshot", "ec2:DeleteSnapshot", "ec2:CreateTags", "ec2:DeleteTags", "ec2:ModifySnapshotAttribute" ], "Resource": [ "*" ] } ] }
To label which EC2 Instances we want to snapshot – we will use tags.
Key: ‘auto_snapshot’and
Value: true(see screenshot below).
Now, navigate to the AWS Lambda Management Console. Then select
Create Function > Author from Scratch. Name your function, choose Python 3.6 as the runtime, finally for roles select
Choose an Existing Role (and select the role we made earlier).
Boto is the Amazon Web Services SDK for Python, it allows us to write code that interacts with AWS Services. See docs here.
Below is the function code, paste this into the visual interface of your lambda function.
import boto3 import collections import datetime import time import sys today = datetime.date.today() today_string = today.strftime('%Y/%m/%d') delete_after_days = 2 # Delete snapshots after this many days # Except after Monday (at Tuesday ~1am), since Friday is only 2 'working' days away: if datetime.date.today().weekday() == 1: delete_after_days = delete_after_days + 2 deletion_date = today - datetime.timedelta(days=delete_after_days) deletion_date_string = deletion_date.strftime('%Y/%m/%d') ec2 = boto3.client('ec2') regions = ec2.describe_regions().get('Regions',[] ) all_regions = [region['RegionName'] for region in regions] def lambda_handler(event, context): snapshot_counter = 0 snap_size_counter = 0 deletion_counter = 0 deleted_size_counter = 0 for region_name in all_regions: print('Instances in EC2 Region {0}:'.format(region_name)) ec2 = boto3.resource('ec2', region_name=region_name) # We only want to look through instances with the following tag key value pair: auto_snapshot : true instances = ec2.instances.filter( Filters=[ {'Name': 'tag:auto_snapshot', 'Values': ['true']} ] ) volume_ids = [] for i in instances.all(): for tag in i.tags: # Get the name of the instance if tag['Key'] == 'Name': name = tag['Value'] print('Found tagged instance \'{1}\', id: {0}, state: {2}'.format(i.id, name, i.state['Name'])) vols = i.volumes.all() # Iterate through each instance's volumes for v in vols: print('{0} is attached to volume {1}, proceeding to snapshot'.format(name, v.id)) volume_ids.extend(v.id) snapshot = v.create_snapshot( Description = 'AutoSnapshot of {0}, on volume {1} - Created {2}'.format(name, v.id, today_string), ) snapshot.create_tags( # Add the following tags to the new snapshot Tags = [ { 'Key': 'auto_snap', 'Value': 'true' }, { 'Key': 'volume', 'Value': v.id }, { 'Key': 'CreatedOn', 'Value': today_string }, { 'Key': 'Name', 'Value': '{} autosnap'.format(name) } ] ) print('Snapshot completed') snapshot_counter += 1 snap_size_counter += snapshot.volume_size # Now iterate through snapshots which were made by autsnap snapshots = ec2.snapshots.filter( Filters=[ {'Name': 'tag:auto_snap', 'Values': ['true'] } ] ) print('Checking for out of date snapshots for instance {0}...'.format(name)) for snap in snapshots: can_delete = False for tag in snap.tags: # Use these if statements to get each snapshot's # cleated on date, name and auto_snap tag if tag['Key'] == 'CreatedOn': created_on_string = tag['Value'] if tag['Key'] == 'auto_snap': if tag['Value'] == 'true': can_delete = True if tag['Key'] == 'Name': name = tag['Value'] created_on = datetime.datetime.strptime(created_on_string, '%Y/%m/%d').date() if created_on <= deletion_date and can_delete == True: print('Snapshot id {0}, ({1}) from {2} is {3} or more days old... deleting'.format(snap.id, name, created_on_string, delete_after_days)) deleted_size_counter += snap.volume_size snap.delete() deletion_counter += 1 print(' Made {0} snapshots totalling {1} GB\ Deleted {2} snapshots totalling {3} GB'.format(snapshot_counter, snap_size_counter, deletion_counter, deleted_size_counter)) return
Every time this function is run, it will do the following:
Create a snapshot for each instance that is tagged
auto_snapshot: true
Delete old snapshots which were made by this function (in this case we use the
delete_after_days variable to define old - we delete snapshots which were taken 2 or more days ago)
Log details of all actions taken to a cloudwatch folder (more on this later)
There is one final thing to change on this page, located under
Basic Settings, the default timeout for lambda functions is 3 seconds, but we will need to increase this (I used 59 seconds), otherwise our function will timeout before completing the snapshots correctly.
Click Save to save the function. (And having saved it, use the Test button to verify everything is working correctly. day at 12:05am UTC.
Press the
Configure Details button and then on the next page select
Create Rule.
Every time this lambda function runs, it will print a log to a folder in CloudWatch. On the AWS Management Console, navigate to
Services -> CloudWatch -> Logs.
The function we have just made is
auto_instance_snapshotting, shown below.
Opening this folder gives a series of logs streams, each one from a different time the function was run. I’ll open the most recent one, this shows all logs and print statements generated by our lambda function. If you tested your function earlier, you’ll be able to check the logs here to ensure it ran successfully.
A nice feature of CloudWatch is the alarms feature. It can be used to monitor metrics and send email notifications if user-defined threshold is breached. In this case, I have configured an alarm to email me if this function errors.
This can be done via
Services -> CloudWatch -> Alarms -> Create Alarm. You will be prompted to select a metric, find the metric by clicking on
Lambda Metrics -> By Function Name -> Your Function Name (Metric Name: Errors).
And on the following page, use the below settings to set up the email notification.
This function is scheduled to take daily snapshots every day of the week.
You might find it more appropriate to only snapshot on weekdays, etc. Just choose a different CRON schedule For example: 5 0 ? * TUE-SAT * will run every Tuesday-Saturday at 5 minutes past midnight.
You’ll also need to adapt your code so that it only takes working days into account in your python datetime calculations. | http://blog.keyrus.co.uk/backup_ec2_instances_automatic_snapshots.html | CC-MAIN-2019-51 | refinedweb | 1,062 | 56.55 |
Given a table which contains the table head and body section. The task is to prevent the text in a table cell from wrapping using CSS. To achieve this we use white-space property of CSS. This property forces the contents of th to display in one line. There are many property values exists to the white-space function.
Syntax:
white-space: normal|nowrap|pre|pre-wrap|pre-line;
Example 1: This example uses white-space property to prevent cell wrapping using CSS.
Output:
Before applying white-space property:
After applying white-space property:
Example 2: This example uses inline white-space property.
Output:
Before applying white-space property:
After applying white-space property:
Recommended Posts:
- How to prevent inline-block divs from wrapping ?
- BootStrap | Text Utilities (Alignment, Wrapping, Weight etc.)
- How to create equal width table cell using CSS ?
- How to wrap table cell <td> content using CSS ?
- How to prevent line breaks in the list of items using CSS?
- How to prevent parents of floated elements from collapsing in CSS?
- How to add tooltip to HTML table cell without using JavaScript ?
- How to create table cell using HTML5 ?
- How to create a header cell in a table using HTML5 ?
- How place a checkbox into the center of a table cell?
- How to prevent overriding using fake namespace in JavaScript ?
- How to prevent overriding using Immediately Invoked Function Expression in JavaScript ?
- Text Animation with changing the color of the text using HTML & CSS
- How to create long shadow of text without using text-shadow in HTML and CSS ?
- How to prevent column break within an element?
- How to prevent XSS with HTML/PHP ?
- Prevent System from Entering Sleep mode in ElectronJS
- How to prevent browser to remember password in HTML ?
- How to prevent dragging of ghost image ?
- How to add table row in a table. | https://www.geeksforgeeks.org/how-to-prevent-text-in-a-table-cell-from-wrapping-using-css/?ref=lbp | CC-MAIN-2020-45 | refinedweb | 306 | 59.8 |
Hi All I've got a bit of a weird one. I'm passing a string value my embedded python environment like this: m_main_namespace[fieldDetails.name.c_str()] = str(); Where fieldDetails.name == "TITLE". I am trying to convert the value of title to an integer inside the python code that is run in the embedded environment: intValue = int( TITLE[0:2] ) but the error I get is: "invalid literal for int()". Anyone know how I can solve this? Regards Paul Paul Grenyer Email: paul at paulgrenyer.co.uk Web: Have you met Aeryn:? Version 0.3.0 beta now available for download. | https://mail.python.org/pipermail/cplusplus-sig/2004-March/006666.html | CC-MAIN-2014-15 | refinedweb | 101 | 68.67 |
The QPen class defines how a QPainter should draw lines and outlines of shapes. More...
#include <qpen.h>
Inherits Qt.
List of all member functions.
A pen has a style, a width, a color, a cap style and a join style.
The pen style defines the line type. The default pen style is
Qt::SolidLine. Setting the style to
NoPen tells the painter to
not draw lines or outlines.
The pen width defines the line width. The default line width is 0, which draws a 1-pixel line very fast, but with lower precision than with a line width of 1. Setting the line width to 1 or more draws lines that are precise, but drawing is slower.
The pen color defines the color of lines and text. The default line color is black. The QColor documentation lists predefined colors.
The cap style defines how the end points of lines are drawn. The join style defines how the joins between two lines drawn when multiple, connected lines are drawn (QPainter::drawPolyLine() etc.). The cap and join styles apply only to wide lines, i.e. when the width is 1 or greater.
Use the QBrush class for specifying fill styles.
Example:
QPainter painter; QPen pen( red, 2 ); // red solid line, 2 pixel width painter.begin( &anyPaintDevice ); // paint something painter.setPen( pen ); // set the red, fat pen painter.drawRect( 40,30, 200,100 ); // draw rectangle painter.setPen( blue ); // set blue pen, 0 pixel width painter.drawLine( 40,30, 240,130 ); // draw diagonal in rectangle painter.end(); // painting done
See the setStyle() function for a complete list of pen styles.
About the end point of lines: For wide (non-0-width) pens, it depends on the cap style whether the end point is drawn or not. For 0-width pens, QPainter will try to make sure that the end point is drawn, but this cannot be absolutely guaranteed, since the underlying drawing engine is free to use any (typically accellerated) algorithm for drawing 0-width lines. On all tested systems, however, the endpoint of at least all non-diagonal lines are drawn.
See also QPainter and QPainter::setPen().
Examples: progress/progress.cpp desktop/desktop.cpp
Constructs a default black solid line pen with 0 width.
Constructs a pen black with 0 width and a specified style.
See also setStyle().
Constructs a pen with a specified color, width and styles.
See also setWidth(), setStyle() and setColor().
Constructs a pen with a specified color, width and style.
See also setWidth(), setStyle() and setColor().
Constructs a pen which is a copy of p.
Destructs the pen.
Returns the pen's cap style.
See also setCapStyle().
Returns the pen color.
See also setColor().
Returns the pen's join style.
See also setJoinStyle().
Returns TRUE if the pen is different from p, or FALSE if the pens are equal.
Two pens are different if they have different styles, widths or colors.
See also operator==().
Assigns p to this pen and returns a reference to this pen.
Returns TRUE if the pen is equal to p, or FALSE if the pens are different.
Two pens are equal if they have equal styles, widths and colors.
See also operator!=().
Sets the pen's cap style to c.
The default value is FlatCap. The cap style has no effect on 0-width pens.
Warning: On Windows 95/98, the cap style setting has no effect. Wide lines are rendered as if the cap style was SquareCap.
See also capStyle().
Sets the pen color to c.
See also color().
Examples: progress/progress.cpp
Sets the pen's join style to j.
The default value is MiterJoin. The join style has no effect on 0-width pens.
Warning: On Windows 95/98, the join style setting has no effect. Wide lines are rendered as if the join style was BevelJoin.
See also joinStyle().
Sets the pen style to s. Warning:
Warning:On Windows 95/98, the style setting (other than NoPen and SolidLine) has no effect for lines with width greater than 1.
See also style().
Sets the pen width to w.
See also width().
Examples: progress/progress.cpp
Returns the pen style.
See also setStyle().
Returns the pen width.
See also setWidth().
Writes a pen to the stream and returns a reference to the stream.
See also Format of the QDataStream operators
Reads a pen from the stream and returns a reference to the stream.
See also Format of the QDataStream operators
Search the documentation, FAQ, qt-interest archive and more (uses):
This file is part of the Qt toolkit, copyright © 1995-2005 Trolltech, all rights reserved. | http://doc.trolltech.com/2.3/qpen.html | crawl-002 | refinedweb | 765 | 78.85 |
16 July 2009 15:44 [Source: ICIS news]
(Recasts, adding detail in headline and lead)
LONDON (ICIS news)--INEOS’ 320,000 tonne/year polyethylene (PE) plant at ?xml:namespace>
“The plant went down some time over the weekend [11/12 July], as it was well into its high density PE (HDPE) campaign,” said the source. “We expect it to restart on Monday [20 July].”
The plant is a linear low density PE (LLDPE)/HDPE swing unit.
INEOS declared force majeure on one of its HDPE injection grades.
“This is just one particular grade of HDPE injection of many that we produce,” said the source.
PE availability has tightened considerably in
Prices in July increased by €100/tonne ($141/tonne), leaving gross low density PE (LDPE) levels at €1,000/tonne FD (free delivered) NWE (northwest
PE producers in
($1 = €0.71). | http://www.icis.com/Articles/2009/07/16/9233138/ineos-pe-down-at-grangemouth-uk-after-unexpected-outage.html | CC-MAIN-2013-20 | refinedweb | 141 | 69.72 |
[The bit puzzle results are based on data from Chad Brubaker and the saturating operation results are based on data from Peng Li. They are respectively an undergrad and a grad student in Utah’s CS program.]
Klee is a tool that attempts to generate a collection of test cases inducing path coverage on a system under test. Path coverage means that all feasible control flow paths are executed. It is a strong kind of coverage, but still misses bugs. One way to improve Klee would be to add support for different kinds of coverage metrics: weaker ones like statement coverage would scale to larger programs, and stronger ones such as boundary-value coverage would find more bugs in small codes.
A different way to improve Klee is to continue to target path coverage, but alter the definition of “path.” For example:
- When testing an x86-64 binary containing a cmov instruction, we could make sure to execute both its condition-true path and condition-false path.
- When testing the C expression foo(bar(),baz()), we could make sure to test evaluating foo() and bar() in both orders, instead of just letting the compiler pick one.
This piece proposes undefined behavior coverage, which simply means that for any operation that has conditionally-defined behavior, the well-defined and the undefined behaviors are considered to be separate paths. For example, the C expression 3/y has two paths: one where y is zero and the other where y is non-zero.
Obviously, undefined behavior coverage only makes sense for languages such as C and C++ that admit operations with undefined behavior. An undefined behavior, as defined by the C and C++ standards, is one where the language implementation can do anything it likes. The point is to make the compiler developers’ job easier — they may simply assume that undefined behavior never happens. The tradeoff is that the burden of verification is shifted onto language users.
Undefined behavior coverage makes sense for what I call type 2 functions: those whose behavior is conditionally well-defined.
An Example
Here’s a simple C function:
int add_and_shift (int x, int y, int z) { return (x+y)<<z; }
Due to C’s undefined behaviors, this function has a non-trivial precondition:
0 ≤ z < sizeof(int)*CHAR_BIT
INT_MIN ≤ x+y ≤ INT_MAX
(This is for ANSI C; in C99 the precondition is stronger and quite a bit more complicated, but we won’t worry about that.) If the precondition is not satisfied, the function’s return value is unpredictable. In fact, it’s a bit worse than that: as soon as the program executes an undefined behavior the C implementation is permitted to send email to the developer’s mother (though this hardly ever happens).
The point is that although shift_and_add() seems to admit a single path, it really has a number of additional paths corresponding to failed preconditions for its math operators. If we fail to test these paths, we can miss bugs. Since the precondition checks for math operators in C/C++ are pretty simple, we can just add them in an early phase of the compiler, and that’s exactly what Peng’s hacked version of Clang does.
Without undefined behavior checks, LLVM code for add_and_shift() looks like this:
define i32 @add_and_shift(i32 %x, i32 %y, i32 %z) nounwind readnone { entry: %add = add i32 %y, %x %shl = shl i32 %add, %z ret i32 %shl }
Obviously there’s just one path, and the test case that Klee picks to exercise this path is:
- x = 0, y = 0, z = 0
Next, we compile the same function with undefined behavior checks and run Klee again. This time we get four test cases:
- x = 0, y = 0, z = 0
- x = 0, y = 0, z = 64
- x = -2, y = INT_MIN, z = 0
- x = 2, y = 1, z = 0
The first three tests are exactly the kind of inputs we’d hope to see after looking at the precondition. The 4th input appears to follow the same path as the first. I don’t know what’s going on — perhaps it emerges from some idiosyncrasy of the checked code or maybe Klee simply throws in an extra test case for its own reasons.
Combining Klee with an undefined behavior checker causes Klee to generate additional test cases that — by invoking operations with undefined behavior — should shine some light into dark corners of the system under test. A potential drawback is that all the extra paths are going to cause the path explosion problem to happen sooner than it otherwise would have. However, this should not be serious since we can just run Klee on both versions of the code.
But this is all just talk. The real question is: does this method find more bugs?
Bit Puzzle Results
The first collection of code is several years’ worth of solutions to an early assignment in Utah’s CS 4400. I already discussed these, so I won’t repeat myself. For each bit puzzle, students receive a reference implementation (which they cannot simply copy since it doesn’t follow the rules for student solutions) and a simple test harness that runs their code against the reference implementation on some inputs, compares the results, and complains about any differences. For each of 10 bit puzzles we have 105 solutions written by students. The automated test suite determines that 84 of these 1050 solutions are faulty. In other words, they return incorrect output for at least one input. Differential testing with Klee finds seven additional buggy functions, for a total of 91.
When the students’ codes are augmented with checks for integer undefined behaviors, Klee finds more paths to explore. The test cases that it generates find the 91 incorrect functions that are already known plus 11 more, for a total of 102 buggy functions. Just to be perfectly clear: a buggy function is one that (after being compiled by GCC) returns the wrong output for an input in a test suite. We are not counting instances of undefined behavior as bugs, we are simply using Klee and the undefined behavior checker to generate a better test suite.
We were able to exhaustively test some of the bit puzzles. In these cases, exhaustive testing failed to find any bugs not found by differential Klee with undefined behavior coverage.
Saturating Operation Results
The second collection of code is 336 saturating math operations. In this case, the additional tests generated by Klee to satisfy undefined behavior coverage found no additional buggy functions beyond those found using differential whitebox testing. My hypothesis is that:
- The shift-related undefined behaviors in these functions always involved constant arguments, since shifts were used only to compute values like the maximum and minimum representable integer of a certain width. Since the arguments were constant, Klee had no opportunity to generate additional test cases.
- The addition and subtraction overflow undefined behaviors were compiled by GCC into modular arithmetic, despite the fact that this behavior is not guaranteed by the standard. This is a natural consequence of generating code using the x86 add and sub instructions. Modular arithmetic was the behavior that people (including me, as described in the previous post) wanted and expected. Therefore, undefined behavior coverage exposed no bugs. Modern C compilers sometimes compile math overflows in a non-modular way (for example, evaluating (x+1)>x to 1), but the saturating arithmetic functions — by chance — do not use code like that.
We were able to exhaustively test saturating operations that take chars (for 16 total bits of input) and short ints (for 32 total bits of input). In these cases, exhaustive testing failed to find any bugs not already found by differential Klee.
Conclusion
Undefined behavior coverage is a special case of a more interesting code coverage metric that I’ll describe in a subsequent post. We need to try Klee + undefined behavior coverage on some real applications and see what happens; I’m cautiously optimistic that it will be useful.
Thanks for the interesting post!
Related to differential whitebox testing, for Pex (), a dynamic symbolic execution engine for .NET from Microsoft Research, there is a web-version of “Pex for Fun”
There, for a Coding Duel game, Pex uses differential whitebox testing to detect and report behavioral differences between a secrete implementation and the player’s working copy, in order to guide the player to solve the Coding Duel.
Very interesting post. I think it is so important to have good tools to help detecting undefined behavior in code as the compilers I have worked with usually pretend this can’t happen, as you well said.
There seems to be an error in: “could make sure to test evaluating foo() and bar() in both orders”. bar and baz makes more sense.
Isn’t the precondition for the shift more like 0 ≤ z < sizeof(int)*CHAR_BIT? A 31 bit shift in a 32-bit int system is well defined, isn't it?
Out of curiosity, could you provide any pointer about what would be the preconditions for add_and_shift function when considering C99?
Pedro, of course you’re right. It’s fixed now, thanks!
A signed left-shift in C99 is defined only when the result is representable in the result type. In practice, this prohibits shifting any “1” bit into or past the sign bit. Basically all real C codes do this, of course.
Thank you, professor John.
By the way, the result of a signed right-shift of an odd negative integer is implementation-dependent, isn’t it?.
Of course this is not even close to be as bad as undefined behavior but will affect portability and could be detected by static analysis tools as well. | https://blog.regehr.org/archives/388 | CC-MAIN-2020-34 | refinedweb | 1,621 | 50.26 |
Mercurial > dropbear
changeset 1719:25b0ce1936c4
changelog for 2020.79
line diff
--- a/CHANGES Mon Jun 15 23:17:27 2020 +0800 +++ b/CHANGES Mon Jun 15 23:36:14 2020 +0800 @@ -1,3 +1,57 @@ +2020.79 - 15 June 2020 + +- Support ed25519 hostkeys and authorized_keys, many thanks to Vladislav Grishenko. + This also replaces curve25519 with a TweetNaCl implementation that reduces code size. + +- Add chacha20-poly1305 authenticated cipher. This will perform faster than AES + on many platforms. Thanks to Vladislav Grishenko + +- Support using rsa-sha2 signatures. No changes are needed to hostkeys/authorized_keys + entries, existing RSA keys can be used with the new signature format (signatures + are ephemeral within a session). Old ssh-rsa signatures will no longer + be supported by OpenSSH in future so upgrading is recommended. + +- Use getrandom() call on Linux to ensure sufficient entropy has been gathered at startup. + Dropbear now avoids reading from the random source at startup, instead waiting until + the first connection. It is possible that some platforms were running without enough + entropy previously, those could potentially block at first boot generating host keys. + The dropbear "-R" option is one way to avoid that. + +- Upgrade libtomcrypt to 1.18.2 and libtommath to 1.2.0, many thanks to Steffen Jaeckel for + updating Dropbear to use the current API. Dropbear's configure script will check + for sufficient system library versions, otherwise using the bundled versions. + +- CBC ciphers, 3DES, hmac-sha1-96, and x11 forwarding are now disabled by default. + They can be set in localoptions.h if required. + Blowfish has been removed. + +- Support AES GCM, patch from Vladislav Grishenko. This is disabled by default, + Dropbear doesn't currently use hardware accelerated AES. + +- Added an API for specifying user public keys as an authorized_keys replacement. + See pubkeyapi.h for details, thanks to Fabrizio Bertocci + +- Fix idle detection clashing with keepalives, thanks to jcmathews + +- Include IP addresses in more early exit messages making it easier for fail2ban + processing. Patch from Kevin Darbyshire-Bryant + +- scp fix for CVE-2018-20685 where a server could modify name of output files + +- SSH_ORIGINAL_COMMAND is set for "dropbear -c" forced command too + +- Fix writing key files on systems without hard links, from Matt Robinson + +- Compatibility fixes for IRIX from Kazuo Kuroi + +- Re-enable printing MOTD by default, was lost moving from options.h. Thanks to zciendor + +- Call fsync() is called on parent directory when writing key files to ensure they are flushed + +- Fix "make install" for manpages in out-of-tree builds, from Gabor Z. Papp + +- Some notes are added in DEVELOPER.md + 2019.78 - 27 March 2019 - Fix dbclient regression in 2019.77. After exiting the terminal would be left
--- a/debian/changelog Mon Jun 15 23:17:27 2020 +0800 +++ b/debian/changelog Mon Jun 15 23:36:14 2020 +0800 @@ -1,3 +1,9 @@ +dropbear (2020.79-0.1) unstable; urgency=low + + * New upstream release. + + -- Matt Johnston <[email protected]> Mon, 15 Jun 2020 22:51:57 +0800 + dropbear (2019.78-0.1) unstable; urgency=low * New upstream release.
--- a/sysoptions.h Mon Jun 15 23:17:27 2020 +0800 +++ b/sysoptions.h Mon Jun 15 23:36:14 2020 +0800 @@ -4,7 +4,7 @@ *******************************************************************/ #ifndef DROPBEAR_VERSION -#define DROPBEAR_VERSION "2019.78" +#define DROPBEAR_VERSION "2020.79" #endif #define LOCAL_IDENT "SSH-2.0-dropbear_" DROPBEAR_VERSION | https://hg.ucc.asn.au/dropbear/rev/25b0ce1936c4 | CC-MAIN-2021-43 | refinedweb | 543 | 58.48 |
Opened 11 years ago
Closed 11 years ago
#3775 closed (fixed)
mysql_old backend use mysql backend
Description
Attachments (1)
Change History (5)
Changed 11 years ago by
comment:1 Changed 11 years ago by
comment:2 Changed 11 years ago by
I underestand that mysql_old is compatible support for people who can't upgrade MySQLdb immediately. Without this patch I can't use django with mysql with older version than MySQLdb-1.2.1p2.
If I try to use mysql_old and run
python manage.py syncdb it will raise ImportError "MySQLdb-1.2.1p2 or newer is required;".
Problem is mysql_old 'introspection.py' imports
from django.db.backends.mysql.base import quote_name not
from django.db.backends.mysql_old.base import quote_name file.
comment:3 Changed 11 years ago by
Ah ... and the patch is for the directory django/db/backends/mysql_old! That makes sense, but it wasn't at all obvious.
Please provide a little bit more background when you write a ticket. As we don't see
what you're doing, it's sometimes hard to understand. Also, patches should contain
the full path from the django source directory, so there's no doubt about which directory
you mean.
I checked 'patch needs improvement' since the patch should really have the full path before
core developers deal with it.
Can you provide more info about this? why is this needed? what does it do? | https://code.djangoproject.com/ticket/3775 | CC-MAIN-2018-22 | refinedweb | 235 | 68.97 |
Due at 11:59pm on 02/18/2015.
Download lab.
Predict what Python will display when you type the following into the interpreter. Then try it to check your answers.
>>> x = [1, 2, 3] >>> x[0]______1>>> x[x[0]]______2>>> x[x[x[0]]]______3>>> x[3]______IndexError>>> x[-1]______3>>> x[-3]______1
Predict what Python will display when you type the following into the interpreter. Then try it to check your answers.
>>> x = [1, 2, 3, 4] >>> x[1:3]______[2, 3]>>> x[:2]______[1, 2]>>> x[1:]______[2, 3, 4]>>> x[-2:3]______[3]
As you may have noticed, Python has a convenient notation for slicing to retrieve part of a list. Specifically, we can write
[start:stop]to slice a list with two integers.
-
startdenotes the index for the beginning of the slice
-
stopdenotes the index for the end of the slice
Using negative indices for
startand
endbehaves in the same way as indexing into negative indices. In addition, slicing a list creates a new list, without modifying the original list.]
For each of the following, use element selection to get the number 7 from the particular list in the doctest. Don't worry about making this work for all lists.
def get_seven_a(x): """ >>> x = [1, 3, [5, 7], 9] >>> get_seven_a(x) 7 """"*** YOUR CODE HERE ***" return ______return x[2][1]def get_seven_b(x): """ >>> x = [[7]] >>> get_seven_b(x) 7 """"*** YOUR CODE HERE ***" return ______return x[0][0]def get_seven_c(x): """ >>> x = [1, [2, [3, [4, [5, [6, [7]]]]]]] >>> get_seven_c(x) 7 """"*** YOUR CODE HERE ***" return ______return x[1][1][1][1][1][1][0]
Write a function
reverse_recursive that takes a list and returns a
new list that is the reverse of the original. Use recursion! You may
also use slicing notation.
def reverse_recursive(lst): """Returns the reverse of the given list. >>> reverse_recursive([1, 2, 3, 4]) [4, 3, 2, 1] """"*** YOUR CODE HERE ***"if not lst: return [] return reverse_recursive(lst[1:]) + [lst[0]]
Write a function
merge that takes 2 sorted lists
lst1 and
lst2,
and returns a new list that contains all the elements in the two lists
in sorted order.
def merge(lst1, lst2): """Merges two sorted lists recursively. >>> iteratively. >>>
List comprehensions are a compact and powerful way of creating new lists out of sequences. Let's work with them directly:
>>> [i**2 for i in [1, 2, 3, 4] if i%2 == 0] [4, 16]
is equivalent to
>>> lst = [] >>> for i in [1, 2, 3, 4]: ... if i % 2 == 0: ... lst += [i**2] >>> lst [4, 16]
The general syntax for a list comprehension is
[<expression> for <element> in <sequence> if <conditional>]
The syntax is designed to read like English: "Compute the expression for each element in the sequence if the conditional is true."]
Implement the function
squares, which takes in a list of positive
integers, and returns a new list which contains only elements of the original
list that are perfect squares. Use a list comprehension.
from math import sqrt def is_square(n): return float(sqrt(n)) == int(sqrt(n)) def squares(seq): """Returns a new list containing elements of the original list that are perfect squares. >>> seq = [49, 8, 2, 1, 102] >>> squares(seq) [49, 1] >>> seq = [500, 30] >>> squares(seq) [] """"*** YOUR CODE HERE ***" return ______return [n for n in seq if is_square(n)]
Questions in this section are not required for submission. However, we encourage you to try them out on your own time for extra practice.
Write a function
reverse_iter that takes a list and returns a new
list that is the reverse of the original. Use iteration! You may also
use slicing notation.
def reverse_iter(lst): """Returns the reverse of the given list. >>> reverse_iter([1, 2, 3, 4]) [4, 3, 2, 1] """"*** YOUR CODE HERE ***"new, i = [], 0 while i < len(lst): new = [lst[i]] + new i += 1 return new
Mergesort is a type of sorting algorithm. It follows a naturally recursive procedure:]
Implement a function
coords, which takes a function, a sequence, and
an upper and lower bound on output of the function.
coords then
returns a list of x, y coordinate pairs (lists) such that:
[x, fn(x)]]))] | http://gaotx.com/cs61a/lab/lab04/ | CC-MAIN-2018-43 | refinedweb | 697 | 68.1 |
A Simple Tagging Implementation with JPA
Last modified: October 20, 2018
I just announced the new Learn Spring course, focused on the fundamentals of Spring 5 and Spring Boot 2:
• A Simple Tagging Implementation with MongoDB
1. Overview
Tagging is a standard design pattern that allows us to categorize and filter items in our data model.
In this article, we’ll implement tagging using Spring and JPA. We’ll be using Spring Data to accomplish the task. Furthermore, this implementation will be useful if you want to use Hibernate.
This is the second article in a series on implementing tagging. To see how to implement it with Elasticsearch, go here.
2. Adding Tags
First, we’re going to explore the most straightforward implementation of tagging: a List of Strings. We can implement tags by adding a new field to our entity like this:
@Entity public class Student { // ... @ElementCollection private List<String> tags = new ArrayList<>(); // ... }
Notice the use of the ElementCollection annotation on our new field. Since we’re running in front of a data store, we need to tell it how to store our tags.
If we didn’t add the annotation, they’d be stored in a single blob which would be harder to work with. This annotation creates another table called STUDENT_TAGS (i.e., <entity>_<field>) which will make our queries more robust.
This creates a One-To-Many relationship between our entity and tags! We’re implementing the simplest version of tagging here. Because of this, we’ll potentially have a lot of duplicate tags (one for each entity that has it). We’ll talk more about this concept later.
3. Building Queries
Tags allow us to perform some interesting queries on our data. We can search for entities with a specific tag, filter a table scan, or even limit what results come back in a particular query. Let’s take a look at each of these case.
3.1. Searching Tags
The tag field we added to our data model can be searched similar to other fields on our model. We keep the tags in a separate table when building the query.
Here is how we search for an entity containing a specific tag:
@Query("SELECT s FROM Student s JOIN s.tags t WHERE t = LOWER(:tag)") List<Student> retrieveByTag(@Param("tag") String tag);
Because the tags are stored in another table, we need to JOIN them in our query – this will return all of the Student entities with a matching tag.
First, let’s set up some test data:
Student student = new Student(0, "Larry"); student.setTags(Arrays.asList("full time", "computer science")); studentRepository.save(student); Student student2 = new Student(1, "Curly"); student2.setTags(Arrays.asList("part time", "rocket science")); studentRepository.save(student2); Student student3 = new Student(2, "Moe"); student3.setTags(Arrays.asList("full time", "philosophy")); studentRepository.save(student3); Student student4 = new Student(3, "Shemp"); student4.setTags(Arrays.asList("part time", "mathematics")); studentRepository.save(student4);
Next, let’s test it and make sure it works:
// Grab only the first result Student student2 = studentRepository.retrieveByTag("full time").get(0); assertEquals("name incorrect", "Larry", student2.getName());
We’ll get back the first student in the repository with the full time tag. This is exactly what we wanted.
In addition, we can extend this example to show how to filter a larger dataset. Here is the example:
List<Student> students = studentRepository.retrieveByTag("full time"); assertEquals("size incorrect", 2, students.size());
With a little refactoring, we can modify the repository to take in multiple tags as a filter so we can refine our results even more.
3.2. Filtering A Query
Another useful application of our simple tagging is applying a filter to a specific query. While the previous examples also allowed us to do filtering, they worked on all of the data in our table.
Since we also need to filter other searches, let’s look at an example:
@Query("SELECT s FROM Student s JOIN s.tags t WHERE s.name = LOWER(:name) AND t = LOWER(:tag)") List<Student> retrieveByNameFilterByTag(@Param("name") String name, @Param("tag") String tag);
We can see that this query is nearly identical to the one above. A tag is nothing more than another constraint to use in our query.
Our usage example is also going to look familiar:
Student student2 = studentRepository.retrieveByNameFilterByTag( "Moe", "full time").get(0); assertEquals("name incorrect", "moe", student2.getName());
Consequently, we can apply the tag filter to any query on this entity. This gives the user a lot of power in the interface to find the exact data they need.
4. Advanced Tagging
Our simple tagging implementation is a great place to start. But, due to the One-To-Many relationship, we can run into some issues.
First, we’ll end up with a table full of duplicate tags. This won’t be a problem on small projects, but larger systems could end up with millions (or even billions) of duplicate entries.
Also, our Tag model isn’t very robust. What if we wanted to keep track of when the tag was initially created? In our current implementation, we have no way of doing that.
Finally, we can’t share our tags across multiple entity types. This can lead to even more duplication that can impact our system performance.
Many-To-Many relationships will solve most of our problems. To learn how to use the @manytomany annotation, check out this article (since this is beyond the scope of this article).
5. Conclusion
Tagging is a simple and straightforward way to be able to query data and combined with the Java Persistence API, we’ve got a powerful filtering feature that is easily implemented.
Although the simple implementation may not always be the most appropriate, we’ve highlighted the routes to take to help resolve that situation.
As always, the code used in this article can be found on over on GitHub. | https://www.baeldung.com/jpa-tagging | CC-MAIN-2019-22 | refinedweb | 985 | 57.98 |
Python client to connect to Postcodes.io API
Project description
Postcodes.io Python Client - alpha
Python client to connect to Postcodes.io API
- Free software: MIT license
- Documentation:.
Features
- Supports python 3.x (not yet python 2.x, sorry!)
- Response in Python native list and dict types
- Supports free REST service and self-hosted service (See documentation for installation details)
Quick Start
Install python package:
$ pip install postcodes_io $ python
from postcodes_io import Postcodes pio = Postcodes() postcode = pio.get('SW1A 1AA')
Self-hosted Service using Docker
Pull docker image:
docker pull randomvariable/docker-postcodes.io
Run docker container as a daemon:
docker run -p 8000:8000 -d randomvariable/docker-postcodes.io
Execute API using hosts
from postcodes_io import Postcodes postcode = Postcodes('').get('SW1A 1AA')
TODOs
- Add more endpoints
- Documentation
- Proper isolated unit tests
History
0.1.0 (2016-07-08)
- First release on PyPI.
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/postcodes_io/ | CC-MAIN-2018-34 | refinedweb | 170 | 52.36 |
Multidimensional array
Multidimensional array are arrays that use more than one index to access its content. Imagine a table with rows and columns, a multidimensional array can represent its data in a way a table can. Adding more indices increases the dimension of an array and makes it even more complex although it is very rare to use a multidimensional array that uses indexes more than two. The syntax of a multidimensional array that has two dimensions is as follows.
datatype[,] arrayName = new datatype[lengthX, lengthY];
And a multidimensional array with three dimensions has the following syntax.
datatype[, ,] arrayName = new datatype[lengthX, lengthY, lengthZ];
You can create an array with as much dimension as you can where each dimension is specified by a length. Because using 3-dimensional arrays or arrays with dimensions more than 2 are rare, let’s concentrate this lesson on using 2-dimensional arrays. The syntax requires the datatype which is the type that the elements of that array possess. Next to the type is a pair of square brackets and a comma inside it. Note the number of commas to place inside the brackets. If we have a 2 dimension array, we place 1 comma and if we have a 3 dimension array, we put 2 commas so to find out how many commas to put, just remember (numberOfDimensions – 1). Next, we write the name of the variable and then we assign the dimension lengths by writing the new keyword, the datatype, and the lengths. In a 2-dimensional array, we need to give two length values, one for the x value and one for the y value where x represents the row of a table and y represents the column of a table if we look at it as a table. A 3-dimensional array can be represented by a cube like the word “3-D” suggest so the x would be the height, y would be the width, and z would be the depth. Let’s look at an example of a 2-dimensional array.
int[,] numbers = new int[3, 6];
The code above tells the compiler to allocate enough space for (3 * 6) array elements. The picture below shows where each array elements will be mapped if we think of them as cells in a table.
We pass 3 for the lengthX that’s why we have 3 rows and we gave 5 for lengthY that’s why we have 5 columns. How do we initialize values in a multidimensional array? There are multiple ways of initializing values in a multidimensional array.
datatype[,] arrayName = new datatype[x, y] { { r0c0, r0c1, ... r0cX }, { r1c0, r1c1, ... r1cX }, . . . { rYc0, rYc1, ... rYcX } };
To make it even simpler, you can ignore writing the new dataype[,] part.
datatype[,] arrayName = { { r0c0, r0c1, ... r0cX }, { r1c0, r1c1, ... r1cX }, . . . { rYc0, rYc1, ... rYcX } };
Here’s an example:
int[,] numbers = { { 1, 2, 3, 4, 5 }, { 6, 7, 8, 9, 10 }, { 11, 12, 13, 14, 15 } };
Or do everything manually and assign values to each element one by one like this:
array[0, 0] = value; array[0, 1] = value; array[0, 2] = value; array[1, 0] = value; array[1, 1] = value; array[1, 2] = value; array[2, 0] = value; array[2, 1] = value; array[2, 2] = value;
As you can see, accessing a 2-dimensional array’s element is as simple as indicating the x index and y index of the element inside a pair of brackets separated by a comma.
Iterating through a Multidimensional Array
Iterating through a multidimensional array can be a bit tricky. The easiest way perhaps is to use a foreach loop. You can also use a nested for loop. Let’s take a look at using foreach loop first.
using System; namespace MultidimensionalArraysDemo { public class Program { public static void Main() { int[,] numbers = { { 1, 2, 3, 4, 5 }, { 6, 7, 8, 9, 10 }, { 11, 12, 13, 14, 15 } }; foreach (int number in numbers) { Console.Write(number + " "); } } } }
You can see how simple is it to cycle through the values of each element of a multidimensional array by just using a foreach loop. By using a for each loop, we won’t be able to detect the end of a row in case we want to bring the next row to the next line. You can easily do that with the for each loop. The following program shows you how to use the for loop to read all the values in an array.
using System; namespace MultidimensionalArraysDemo2 { public class Program { public static void Main() { int[,] numbers = { { 1, 2, 3, 4, 5 }, { 6, 7, 8, 9, 10 }, { 11, 12, 13, 14, 15 } }; for (int row = 0; row < numbers.GetLength(0); row++) { for (int col = 0; col < numbers.GetLength(1); col++) { Console.Write(numbers[row, col] + " "); } //Go to the next line Console.WriteLine(); } } } }
Example 2 – Using a foreach Loop on a Multidimensional Array
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
We can’t just use a plain for loop when accessing a multidimensional array. We need to use a nested for loop. On the first for loop (line 14), we declared a variable which will loop through all the “rows” of our multidimensional array. It will loop until the value of row is less than the length of the first dimension. We used the method GetLength() of the Array class. This method gets the length of the array in a specified dimension. It has one parameter and that’s the dimension of the array. For example, to get the length of the first dimension of the array, we pass 0 as the value because it counts the dimension of an array from 0 to numberOfDimensions – 1.
Inside the first for loop is yet another for loop (line 16). We declared another counter that will loop through all the “columns” of the current row in the loop cycle of the first loop. The condition involves using the method GetLength() again but this time, we pass 1 instead to get the length of the second dimension of the array. So for example, the current value of row is 0, then the second for loop will loop from [0, 0] up to [0, 4]. We then show a value of the current element in the loop, if the value of row is 0 and the value of col is 0, then it will show the value of numbers[0, 0]. After the second loop has finished doing the loop, it will execute the command immediately after it, in this case, the next line informs the program to go to the new line. The first loop will then repeat all of this process with the value of row increment incremented by 1. We will then repeat the second for loop and display the values of the second row. The process will repeat until the row is less than the length of the first dimension.
Let’s apply what we have learn to make a program that get four grades of each of the three students. The program will determine the average grade for each student.
using System; namespace MultidimensionalArrays3 { public class Program { public static void Main() { double[,] studentGrades = new double[3, 4]; double total; for (int student = 0; student < studentGrades.GetLength(0); student++) { total = 0; Console.WriteLine("Enter grades for Student {0}", student + 1); for (int grade = 0; grade < studentGrades.GetLength(1); grade++) { Console.Write("Enter Grade #{0}: ", grade + 1); studentGrades[student, grade] = Convert.ToDouble(Console.ReadLine()); total += studentGrades[student, grade]; } Console.WriteLine("Average is {0:F2}", (total / studentGrades.GetLength(1))); Console.WriteLine(); } } } }
Example 3 – Multidimensional Array Application
Enter grades for Student 1 Enter Grade #1: 92 Enter Grade #2: 87 Enter Grade #3: 89 Enter Grade #4: 95 Average is 90.75 Enter grades for Student 2 Enter Grade #1: 85 Enter Grade #2: 85 Enter Grade #3: 86 Enter Grade #4: 87 Average is 85.75 Enter grades for Student 3 Enter Grade #1: 90 Enter Grade #2: 90 Enter Grade #3: 90 Enter Grade #4: 90 Average is 90.00
The program declares a multidimensional array of type double (line 9) because we are storing grades. We also declare a variable total (line 10) that will be used soon for calculating the average of each student. We now enter the nested for loop (line 12). On the first forloop, we declare a student variable that will determine which of the student the program will ask the grades for. We use GetLength() to determine the number of students. We enter the for loop body. Line 14 assigns a value of 0 to the variable total. We will know later why we did this. The program displays a prompt telling you to enter the grade of student number (student + 1). We add 1 to the student so that instead of displaying Student 0, it starts with 1 which is more natural.
We then arrive at the second for loop (line 18). We declare a counter variable grade and get the length of the second dimension of the array by calling GetLength(1). This length determines the number of grades the program will ask. The program will get each of the four grades for the current student. Each time the program gets the grade from the user, the grade obtained will be added to the totalvariable. When all the grades have been entered, the total variable already contains the sum of those grades. Lines 25-26 will now display the average grade of the current student. Take note of the {0:F2}. This formats the grade into a fixed number with 2 decimal places. The average is then calculated by dividing the sum by the number of grades. We used GetLength(1) again to get the number of grades. The average grade is then displayed. | https://compitionpoint.com/multidimensional-arrays/ | CC-MAIN-2021-31 | refinedweb | 1,633 | 70.73 |
I can successfully get a test video playing if I put it locally on the devices StreamingAssets folder but when I try to play the original file that is on the web it never works and there is no useful output from logcat. I just get a black screen very briefly before returning to the default blue.
using UnityEngine;
using System.Collections;
public class VideoTest : MonoBehaviour {
// Use this for initialization
//string moviePath = "famous.3gp";
string moviePath = "";
//string moviePath = "";
// Use this for initialization
void Start () {
Invoke("doPlay",1.0f);
}
void doPlay()
{
Debug.Log("Starting Movie: " + moviePath);
//iPhoneUtils.PlayMovieURL(moviePath, Color.black);
Handheld.PlayFullScreenMovie (moviePath, Color.black, FullScreenMovieControlMode.Full);
Debug.Log("All Done!");
}
// Update is called once per frame
void Update () {
if (Input.GetKeyDown(KeyCode.Escape)) { Application.Quit(); }
}
}
As you can see I've also tried an ogg file also but this doesn't work (is supposed to be for and MovieTextures - but these are not for mobile). I'm using Unity 4.0.1f2 with Android Pro building to an HTC Sensation XL and it has no problems playing any of these videos in its browser or directly so I'm guessing it's not a device or source problem.
I know there are many posters of video playback problems, and many unanswered because people make simple mistakes (and Unity gives no useful logs to prevent this - a shame) but since I'm only changing the target path to go from a working local one to a failing streamed one I can't think of what I might be doing wrong. I even tested the old depricated iPhoneUtils.PlayMovieURL in case it would help but it failed the same way.
Does Unity actually support this, as the documentation is very minimal and not fully updated to Unity 4?
Can anyone share the code for a successful online stream of a video?
Thanks
Hi there, I am trying to accomplish exactly the same as you in your original post, however, I am using Vuforia (Qualcomm) to achieve this. Could do elaborate on how you got Vuforia to stream a video to an Android device? I can't seem to get it to work and there are no useful errors in logcat. My Handheld.PlayFullScreenMovie always returns false. Do you have any tips or ideas as to why this does not work?
We have now found that the solution was to use a streaming server (afraid am not sure what that means exactly) to host the file. If its on youtube etc then these are already streaming servers and so the original Handheld.PlayFullScreenMovie should work. The error in my original post may have just been that the 3gp file was not on a streaming server. (Could also be that it was 3gp - we use mp4 now).
Answer by Chowdery
·
Jun 02, 2013 at 02:08 PM
Hey Chris, sorry to be the bearer of bad news but I don't think that movie streaming is supported through the WWW class on Android/IOS yet:
I'm currently looking for an alternative but not having much luck. Please let me know if you manage to find a work around :).
Answer by ChrisYummi
·
Jun 03, 2013 at 07:19 AM
Final solution looks like using the video player that you can get from Vuforia (Qualcomm). This handles streaming as well as writing to a texture.
Answer by Ing3nu
·
May 28, 2014 at 06:41 PM
In order to get the streaming to work, you have to check the "required WiFi connection" in the build settings. for some reason, Unity is not passing the required permission to access the web through to the AndroidManifest unless you have that box.
StreamingAssets Folder Max Size
2
Answers
Unity 5.6 Video Clip stream on Android
0
Answers
Could a privacy error on a video streaming server cause videos not to play in an Android build?
1
Answer
HandHeld video compression setting doesn't work in editor?
1
Answer
Store Cortana speech commands inside a variable
0
Answers | https://answers.unity.com/questions/409073/unable-to-stream-a-video-on-android-pro-with-playf.html?sort=oldest | CC-MAIN-2020-29 | refinedweb | 670 | 71.14 |
Help !! Namespace problem
Expand Messages
- I'm a newbie to SOAP. I'd appreciate someone shedding a
little
light on namespaces for me. I have a SOAP service, echo.cgi, that
dispatches to
Echo.pm. Both are located in directory cgi-bin, so is a working location for the service
address.
In a SOAP client, I can set ->proxy("-
bin/echo.cgi") and
->uri("Echo") or
->uri("urn:Echo") or ->uri("") and everything
works fine.
However, if I set
->uri(""), I get an error "Failed to
access class
(cgi-bin::Echo) at E:/Perl/site/lib/SOAP/Lite.pm line 2101."
Can anyone clarify for me what is going on and how i can fix this
-. | https://groups.yahoo.com/neo/groups/soaplite/conversations/topics/2852?l=1 | CC-MAIN-2016-07 | refinedweb | 116 | 69.99 |
I wake up at 6:00 A.M. every day.
But today was different. Today, I noticed exactly how warm my bed was. I had to choose between getting up and waking up, or remaining where I was and enjoying the womb-like warmth of my bed. I had something I needed to do at church today, so I got up and woke up. Reluctantly.
You see, the choir director at my church was putting on a production called Bandari. It was in an African setting, and so required many African animals to add atmosphere. My friend and I had built a giraffe (well, the neck and head at least) which I had to move around a little bit during the performance. I also had to participate in the actual performing part of it. We had never done a run-through of the thing, so it was quite a challenge. But it went well, and I will probably get to keep the giraffe. It's about 8 feet tall, so it doesn't fit in our house, but I figure it can be used as a lawn ornament.
But that's not really why I'm writing this daylog. I know none of you care about my church. Many of you out there must think I'm stupid for believing in God, or practicing Christianity. My response to that would require a completely different writeup. What I'm really writing about is a little update regarding what I talked about in my last daylog. My Academic Decathlon team is going to state! This is so great! I can't convey how happy I am. I thought I might try to share a little bit of my joy here, this being the lovely community that it is. I will go to sleep tonight and dream dreams of Anchorage, Alaska, where Nationals are to take place. It will be mine. Oh yes, it will be mine.
But no.
I had another call, and a very distressed secretary said that their access to the file server was down, they couldn't get to the Internet, and "Oh, we also haven't been able to get mail". I made the mistake of relating to her how these three things are actually the one problem. "But the Internet is different from our mail, and certainly different from our files on the server". *Sigh*
I do a bit of diagnostic research and see that they are still connected to the network. I then ask for an IP Address to ping to see if they are just not doing something right. And lo and behold, there are problems. SO I wander over to them, look at their computer, determine that their hub is working, they have their stacks functioning, but no connection - hence an uplink problem. I take ONE LOOK at their hub and flick the crossover (MDI-X) switch, and walk out. Without seeing or saying anything to the secretary.
Five minutes pass and I get a phone call.
Same secretary.
"Oh, it seems to be fixed. *Pause* Did you do that?"
Why was I so chagrined? Because 6 weeks ago, I did EXACTLY THE SAME THING, and pointed it out to the secretary, and pointed out that the switch was not to be touched.
This is why I don't do desktop support (usually).
Chest and abs at the gym. I am up to 17.5kgs for dumbells. Soon to be 20kgs.
#include <rant.h>
"sometimes i hate people" I commented to anemotis after reading Saige's writeup in Mother Teresa. No, I don't hate Saige. Just people in general, sometimes. When someone says something so downright mean and spiteful sometimes I just get disgusted with humanity. I wonder if it's some peoples mission to destroy any figures of hope, or love, or altruism. And I wonder what happened to a person to make them be like that. (I'm not talking about Saige, I don't know her, just talking in general, and probably out of my ass. The Mother Teresa writeup was just an example).
As I walk through
This wicked world
Searchin' for light in the darkness of insanity.
I ask myself
Is all hope lost?
Is there only pain and hatred, and misery??
'Cause each time I feel it slippin' away, just makes me wanna cry.
What's so funny 'bout peace love & understanding?
What's so funny 'bout peace love & understanding?
What's so funny 'bout peace love & understanding?
But I ended the conversation on a happy note: "but, sometimes I love people too"
Mr. Coke is an elderly gentleman in our community who has long been a patron of the arts. He isn't wealthy, but he does give generously when he can, and he never misses a show. He asked the theatre professor to snag me on my way out, and he said "I recall seeing you in Dancing at Lughnassa and Ballyhoo, young man, and wanted to commend you. You're better than many actors plying their trade in New York."
I gushed. What other response is there in the face of such flattery? It made my week. Unfortunately, the demands of sleep deprivation led us into an after-dinner nap that stretched until midnight. There's a strange feeling of lost opportunities and waste associated with sleeping away the better part of an evening. Here we are, wide awake in the witching hours with nothing to do except each other.
I finished Metal Gear Solid!
...at Normal skill level, early hours of this day. I was ranked Deer. Total time 12 hours and then some.
I know, they call it a relatively easy and short game, but this was the second game I've beaten in short notice.
<spoiler encoding="rot-13">
Whfg jbaqrevat: Va gur raq, Fbyvq Fanxr fnlf uvf erny anzr vf Qnivq. Uvf ibvpr npgbe vf Qnivq Unlgre. Vf guvf lrg nabgure sbhegu-jnyy oernpu? Jung qbrf Fanxr fnl uvf anzr vf va gur Wncnarfr irefvba?
</spoiler>
And then: "Ravens on the roof..."
I encountered the first bug in MechWarrior 4: I made an Instant Action mission set in a city, and got to the action... the first enemy 'Mech was a piece of cake, but the second was harder.
It was a Raven that, for some odd reason, was standing on the roof of some theater. Well, I tried to shoot it down, but...
...it fell to the space between two buildings.
Even my lancemate's AI wasn't good enough to kill it from there. I needed a tricky way to send my alphastrikey message. =)
Well, time to face the challenges of the day.
"The Moon doesn't exist. It's a conspiracy."
I've spent some time reading about the "conspiracy" that the moon landings were a hoax...
...the arguments given by the doubters didn't make me a skeptic, but inspired me to download some cool space photographs from NASA. I found that famous photo of Buzz "Man On The Moon" Aldrin, for example...
Argh. Leafnode's posting thing seemed to get confused. I needed to repost some of articles I posted today. Well, Leafnode is still luxury compared to some other mail/news products. =)
If anyone were brave enough to look at my webcam right now, they'd see me having a small problem with noodles...
Other day logs o' mine...
(Goddamned lack of inspiration...)
I've sort of lost the habbit of linking things in my head and I haven't accidentally put square brackets around something I was writing for a while. I'm still softlinking in my head but that is going too. Does this seem sad to you? I know how much I enjoy this but I don't seem to be able to find the time.
Who am I going to play Mornington Crescent with now?
This morning I sifted through the river bed of my thoughts for a few golden nuggets.
I woke up at three AM today, as my son was crying uncontrollably in his crib, and would only calm down when I brought him into bed with me and his mother. Being woken up doesn't really bother me any more (I think that it becomes habit after a while), and I have begun to appreciate that tranquil, magical moment when he drifts off, cradled in the arms of my partner.
This morning, however, in my reflection I came to the acute realization that E2 had truly invaded my soul. Falling asleep, I thought about noding, about things I could contribute, about how much it depresses me that I haven't been chinged in a while. (Yeah, I know, bitch bitch, moan moan).
I'm off tomorrow to visit my father and step mother, and I'm flying to Calgary with my son. I'm worrying about how I'm going to manage all of the crap I need to bring, and how Luca's going to deal with the flight. The last time we flew as a family, he was only six months old, and slept for five of the six hours in the air. This time, however, he's likely going to scream like a maniac for a good period of time. I am dreading not only the stress for him, but also the stress we're going to impart to (on?) our fellow passengers. I just hope that, given it's a morning flight, he might sleep for a couple of hours.
Tomorrow the men from the Gas Board - ahem, whoever British Gas have sub-contracted the work to, I mean - will be coming to fit our new boiler. The parts arrived today...
Newsflash: our dinner just exploded... oops. Don't overboil potatoes in the microwave!
Not only that, the window people are coming on Friday, so we'll be able to keep more of the heat in and more of the rain out (our windows leak... a lot).
But it means I have to take time off from work... it's never a good time, nowadays...
after missing a possible morning coffee with him, i read starrynight's dream log from 02.02.01...i found confirmation of some things i'd always suspected about the way his mind functions, the way his reality is structured. i've always known him to be a very different type of animal from myself, but there are certain base similarities that are unavoidable. i'm still surprised (as usual) that we get along as well as we have so far...as i recall he's surprised i still subject myself to him. but here end my public speculations on this. i hope he finds this, but i won't tell him it's here --there are just things you don't say to people's faces...
...It's just another Manic Monday. (Woah ooh Woah)
President's Day. Today was an elective holiday at work, but I decided to go in anyway because I had the weekend's logs to catch up on. If I waited another day, it would be another 24 hours to sift through. Not fun.
So I went in to work today. Checked some E-Mail, sent some off, same old drill.
Trying to get shit paid for on PayPal today, and ran into a problem. It seems PayPal was having tremendous network problems due to network connectivity issues. Lots of heavy traffic on their site, so I had to keep attempting to resubmit my payment, thanks to their handy "Your payment was not processed due to heavy traffic. Click here to try again!]
So PayPal charges my credit card. TWICE.
Putting it over the limit.
And PayPal Customer Service is closed today.
So I had to call my card issuer and ask for a credit increase (which I really DON'T want), just so I can avoid any fees because PayPal made a mistake. They're going to hear from me tomorrow. You betcha.
Lunch Log: Wendy's Spicy Chicken Combo (With a Baked Potato instead of Fries), and a Small Chili.
Today is a good day. I have what I call a perma-grin because of an incredible weekend. Friday night Becky and I watched old tapes of my wrestling in high school, which brought back some very incredible memories, ate home-made mac and cheese that her mom made, and watched tv and talked upstairs in her room. Saturday, I went with Becky to her American Freestyle Tae Kwon Do tournament, where she took 2nd place in forms, but lost her first match in sparring which took her out of the running for the finals match. I am so proud of her. She has accomplished so much, and it's wonderful to know a person of such caliber. Saturday night, we were going to sneak into a hotel hot tub, but wound up losing track of time, and talking for three hours instead. Then... Sunday. Sunday night I made love for the first time in my life to a woman that I love with all my heart. She means the world to me, and I can't even comprehend how incredibly fortunate I am to have been able to fall in love, and then make love to such an incredibly wonderful, sweet, caring, and beautiful individual. even if i wanted to, I dont think that I coud get this grin off my face. I am now forever a different person, for I have given myself completly to the woman I love, as she has to me, and it's great.
Following a weekend with beautiful girl I was exhausted and had completed none of the work I was supposed to. She left in the early morning, and I found that I simply couldn’t get out of bed. Forgot all of my obligations, turned off my alarm, and slept far far too late. Had vivid and rather strange dreams for a couple of hours, and then woke in a sweat in the early afternoon.
After waking I found that I couldn’t shake this depression. Ran a few errands, the most productive of which was making an appointment with the auto-body shop to get my car door fixed. I'm really tired of climbing in and out of the passenger-side.
Tonight was spent drifting, and to be honest I cant even remember half of the things I did. I listened to my new Elvis Costello disc and cruised e2, wrote a few write-ups that I'm not particularly proud of.
I have to go to sleep now. I really hope that tomorrow will be better. As Nina said "It's a new dawn, it's a new day"
I wish it was that simple.
I should never node when I'm depressed
I finally got my project to a state where it is considered finished, but not polished. If the situation called for it, it can be used as it is.
I'm anxious about meeting Sara on friday. I need to get over this anxiety and stuff. We're just going to be close friends. I need to just be myself and relax.
Log in or registerto write something here or to contact authors.
Need help? accounthelp@everything2.com | http://everything2.com/title/February+19%252C+2001 | CC-MAIN-2015-35 | refinedweb | 2,553 | 82.65 |
Implementation status: to be implemented
Synopsis
#include <stdio.h>
int fsetpos(FILE *stream, const fpos_t *pos);
Description
The function sets the file position of the given stream to the given position. The argument pos is a position given by
the function
fgetpos().
Arguments:
stream - a pointer to the given stream
pos - the required file position
Return value
The
fsetpos() function returns
0 if it succeeds; otherwise, it returns
-1 and sets
errno to indicate the error.
Errors
The
fsetpos() function fails if, either the stream is unbuffered or the stream's buffer needed to be flushed, and the call to
fsetpos() causes an underlying
lseek() or
write() to be invoked, and:
[
EAGAIN] The
O_NONBLOCK flag is set for the file descriptor and the thread
An attempt was made to write a file that exceeds the file size limit of the process. or
The file is a regular file and an attempt was made to write at or beyond the offset maximum associated with the corresponding stream.
[
EINTR] The write operation was terminated due to the receipt of a signal, and no data was transferred.
[
EIO] A physical I/O error has occurred, or the process is a member of a background process group attempting to perform a
write() to its controlling terminal,
TOSTOP is set, the calling thread is not blocking
SIGTTOU, the process is not ignoring
SIGTTOU, and the process group of the process is orphaned.
[
ENOSPC] There was no free space remaining on the device containing the file.
[
EOVERFLOW] For fseek(), the resulting file offset would be a value which cannot be represented correctly in an object of type
long.
For fseeko(), the resulting file offset would be a value which cannot be represented correctly in an object of type
off_t.
[
EPIPE] An attempt was made to write to a pipe or
FIFO that is not open for reading by any process; a
SIGPIPE signal is also sent to the thread.
[
ESPIPE] The file descriptor underlying stream is associated with a pipe,
FIFO, or socket.
[
ENXIO] A request was made of a nonexistent device, or the request was outside the capabilities of the device.
Implementation tasks
- Implement
fsetpos()function. | https://phoenix-rtos.com/documentation/libphoenix/posix/fsetpos | CC-MAIN-2020-29 | refinedweb | 361 | 55.88 |
sighold, sigignore, sigpause, sigrelse, sigset - signal management
[OB XSI]
#include <signal.h>#include signal mask of the calling process before executing the signal handler; when the signal handler returns, the system shall restore the signal mask of the calling process to its state prior to the delivery of the signal. In addition, if sigset() is used, and disp is equal to SIG_HOLD, sig shall be added to the signal mask of the calling process and sig's disposition shall remain unchanged. If sigset() is used, and disp is not equal to SIG_HOLD, sig shall be removed from the signal mask of the calling process.
The sighold() function shall add sig to the signal mask of the calling process.
The sigrelse() function shall remove sig from the signal mask of the calling process.
The sigignore() function shall set the disposition of sig to SIG_IGN.
The sigpause() function shall remove sig from the signal mask of the calling process and suspend the calling process until a signal is received. The sigpause() function shall restore the signal mask of the process:
- the sigaction() function instead of the obsolescent sigset() function.
The sighold() function, in conjunction with sigrelse() or sigpause(), may be used to establish critical regions of code that require the delivery of a signal to be temporarily deferred. For broader portability, the pthread_sigmask() or sigprocmask() functions should be used instead of the obsolescent sighold() and sigrelse() functions.
For broader portability, the sigsuspend() function should be used instead of the obsolescent sigpause() function.
Each of these historic functions has a direct analog in the other functions which are required to be per-thread and thread-safe (aside from sigprocmask(), which is replaced by pthread_sigmask()). The sigset() function can be implemented as a simple wrapper for sigaction(). The sighold() function is equivalent to sigprocmask() or pthread_sigmask() with SIG_BLOCK set. The sigignore() function is equivalent to sigaction() with SIG_IGN set. The sigpause() function is equivalent to sigsuspend(). The sigrelse() function is equivalent to sigprocmask() or pthread_sigmask() with SIG_UNBLOCK set.
These functions may be removed in a future version. | http://pubs.opengroup.org/onlinepubs/9699919799/functions/sighold.html | CC-MAIN-2015-48 | refinedweb | 343 | 61.87 |
Table of Contents:
Pt I: A Comprehensive Guide to Building Apps with React.js.
Pt 1.5: Utilizing Webpack and Babel to build a React.js App
Pt III: Architecting React.js Apps with Flux.
Pt IV: Add Routing to your React App with React Router. (Coming Soon)
Pt V: Add Data Persistence to your React App with Firebase. (Coming Soon)
Pt VI: Combining React.js, Flux, React Router, Firebase, Gulp, and Browserify. (Coming Soon)
This series never got finished; instead, I created some courses that are much better.
Since writing this post, I (along with most of the community) have adopted Webpack over Gulp. This is still a great post and contains some great information, but I recommend reading THIS tutorial for actual implementation, as this post is a little out of date.
In part 1 we talked about all things React.
At this point you should feel comfortable with the following parts of React -
- JSX
- Virtual DOM
- React.createClass
- render (method)
- React.render
- state
- getInitialState
- setState
- props
- propTypes
- getDefaultProps
- Component LifeCycle
  - componentDidMount
  - getDerivedStateFromProps
  - componentWillUnmount
- Events
  - onClick
  - onSubmit
  - onChange
If you’re not comfortable with the above, go re-read part 1.
At this point you might have noticed that in order to actually build apps with React, you need a way to transform your JSX into JS and also a way to export/import components from other components (or you need to put some thought into how you load your script tags in your HTML). This tutorial is going to introduce Gulp as a way to transform JSX into JS (along with a few other handy transformations). It’s also going to cover using Browserify with React in order to be able to import/export components for the use in other components.
I realize that there are many other ways to do the JSX -> JS transformation and specifically a lot of people are using webpack for mostly all of this. Those other ways are great. I’ve found that for beginners though, Gulp and Browserify make a great pair which abstracts some low level complexities without abstracting everything. I hope to build a future tutorial which uses Webpack instead of Gulp/Browserify.
Let’s start with Gulp. If you’ve had experience with another build tool, such as Grunt or Broccoli, this will make a lot of sense. If you have no experience with build tools, this next section is more specifically for you.
Let’s talk about why we need a tool like Gulp. Think of all the cool tools developers have built on top of things like JavaScript and CSS. Let’s use SASS and Coffeescript as examples. One day this (or perhaps these) developers got together and they said something along the lines of “Hey, CSS is cool, but it has a lot of limitations. Let’s make a language that is a layer on top of CSS. What we can do is we can create a language that is more powerful than CSS and has a lot more features. Then all we have to do is figure out a way for our language to have some sort of process to it which allows developers using our language to convert it (or transpile it) to actual CSS so the browser can understand it.” — and that’s exactly what they did. Coffeescript is another example. That conversation probably went like this “Wow JavaScript is so ugly. There’s not even a class keyword. We should build a language that isn’t so ugly and then figure out how to transpile it to actual JavaScript because that’s all the browser can understand.” A conversation similar to this most likely occurred when the React team decided to go with this JSX idea. As long as there’s a way to convert the “transpiled language” to a real language the browser can understand, there’s no issue. One convenient use case for Gulp is that we can tell Gulp that whenever we change a certain file or whenever we do some certain event, to go ahead and take our Coffeescript, SASS, JSX (etc), and transpile it into actual JavaScript or CSS.
Though having Gulp transpile your code is great, if you were only doing that,
you probably don’t need the power of Gulp since there are more basic solutions.
However, we’re going to use Gulp for more than just transpiling our JSX into JS.
Let’s think of some more things we do as developers that could be automated with
a build tool process like Gulp. Perhaps the most obvious processes are the ones
we use right before pushing our code to production. We first minify our code
(shorten variable names, remove white space, etc) in order to minimize our file
size. Next thing we usually do is concatenate all of our JavaScript files into
one file so when the client makes a request to the server for our app, it can
grab that one concatenated JavaScript file without having to worry about making
a bunch of requests to grab all of the files. Next thing you would need to do is
head over to your HTML page and change out all your
<script> tags with just one,
which references the minified JavaScript file. There are a plethora of things you can
do with Gulp, we’ll stick with the most common ones for this post.
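To make the minification idea concrete, here is a rough sketch of the kind of rewrite a minifier does (the exact output varies by tool and settings, and the function here is just an example):

// Before minification
function addNumbers(firstNumber, secondNumber) {
  return firstNumber + secondNumber;
}

// After minification (roughly)
function addNumbers(n,r){return n+r}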
Now that you have a better idea of why build tools like Gulp are so handy, let’s jump into getting started with Gulp.
The first thing you need to do when starting out with Gulp is to use NPM to install Gulp globally. If you’re not familiar with NPM, download Node and then do some googling. Using NPM is outside of the scope of this tutorial but is fairly straight forward.
In your terminal run
npm install --global gulp
that will install gulp globally on your computer.
Now, go ahead and make a new project with a folder structure that looks like this (remember, passively reading this tutorial won’t get you very far).
gulpfile.js
src
  index.html
  js
    App.js
    Child.js
    Parent.js
Head over to your
index.html page and add the following code.
<!DOCTYPE html>
<html>
<head></head>
<body>
  <div id="app"></div>
  <script src=""></script>
  <script src="src/Child.js"></script>
  <script src="src/Parent.js"></script>
  <script src="src/App.js"></script>
</body>
</html>
Notice it’s pretty bare right now. All we really have is a div with an id of app and we’re loading React from a CDN.
Head over to your Child.js file and add the following code.
var Child = React.createClass({
  render: function(){
    return (
      <div>and this is the <b>{this.props.name}</b>.</div>
    )
  }
});
One more boilerplate addition. Now head over to your Parent.js file and add the following code.
var Parent = React.createClass({
  render: function(){
    return (
      <div>
        <div> This is the parent. </div>
        <Child name="child"/>
      </div>
    )
  }
});
Now head over to your
App.js file and add the following code.
React.render(<Parent />, document.getElementById('app'));
Notice all we’re doing is creating two components, one child, one parent. The
parent component is rendered to #app (in
App.js), which then renders the
child component passing in “child” as a property on props.
Now comes the fun part. Let’s head over to our
gulpfile.js file and start
building out our build process. Here’s the functionality that we eventually want from our final gulp build process.
Development
1) Transform our JSX into JS and save the output into a dist/src folder.
2) Copy our index.html from our src folder into the dist folder.
3) Watch for changes to any JS or HTML file, then run the previous two processes again.
Production
1) Take all the JS files, concat them all together, minify the result, then output the result to one file named build.min.js in the build folder inside the dist folder.
2) Replace our <script> tags in our index.html page with one <script> tag which references our new minified build.min.js file.
With these processes above we’ll make it very simple to prepare our code for production and also very simple to convert our JSX to JS during development.
The very first thing we need to do is set up our package.json file. For those
unfamiliar with what our package.json is for, it’s essentially just a collection
of the node packages that are required for our application. In your terminal,
head over to the root of this project then type
npm init and select from the
different options (you can just hit enter until it finishes). The options don’t
matter too much - you can change them later.
Once you’re finished with that you’ll have a bare package.json file. Now in your terminal run the following commands.
npm install --save-dev gulp
npm install --save-dev gulp-concat
npm install --save-dev gulp-uglify
npm install --save-dev gulp-react
npm install --save-dev gulp-html-replace
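For reference, here is roughly what the devDependencies section of package.json ends up looking like after those installs. The version numbers below are only placeholders; whatever versions are current when you run the commands will differ:

"devDependencies": {
  "gulp": "^3.8.11",
  "gulp-concat": "^2.5.2",
  "gulp-html-replace": "^1.4.5",
  "gulp-react": "^3.0.1",
  "gulp-uglify": "^1.2.0"
}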
If you’ve used npm before, this is pretty straight forward. If you haven’t, we
practically just did witchcraft. All we did was tell npm to go and download each
of those packages and save them into our
node_modules folder (which was
created for us) and add them to our
package.json file as a developer
dependency. Now if you check out the
node_modules folder, it should be full of
the packages above. What we can do now is in our
gulpfile.js file, we can use
require to essentially import the code from each of the different packages
and save that functionality into a variable. Let’s do that right now.
In the top of your
gulpfile.js file, add the following code.
var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');
var react = require('gulp-react');
var htmlreplace = require('gulp-html-replace');
Notice we’ve created a variable for each of the packages we downloaded earlier. Now, each of those variables is just whatever was being exported from each individual package.
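In other words, each of those variables is just a function. Calling it inside a task returns a stream that Gulp can pipe file objects through, and that is really all a Gulp plugin is. A quick sketch of the idea (this snippet is not part of our gulpfile and the variable names are made up):

// react() returns a stream transform that turns JSX into plain JS
var jsxTransform = react();

// concat() works the same way - you hand it the name of the output file
var joinFiles = concat('build.min.js');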
Before we dive into building our different Gulp tasks, we first need to create a
paths object. This paths object is going to be filled with properties that
represent different paths. Copy this code under your variable declarations in
our
gulpfile.js file.
var path = {
  HTML: 'src/index.html',
  ALL: ['src/js/*.js', 'src/js/**/*.js', 'src/index.html'],
  JS: ['src/js/*.js', 'src/js/**/*.js'],
  MINIFIED_OUT: 'build.min.js',
  DEST_SRC: 'dist/src',
  DEST_BUILD: 'dist/build',
  DEST: 'dist'
};
Now that our paths are defined, let's create our first gulp task.
Our first task is going to be called transform and it’s going to be what transforms our JSX into JS. The code for this task looks like this,
gulp.task('transform', function(){
  gulp.src(path.JS)
    .pipe(react())
    .pipe(gulp.dest(path.DEST_SRC));
});
Perhaps the coolest thing about Gulp is its ability to pipe and chain function
invocations. What this means is that we’re able to take one file, transform it
and do something with it, then take that transformation and do something else
with it, then when we’re finished, tell it where to put the newly created file.
That’s exactly what we’re doing above. We create a gulp task called transform
and pass it a callback function. In our callback function we grab the
path.JS
array. Gulp then gets each one of those files, transforms them using the react
method we initialized earlier then pipes that outcome to
dist/src. If you ran
this task, you'd see that Gulp took all your JS files, converted your JSX
to JS, then outputted the results to a new
src folder in a new
dist folder -
all in a matter of milliseconds. Pretty rad.
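For example, after running the task, the copy of Child.js that lands in dist/src should no longer contain any JSX. It will look roughly like this (the exact output depends on the gulp-react version):

var Child = React.createClass({
  render: function(){
    return (
      React.createElement("div", null, "and this is the ", React.createElement("b", null, this.props.name), ".")
    )
  }
});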
Let’s create our next Gulp task. All this task is going to do is take our
index.html file and copy it over to our
dist folder so our newly created JS
files from our transform Gulp task above can be referenced by our
index.html
page.
gulp.task('copy', function(){
  gulp.src(path.HTML)
    .pipe(gulp.dest(path.DEST));
});
We’re now going to create our last Gulp task for development. What we want to do
is create a task that will always be running so when we change either our
index.html or any of the JS files, our two tasks from earlier will run and
update the code in the
dist folder.
gulp.task('watch', function(){
  gulp.watch(path.ALL, ['transform', 'copy']);
});
The task above tells Gulp to watch all of our files and, whenever any of them change, run the transform task and the copy task. Pretty straightforward.
Now all we want to do for our development tasks is set up a default task. This
default task will run whenever we type
gulp in our command line. Here’s the
code for that.
gulp.task('default', ['watch']);
When we type
gulp, Gulp will run the watch task, which we know watches our
files for changes and then runs the
transform and
copy tasks.
Those are all the tasks we’re going to create for developing. There are
obviously a plethora of other tasks you could create for dev, but those are what
we’ll stick with right now. At this point you should be able to run
gulp and
the JSX in your
App.js file will be converted to JS and the output will go to
dist/src. Your index.html file should have also been copied over to your dist
folder. Open your
index.html page in the browser and you should see your
components being rendered.
Now what we want to do is create a few gulp tasks for production so whenever we
run
gulp build our code will prepare itself to get pushed to production.
The first task we’re going to build for production is called
build. This
task is going to grab all of our JS files, concatenate all of them together,
minify them, then output the result to our
dist/build folder. The code for
that looks like this.
gulp.task('build', function(){
  gulp.src(path.JS)
    .pipe(react())
    .pipe(concat(path.MINIFIED_OUT))
    .pipe(uglify(path.MINIFIED_OUT))
    .pipe(gulp.dest(path.DEST_BUILD));
});
The only thing that might look a little strange is the path.MINIFIED_OUT argument we're
passing to concat and uglify. All this is doing is telling both of those
functions what the resulting file path should be, so in this case, when we use
gulp.dest it’s going to take the path.MINIFIED_OUT file (build.min.js) and
output it to path.DEST_BUILD (
dist/build).
Now you might notice this is cool, but at this point it’s a little annoying
because whenever we run
gulp build, we then need to go and change our HTML
to reference the new build.min.js file rather than
src/App.js and whatever
other script tags we’re referencing here. What if there were a way to, whenever
we ran our production gulp tasks, to replace all of our
<script> tags with just
our new
<script src=”build/build.mins.js”> script and then output that new
file to our dist folder? Well, there is. The way we’re going to do that is with
a package called
gulp-html-replace. First, we need to head over to our index.html in our
main src folder and make some additions. Go ahead and modify the original
child/parent/App.js scripts to be wrapped in comments like this.
<!-- build:js -->
<script src="src/Child.js"></script>
<script src="src/Parent.js"></script>
<script src="src/App.js"></script>
<!-- endbuild -->
What’s going to happen is we’re going to tell Gulp, “Hey Gulp, when I tell you
to, concat and minify all my JS files and output them to a new
dist/build
folder. As you’re doing that go to my copied
index.html page in my
dist
folder and replace the
build:js comment and whatever is inside of it with
this new script tag
<script src="build/build.min.js"></script> but don’t
change my original
index.html file in my normal
src folder because I still
want to develop with that.”
Let’s now build out our
replaceHTML task. The code should look like this
gulp.task('replaceHTML', function(){gulp.src(path.HTML).pipe(htmlreplace({'js': 'build/' + path.MINIFIED_OUT})).pipe(gulp.dest(path.DEST));});
Note that what we give htmlreplace is an object with a key that represents where
to replace and whose value is what to replace. That ‘js’ key in htmlreplace
coincides with
build:js in our
index.html page.
Very last step is now that both of our production tasks are ready, we need to create another task that wraps both of those tasks in one. We do that with the following code
gulp.task('production', ['replaceHTML', 'build']);
Now whenever we run
gulp production, Gulp will run our
replaceHTML and
build tasks. Go ahead and give it a try and check that your
dist/index.html and
dist/build/build.min.js files are correct.
For reference, here’s the full
gulpfile.js file.
var gulp = require('gulp');var concat = require('gulp-concat');var uglify = require('gulp-uglify');var react = require('gulp-react');var htmlreplace = require('gulp-html-replace');var path = {HTML: 'src/index.html',ALL: ['src/js/*.js', 'src/js/**/*.js', 'src/index.html'],JS: ['src/js/*.js', 'src/js/**/*.js'],MINIFIED_OUT: 'build.min.js',DEST_SRC: 'dist/src',DEST_BUILD: 'dist/build',DEST: 'dist'};gulp.task('transform', function(){gulp.src(path.JS).pipe(react()).pipe(gulp.dest(path.DEST_SRC));});gulp.task('copy', function(){gulp.src(path.HTML).pipe(gulp.dest(path.DEST));});gulp.task('watch', function(){gulp.watch(path.ALL, ['transform', 'copy']);});gulp.task('build', function(){gulp.src(path.JS).pipe(react()).pipe(concat(path.MINIFIED_OUT)).pipe(uglify(path.MINIFIED_OUT)).pipe(gulp.dest(path.DEST_BUILD));});gulp.task('replaceHTML', function(){gulp.src(path.HTML).pipe(htmlreplace({'js': 'build/' + path.MINIFIED_OUT})).pipe(gulp.dest(path.DEST));});gulp.task('default', ['watch']);gulp.task('production', ['replaceHTML', 'build']);
At this point we have a full working build process using just Gulp. As you might
have noticed, this method has some weaknesses. First, you have to be very
careful how you load your different components in your HTML. Notice that we’re
loading the Child.js file before the Parent.js file. This makes sense
because Parent.js is dependent upon Child.js. Also note how we’re loading
Parent.js before we load
App.js, again this is because Parent is required in
the
App.js file. It might seem pretty straight forward in this example but as
your app grows, it’s very tricky to keep track of these Parent/Child
relationships and not to mention both Child and Parent are now in the global
scope, which from CS 101 we know is not a good thing. Another weakness in this
method is that if you put a debugger in our JSX, by the time our browser loads
that debugger we’re no longer in JSX land but in JS land and we have no way of
telling which line the error is on in our original JSX code. This isn’t ideal.
Although we still will always have to debug our transpiled JSX version, it would
be neat if we were able to still get the line numbers of where the error was at
in our original JSX file. Good news is when we introduce Browserify into our
build process, it will take care of both of these weaknesses and more.
At a fundamental level, all Browserify allows you to do is require certain packages just like we did earlier with gulp, concat, uglify, etc. This has huge benefits for us because not only do we now have access to all of NPMs packages, but we’re now able to require only the packages (or React components) we need to for that specific file. This solves our problem of all of our components being on the global scope. Let’s take a quick look at exporting and requiring modules.
I remember when I first started leaning about Browserify every definition about it would be in terms of commonjs - which meant literally nothing to me because I knew nothing about commonjs. That is until I fell upon this article on Reddit. The author talks about how Commonjs, AMD, and ES6 Modules are all ways to “break a Javascript project up into multiple self-contained modules and managing the dependencies between them”. That makes sense. I can essentially create module that can be passed around and used in other parts of my application without having to pollute the global scope. The author continues, “CommonJS is the standard implemented by Node. A file can assign any value (including a function or object) to its module.exports property and any other file can use require(“filename”) to get a copy of that value. It works great in Node but isn’t usable in a web browser; because of a quirk of the way file loading in HTML and Javascript works fetching each of the files would cause the browser to hang until it was loaded. There are, however, some tools (browserify) that will let you write your browser code in CommonJS and automatically convert it to something web appropriate.” Then it clicked. All Browserify allows me to do is have the greatness of using require in Node, but on the browser. This is fantastic for React because as you’ve probably guessed by now, each React component we create can be its own module that we can then require in other components based off of need. Let’s take a look at some sample commonjs syntax.
We’re going to create three files (NOTE: This example (
Add,
Multiply,
Math)
is just for demonstrating how Commonjs works, it’s not part of the overall
tutorial). The first file will be called
Add.js and it will export an
object with an addition helper method on it. The second file will be called
Multiply.js and it will export an object with a multiplication helper method
on it. The third will be called
Math.js where we require and then use our Add
and Multiply module.
var addObj = {add: function(x,y){return x + y;}};module.exports = addObj;
var multiplyObj = {multiply: function(x,y){return x * y;}};module.exports = multiplyObj;
var addObj = require(‘./Add’);var multplyObj = require(‘./Multiply’);addObj.add(3,4) // 7multiplyObj.multiply(2,5) // 10
Just a few minor things to note about the code above. If you want a certain object or function to be available in a different file, you use module.exports to export your code.
If you want to require whatever was exported from a different file, you use require and whatever the name of that file was minus the extension.
Now that we have a solid understanding of how Browserify can make our React code
more modular, let’s view the bigger picture of what our build process will look
like when we incorporate browserify. We’re still going to have two main tasks,
watch and
build.
watch for development and
build for production.
On the surface both of these tasks will look very similar to our tasks before we
added Browserify. Our
watch task is going to concat our JS files, transform
them from JSX to JS, and output the results into our
dist folder.
build
will also transform our JSX to JS, concatenate, minify, then output the result
to out
dist folder. The process looks similar, but there are actually some
really cool features introduced by Browserify which we haven’t talked about.
We’ll get those more when we look at the code.
Let’s jump in.
Like before, we’re going to start out downloading all of the NPM packages we’ll need for our new gulpfile. However, before we jump into that let’s hurry and remove an NPM package we were using for the previous example that we won’t be using for this example. In your terminal run
npm uninstall gulp-concat;
This commands remove the Concat libraries from our
package.json and our
node_modules folder.
Now let’s install the new NPM packages we will need. Run the following command in your terminal.
npm install --save-dev vinyl-source-stream;npm install --save-dev browserify;npm install --save-dev watchify;npm install --save-dev reactify;npm install --save-dev gulp-streamify;
We’ll get into what each of these packages do when we use them in our tasks.
Head over to your
gulpfile.js and go ahead and remove everything. It’s easier
to start new since we’re introducing a pretty big change with browserify. At the top of your gulpfile add the following code like we did before.'};
The first task we’re going to create is our copy task. This is exactly the same
as before. We’re taking our index.html page and copying it over to our
dist
folder.
gulp.task('copy', function(){gulp.src(path.HTML).pipe(gulp.dest(path.DEST));});
Our next task is going to be our main task for development. This one is pretty lengthy so I’ll post the code and we’ll walk through it.));});
The very first thing we do is tell Gulp to watch our
index.html file for any
changes and if something does change, run the copy task.
The next thing we do is set up a watcher. Watchify works very closely with Browserify. A problem that was occurring with these Browserify/React build processes was that Browserify would take forever to transpile because it was going through every component every single time anything changed and would re-update the new bundled file. Watchify fixes this. Instead of going through every file, Watchify will cache your files and only update the ones that need to be updated. This makes builds take a LOT less time.
Notice we’re passing a browserify function invocation to our watchify function
invocation. Let’s now talk a bit about that browserify invocation. We’re passing
browserify an object that is going to essentially set up the configurations for
our browserify build. The first property on the object is
entries. Notice
entries is an array with the path of just our main component in it (app.js).
You might remember earlier we had a JS array (
['src/js/*.js',
'src/js/**/*.js']) which was every JS file we wanted to be transformed from JSX
to JS. Well one perk with browserify is that you just tell it the entry point or
the main component in your app and it will take care of all the child
components. Pretty slick. Next we have the
transform property. Browserify
works with more than just React. Here is where we tell Browserify how to
transform our code. In this case, we’re using reactify which will take care of
our JSX to JS transpiling. Next is
debug which is set to true. This tells
Browserify to use source maps. What source maps do is even though we’re
referencing the transpiled JSX in our index.html page, through source maps when
there is an error in our code, the error will point to the line number in our
JSX file rather than our transpiled JS file. This is super convenient for
debugging. That last line with
cache: {}, packageCache: {}, fullPaths: true
is necessary, but we won’t go into it. The watchify website just states that
it’s needed in order to use watchify.
The next thing we’re going to do is set up our watcher to watch for updates in
our parent component or in any of its children components. We tell our watcher
to watch for updates then we pass it a callback function to invoke whenever
there is an update. The code in the callback should look fairly familiar.
watcher.bundle concatenates all of our different components into one file
and does some browserify magic to make the module.exports/requires work. Then
the rest is just piping the result to our
dist/src folder. The last thing in
our watch task is that we’re continuing to bundle and pipe after we set up the
on update callback. The reason for this is so that when we first call
gulp
watch our code will bundle and pipe itself to the dist folder even before a
change is made. Then any changes made to our JS files after that will just
overwrite the initial bundled code.
The last thing for our development tasks is to set the default gulp task to run
the watch task so we can now just run
gulp and our watch task will kick off.
gulp.task('default', ['watch']);
Just like before we started using Browserify, I like to keep my Development and
Production tasks separate. Like before, create a gulp task called
build.
This task is going to be very similar to our
watch task except this one is
only going to bundle and pipe our code without actually watching it for updates.
It’s also going to minify the final code then output it to
dist/build rather
than
dist/src.
gulp.task('build', function(){browserify({entries: [path.ENTRY_POINT],transform: [reactify]}).bundle().pipe(source(path.MINIFIED_OUT)).pipe(streamify(uglify(path.MINIFIED_OUT))).pipe(gulp.dest(path.DEST_BUILD));});
Like I mentioned, this is mostly the same. We got rid of source mapping and our watchify stuff and added minification, but that’s it.
Also exactly like before we used Browserify, we want to be able to modify our
index.html page when it’s ready for production and switch out our regular
scripts with the minified versions. We’ll use the same
replaceHTML task that
we used before to do this.
gulp.task('replaceHTML', function(){gulp.src(path.HTML).pipe(htmlreplace({'js': 'build/' + path.MINIFIED_OUT})).pipe(gulp.dest(path.DEST));});
and to wrap both of our production tasks into one gulp process,
gulp.task('production', ['replaceHTML', 'build']);
and that’s it! Here’s what the full
gulpfile.js should look like'};gulp.task('copy', function(){gulp.src(path.HTML).pipe(gulp.dest(path.DEST));})));});gulp.task('build', function(){browserify({entries: [path.ENTRY_POINT],transform: [reactify],}).bundle().pipe(source(path.MINIFIED_OUT)).pipe(streamify(uglify(path.MINIFIED_OUT))).pipe(gulp.dest(path.DEST_BUILD));});gulp.task('replaceHTML', function(){gulp.src(path.HTML).pipe(htmlreplace({'js': 'build/' + path.MINIFIED_OUT})).pipe(gulp.dest(path.DEST));});gulp.task('production', ['replaceHTML', 'build']);gulp.task('default', ['watch']);
You now have a very nice build process that has some very nice features. To recap, those features include
I hope this tutorial was useful for you. At this point you should be fairly comfortable with React and its different pieces. You should also have an idea of how to setup a build process using Gulp and Browserify to compliment your React development. Up next, Flux and how to better architect your React.js. | https://ui.dev/react-js-tutorial-pt-2-building-react-applications-with-gulp-and-browserify/ | CC-MAIN-2021-43 | refinedweb | 5,173 | 65.12 |
Issue #7775
Need a way to track when versions for a collection has been updated
Description
The work in is trying to reduce the number of calls the ansible-galaxy CLI needs to do in order to install a collection. One thing that is cached is the versions that are advertised by a collection through '{api}/collections/{namespace}/{name}/versions/'. While the cache will have a certain time to live the cli still needs a way to determine if the cached version list is up to date or not.
The current method is to check if the highest_version or updated_at field has changed but currently they only reflect if a higher version was published and not any version. For example if namespace.name was published at 1.1.0 at 7am and then 1.0.1 at 10am the updated_at field will still only show 7am due to 1.1.0 being a higher version. What I am hoping for is the updated_at field to show the timestamp at which the latest version was published, this case being 10am.
Related issues
Associated revisions
Revision c43b89cf
View on GitHub
Field updated_at from Galaxy v3 Collections endpoint using latest instead of highest version closes #7775
History
#4
Updated by pulpbot 5 months ago
- Status changed from ASSIGNED to POST
#6
Updated by pulpbot 5 months ago
#7
Updated by Anonymous 5 months ago
- Status changed from POST to MODIFIED
Applied in changeset pulp_ansible|332f2c1bd7adbb6540ce47035f642b8c142fd0b0.
#8
Updated by pulpbot 5 months ago
#10
Updated by fao89 4 months ago
- Copied to Task #8012: Need a way to track when versions for a collection has been updated added
Please register to edit this issue
Also available in: Atom PDF
Field updated_at from Galaxy v3 Collections endpoint using latest instead of highest version closes #7775 | https://pulp.plan.io/issues/7775 | CC-MAIN-2021-17 | refinedweb | 300 | 55.68 |
An LED-based fire lamp
(Last modified 8 Oct 11)
I call this a fire lamp because the LEDs, driven by an Atmel ATtiny85 MCU, provide a very realistic imitation of a flickering flame about the size of a tennis ball. Unfortunately, I've never bothered to buy a video camera, so I can't embed any live video. However, the flickering effect is excellent, without any strobing or blinking.
I used the Liteon LTL912SEKSA Piranha red LEDs for this build. These LEDs are available (Oct 2011) from
Electronic Goldmine
for $0.39 each. Unlike typical 3 mm or 5 mm LEDs, these device will take up to 70 mA each, have a wide dispersion angle of 60 degrees, and put out nearly four lumens. They are commonly used in automobile brake lights and make a totally excellent light source for light art.
Closeup of the Liteon Pirhana LEDs. Note that the pins are in a square pattern, spaced 0.2" apart.
The circuit is slightly more complex than a typical LED driver because of the LED current. The ATtiny can't source the needed 60 mA per LED, so I added a 2N2222 transistor in each LED control line. You can use just about any NPN transistor for this if you can't find any 2N2222 or PN2222 devices. You can check out the schematic here. (
firelamp.pdf
)
The schematic includes a note for changing resistor values to use smaller (less current) LEDs.
I built the circuit up on a small breadboard, adding a 2-pin power connector for hooking up a wall-wart power supply. You can find small, 5 VDC switcher wall-warts in a lot of surplus or thrift stores now days; these make excellent power supplies for small projects. You can tell you're holding a switcher wall-wart because it will be very lightweight, only a few ounces. If you hook it up to AC and measure the output, you will see a value from about 4.9 to 5.1 VDC, unlike the 6 to 9 VDC put out by some of the older, unregulated (and heavier) wall-warts.
The circuit fits onto a 2" x 3" protoboard with plenty of room to spare. The capacitor C1 is shown as 25 uf, but just about anything between 10 and 1000 uf will work.
Using a small wall-wart and an old-fashioned socket plug lets you install the firelamp circuit board directly in a table lamp socket. This gives you a table lamp that looks like it has a small fire in it, rather than a bulb (which explains the project's name).
Here is the firelamp PCB and wall-wart, plugged into a socket plug and installed in a table lamp. I stuck the PCB onto the top of the wall-wart using foam tape, but hot-glue or RTV would probably work, as well. I didn't shorten the power cord on the wall-wart, just wound it around the body and tied it in place.
The firmware for this project was written in C in Atmel's AVR Studio4, then pushed into the ATtiny using an AVRISP mkII programmer. The only tricky part of the firmware is the technique for doing pulse-width modulation (PWM) for the LEDs. I am really fussy about LED PWM. I don't want to see any strobing when my eyes scan past the LEDs. For this project, I used a PWM clock of 1 MHz / 256, or about 4 kHz. Each LED is controlled by a 32-bit PWM mask. At each PWM clock (4000 times per second), the low bit of each LED's PWM mask is written to the output port for that LED. Intensity of each LED is varied by a different PWM mask from the table of 32 possibilities. For example, using a PWM mask value of 0x55555555UL will give an LED intensity of about 50%, since the LED is on every other PWM clock.
Feel free to play around with the timing. You can modify the timer setup to use a compare-match instead of the overflow shown here, which would let you use an even faster PWM clock rate.
Here is a .hex file you can burn directly into an ATtiny85 if you don't want to be bothered compiling. (
firelamp.hex
)
/*
* firelamp.c PWM control of LEDs on an ATtiny85
*/
#include <avr/io.h>
#include <avr/pgmspace.h>
#include <avr/interrupt.h>
#define NUM_LEDS 6
#define MASK_LEDS 0x3f /* assumes PB0 - PB5 */
#define NUM_PWM_STATES 32
#define MAX_PWM_STATE (NUM_PWM_STATES-1)
#define PORT_LEDS PORTB
#define DDR_LEDS DDRB
#define DELAY 1000UL /* general delay in tics */
const unsigned long int pwmvals[NUM_PWM_STATES] PROGMEM =
{
0x00000000L, 0x00010000L, 0x00010001L, 0x80200800L,
0x01010101L, 0x82080820L, 0x84108410L, 0x84112410L,
0x11111111L, 0x11249111L, 0x12491249L, 0x25225252L,
0x25252525L, 0x25525522L, 0x25552555L, 0x25555555L,
0x55555555L, 0x55575555L, 0x57555755L, 0x57575755L,
0x57575757L, 0x57577775L, 0x57777577L, 0x57777777L,
0x77777777L, 0xf7777777L, 0xf777f777L, 0xf7f7f777L,
0xf7f7f7f7L, 0xf7fff7f7L, 0xf7fffff7L, 0xffffffffL
};
uint8_t bright[NUM_LEDS];
uint8_t delta[NUM_LEDS];
uint32_t pwm[NUM_LEDS];
uint16_t delays[NUM_LEDS];
volatile uint16_t tics[NUM_LEDS];
/*
* Local functions
*/
uint16_t readtics(uint8_t cntr);
void writetics(uint8_t cntr, uint16_t delay);
void assignpwm(uint8_t led);
static long unsigned int b_random(void);
static long unsigned int rnd(long unsigned int val);
int main(void)
{
uint8_t n;
unsigned long int nval;
TCCR0B = (1<<CS01); // /8 prescaler
TIMSK = (1<<TOIE0); // enable interrupt on TOF
PORT_LEDS = PORT_LEDS & ~MASK_LEDS; // turn off all LEDs
DDR_LEDS = DDR_LEDS | MASK_LEDS; // make all LED drive lines outputs
for (n=0; n<NUM_LEDS; n++)
{
bright[n] = 0; // start with all brightness values at 0
assignpwm(n); // make it so
delays[n] = 200; // start with an arbitray delay value
}
sei(); // turn on interrupts
while (1) // main loop
{
for (n=0; n<NUM_LEDS; n++) // for each LED
{
if (readtics(n) == 0) // if done with current delay for this LED...
{
nval = rnd(20) + 11;
bright[n] = nval & 0xff; // update the brightness
assignpwm(n);
nval = rnd(750) + 250; // calc a random delay
writetics(n, nval & 0xffff);
}
}
}
return 0;
}
void assignpwm(uint8_t led)
{
cli(); // do not disturb
pwm[led] = pgm_read_dword(&pwmvals[bright[led]]);
sei();
}
uint16_t readtics(uint8_t cntr)
{
uint16_t t;
t = tics[cntr];
if (t != tics[cntr]) t = tics[cntr];
return t;
}
void writetics(uint8_t cntr, uint16_t delay)
{
cli();
tics[cntr] = delay;
sei();
}
/*
* The following functions try to duplicate the ANSI random() function for 8-bit MCUs such as the
* Atmel ATmega1284p. Seed is fixed at compile time.
*/
static long unsigned int b_random(void)
{
static long unsigned int seed = 12345678L;
seed = 1664525L * seed + 1013904223L;
return seed;
}
static long unsigned int rnd(long unsigned int val)
{
long unsigned int t;
t = b_random(); // compute a 32-bit random number
val = t % val + 1L; // now keep it within requested range
return val;
}
SIGNAL(TIM0_OVF_vect)
{
uint8_t n;
uint8_t mask;
uint32_t t;
mask = 0;
for (n=0; n<NUM_LEDS; n++) // for LED 0 through NUM_LED-1...
{
t = pwm[n] & 1; // get low bit of PWM for this LED
mask = mask | (t << n); // move low bit of PWM into proper bit of mask
pwm[n] = (pwm[n] >> 1); // move PWM value one bit to the right
if (t) pwm[n] = pwm[n] + 0x80000000; // rotate original low bit into high bit
if (tics[n]) tics[n]--; // drop this counter if not yet 0
}
PORT_LEDS = PORT_LEDS & ~(MASK_LEDS); // strip off port lines dedicated to LEDs
PORT_LEDS = PORT_LEDS | mask; // turn on the LEDs
} | http://www.seanet.com/~karllunt/firelamp.html | CC-MAIN-2017-13 | refinedweb | 1,207 | 65.35 |
Extensible HTML/XML generator
Project description.3 (released 07/11/2008)
XIST has gained its fourth templating language: UL4 the “Universal Layout Language”. This templating language is similar in capabilities to Django’s templating language. However UL4 templates are compiled to a bytecode format, which makes it possible to implement template renderers in other languages and makes the template code “secure” (i.e.template code can’t open or delete files).
ll.make has gained new actions: GZipAction, GUnzipAction, CallFuncAction, CallMethAction, UL4CompileAction, UL4DumpAction and UL4LoadAction.
The version number for cssutils has been bumped to 0.9.5rc1.
Nodes of type ll.xist.xsc.Comment and ll.xist.xsc.DocType inside attributes are now simply ignored when publishing instead of generating an exception.
All actions in ll.make no longer check whether their inputs are action objects. Non-action objects are simply treated as ancient input data. This also means that most action classes have an input parameter in their constructor again, as this input could now be a constant.
Most attributes of action objects in ll.make can now be action objects themselves, so for example the name of the encoding to be used in an EncodeAction can be the output of another action.
ll.make.ImportAction has been dropped as now the module object can be used directly (e.g. as the input for an XISTPoolAction object).
ll.misc.xmlescape now escapes ' as ' for IE compatibility.
Functions ll.misc.xmlescape_text and ll.misc.xmlescape_attr have been added that implement the functionality from XIST 3.2.5 and earlier.
The default parser for XIST is expat now. To switch back to sgmlop simply pass an SGMLOPParser object to the parsing functions:
>>> from ll.xist import parsers >>> node = parsers.parsestring("<a>", parser=parsers.SGMLOPParser())
TOXIC has been split into a compiler module ll.toxicc and an XIST namespace ll.xist.ns.toxic. TOXIC now supports output for SQL Server. The function xml2ora as been renamed to compile (and has a new mode argument for specifying the database type).
The targetroot parameter for ll.make.XISTConvertAction.__init__ has been renamed to root.
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/ll-xist/3.3/ | CC-MAIN-2019-09 | refinedweb | 377 | 52.76 |
To light up an LED with the anode connected to a digital pin, you set the digital pin to HIGH:
void setup(){ pinMode(7, OUTPUT); digitalWrite(7, HIGH); } void loop(){ }
In the void setup() block, we configure GPIO pin 7 as an output with pinMode(7, OUTPUT); and drive it high with digitalWrite(7, HIGH);.
Cathode to GPIO
With an LED’s cathode connected to a digital pin, the anode is connected to Vcc. To turn on the LED, the digital pin is switched LOW, which completes the circuit to ground:
In this case we drive GPIO pin 7 LOW with digitalWrite(7, LOW);. This closes the circuit and allows current to flow from Vcc to ground:
void setup(){ pinMode(7, OUTPUT); digitalWrite(7, LOW); } void loop(){ }
How 7-Segment Displays Work:
To install it, open the Arduino IDE, go to Sketch > Include Library > Add .ZIP Library, then select the SevSeg ZIP file that you downloaded.
Printing Numbers to the Display
This program will print the number “4” to a single digit 7-segment display:
(){ sevseg.setNumber(4); sevseg.refreshDisplay(); }
In this program, we create a sevseg object on line 2. To use additional displays, you can create another object and call the relevant functions for that object. The display is initialized with the sevseg.begin() function on line 11. The other functions are explained below:
hardwareConfig = COMMON_CATHODE; This sets the type of display. I’m using a common cathode, but if you’re using a common anode then use COMMON_ANODE instead.
byte numDigits = 1; This sets the number of digits on your display. I’m using a single digit display, so I set it to 1. If you’re using a 4 digit display, set this to 4.
byte digitPins[] = {}; Creates an array that defines the ground pins when using a 4 digit or multi-digit display. Leave it empty if you have a single digit display. For example, if you have a 4 digit display and want to use Arduino pins 10, 11, 12, and 13 as the digit ground pins, you would use this: byte digitPins[] = {10, 11, 12, 13};. See the 4 digit display example below for more info.
byte segmentPins[] = {6, 5, 2, 3, 4, 7, 8, 9}; This declares an array that defines which Arduino pins are connected to each segment of the display. The order is alphabetical (A, B, C, D, E, F, G, DP where DP is the decimal point). So in this case, Arduino pin 6 connects to segment A, pin 5 connects to segment B, pin 2 connects to segment C, and so on.
resistorsOnSegments = true; This needs to be set to true if your current limiting resistors are in series with the segment pins. If the resistors are in series with the digit pins, set this to false. Set this to true when using multi-digit displays.
sevseg.setBrightness(90); This function sets the brightness of the display. It can be adjusted from 0 to 100.
sevseg.setNumber(); This function prints the number to the display. For example, sevseg.setNumber(4); will print the number “4” to the display. You can also print numbers with decimal points. For example, to print the number “4.999”, you would use sevseg.setNumber(4999, 3);. The second parameter (the 3) defines where the decimal point is located. In this case it’s 3 digits from the right most digit. On a single digit display, setting the second parameter to “0” turns on the decimal point, while setting it to “1” turns it off.
sevseg.refreshDisplay(); This function is required at the end of the loop section to continue displaying the number.
Count Up Timer
This simple program will count up from zero to 9 and then loop back to the start:
(){ for(int i = 0; i < 10; i++){ sevseg.setNumber(i, i%2); delay(1000); sevseg.refreshDisplay(); } }
The code is similar to the previous sketch. The only difference is that we create a count variable “i” in the for statement on line 16 and increment it one number at a time.
The sevseg.setNumber(i, i%2); function prints the value of i. The i%2 argument divides i by 2 and returns the remainder, which causes the decimal point to turn on every other number.
The count up timer is a nice way to demonstrate the basics of how to program the display, but now let’s try to make something more interesting.
Rolling Dice
This example consists of a push button and a single 7 segment display. Every time the push button is pressed and held, the display loops through numbers 0-9 rapidly. Once the button is released, the display continues to loop for a period of time almost equal to the time the button was pressed, and then displays a number along with the decimal point to indicate the new number.
To build the circuit (with the 5161AS display), connect it like this:
Then upload this program to the Arduino:
#include "SevSeg.h" SevSeg sevseg; const int buttonPin = 10; // the pin that the pushbutton is attached to int buttonState = 0; // current state of the button int lastButtonState = LOW; // previous state of the button int buttonPushCounter = 0; // counter for the number of button presses long counter = 0; long max_long_val = 2147483647); pinMode(buttonPin, INPUT_PULLUP); Serial.begin(9600); lastButtonState = LOW; } void loop(){ buttonState = digitalRead(buttonPin); if(buttonState == HIGH){ buttonState = LOW; } else buttonState = HIGH; if(buttonState == HIGH){ Serial.println("on"); lastButtonState = HIGH; buttonPushCounter++; if(counter < max_long_val) counter++; buttonPushCounter %= 9; sevseg.setNumber(buttonPushCounter, 1); sevseg.refreshDisplay(); delay(100 - (counter%99)); } else{ Serial.println("off"); if(lastButtonState == HIGH){ Serial.println("in"); buttonPushCounter++; buttonPushCounter %= 7; if(buttonPushCounter == 0) buttonPushCounter = 1; counter--; sevseg.setNumber(buttonPushCounter, 1); sevseg.refreshDisplay(); delay(100 - (counter%99)); if(counter == 0){ lastButtonState = LOW; sevseg.setNumber(buttonPushCounter, 0); sevseg.refreshDisplay(); } } } }
4 Digit 7-Segment Displays
So far we have only worked with single digit 7-segment displays. To display information such as the time or temperature, you will want to use a 2 or 4 digit display, or connect multiple single digit displays side by side.
This simple program will print the number “4.999” to the display:
#include "SevSeg.h" SevSeg sevseg; void setup(){); sevseg.setBrightness(90); } void loop(){ sevseg.setNumber(4999, 3); sevseg.refreshDisplay(); }
In the code above, we set the number of digits in line 5 with byte numDigits = 4;.
Since multi-digit displays use digit pins, we also need to define which Arduino pins will connect to the digit pins. Using byte digitPins[] = {10, 11, 12, 13}; on line 6 sets Arduino pin 10 as the first digit pin, Arduino pin 11 to the second digit pin, and so on.
To print numbers with a decimal point, we set the second parameter in sevseg.setNumber(4999, 3); to three, which puts it three decimal places from the right most digit.
Displaying Sensor Data
Now let’s read the temperature from a thermistor and display it on a 4 digit display.
Connect the circuit like this:
If you want to learn more about thermistors, check out our tutorial on using a thermistor with an Arduino.
Once everything is connected, upload this code to the Arduino:
#include "SevSeg.h" SevSeg sevseg; int ThermistorPin = 0; int Vo; float R1 = 10000; float logR2, R2, T; float c1 = 1.009249522e-03, c2 = 2.378405444e-04, c3 = 2.019202697e-07; void setup() { byte numDigits = 4; byte digitPins[] = {10, 11, 12, 13}; byte segmentPins[] = {9, 2, 3, 5, 6, 8, 7, 4}; bool resistorsOnSegments = true; byte hardwareConfig = COMMON_CATHODE; sevseg.begin(hardwareConfig, numDigits, digitPins, segmentPins, resistorsOnSegments); }; T = (T * 9.0) / 5.0 + 32.0; //Comment out for Celsius static unsigned long timer = millis(); if (millis() >= timer) { timer += 300; sevseg.setNumber(T, 2); } sevseg.refreshDisplay(); }
This will display the temperature in Fahrenheit on the 7-segment display. To display the temperature in Celsius, comment out line 28.
By itself, the display will update every time the temperature changes even slightly. This creates an annoying flickering. In order to deal with this, we introduce a timer mechanism, where we only read the value from the thermistor every 300 milliseconds (lines 30 to 34).
The temperature variable “T” is printed to the display on line 35 with sevseg.setNumber(T, 2, false);.
Hopefully this article should be enough to get you started using seven segment displays. If you want to display readings from other sensors, the example program above can easily be modified to do that. If you have any questions or trouble setting up these circuits, feel free to leave a comment below.
How to Use Solar Panels to Power the Arduino
June 25, 2020
Thanks for these tutorials about using the NTC probe with the Arduino.
What I would also like is using setpoint buttons to control a relais.
Many thanks.
Thank you so much! FINALLY got it working, now I can tinker all I want :)
This is by far the clearest explanation out there.
kid wow
Lovely tutorial got it to work easily. I have 1 problem which is that I need to let the display count down with all 4 numbers displayed at once. Right now with the count up code above because of the `delay(1000);` it displays 1 number at a time. It starts with the first number on the display then goes on to the next until it went through all 4 and it will go back to the first. Is there a way around it to display all 4 digits at once while still counting down in seconds?
i have this same problem
Shouldn’t you use 8 resistors, one on each segment channel instead of the ground? Like this , a single segment used will burn brighter than all 8 segments used.
When you put a resistor on each segment the resistance goes up a fraction for every segment used.
Hi ,good project ,what is with negative temperature ?How can put “–” mark?
This was a great help, thanks!
Nice and simple explanation thanks. How would this work for positions of stepper motors on a CNC having three separate displays one each for x, y and z axis?
Thanks , my Friends for review.
i have problem with rolling dice, i want to print numbers (0, 30) in 2 digit 7 segment screen i i dont know what to change in code.
in 7 segment display it has to count 0-9 then blink 9 three times then reverse from 9-0
i need a code, please send it .
A wonderful tutorial and library. My question is, in case of 4 digits common cathode display, the cathodes of digits are directly connected to arduino pins, which i hope turn to LOW to sink an display the corresponding LEDs. However in most projects it is advisable to use an NPN transistor on the common cathod pins to sink high currents. The base of transistor then needs to be HIGH to activate the digit. How can we connect this configuration using this library? or there is another Library for that.
could you please explain what is the mean by below problem from compiler??
Arduino: 1.8.2 (Windows 10), Board: “Arduino Nano, ATmega328″
WARNING: Category ” in library UIPEthernet is not valid. Setting to ‘Uncategorized’
E:\Robot level\ARDUINOOOO\seven\seven.ino:2:20: fatal error: SevSeg.h: No such file or directory
#include “SevSeg.h”
^
compilation terminated.
exit status 1
Error compiling for board Arduino Nano.
This report would have more information with
“Show verbose output during compilation”
option enabled in File -> Preferences.
ermmm… i tried to run this website’s code on arduino, but it gave me an error message saying”exit status 1
SevSeg.h: No such file or directory”, and it highlighted the command”
#include “SevSeg.h” “. what am I supposed to do with this
I have got the same problem :/
There is 2 pin COM that used 1 or 2 at once ?
Hello!
Thank you very much, very good.
My question is that the figure shows a resistance of 100 kohm, but in the program line 14: “float R1 = 10000 ;,” which is only 10kohm. Why is this so?
‘To display the temperature in Celsius, comment out line 28.’
Line 26 not 28
you need to add the file to the library the only way i know if you have downloaded the arduino IDE software: | https://www.circuitbasics.com/arduino-7-segment-display-tutorial/ | CC-MAIN-2020-29 | refinedweb | 2,052 | 65.12 |
In this problem, we are given a matrix of size nXm. Our task is to create a program to find the maximum element in a Matrix in C++.
Problem Description − Here, we need to simply find the largest element of matrix.
Let’s take an example to understand the problem,
mat[3][3] = {{4, 1, 6}, {5, 2, 9}, {7, 3, 0}}
9
The solution to the problem is by simply traversing the matrix. This is done by using two nested loops, and checking whether each element of the matrix is greater than maxVal. And return the maxVal at the end.
Program to illustrate the working of our solution,
#include <iostream> using namespace std; #define n 3 #define m 3 int CalcMaxVal(int mat[n][m]) { int maxVal = mat[0][0]; for (int i = 0; i < n; i++) for (int j = 0; j < m; j++) if (mat[i][j] > maxVal) maxVal = mat[i][j]; return maxVal; } int main(){ int mat[n][m] = {{4, 1, 6},{5, 2, 9},{7, 3, 0}}; cout<<"The maximum element in a Matrix is "<<CalcMaxVal(mat); return 0; }
The maximum element in a Matrix is 9 | https://www.tutorialspoint.com/program-to-find-the-maximum-element-in-a-matrix-in-cplusplus | CC-MAIN-2021-43 | refinedweb | 191 | 63.32 |
NumPy
NumPy is a Linear Algebra Library for Python. We can use it to create vectors and arrays or matrices of numbers and perform mathematical operations on them. It is one of the most important libraries in Data Science as it is relied upon by almost all of the libraries in the Python Data Science stack as one of their main building blocks.
You are encouraged to scroll through the official documentation of NumPy for clearer understanding.
This tutorial is intended for those readers who have basic understanding of the Python syntax.
Installing NumPy
You can install numpy simply using pip:
pip install numpy
Using NumPy
Once you’ve installed NumPy you can import it as a library:
import numpy as np
Arrays
In NumPy, Arrays can be one-dimensional, called vectors; or two-dimensional, called matrices. However, even a matrix can consist of just one row or column.
Creating Arrays
We can create NumPy Arrays using the built-in methods, as well as convert Python lists into NumPy arrays.
Using Built-in Methods
There are lots of built-in methods to generate NumPy Arrays:
arange
Returns evenly spaced integers within the given interval.
np.arange(0,10)
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
You can also mention the step-size as the third parameter.
np.arange(0,11,2)
array([ 0, 2, 4, 6, 8, 10])
zeros and ones
Generates arrays of zeros or ones of the given shape.
np.zeros(5) #an integer as parameter indicates a vector of given length
array([ 0., 0., 0., 0., 0.])
np.zeros((3,5)) #a tuple of two integers as parameter indicates a matrix with (number of rows, number of columns)
array([[ 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0.]])
Similarly, for the function ‘ones’:
np.ones(5)
array([ 1., 1., 1., 1., 1.])
np.ones((5,5))
array([[ 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1.]])
linspace
Returns evenly spaced real numbers over the given interval.
np.linspace(0,10,3) #The parameters are start point, end point, and the number of elements to be returned (50 by default)
array([ 0., 5., 10.])
np.linspace(0,10,50)
array([. ])
We can also mention the data type of elements to be returned:
np.linspace(0,10,4,dtype=int)
array([ 0, 3, 6, 10])
eye
Returns an identity matrix of the given size.
Here only one parameter is required as the output is always a square matrix.
np.eye(4)
array([[ 1., 0., 0., 0.], [ 0., 1., 0., 0.], [ 0., 0., 1., 0.], [ 0., 0., 0., 1.]])
Creating Random Arrays
These are specially useful for creating dummy data or initialising random data. NumPy has many methods to create arrays with random values:
rand
Creates an array of the specified shape and with random samples from a Uniform Distribution over [0, 1).
np.random.rand(2)
array([ 0.56594261, 0.43410108])
np.random.rand(2,3)
array([[ 0.07842831, 0.46672682, 0.25314248], [ 0.30500035, 0.00358737, 0.84292352]])
randn
Creates an array of the specified shape and with random samples from the Standard Normal Distribution of mean 0 and variance 1.
np.random.randn(2)
array([ 1.20835183, 0.61915344])
np.random.randn(2,3)
array([[-0.53824087, -0.968721 , -1.61452392], [ 1.02623892, 0.26075377, -1.87565154]])
randint
Returns one or more random integers from the given range.
np.random.randint(1,10) #the parameters are low (inclusive) and high (exclusive) of the range
8
np.random.randint(1,100,10) #we can use the third parameter to specify the number of elements in the returned array
array([93, 38, 25, 41, 42, 29, 90, 39, 93, 14])
Converting a Python List to NumPy Array
We can create a NumPy array by converting a python list or even list of lists.
Let’s start by creating a list.
some_list = [2,5,1,4,7] some_list
[2, 5, 1, 4, 7]
We can convert it into a NumPy array by simply calling out the ‘array’ function:
np.array(some_list)
array([2, 5, 1, 4, 7])
If the list contains even one floating point value, all the elements of the resulting array will be converted into float.
another_list = [2,5,1,4,7.0] np.array(another_list)
array([ 2., 5., 1., 4., 7.])
To create a matrix, we can use a list of lists. Each individual inner list will represent one row of the resulting matrix.
some_matrix = [[1,2,3],[4,5,6],[7,8,9]] some_matrix
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
np.array(some_matrix)
array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
Array Attributes and Methods
There are many useful attributes and methods for a Numpy array.
Let’s start by building two arrays as follows:
my_array = np.arange(30) random_array = np.random.randint(0,50,10)
my_array
array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29])
random_array
array([35, 34, 45, 3, 42, 32, 30, 32, 24, 7])
Reshape
Returns an array with the same data but of the specified new shape.
my_array.reshape(6,5)
array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24], [25, 26, 27, 28, 29]])
It works only if the specified shape can accomodate all the elements of the original array.
If N is the number of elements, and (R,C) is the new shape, then the following condition should hold:
N = R * C
However, reshape allows you to skip one of the parameters (either the number of rows or the number of columns, but not both!).
To do this, you can just specify -1 as one of the arguments, and NumPy will determine the suitable value on its own.
my_array.reshape(3,-1)
array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [10, 11, 12, 13, 14, 15, 16, 17, 18, 19], [20, 21, 22, 23, 24, 25, 26, 27, 28, 29]])
my_array.reshape(-1,6)
array([[ 0, 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17], [18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29]])
max, min, argmax, argmin
These are used to return the maximum or minimum values, or their index locations.
random_array #the one we declared above
array([35, 34, 45, 3, 42, 32, 30, 32, 24, 7])
random_array.max() #returns the largest element
45
random_array.argmax() #returns the index value of the largest element
2
random_array.min() #returns the smallest element
3
random_array.argmin() #returns the index value of the smallest element
3
Shape
It returns the shape of the array. Please note that shape is just an attribute of the NumPy arrays; it is not a method.
my_array.shape #this is a vector, hence only one term is returned.
(25,)
To convert it into a one-dimensional matrix, we will use the ‘reshape’ function:
my_array.reshape(1,30)
array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29]])
my_array.reshape(1,30).shape #this is a matrix
(1, 30)
dtype
It is used to return the datatype of the array. This is also an attribute, not a method.
my_array.dtype
dtype('int32')
Indexing and Selection
This allows us to select specific elements or groups of elements from an array.
my_array #declared above
array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29])
Indexing and Slicing
It is used to get the element at a particular index or in a particular range. It is similar to the method in Python Lists.
my_array[10]
10
my_array[1:5] #low inclusive and high exclusive
array([1, 2, 3, 4])
We can skip the low value to indicate “start from the first element”, and the high value to indicate “go all the way till the end”:
my_array[:5]
array([0, 1, 2, 3, 4])
my_array[24:]
array([24, 25, 26, 27, 28, 29])
my_array[:]
array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29])
Indexing and Slicing a Matrix
some_matrix #declared above
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
np_matrix = np.array(some_matrix) np_matrix
array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
np_matrix[1] #gives specified row
array([4, 5, 6])
np_matrix[1][2] #gives specified element: matrix[row][column]
6
np_matrix[1,2] #same as previous: matrix[row,column]
6
np_matrix[1:3] #slicing rows in matrix
array([[4, 5, 6], [7, 8, 9]])
np_matrix[1:3,1:3] #slicing rows as well as columns in matrix
array([[5, 6], [8, 9]])
np_matrix[:2,1:]
array([[2, 3], [5, 6]])
random_array #declared above
array([35, 34, 45, 3, 42, 32, 30, 32, 24, 7])
boolean_array = random_array > 20 #returns an aray with value=True where condition holds and value=False otherwise boolean_array
array([ True, True, True, False, True, True, True, True, True, False], dtype=bool)
random_array[boolean_array] #returns only those elements for which the corresponding element in boolean_array is True
array([35, 34, 45, 42, 32, 30, 32, 24])
random_array[random_array > 20] #same as previous
array([35, 34, 45, 42, 32, 30, 32, 24])
NumPy Operations
There are many mathematical and universal operations which can be applied on NumPy arrays.
Let’s start by creating a new array:
new_array = np.arange(10) new_array
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
new_array + new_array #returns 'element by element' sum
array([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18])
new_array - new_array #returns 'element by element' difference
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
new_array * new_array #returns 'element by element' product
array([ 0, 1, 4, 9, 16, 25, 36, 49, 64, 81])
new_array / new_array #returns 'element by element' quotient
C:\Users\Pranav\Anaconda3\lib\site-packages\ipykernel_launcher.py:1: RuntimeWarning: invalid value encountered in true_divide """Entry point for launching an IPython kernel.
array([ nan, 1., 1., 1., 1., 1., 1., 1., 1., 1.])
The first element is 0/0, which is not allowed in mathematics. However, instead of giving an error, Python represents it as a nan (not a number)
new_array ** 2 #returns each element to the power 2
array([ 0, 1, 4, 9, 16, 25, 36, 49, 64, 81])
1 / new_array #returns inverse of each element
C:\Users\Pranav\Anaconda3\lib\site-packages\ipykernel_launcher.py:1: RuntimeWarning: divide by zero encountered in true_divide """Entry point for launching an IPython kernel.
array([ inf, 1. , 0.5 , 0.33333333, 0.25 , 0.2 , 0.16666667, 0.14285714, 0.125 , 0.11111111])
Here, the first element is 1/0, which represents infinity. Python indicates this value by ‘inf’.
np.sqrt(new_array) # same as new_array ** (1/2)
array([ 0. , 1. , 1.41421356, 1.73205081, 2. , 2.23606798, 2.44948974, 2.64575131, 2.82842712, 3. ])
np.sin(new_array) #returns the sine value of each element
array([ 0. , 0.84147098, 0.90929743, 0.14112001, -0.7568025 , -0.95892427, -0.2794155 , 0.6569866 , 0.98935825, 0.41211849]) | http://www.datascribble.com/blog/data-science/python/introductory-guide-numpy/ | CC-MAIN-2019-09 | refinedweb | 1,928 | 57 |
Python modular import has 2 options,
1. Absolute modular imports
2. Relative modular imports
- Absolute imports exist from the beginning of python release; however, relative import came once after python 2.7.
- Relative imports use module?s name attribute to determine the module?s position in the package hierarchy.
A project with multiple hierarchical level directories work as a package with some attributes.
1. There should be a __init__.py file(usually kept empty) in a directory. When a python scripts need to look down into the hierarchical level, the existance of __init__.py file will lead the program flow. Otherwise ?package not found error? occurs.
2. The script should be called while running by standing out from the package.
python -m package.subpackage.subsubpackage.pyscript
We can run python scripts in 2 ways.
1.We step into the python script directory and call python mypy.py [Current script is a top-level module]
- In this case, the import module canot step out of the directory in hierarchical tree.
- Here code runs on global namespace.
2. We stand ahead of the python package and call python -m myPackage.subPackage.subsubPackage.myScript [modular level module]
- Here -m tells the modular initialization.
- We drop .py at the end of script extension.
- We direct the script to run by the use of namesapces.
- Namespapces are just the names given to each directory in the package.
- This script run call tells exactly where the module is actually located inside the package.
- Here the code runs as a part of an imported module.
- As this code runs not in global namespace, the __name__ becomes the name of the module.
Namespace for python script :-
Usually, say for example if we have a python script called myPython.py, when we run this script, a namespace will be added to each module, and by default it is __main__.
def funcPy():
pass
if __name__ == ?__main__?:
print ?__main__?
else:
print __name__
So basically, python has name module, which holds all namesapces inside. By deafult when a script runs, it grabs the global namespace __main__. We can call this name of the script by __name__ variable. Samelike, we can get package name __package__ and directoty name __dir__. If a script has multiple import and from statements, those modules will take their file name as namesapces, but our primary running script will take __main__ namesapce. However, we can manually change this name. One way is to change in /python2.7/lib/__init__.py folder and other way is to re-initialized inside the code.
By default, __package__ will be assigned to None & __name__ will be assigned to __main__.
******************Don?t Forget*****************
When we use relative modular imports, we shoud stay outside the package and call,
python -m myPackage.subPackage.subsubPackage.myScript
Otherwise, runtime error will rise.
?You?re not using it as a package.?
**********************************************
The __init__.py files are required to make Python treat the directories as containing packages; this is done to prevent directories with a common name, such as string, from unintentionally hiding valid modules that occur deeper on the module search path. In the simplest case, __init__.py can just be an empty file, but it can also execute initialization code for the package or set the __all__ variable, described later.
1. from packagePy import *
2. __all__ = [?module1?, ?module2?, module2]
The example for the above implementation is,
PackagePy
1. module0
2. module1
3. module2
4. module3
5. module4
The difference between above 2 is that we don?t have to import all modules inside the package in (2)nd one.
Python searches a list of directories to resolve names while import statements. Because these names can be any directory, or arbitrary names can be added by the developer. However, the developers have to worry about directories that happen to share the same name with a valid Python module, such as ?string?. To differentiate those 2 modules(default python lib module & user defined module), relative modular import comes into play. The creation of __init__.py will direct the script to search for the namespace in sub directories on hierarchical level of the package. If you remove the __init__.py file, Python will no longer look for submodules inside that directory, so attempts to import the module will fail.
Reference :-
-
- | https://911weknow.com/python-__init__-py-modular-imports | CC-MAIN-2021-04 | refinedweb | 709 | 67.65 |
TypeScript has become a very popular language to write JavaScript in, and with good reason. Its typing system and compiler are able to catch a variety of bugs at compile time before your software has even run, and the additional code editor functionality makes it a very productive environment to be a developer in.
But what happens when you want to write a library or package in TypeScript, yet ship JavaScript so that your end users don’t have to manually compile your code? And how do we author using modern JavaScript features like ES modules whilst still getting all the benefits of TypeScript?
This article aims to solve all these questions and provide you with a setup that’ll let you confidently write and share TypeScript libraries with an easy experience for the consumers of your package.
Getting started
The first thing we’re going to do is set up a new project. We’re going to create a basic maths package throughout this tutorial — not one that serves any real-world purpose — because it’ll let us demonstrate all the TypeScript we need without getting sidetracked on what the package actually does.
First, create an empty directory and run
npm init -y to create a new project. This will create your
package.json and give you an empty project to work on:
$ mkdir maths-package
$ cd maths-package
$ npm init -y
And now we can add our first and most important dependency: TypeScript!
$ npm install --save-dev typescript
At the time of writing, the latest version of TypeScript is 3.8.
Once we’ve got TypeScript installed, we can initialize a TypeScript project by running
tsc --init.
tsc is short for “TypeScript Compiler” and is the command line tool for TypeScript.
To ensure you run the TypeScript compiler that we just installed locally, you should prefix the command with
npx.
npx is a great tool that will look for the command you gave it within your
node_modules folder, so by prefixing our command, we ensure we’re using the local version and not any other global version of TypeScript that you might have installed.
$ npx tsc --init
This will create a
tsconfig.json file, which is responsible for configuring our TypeScript project. You’ll see that the file has hundreds of options, most of which are commented out (TypeScript supports comments in the
tsconfig.json file). I’ve cut my file down to just the enabled settings, and it looks like this:
{ "compilerOptions": { "target": "es5", "module": "commonjs", "strict": true, "esModuleInterop": true, "forceConsistentCasingInFileNames": true } }
We’ll need to make some changes to this config to enable us to publish our package using ES modules, so let’s go through the options now.
Configuring tsconfig.json options
If you’re ever looking for a comprehensive list of all possible
tsconfig options, the TypeScript site has you covered with this handy reference.
Let’s start with
target. This defines the level of JavaScript support in the browsers you’re going to be serving your code in. If you have to deal with an older set of browsers that might not have all the latest and greatest features, you could set this to
ES2015. TypeScript will even support
ES3 if you really need maximum browser coverage.
We’ll go for
ES2015 here for this module, but feel free to change this accordingly. As an example, if I was building a quick side project for myself and only cared about the cutting-edge browsers, I’d quite happily set this to
ES2020.
Choosing a module system
Next, we have to decide which module system we’ll use for this project. Note that this isn’t which module system we’re going to author in, but which module system TypeScript’s compiler will use when it outputs the code.
What I like to do when publishing modules is publish two versions:
- A modern version with ES modules so that bundling tools can smartly tree–shake away code that isn’t used, and so a browser that supports ES modules can simply import the files
- A version that uses CommonJS modules (the
requirecode you’ll be used to if you work in Node) so older build tools and Node.js environments can easily run the code
We’ll look later at how to bundle twice with different options, but for now, let’s configure TypeScript to output ES modules. We can do this by setting the
module setting to
ES2020.
Now your
tsconfig.json file should look like this:
{ "compilerOptions": { "target": "ES2015", "module": "ES2020", "strict": true, "esModuleInterop": true, "forceConsistentCasingInFileNames": true } }
Writing some code
Before we can talk about bundling code, we need to write some! Let's create two small modules that both export a function, and a main entry file for our module that exports all our functions.
I like to put all my TypeScript code in a
src directory because that means we can point the TypeScript compiler directly at it, so I’ll create
src/add.ts with the following:
export const add = (x: number, y:number):number => { return x + y; }
And I’ll create
src/subtract.ts, too:
export const subtract = (x: number, y:number):number => { return x - y; }
And finally,
src/index.ts will import all our API methods and export them again:
import { add } from './add.js'
import { subtract } from './subtract.js'

export { add, subtract }
This means that a user can get at our functions by importing just what they need, or by getting everything:
import { add } from 'maths-package';

import * as MathsPackage from 'maths-package';
Notice that in
src/index.ts my imports include file extensions. This isn’t necessary if you only want to support Node.js and build tools (such as webpack), but if you want to support browsers that support ES modules, you’ll need the file extensions.
Compiling with TypeScript
Let’s see if we can get TypeScript compiling our code. We’ll need to make a couple of tweaks to our
tsconfig.json file before we can do that:
{ "compilerOptions": { "target": "ES2015", "module": "ES2020", "strict": true, "esModuleInterop": true, "forceConsistentCasingInFileNames": true, "outDir": "./lib", }, "include": [ "./src" ] }
The two changes we’ve made are:
compilerOptions.outDir– this tells TypeScript to compile our code into a directory. In this case, I’ve told it to name that directory
lib, but you can name it whatever you’d like
include– this tells TypeScript which files we’d like to be included in the compilation process. In our case, all our code sits within the
srcdirectory, so I pass that in. This is why I like keeping all my TS source files in one folder — it makes the configuration really easy
Let’s give this a go and see what happens! I find when tweaking my TypeScript configuration the approach that works best for me is to tweak, compile, check the output, and tweak again. Don’t be afraid to play around with the settings and see how they impact the final result.
To compile TypeScript, we will run
tsc and use the
-p flag (short for “project”) to tell it where our
tsconfig.json lives:
npx tsc -p tsconfig.json
If you have any type errors or configuration issues, this is where they will appear. If not, you should see nothing — but notice you have a new
lib directory with files in it! TypeScript won’t merge any files together when it compiles but will convert each individual module into its JavaScript equivalent.
Let’s look at the three files it’s outputted:
// lib/add.js
export const add = (x, y) => {
  return x + y;
};

// lib/subtract.js
export const subtract = (x, y) => {
  return x - y;
};

// lib/index.js
import { add } from './add.js';
import { subtract } from './subtract.js';
export { add, subtract };
They look very similar to our input but without the type annotations we added. That’s to be expected: we authored our code in ES modules and told TypeScript to output in that form, too. If we’d used any JavaScript features newer than ES2015, TypeScript would have converted them into ES2015-friendly syntax, but in our case, we haven’t, so TypeScript largely just leaves everything alone.
This module would now be ready to publish onto npm for others to consume, but we have two problems to solve:
- We’re not publishing any type information in our code. This doesn’t cause breakages for our users, but it’s a missed opportunity: if we publish our types, too, then people using an editor that supports TypeScript and/or people writing their apps in TypeScript will get a nicer experience.
- Node doesn’t yet support ES modules out of the box. It’d be great to publish a CommonJS version, too, so Node works with no extra effort. ES module support is coming in Node 13 and beyond, but it’ll be a while before the ecosystem catches up.
Publishing type definitions
We can solve the type information issue by asking TypeScript to emit a declaration file alongside the code it writes. This file ends in
.d.ts and will contain type information about our code. Think of it like source code except rather than containing types and the implementation, it only contains the types.
Let’s add
"declaration": true to our
tsconfig.json (in the
"compilerOptions" part) and run
npx tsc -p tsconfig.json again.
Top tip! I like to add a script to my
package.jsonthat does the compiling so it’s less to type:
"scripts": { "tsc": "tsc -p tsconfig.json" }
And then I can run
npm run tscto compile my code.
You’ll now see that alongside each JavaScript file — say,
add.js — there’s an equivalent
add.d.ts file that looks like this:
// lib/add.d.ts
export declare const add: (x: number, y: number) => number;
So now when users consume our module, the TypeScript compiler will be able to pick up all these types.
Publishing to CommonJS
The final part of the puzzle is to also configure TypeScript to output a version of our code that uses CommonJS. We can do this by making two
tsconfig.json files, one that targets ES modules and another for CommonJS. Rather than duplicate all our configuration, though, we can have the CommonJS configuration extend our default and override the
modules setting.
Let’s create
tsconfig-cjs.json:
{ "extends": "./tsconfig.json", "compilerOptions": { "module": "CommonJS", "outDir": "./lib/cjs" }, }
The important part is the first line, which means this configuration inherits all settings from
tsconfig.json by default. This is important because you don’t want to have to sync settings between multiple JSON files. We then override the settings we need to change. I update
module accordingly and then update the
outDir setting to
lib/cjs so that we output to a subfolder within
lib.
At this point, I also update the
tsc script in my
package.json:
"scripts": { "tsc": "tsc -p tsconfig.json && tsc -p tsconfig-cjs.json" }
And now when we run
npm run tsc, we’ll compile twice, and our
lib directory will look like this:
lib
├── add.d.ts
├── add.js
├── cjs
│   ├── add.d.ts
│   ├── add.js
│   ├── index.d.ts
│   ├── index.js
│   ├── subtract.d.ts
│   └── subtract.js
├── index.d.ts
├── index.js
├── subtract.d.ts
└── subtract.js

1 directory, 12 files
This is a bit untidy; let’s update our ESM output to output into
lib/esm by updating the
outDir option in
tsconfig.json accordingly:
lib
├── cjs
│   ├── add.d.ts
│   ├── add.js
│   ├── index.d.ts
│   ├── index.js
│   ├── subtract.d.ts
│   └── subtract.js
└── esm
    ├── add.d.ts
    ├── add.js
    ├── index.d.ts
    ├── index.js
    ├── subtract.d.ts
    └── subtract.js

2 directories, 12 files
Feel free to have your own naming conventions or directory structures — this is just what I like to go with, but that doesn’t mean you have to as well!
Preparing to publish our module
We now have all the parts we need to publish our code to npm. The last step is to tell Node and our users’ preferred bundlers how to bundle our code.
The first property in
package.json we need to set is
main. This is what defines our primary entry point. For example, when a user writes
const package = require('maths-package'), this is the file that will be loaded.
To maintain good compatibility, I like to set this to the CommonJS source since, at the time of writing, that’s what most tools expect by default. So we’ll set this to
./lib/cjs/index.js.
Next, we’ll set the
module property. This is the property that should link to the ES modules version of our package. Tools that support this will be able to use this version of our package. So this should be set to
./lib/esm/index.js.
Next, we’ll add a
files entry to our
package.json. This is where we define all the files that should be included when we publish the module. I like to use this approach to explicitly define what files I want included in our final module when it’s pushed to npm.
This lets us keep the size of our module down — we won’t publish our
src files, for example, and instead publish the
lib directory. If you provide a directory in the
files entry, all its files and subdirectories are included by default, so you don’t have to list them all.
Top tip! If you want to see which files will be included in your module, run
npx pkgfilesto get a list.
Our
package.json now has these additional three fields in it:
"main": "./lib/cjs/index.js", "module": "./lib/esm/index.js", "files": [ "lib/" ],
There’s one last step. Because we are publishing the
lib directory, we need to ensure that when we run
npm publish, the
lib directory is up to date. The npm documentation has a section about how to do just this — and we can use the
prepublishOnly script. This script will be run for us automatically when we run
npm publish:
"scripts": { "tsc": "tsc -p tsconfig.json && tsc -p tsconfig-cjs.json", "prepublish": "npm run tsc" },
Note that there is also a script called
prepublish, making it slightly confusing which to choose. The npm docs mention this:
prepublishis deprecated, and if you want to run code only on publish, you should use
prepublishOnly.
And with that, running
npm publish will run our TypeScript compiler and publish the module online! I published the package under
@jackfranklin/maths-package-for-blog-post, and whilst I don’t recommend you use it, you can browse the files and have a look. I’ve also uploaded all the code into CodeSandbox so you can download it or hack with it as you please.
Conclusion
And that's it! I hope this tutorial has shown you that getting up and running with TypeScript isn't quite as daunting as it first appears, and with a bit of tweaking, it's possible to get TypeScript outputting the many formats you might need with minimal fuss.
10 Replies to “Publishing Node modules with TypeScript and ES modules”
Thanks for the great article, I’ve really enjoyed it!
Wanted to mention a possible slip: on section “Preparing to publish our module” -> “Prepublish” package.json part, inside the “scripts” object “prepublish” is used, although the article mentions using “prepublishOnly”. If I’m misunderstanding something please ignore this segment of the comment. 🙂
Excellent. Thank you so much.
Thanks Man
Super simple, clear and objective tutorial, I’m not even from a webdev background and could be able to build a simple typescript based package for node and web just following this tutorial.
Thank’s a lot !
It was a very useful article for me.
Based on this article, I was able to reduce the bundle size of my work by 20KB.
(It was a small but very meaningful change.)
Thank you very much.
Excellent presentation. Thanks for the great instruction.
You should really read the discussion on this issue:
The short version is that, depending on the consumer’s build toolchain, it’s possible to wind up importing both the ESM and CJS versions of the library *at the same time*, if you publish both like this. The latest advice from that issue is to only publish CJS to NPM for now, unless the “module working group” figures out a way to ensure loaders only load one or the other.
It’s a nonsense. Most mainstream packages bundle both CJS and ESM and are perfectly fine. And the only person in that thread who suggested it’s dangerous was You.
Wes Wigham, a TS team member, says
> Attempting to ship esm “side-by-side” is just going to create runtime confusion as you have the esm version and the cjs version of your package both being included via different means
Do you have examples of “mainstream packages” that ship both types? I would genuinely like to follow best practices and it’s always good to have a well-tested model to follow. (Angular provides both via a complex series of post-install hooks, which sounds like a terrible idea for small general-purpose libraries.)
Hi.. This was a great help. Although, I have a question. Say, I used some dependencies in my add.ts or subtract.ts. How can I ship the complete package along with the dependencies code.
Last updated on September 30th, 2017
Introduction
As you probably know by now, Firebase is a noSQL database, something a lot of people really like.
But if you come from a SQL background, like I do, sometimes things that are easy for other people end up being a real challenge for us.
I want to show you one way I found on how to integrate a search-bar with Firebase, so that you can start typing and your Ionic app filters all the data in real time.
By the end of this post, you’ll be able to integrate a search-bar with Firebase from your Ionic 2 app, making data filtering easy on yourself without multiple database calls on keystroke.
Make sure to get the code directly from Github so you can follow along with the post:.
Let’s get coding
The first thing you'll need to do is create and configure your app. I usually write out the creation and configuration process in every post, but that makes it harder for me to keep them up to date.
So if you need to know how to create and initialize your Firebase app, just read this tutorial first.
Also, if you click on the link above to get the source code, you’ll have access to my Firebase database with dummy data, which has a list of all the countries in the world ready to be used for this example.
The View
The first thing we’ll do is to create our view, it’s going to be something really simple, open the
home.html file and create the search bar component and a list to loop through the countries.
<ion-searchbar (ionInput)="getItems($event)"></ion-searchbar>

<ion-list>
  <ion-item *ngFor="let country of countryList">
    <h2> {{ country.name }} </h2>
    <h3> Code: <strong>{{ country.code }}</strong> </h3>
  </ion-item>
</ion-list>
The
<ion-searchbar> is an Ionic 2 component that gives us a really nice looking search bar at the top of our file (also remove
padding from
ion-content so it doesn’t look weird).
And the list is looping through an array called countryList, showing our users the country’s name and code.
Now our job is to connect that search bar with the list, so every time the user types something the list gets updated.
The code
Go ahead and open
home.ts and the first thing we’ll do is to import Firebase, after all, we need to get our data from somewhere 😛
import firebase from 'firebase';
Now that Firebase is imported, we’re going to create a couple of variables just before the constructor:
public countryList: Array<any>;
public loadedCountryList: Array<any>;
public countryRef: firebase.database.Reference;
countryRef: Is for creating a database reference so we can pull the data from Firebase.
countryList: Is to store the list of countries we’re pulling from Firebase.
loadedCountryList: Is a bit of a hack, that I will explain when we get to it 🙂
Now, go inside your constructor and create the database reference:
this.countryRef = firebase.database().ref('/countries');
That will open a reference to our Firebase database under the
/countries node.
After that, we’re going to pull our data from Firebase, but before, let me explain something.
There are 2 ways of using a search bar to query data:
- Query the data real-time from the database, meaning it will send a query on every keystroke.
- Pull the data and query it in your phone, meaning it pulls the data once and then filters that array.
We’re going to use the second method, because:
- You can only query for the exact value in Firebase, so if you query for ‘United’ you won’t get ‘United States of America’ as a result.
- If you manage to pull that off, you’ll still be sending a hit to the database on every single keystroke.
Now that we got that out of the way, let’s pull the data from Firebase into the
countryList array:
this.countryRef.on('value', countryList => {
  let countries = [];

  countryList.forEach( country => {
    countries.push(country.val());
    return false;
  });

  this.countryList = countries;
  this.loadedCountryList = countries;
});
We’re creating a
countries array that we'll use to store the Firebase data temporarily.
We’ll loop through the country list and push it into the
countries array.
Once that’s done, we assign both
countryList and
loadedCountryList the value of countries.
By now you should be wondering WTF am I creating an extra array just to store the same data, so it’s time for a little story.
A few months ago, I spent an entire day debugging this same thing, I was trying to filter through a list of US colleges, the list had over 2K items.
Every time you type something in the search bar (as you’ll see next) it tries to initialize the
countryList array and then filter it to see if the string you entered matches an object from the list.
That’s easy to understand, and it’s the behavior you’ll expect, but somehow, it just wasn’t working for me. Wanna guess why?
It turns out that my data was being returned as a promise, and the list was so big that by the time it was ready to be used the
initialize function had already happened.
And it was trying to initialize an undefined array, so everything was breaking 😛
So I came up with a little hack, and created 2 arrays, that way, when I need to initialize my data on keystroke, I just assign the value of the “backup” array, that happens on the spot, and my filter method then filters the right array.
So, now that you know why it’s there, let’s create the initialize function that we’ll use in our filter:
initializeItems(): void { this.countryList = this.loadedCountryList; }
See? Every time we need to initialize our list, we’ll just use the ready-to-use
loadedCountryList instead of calling the data again from Firebase.
And now we just need to create the
getItems() function that is going to run on every keystroke on our search bar.
getItems(searchbar) {
  // Reset items back to all of the items
  this.initializeItems();

  // set q to the value of the searchbar
  var q = searchbar.srcElement.value;

  // if the value is an empty string don't filter the items
  if (!q) {
    return;
  }

  this.countryList = this.countryList.filter((v) => {
    if (v.name && q) {
      if (v.name.toLowerCase().indexOf(q.toLowerCase()) > -1) {
        return true;
      }
      return false;
    }
  });

  console.log(q, this.countryList.length);
}
Every time the function gets called, it:
- Initializes the countryList array.
- Sets q as what's currently inside the search bar.
- Checks if q is empty (if you were deleting) and returns nothing if it is.
- It checks the string against the value of the name property of our countries.
Right now you should have a working search bar that’s filtering against a list on your Firebase database.
A paragraph under the heading "Enter Generics" says the following:
If you look at the source or Javadoc for the List class, for example, you'll see it defined something like this:
public class List<E> {
It is however defined:
public interface List<E> extends Collection<E> {
........
void add(int index, E element);
E get(int index);
Note from the Author or Editor: They are correct. On page 225 the first line of the last example should read "public interface List<E> {"
The inheritance relationship between ObjectList and DateList should be in the reverse direction as described in the book.
"In the simplest case, supposing a DatetList type extends an ObjectList type..."
Note from the Author or Editor: The reviewer is correct. "In the simplest case, supposing an ObjectList type extends a DateList type" should read "In the simplest case, supposing a DateList type extends an ObjectList type"
#include <hallo.h>

Richard Gooch wrote on Sun Jun 02, 2002 at 09:59:18AM:

> > mixer0
> > mixer1
> > mixer2
> > mixer3
> > sequencer
> > sequencer2
>
> I don't see this behaviour on my box. I get exactly the devices I
> expect. I have the es1371 driver. Perhaps your driver has been broken
> in a recent patch. Go find out who hacked it last and harass them :-)

Forget the whole issue. The reason is a faulty configuration script for
devfsd, installed by the alsa-base Debian package. It forced the creation of
those devices without checking for an existing driver. Unfortunately, devfsd
does not remove those devices when being stopped.

Regards,
Eduard.
Internet of SCADA, or, why does my HVAC blow?
We live in a house that was new-built, so it’s got all the modern trimmings. It’s also got all the modern cut corners, including an air conditioning system (two, actually) that even 12 years later we’re still struggling with. It seems that every year or two something else goes wrong, especially with the combined cooling / heat pump unit that handles the upstairs.
I’ve been thinking for a while that I should be able to build a temperature monitor to track how the system is running, to detect problems (loss of freon, etc.) early, and maybe even forestall costly repairs. Maybe. So I asked for some Arduino gear for Christmas, and earlier this summer, I finally started playing around with it.
Then…right on schedule, in the height of the summer heat, our upstairs system stopped cooling again. Our HVAC company came out, pumped two pounds of freon into the system (I really gotta start doing that myself – far cheaper), and scheduled a comprehensive leak search for mid-September (just in case we have to disable the system for a long stretch, we wanted it to be in a season where we might not miss it).
Then just before I went to DEF CON, I noticed (using my 20-year-old Radio Shack thermometer) that the AC unit didn’t seem to be cooling as much as before. After returning, it seemed…okay…but still not ideal, so I rushed a (greatly simplified) monitoring circuit into play. I just got it working this week, and already I’m finding some interesting results.
I’m still trying to figure out the best way to sense thermostat calls for compressor, heat, and fans – do I use clip-on current sensors, inline current sensors, voltage drop sensors, opto-isolators – and how do I integrate those sensors into the 1-Wire bus… So for now, I only have a few temperature sensors.
First, some eye-candy:
Two-day Stripchart
Here, the orange line is one of two sensors on a table (in the next graph they’re individually shown as red and blue). The green line is an outside temperature taken as an average of a few web-accessible weather stations in the area (a few in nearby neighborhoods, plus Dulles airport), so it’s a reasonable approximation of the temperature near my home. Blue is the air temperature at the cold air return directly above the desk (and thermostat), and red is the supply register (output vent) directly above a window, maybe 8 feet from the other three sensors.
One important measurement is the cooling drop produced by the A/C system. Because it’s currently malfunctioning I don’t have the compressor running. But I ran it for three brief periods, about a half hour each, just to see what it looks like on the graph. This is, in fact, the primary reason I wanted to start this project. One typically expects a 10-15 degree temperature drop across an A/C unit’s cooling coil, though the actual drop from cold air return back to the room might be a little less. After we had coolant added in July, my old thermometer measured that drop at just about 10 degrees.
When the compressor ran from about 1:45-2:30 on Tuesday, the supply and return lines were at the same temperature. That is, it showed ZERO cooling effect. When run twice that evening (about 8:00 and again about 11:00) the graph shows 2, maybe 3 degrees of cooling. So, obviously, it’s broken. My long term plan includes emailed and even a beeping alarm unit when this drop habitually reduces below some threshold….so I was glad to see what “broken” looks like so early in the system’s development.
What gets really fun is playing with the furnace fan. For about 90 minutes (after I first turned off the compressor) I left the fan set to “on,” that is, continuously running. The air coming out of the register by the window was consistently 5 or more degrees warmer than what went into the system at the cold air return in the same room. So either I’m getting an ambient heating effect from the vent’s location (in the ceiling, near a large window), or the duct work in the attic is heating things up significantly.
Then I turned off the furnace fan, and the register temperature continued to rise, until I switched to “Circulate,” in which the furnace fan cycles on and off. I’d had no idea how that mode actually worked (I vaguely presumed it was somewhat tied to the thermostat, and might be if the room temperature was actually close to “reasonable”) but here it seems to just be about 15 minutes on, 15 minutes off.
When the fan first kicked in, the register temperature shot up (probably expelling warm air that’s been sitting in the attic ductwork), then it drops a bit, and sort of settles for a bit. Then it drops again (I guess when the fan turns off – again, I really need a sensor on that relay), and then shoots back up again when the fan restarts. You can really see the pattern on Wednesday afternoon, where the low temperature (fan off) seems to be about equal to the room temperature, while the high temperature climbs in a fairly obvious curve.
Finally, about 2:30 on Wednesday I switched the fan back to “constantly on” and saw the temperature rise again, but then it stabilized somewhat lower than the curve I discerned before. Perhaps the constant flow kept the air in the ductwork from warming up exponentially (like in a greenhouse) but heat was still being transferred even to the moving air.
I ended the experimenting about 4:00, when I switched the fan off completely, and the register temperature dropped back to match that of the other sensors in the room (which was pretty close to the outside temperature as well).
In fact, there’s a pretty strong correlation (well, visually, anyway…I’m not enough of a data geek to quantify that correlation) between the outside temperature and that of the air coming through the register. So again, there’s something happening here, either heating in the attic, or some halo effect near the window / ceiling location of the sensor, or maybe a little of both.
Then yesterday I tried something different.
Fan Details
Here, the red and blue lines are the sensors on the table (actually in adjacent holes on a breadboard, so it’s interesting to see the blue sensor lagging the red one), the orange is the output (register) temperature, and the green is the cold air return (about 5 feet above the table). What’s really important is the relationship between the vent and the other three (which kind of give a general ambient room temperature). (these are the default colors my RRD setup uses, not the custom setup I used when I hand-crafted the first graph from logged data).
We know that our A/C will be down for a while, so we elected to just wait until the scheduled leak test in a couple weeks…partially as an experiment in A/C-free living (which our kids don’t appreciate quite as much, BTW). So we put a window fan in the bedroom, right below the A/C register I keep referring to. Overnight, it’s set to pull cooler air in from outside. During the day, it blows air out, on the theory that it’ll pull cooler air from the basement and 1st floor, which has an HVAC system that’s still working. I don’t remember when I switched direction on the fan, but it was probably between 7:30 and 8:00.
Shortly afterwards, the register temperature climbs steadily, which isn’t surprising given the past data and the fact that this window gets full sun in the morning. Then, just to verify the previous days’ data, I turned the furnace fan to continuous on at about 1:30. The temperature at the register dropped over 5 degrees, but still remained significantly higher than the temperature in the room. I turned it back off, and the line climbed back up to resume the earlier slope. Then I had a crazy idea: What if the window fan was sucking air out of the register? I turned it off, and the temperature plummeted, back to an unsteady 2-3 degrees above the room temperature. Turning the furnace fan back on again resumed the high temperature readings from that register, higher than before, but still consistent with the rising temperatures outside (not shown on this graph). When it was finally turned off, with the window turned off, the temperature fell to match the rest of the sensors in the room.
With the window fan and furnace fans both turned off, today’s graph has been four very similar lines, all within about 3 degrees of yesterday’s values at the same time. Certainly, the weather today may be different from that of yesterday or the day before (it got quite cool Tuesday night due to some rains in the area), but I’m hoping that the system will show that the room temperature is a little more stable (and hopefully lower) now that I’m not sucking hot air out of the attic ductwork.
I’m also more than a little concerned about my preliminary conclusion, that the attic adds 5 or more degrees to the air as it passes through the system. If the coil is really expected to drop air temperature by 10-15 degrees, then I’m losing a full 33% efficiency just by exposure to the attic air (and these systems are so efficient to begin with). There’s a roof-mounted ventilation fan, which should be pulling some hot air out of the attic, and monitoring that (and the attic temperature in general) is on my list for this project.
But I feel like the ductwork shouldn’t be absorbing that much heat to begin with. I don’t know if it’s a function of the air return, or the air distribution, or the furnace unit itself, but it really does seem like I may need to do some work up there. Right now, it’s a rat’s nest of flexible ductwork, leading from the furnace to smaller distribution boxes to further flexible ducts, etc. All of them are running at 4-6’ above the attic floor, with long swoops and droops. I seriously wonder whether ripping that all out and installing rigid ducts, at the floor joist level and covered with heaps of blown-in insulation, might make a significant difference here.
It’s also possible that the heat increase isn’t coming from the attic at all, but from the much larger cold air return in the hallway by the kids’ rooms. I’ll need to get another sensor over there to see if that’s the case, but generally, the master bedroom (where all these other sensors are located) feels a lot warmer than the hall, so I’m still leaning towards the attic ductwork being a problem.
Either way, this is an amazing amount of information, and may already be helping me better understand and diagnose our long-running HVAC problems, all from only a couple days’ worth of logging and an Arduino-based sensor that took less than a day to cobble together (ignoring delays from a failed WiFi breakout board). I can’t wait until I have both my HVAC systems fully instrumented, with real local outdoor and attic temperatures as well.
Yay, data! | https://darthnull.org/building/2014/09/05/diy-scada/ | CC-MAIN-2018-51 | refinedweb | 1,971 | 62.21 |
- 0:00
It's not unusual for the needs and requirements for seed data to be different
- 0:04
between the various environments that you need to support.
- 0:07
For instance, unlike lookup data, test data
- 0:10
is often only needed in local development and test environments.
- 0:15
And once you're ready to release a version of your application into production,
- 0:19
you'll want to exclude it.
- 0:21
Up until now, we've been using the debug build configuration by default.
- 0:26
Visual Studio projects also include a release configuration.
- 0:30
As the name suggests, the debug configuration is used for
- 0:34
debugging your application.
- 0:36
While the release configuration is used to build a version of your application
- 0:40
that's optimized for release into your production environment.
- 0:44
It's also possible to create custom build configurations.
- 0:48
For example, you could create a test build configuration to use
- 0:52
when building your application for your test environment.
- 0:55
See the teacher's notes for
- 0:56
more information on how to create custom build configurations.
- 1:00
Preprocessor directives can be used to conditionally include or
- 1:05
exclude code based on build configurations.
- 1:08
Preprocessor directives are instructions for
- 1:11
the compiler to preprocess before it compiles your code.
- 1:15
Lets see how we could update our configuration class to exclude our test
- 1:19
data when we're not building a debug version of our application.
- 1:23
To do that, we can wrap our test data in an #if preprocessor directive
- 1:28
in order to only include that code in the compilation
- 1:31
of our application when the debug symbol is defined.
- 1:36
I'm going to consider everything other than the role lookup data to be test data.
- 1:41
So, I'll place the preprocessor directive on a new line before our artist ID constants.
- 1:47
Preprocessor directives start with a pound sign,
- 1:50
followed by the name of the directive.
- 1:53
Then, we type the name of the symbol that we want to check for.
- 1:58
You can think of preprocessor symbols as variables that can be used
- 2:02
with preprocessor directives.
- 2:05
The DEBUG symbol is defined by the DEBUG build configuration but
- 2:10
not by the release build configuration.
- 2:12
So that's a good symbol for
- 2:13
us to use in order to determine which build configuration is selected.
- 2:18
Symbols are case sensitive, and the DEBUG symbol is defined using uppercase letters.
- 2:24
So it's important to type DEBUG in all uppercase letters and not Debug or debug.
- 2:32
Otherwise the #if directive won't ever evaluate to true.
- 2:41
Preprocessor directives aren't code statements.
- 2:43
So they don't end with a semicolon.
- 2:46
Now let's scroll down to the bottom of our test data and add the endif directive.
- 2:57
#endif. Visual Studio will
- 3:07
give us a subtle visual indication when code isn't being included in compilation.
- 3:13
If we switch to the release build configuration, we can see that the editor
- 3:16
will dim the color of the code in between our preprocessor directives to gray.
- 3:22
If we switch back to debug build configuration,
- 3:25
we can see that the default syntax color highlighting returns,
- 3:29
indicating that the code will be included in the compilation of our application.
- 3:33
Let's test our changes.
- 3:35
Switch to the release build configuration.
- 3:40
Delete or drop our database
- 3:52
Run the update database command.
- 3:58
Update-Database.
- 4:04
And then, run the application without debugging by pressing Ctrl+F5.
- 4:08
And now, we're not getting any comic books returned from
- 4:13
our query. Press Enter twice to continue execution.
- 4:20
Let's double check that the comic book table doesn't contain any data.
- 4:34
And the table doesn't contain any rows just as we expected.
- 4:41
If we switch to the debug build configuration, run
- 4:45
the update database command,
- 4:56
And press F5 to run our application,
- 4:59
we'll see our list of comic books from our test data.
- 5:05
We're just scratching the surface of what's possible with build configurations
- 5:09
and preprocessor directives.
- 5:11
See the teacher's notes for information on both.
- 5:14
Let's do a quick review of the section.
- 5:17
We saw how to enable migrations for
- 5:19
our project using the Enable-Migrations command,
- 5:22
add our initial migration using the Add-Migration command, and
- 5:27
update our database using the Update-Database command.
- 5:30
We also saw how to use the Code First migrations
- 5:34
configuration class to seed our database with lookup and test data.
- 5:37
In the next section, we'll dig deeper into Code First Migrations.
- 5:43
We'll add a migration for a simple model change,
- 5:45
look at an example of how to modify a migration,
- 5:48
see how to downgrade our database and review our deployment options.
- 5:53
See you after the break. | https://teamtreehouse.com/library/excluding-test-data-in-production | CC-MAIN-2019-43 | refinedweb | 931 | 63.19 |
One of the pain points with platform as a service (PaaS) solutions for Python is that they often impose constraints on what WSGI servers you can use. They may even go as far as only providing support for using a single WSGI server which they preconfigure and which you cannot customise. The problem with placing limitations on what WSGI servers you can use or how to configure them is that not everyone's requirements are the same. The end result is a Python WSGI hosting solution which is sub optimal and cannot be tuned, thus resulting in you not being able to make best use of the resources provided by the PaaS. This can then lead to you having to purchase more capacity than you actually need, because you are wasting what you do have but are unable to do anything about it.
In the future, Docker promises to provide a much better ecosystem which avoids many of these problems. In this blog post though I am going to look at OpenShift specifically and what one can do there. Do realise though that although I am going to focus on OpenShift, this problem isn't unique to just that service. Other services such as Heroku and AWS Elastic Beanstalk have their own issues and limitations as well.
WSGI server choices on OpenShift
For Python users developing web applications, OpenShift provides Python language specific cartridges for Python 2.6, 2.7 and 3.3. These cartridges currently provide two ways of starting up a Python web application.
The first solution provided is for your Python web application to be hosted using Apache/mod_wsgi. To get your WSGI application running, all you need to do is provide a WSGI script file with a specific name which contains the WSGI application entry point for your web application. You do not need to worry at all about starting up any WSGI server, as OpenShift would do all that for you. You can also provide a directory of static files which will also be served up.
Although as the author I would like to see Apache/mod_wsgi be used more widely than it currently is, I can't say I was particularly happy about how Apache/mod_wsgi was being setup under OpenShift. I could see various issues with the configuration and constraints it imposed and there wasn't really anything you could do to change it. You were also stuck with an out of date version of Apache and mod_wsgi due to OpenShift only using whatever was found within the RedHat package repository for the version of RHEL being used.
In one of the updates to the Python cartridges they offered a second alternative though. What they did this time was allow you to supply an 'app.py' file. If such an 'app.py' file existed, then rather than starting up Apache/mod_wsgi, it would run 'python' directly on that 'app.py' file, with the expectation that it would start up a Python web server of some sort, listening on the required port for web requests.
Note here that it had to be a Python code file and OpenShift would run Python on it itself. It did not allow you to provide an arbitrary application, be it a binary, a shell script or even just Python code setup as an executable script. Yes these are meant to be cartridges for running a Python web application, but it is still a somewhat annoying limitation.
What you could not for example do was provide a shell script which ran 'gunicorn', 'uwsgi', or even Apache/mod_wsgi with a better configuration. You were instead stuck with using a pure Python HTTP or WSGI server which provided a way of running it which consisted of importing the Python module for that WSGI server and then calling a function of that module to start it.
You could for example easily import the Tornado HTTP server and run it, but if you wanted to use standalone Gunicorn, uWSGI or Apache/mod_wsgi it wasn't readily apparent how you could achieve that.
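To make the expectation concrete, here is a minimal sketch of the kind of 'app.py' file the cartridge was happy with: a pure Python server imported as a module and started by calling a function on it. This is only an illustration using the standard library's wsgiref server, not code taken from OpenShift itself; the environment variable names are the ones used later in this post.

import os

from wsgiref.simple_server import make_server

def application(environ, start_response):
    # Trivial WSGI application used purely to have something to serve.
    body = b'Hello from app.py'
    start_response('200 OK', [('Content-Type', 'text/plain'),
            ('Content-Length', str(len(body)))])
    return [body]

ip = os.environ['OPENSHIFT_PYTHON_IP']
port = int(os.environ['OPENSHIFT_PYTHON_PORT'])

# Import a server, call a function, and block serving requests. This is the
# pattern the cartridge assumed an 'app.py' would follow.
make_server(ip, port, application).serve_forever()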
What one therefore saw was that if someone did want to use a different standalone WSGI server, rather than use the OpenShift Python cartridges, they would use the DIY cartridge instead and try and build up a workable system from that for a Python web application. This would include you having to handle yourself the creation of a Python virtual environment, install packages etc, tasks that were all done for you with the Python cartridges.
Having to replicate all that could have presented many challenges as the Python cartridges use a lot of really strange tricks when it comes to managing the Python virtual environments in a scaled web application. I wouldn't have been surprised therefore if the use of the DIY cartridge precluded you from having a scaled web application.
Running a WSGI server from app.py
What supplying the 'app.py' file does at least do is prevent the startup of the default Apache/mod_wsgi installation. We can also code the 'app.py' however we want so lets see if we can simply in turn execute the WSGI server we do want to use.
As an example, imagine that we had installed 'mod_wsgi-express' by using the pip installable version of mod_wsgi from PyPi. We might then write the 'app.py' file as:
import os

OPENSHIFT_REPO_DIR = os.environ['OPENSHIFT_REPO_DIR']

os.chdir(OPENSHIFT_REPO_DIR)

OPENSHIFT_PYTHON_IP = os.environ['OPENSHIFT_PYTHON_IP']
OPENSHIFT_PYTHON_PORT = os.environ['OPENSHIFT_PYTHON_PORT']
OPENSHIFT_PYTHON_DIR = os.environ['OPENSHIFT_PYTHON_DIR']

SERVER_ROOT = os.path.join(OPENSHIFT_PYTHON_DIR, 'run', 'mod_wsgi')

VIRTUAL_ENV = os.environ['VIRTUAL_ENV']

program = os.path.join(VIRTUAL_ENV, 'bin', 'mod_wsgi-express')

os.execl(program, program, 'start-server', 'wsgi.py',
        '--server-root', SERVER_ROOT, '--log-to-terminal',
        '--host', OPENSHIFT_PYTHON_IP, '--port', OPENSHIFT_PYTHON_PORT)
When we try and use this, what we find is that sometimes it appears to work and sometimes it doesn't. Most of the time though OpenShift will tell us that the Python web application didn't start up properly.
For a single gear web application, even though it says it didn't start, it may still be contactable. When we try and restart the web application though, we find that the running instance of Apache/mod_wsgi will not shutdown properly and then the new instance will not run.
If using a scaled application we have the further problem that when OpenShift thinks that it didn't start properly, it will not add that gear to the haproxy configuration and so it will not be used to handle any web requests even if it is actually running.
The question is why does OpenShift think it isn't starting up properly most of the time.
Changing process names on exec()
The answer is pretty obscure and is tied to how the OpenShift Python cartridge manages the startup of the Python web application when an 'app.py' file is provided. To discover this one has to dig down into the OpenShift control script for the Python cartridge.
nohup python -u app.py &> $LOGPIPE &

retries=3
while [ $retries -gt 0 ]; do
    app_pid=$(appserver_pid)
    [ -n "${app_pid}" ] && break
    sleep 1
    let retries=${retries}-1
done

sleep 2

if [ -n "${app_pid}" ]; then
    echo "$app_pid" > $OPENSHIFT_PYTHON_DIR/run/appserver.pid
else
    echo "ERROR: Application failed to start, use 'rhc tail' for more informations."
fi
The definition of the 'appserver_pid' shell function reference by this is:
function appserver_pid() {
pgrep -f "python -u app.py"
}
What is therefore happening is that the control script is running the code in 'app.py' as 'python -u app.py'. Rather than capture the process ID of the process when run and check that a process exists with that process ID, it checks using 'pgrep' to see if a process exists which has the exact text of 'python -u app.py' in the full command line used to start up the process.
The reason this will not work, or at least why it will not always work is that our 'app.py' is performing an 'os.execl()' call and in doing that the 'app.py' application process is actually being replaced with a new application process inheriting the same process ID. In performing this exec though, the enduring process ID will now show the command line used when the 'os.execl()' was done. As a consequence, the use of 'pgrep' to look for 'python -u app.py' will fail if the check wasn't done quick enough such that it occurred before 'os.execl()' was called.
Since it is checking for 'python -u app.py', lets see if we can fool it by naming the process which will persist after the 'os.execl()' call using that string. This is done by changing the second argument to the 'os.execl()' call.
os.execl(program, 'mod_wsgi-express (python -u app.py)', 'start-server',
'wsgi.py', '--server-root', SERVER_ROOT, '--log-to-terminal',
'--host', OPENSHIFT_PYTHON_IP, '--port', OPENSHIFT_PYTHON_PORT)
Unfortunately it doesn't seem to help.
The reason this time is that the 'mod_wsgi-express' script is itself just a Python script acting as a wrapper around Apache/mod_wsgi. Once the 'mod_wsgi-express' script has generated the Apache configuration based on the command line arguments, it will again use 'os.execl()', this time to startup Apache with mod_wsgi.
The name of the process therefore is pretty quickly changed once more.
Now 'mod_wsgi-express' does actually provide a command line option called '--process-name' to allow you to override what the Apache parent process will be called when started. The intention of this was that when running multiple instances of 'mod_wsgi-express' you could set the names of each to be different and thus more easily identify to which instance the Apache processes belonged.
We therefore try overriding the name of the process when using 'os.execl()', but also tell 'mod_wsgi-express' to do something similar when it starts Apache.
os.execl(program, 'mod_wsgi-express (python -u app.py)', 'start-server',
'wsgi.py', '--process-name', 'httpd (python -u app.py)',
'--server-root', SERVER_ROOT, '--log-to-terminal',
'--host', OPENSHIFT_PYTHON_IP, '--port', OPENSHIFT_PYTHON_PORT)
Success, and it now all appears to work okay.
There are two problems with this though. The first is that it is relying on an ability of 'mod_wsgi-express' to rename the Apache processes and one may not have such an ability when trying to start some other WSGI server, which may itself use a wrapper script of some sort, or even if you yourself wanted to inject a wrapper script.
The second problem is that although the Apache parent process will now be named 'httpd (python -u app.py)' and will be matched by the 'pgrep' command, all the Apache child worker processes will also have that name.
The consequence of this is that 'pgrep' will actually return the process IDs of multiple processes.
As it turns out, the return of multiple process IDs still works with the control script, but does introduce a potential for problem.
The issue this time is that although the Apache parent process will be persistent, the Apache child worker process may be recycled over time. Thus the control script will hold on to a list of process IDs that could technically be reused. This could have consequences later on if you were attempting to shutdown or restart the web application gears, as the control script could inadvertently kill off processes now being run for something else.
Interestingly this problem already existed in the control script even before we tried the trick we are trying to use. This is because the 'app.py' script could have been executing an embedded pure Python HTTP server which itself was forking in order to create multiple web request handler processes. The rather naive way therefore of determining what the process ID of the web application was and whether it started okay, could still cause problems down the track in that scenario as well.
Interjecting an intermediate process
If performing an 'os.execl()' call and replacing the current process causes such problems, lets then consider leaving the initial Python 'app.py' process in place and instead perform a 'fork()' call followed by an 'exec()' call.
If we do this then we will only have the one process with the command line 'python -u app.py' and it will be our original process that the control script started. The control script shouldn't get confused and all should be okay.
If we were to do this though we have to contend with a new issue. That is that on subsequent shutdown or restart of the web application gear, the initial process has to be able to handle signals directed at it and relay those signals onto the forked child process which is running the actual web application. It also has to monitor that forked child process in case it exits before it was mean't to and then cause itself to exit.
Doing all this in a Python script quickly starts to get messy, plus we are also leaving in place a Python process which is going to be consuming a not insignificant amount of memory for such a minor task.
We could start to look at using a process manager such as supervisord but that is adding even more complexity and memory bloat.
Stepping back, what is simple to use for such a task is a shell script. A shell script is also much easier for writing small wrappers to process environment variables and work out a command line to then be used to execute a further process. In the hope that this will make our job easier, lets change the 'app.py' file to:
import os

SCRIPT = os.path.join(os.path.dirname(__file__), 'app.sh')

os.execl('/bin/bash', 'bash (python -u app.py)', SCRIPT)
What we are therefore doing is replacing the Python process with a 'bash' process which is executing a shell script provided by 'app.sh' instead, but still overriding the process name as we did before.
Now I am not going to pretend that having a shell script properly handle relaying of signals to a sub process is trivial and doing that right is actually a bit of a challenge as well which needs a bit of explaining. I am going to skip explaining that and leave you to read this separate post about that issue.
Moving on then, our final 'app.sh' shell script file is:
#!/usr/bin/env bash

trap 'kill -TERM $PID' TERM INT

mod_wsgi-express start-server \
    --server-root $OPENSHIFT_PYTHON_DIR/run/mod_wsgi \
    --log-to-terminal --host $OPENSHIFT_PYTHON_IP \
    --port $OPENSHIFT_PYTHON_PORT wsgi.py &

PID=$!
wait $PID
trap - TERM INT
wait $PID
STATUS=$?

exit $STATUS
This actually comes out as being quite clean compared to the 'app.py' when it was using 'os.execl()' to execute 'mod_wsgi-express' directly.
Now that we aren't reliant on any special process naming mechanism of anything which is in turn being run, this is also easily adapted to run other WSGI servers such as gunicorn or uWSGI.
Is it worth all the trouble?
As far as technical challenges go, this was certainly an interesting problem to try and solve, but is it worth all the trouble.
Well based on the amount of work I have seen people putting in to try and mould the OpenShift DIY cartridge into something usable for Python web applications, I would have to say it is.
With the addition of a simple 'app.py' Python script and small 'app.sh' shell script, albeit arguably maybe non obvious in relation to signal handling, it is possible to take back control from the OpenShift Python cartridges and execute the WSGI server of your choice using the configuration that you want to be able to use.
In that respect I believe it is a win.
Now if only I could work out a way to override some aspects of how the OpenShift Python cartridges handle execution of pip to workaround bugs in how OpenShift does things. That though is a problem for another time.
6 comments:
Hi Im fairly new to web hosting and I am struggling with deploying my django project to openshift. Is there a Github repo that you have that I can observe in while reading your post? I know you did as much as you can during the post, but that still doesnt give us (especially those new to this) a rather clear look at how you really put this together.
The repo uses the script files described here. It isn't a Django application though, so extra work is required for Django. If still needing help, you are best hopping onto the mod_wsgi mailing list and asking your question there.
Hi Graham,
have you found by any chance how to serve multiple wsgi scripts from the same cartridge ? I have two Python files which I want to access via different URLs but for some reason OpenShift doesn't allow me to specify WSGIScriptAlias in .htaccess.
@Alexander There are ways it can be done using mod_wsgi-express when using the recipe explained in this post. If want help on that you are best dropping an email to the mod_wsgi mailing list and can explain it there.
Hi Graham,
thanks for the hint but this looks like too much for my needs. I've found a simple workaround - just make the root wsgi.py file handle some of the URLs and redirect them to other scripts (Python modules that is). For those interested I've provided a quick write-up on my blog:
That will not work where the two WSGI applications cannot co exist together in the same interpreter. For example, you can't run two Django instances like that. So be careful of any conflicts on global data when doing that. | http://blog.dscpl.com.au/2015/01/using-alternative-wsgi-servers-with.html | CC-MAIN-2017-30 | refinedweb | 2,940 | 61.97 |
In this tutorial I will give a basic introduction to pandas. Oh, I don't mean the animal panda, but a Python library!
As mentioned on the pandas website:
pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language.
Thus,
pandas is a data analysis library that has the data structures we need to cleanse raw data into a form which is suitable for analysis (i.e. tables). It is important to note here that since
pandas performs important tasks such as aligning data for comparison and merging of data sets, handling of missing data, etc., it has become a de facto library for high-level data processing in Python (i.e. statistics). Well,
pandas was originally designed to handle financial data, a domain where the common alternative is using a spreadsheet (i.e. Microsoft Excel).
The basic data structure of
pandas is called
DataFrame, which is an ordered collection of columns with names and types, thus looking like a database table where a single row represents a single case (example) and columns represent particular attributes. It should be noted here that the elements in various columns may be of different types.
So, the bottom line is that the
pandas library provides us with the data structures and functions necessary for data analysis.
Installing Pandas
Let's now see how we can install
pandas on our machines and use it for data analysis. The easiest way to install
pandas and avoid any dependency issues is by using Anaconda which
pandas comes part of. As mentioned on the Anaconda download page:
Anaconda is a completely free Python distribution (including for commercial use and redistribution). It includes more than 400 of the most popular Python packages for science, math, engineering, and data analysis
The Anaconda distribution is cross-platform, meaning that it can be installed on OS X, Windows, and Linux machines. I'm going to use the OS X installer since I'm working on a Mac OS X El Capitan machine, but of course you can choose the suitable installer for your operating system. I will go with the graphical installer (be careful, it is 339 MB).
After downloading the installer, simply walk through the simple installation wizard steps and you are all set!
All we need to do now in order to use
pandas is to import the package as follows:
import pandas as pd
Pandas Data Structures
I have mentioned one of the three
pandas data structures above, the
DataFrame. I will describe this data structure in this section in addition to the other
pandas data structure,
Series. There is another data structure called
Panel, but I will not describe it in this tutorial as it is not so frequently used, as mentioned in the documentation.
DataFrame is a 2D data structure,
Series is a 1D data structure, and
Panel is a 3D and higher data structure.
DataFrame
The
DataFrame is a tabular data structure which is composed of ordered columns and rows. To make things clearer, let's look at the example of creating a
DataFrame (table) from a dictionary of lists. The following example shows a dictionary consisting of two keys, Name and Age, and their corresponding list of values.
import pandas as pd import numpy as np name_age = {'Name' : ['Ali', 'Bill', 'David', 'Hany', 'Ibtisam'], 'Age' : [32, 55, 20, 43, 30]} data_frame = pd.DataFrame(name_age) print data_frame
If you run the above script, you should get an output similar to the following:
%20(1).png)
Notice that the
DataFrame constructor orders the columns alphabetically. If you want to change the order of the columns, you can type the following under
data_frame above:
data_frame_2 = pd.DataFrame(name_age, columns = ['Name', 'Age'])
To view the result, simply type:
print data_frame_2.
Say you didn't want to use the default labels 0,1,2,..., and wanted to use a, b, c,... instead. In that case, you can use
index in the above script as follows:
data_frame_2 = pd.DataFrame(name_age, columns = ['Name', 'Age'], index = ['a', 'b', 'c', 'd', 'e'])
That was very nice, wasn't it? Using
DataFrame, we were able to see our data organized in a tabular form.
Series
Series is the second
pandas data structure I'm going to talk about. A
Series is a one-dimensional (1D) object similar to a column in the table. If we want to create a
Series for a list of names, we can do the following:
series = pd.Series(['Ali', 'Bill', 'David', 'Hany', 'Ibtisam'], index = [1, 2, 3, 4, 5]) print series
The output of this script would be as follows:
.png)
Notice that we used
index to label the data. Otherwise, the default labels will start from 0,1,2...
Pandas Functions
In this section, I'm going to show you examples of some functions we can use with
DataFrame and
Series.
Head and Tail
The functions
head() and
tail() enable us to view a sample of our data, especially when we have a large number of entries. The default number of elements that get displayed are 5, but you can return the customized number you like.
Let's say we have a
Series composed of 20,000 random items (numbers):
import pandas as pd import numpy as np series = pd.Series(np.random.randn(20000))
Using the
head() and
tail() methods to observe the first and last five items, respectively, we can do the following:
print series.head() print series.tail()
The output of this script should be something similar to the following (notice that you might have different values since we are generating random values):
%20(1).png)
Add
Let's take an example of the
add() function, where we will attempt to add two data frames as follows:
import pandas as pd dictionary_1 = {'A' : [5, 8, 10, 3, 9], 'B' : [6, 1, 4, 8, 7]} dictionary_2 = {'A' : [4, 3, 7, 6, 1], 'B' : [9, 10, 10, 1, 2]} data_frame_1 = pd.DataFrame(dictionary_1) data_frame_2 = pd.DataFrame(dictionary_2) data_frame_3 = data_frame_1.add(data_frame_2) print data_frame_1 print data_frame_2 print data_frame_3
The output of the above script is:
%20(2).png)
You can also perform this addition process by simply using the
+ operator:
data_frame_3 = data_frame_1 + data_frame_2.
Describe
A very nice
pandas function is
describe(), which generates various summary statistics for our data. For the example in the last section, let's do the following:
print data_frame_3.describe()
The output of this operation will be:
%20(1).png)
Further Resources
This was just a scratch of the surface on Python's
pandas. For more details, you can check the
pandas documentation, and you can also check some books like Learning Pandas and Mastering Pandas.
Conclusion
Scientists sometimes need to carry out some statistical operations and display some neat graphs that require them to use a programming language. But, at the same time, they don't want to spend too much time or be faced with a serious learning curve in carrying out such tasks.
As we saw in this tutorial,
pandas enabled us to represent data in tabular form and carry out some operations on those tables in a very simple way. Combining
pandas with other Python libraries, scientists can even do more advanced tasks such as drawing specialized graphs for their data.
Thus,
pandas is a very helpful library and starting point for scientists, economists, statisticians, and anyone willing to perform some data analysis tasks.
Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
| https://code.tutsplus.com/tutorials/introducing-pandas--cms-26514 | CC-MAIN-2017-13 | refinedweb | 1,257 | 60.95 |
I find most short interview problems quite contrived. This post is devoted to the one interview problem that I have found most interesting, due to the simplicity of its statement and the subtlety of its solution.
Consider the following function, intended to generate points uniformly distributed on the unit circle, \(x^2 + y^2 = 1\).
import numpy as np from scipy.stats import uniform def generate(size=1) # Generate a point uniformly distributed in the square [-1, 1] x [-1, 1] x = uniform.rvs(loc=-1, scale=2, size=size) y = uniform.rvs(loc=-1, scale=2, size=size) # Normalize by the distance from the origin to get a point on the unit circle r = np.sqrt(x**2 + y**2) return np.column_stack([x / r, y / r])
The question asks whether or not this code does indeed generate points uniformly distributed on the unit circle.
At first glance it seems reasonable that it might do so. Upon slightly further reflection, we see that for the results to be uniformly distributed, we must map the square \([-1, 1] \times [-1, 1]\) to the unit circle in a manner that preserves area (in the appropriate sense). From this point of view, we can quickly conclude that the results will not be uniformly (EDIT: this originally read “normally,” which was a typo) distributed.
To see why, consider the following diagram.
All of the points in the square on the red line get scaled to the red point on the unit circle. Likewise, all of the points in the square on the blue line get scaled to the blue point on the unit circle. It is quite clear from this diagram that the blue line is longer than the red line. To be precise, the red line has length one, while the blue line has length \(\sqrt{2} \approx 1.414.\)
Due to this difference in length, the points generated by the function are more likely to be clustered around the four points \((\sqrt{2}, \sqrt{2})\), \((-\sqrt{2}, \sqrt{2})\), \((-\sqrt{2}, -\sqrt{2})\), and \((\sqrt{2}, -\sqrt{2})\) than elsewhere on the circle.
The following diagram shows a heatmap of the locations of one million samples generated by this function.
We can see that the samples do indeed cluster around those four points, as shown by the red regions in the heatmap.
Although this problem is fairly simple, unlike many interview problems it reinforces a key breakdown in our intuition about probability theory and naive interpretations of transformations of random variables.
Discuss on Hacker News
Update
Many people have pointed out that this isn’t a great problem for general developer interview, and I agree. This interview was for a data-oriented position which required a strong understanding of probability theory. | http://austinrochford.com/posts/2013-09-10-the-most-interesting-interview-problem.html | CC-MAIN-2016-18 | refinedweb | 460 | 61.06 |
This is a continuation to Setup C Tools tutorial I wrote in 2017 and Setup Java Tools in 2018. This one will show you briefly how to use Geany I.D.E. with GNU C++ Compiler. This includes an example so you will be able to try it and have it working first. Like the previous article, I dedicate this tutorial for purely beginners in both programming and in GNU/Linux. Note: screenshots below are taken from elementary OS 5.0 which is based on Ubuntu 18.04 (so, don't worry). Now, enjoy learning!
1. Install C++ Compiler
Just install GNU C++ Compiler:
$ sudo apt-get install g++
2. Install Geany
Like the previous article, I suggest you to use Geany Text Editor. It's a very lightweight replacement to DevC++ or Notepad++, or even Sublime Text: an all-language supporting programming IDE.
$ sudo apt-get install geany
3. Write Hello World
Here's a standard C++ source code you can copy and save as hello.cpp:
#include <iostream>using namespace std;int main (){cout << "Hello world!" << endl;return 0;}
4. Compile & Execute
Click Compile button then click Build and then Run. Or instead press F8, F9, F5 keys respectively. Voila, it shows a small terminal showing Hello World! message. It works!
(Back: Geany; front: result of the program showing on Terminal)
The Resulting Files
This little exercise creates three files: hello.cpp, hello.o, and hello (without extension). The source code, the object code, and the binary code files, respectively. The one you saw on Terminal above is hello, the executable program. You can run it manually by command line ./hello on your Terminal. You can learn more about these things better on C++ Book (Wikibooks).
Is Your Build Config Correct?
If you use Geany, once a C++ (.cpp) file saved, you can automatically compile it as Geany detects the compiler automatically. But in case you want to make sure, once you open a .cpp file, see Geany menu Settings > Set Build Config, it should looks like this.
- Compile: g++ -Wall -c "%f"
- Build: g++ -Wall -o "%e" "%f"
End Notes
For you who need a good source to learn basic C++, learn from cplusplus.com. For you who dream real C++ project(s), see KDE. I hope this simple tutorial really helps you a lot. Happy learning!
This article is licensed under CC BY-SA 3.0. | http://www.ubuntubuzz.com/2018/11/setup-cpp-programming-tools-on-ubuntu-for-beginners.html | CC-MAIN-2020-05 | refinedweb | 400 | 68.97 |
I am trying to set up a small server to handle HTTP and socketio requests — I don’t have much experience setting up servers, but right now apache2 serves the http just fine. The socketio transactions, however, keep failing with error code 400 (bad request), and I see some strange errors in the server logs. Sometimes I see an engineio error and the server responds w/ a ‘bad request’ and code 400, but always it tells me the eventlet server needs to be started:
[Mon Jan 11 19:02:54.068282 2016] [:error] [pid 4908:tid 140274923673344] [client 100.96.180.39:53473] return ws(environ, start_response) [Mon Jan 11 19:02:54.068305 2016] [:error] [pid 4908:tid 140274923673344] [client 100.96.180.39:53473] File "/var/www/projectENV/lib/python2.7/site-packages/engineio/async_eventlet.py", line 10, in __call__ [Mon Jan 11 19:02:54.068342 2016] [:error] [pid 4908:tid 140274923673344] [client 100.96.180.39:53473] raise RuntimeError('You need to use the eventlet server.') [Mon Jan 11 19:02:54.068380 2016] [:error] [pid 4908:tid 140274923673344] [client 100.96.180.39:53473] RuntimeError: You need to use the eventlet server. See the Deployment section of the documentation for more information. [Mon Jan 11 19:02:54.253124 2016] [:error] [pid 4909:tid 140274940458752] WARNING:engineio:Invalid session cde3f9aadbee4794bf9d7bb98d0b396e
My server code is pretty basic:
from flask import Flask import flaskext.couchdb from flask.ext.socketio import SocketIO # for socketio import eventlet eventlet.monkey_patch() # creation of server & db objects app = Flask(__name__) # socketio initialization socketio = SocketIO(app, async_mode='eventlet') # import views once site properties are set from app import views if __name__== "__main__": socketio.run(app, debug=True)
And my client code, written in python, uses the socketio-client library straight from the docs:
from socketIO_client import SocketIO, LoggingNamespace with SocketIO(SERVER_URL, 80, LoggingNamespace) as socketIO: socketIO.emit('aaa') socketIO.wait(seconds=1)
Isn’t the
socketio.run(app) supposed to start the eventlet server for me? Why is the server spitting back bad request (sometimes)?
Best answer
To make a WSGI application available online you need to expose it through a web server. When your application uses Flask-SocketIO, a plain WSGI web server isn’t sufficient, because WSGI does not support WebSocket, the WSGI protocol needs unofficial extensions to support this protocol.
Flask-SocketIO supports a variety of web servers that support WebSocket. It appears you have eventlet installed in your virtual environment, so that is why you receive the error that you have to use the eventlet web server.
What you don’t seem to realize, is that you are using Apache’s web server (I’m guessing mod_wsgi?). This web server is a normal, forking web server, it is not an eventlet compatible web server.
Isn’t the socketio.run(app) supposed to start the eventlet server for me?
Yes, if you were to run your application via
socketio.run(app) you would get a fully enabled eventlet web server. But you are not doing that, you are running it on apache. Eventlet has a web server, and apache has a web server, they are two separate web servers, both able to run a WSGI application. But the apache one does not support WebSocket.
The Flask-SocketIO documentation describes a few deployment scenarios that are valid. | https://pythonquestion.com/post/using-eventlet-to-manage-socketio-in-flask/ | CC-MAIN-2020-45 | refinedweb | 556 | 58.48 |
Well, analyzing the spread of the SARS-CoV-2 coronavirus was not my dream use case. But based on the responses to Ferry Djaja‘s Tracking Coronavirus COVID-19 Near Real Time with SAP HANA XSA article I decided to add my two groszy too.
[Updated on 20-03-30 with the changed links to the source data; and the new map output based on the new data granularity. Thanks Douglas Maltby for your comment!]
In his blog post, Ferry used JavaScript in SAP HANA XSA to pull the data from CSV files updated daily by Johns Hopkins University.
I would like to show you how you can pull and load these files into SAP HANA using just a few lines of code thanks to SAP HANA Python Client API for Machine Learning (
hana_ml package).
Some people were confused with the visualization on the map at the end — please note that this article focuses on technical use case connecting different components, not on doing coronavirus data deep analysis.
Get Python environment, e.g. Jupyter
I will use Jupyter in the Docker container for that. Please have a look at my previous post Understanding containers (part 05): shared files between the host and containers if you are not familiar with how to start it. As well you can do all the same steps below from any other Python environment.
So, I have my container
myjupyter01 running. I am connected to the Jupyter UI as described in the previous blog.
Install
hana_ml
The Jupyter image I used from the Docker Hub registry was
jupyter/minimal-notebook. It contains already some popular data processing packages, like
pandas.
But additionally, I need to install
hana_ml, which — in its current version 1.0.8 — is available on PyPI repository:.
The command to run the installation is
python -m pip install hana_ml, but because I am running it from Jupyter notebook with Python3 kernel, I need to run it with
! at the beginning:
!python -m pip install hana_ml
Obviously, this installation step has to be done only once. No need to rerun it in the same container e.g. when reloading the newest files.
Use
pandas to import files with data
Let’s import the same three files (
confirmed,
deaths,
recovered) from as Ferry used in his example.
import hana_ml, pandas # Links updated on 2020-03-22 df_confd = pandas.read_csv('') df_death = pandas.read_csv('') df_recvd = pandas.read_csv('') #Links from before March 22nd #df_confd = pandas.read_csv('') #df_death = pandas.read_csv('') #df_recvd = pandas.read_csv('')
As you can see from the preview of the Pandas dataframe, it lists only countries or provinces with confirmed cases, and every day the new column is added with the latest data from the previous day. Lines are added when the first case(s) is confirmed in the new region.
Use
pandas to re-format the data frame
Before persisting the data in SAP HANA, let’s:
- Remove all date columns except the last one,
- Rename the last column from the actual date (like today’s
3/10/20to
Confirmed).
df_confd_latest=df_confd.drop(df_confd.columns[4:len(df_confd.columns)-1], axis='columns') df_confd_latest.columns = [*df_confd_latest.columns[:-1],'Confirmed']
Use
hana_ml to persist data in SAP HANA table
Now let me connect to my instance of SAP HANA Express with the user
hanaml that already exists there…
cc=hana_ml.dataframe.ConnectionContext('12.34.567.890', 39015, 'hanaml', 'MyPasswordReusedEverywhere')
…and convert the Pandas dataframe
df_confd_latest into a HANA dataframe
hdf_confd.
hdf_confd=hana_ml.dataframe.create_dataframe_from_pandas(cc, df_confd_latest, 'df_confd', force=True)
Once the HANA dataframe is created:
- A physical column table is created in HANA and data from Pandas dataframe is inserted there,
- HANA dataframe
hdf_confdin Python does not store any data in your laptop, but only points to a table
HANAML.df_confdin SAP HANA server memory, and all Python operations on the HANA dataframe are physically exected in HANA db without moving data between the server and a client,
- To display the result of any operations, we need to apply
collect()method to convert HANA dataframe to Pandas (and as a result to bring data from HANA db server to the local client).
Use DBeaver to check data in SAP HANA…
You might remember me already using DBeaver — the free database tool supporting SAP HANA — in my previous post “GeoArt with SAP HANA and DBeaver“.
I am using it now again, and indeed I can find the table
df_confd in the schema
HANAML with all the data from the source Pandas dataframe.
…and do a spatial preview
As the table contains latitude and longitude columns I can visualize impacted countries/states right from DBeaver with the following SQL using Spatial data preview.
SELECT NEW ST_POINT("Long", "Lat"), "Country/Region", "Province/State", "Confirmed" FROM HANAML."df_confd";
I needed to change the map projection to
EPSG:4326 to get these points on the map. And DBeaver shows me the rest of the record data when I click on any point.
[Below is the old screenshot from 2020-03-11, which demonstrates as well the different granularity of e.g. US data used at that time]
DBeaver spatial preview is not a full-blown geospatial visual exploration tool. Yet it is good enough to see impacted countries/regions (depending on the granularity in the source files).
Should you be interested to learn more about
hana_ml…
… then I would definitely recommend checking Hands-On Tutorial: Machine Learning push-down to SAP HANA with Python by Andreas Forster.
HANA ML is a part of the new “Advanced Analytics with SAP HANA” topic for CodeJam events. Unfortunately because of the coronavirus situation, we had to cancel the first one organized by Jakob Flaman in Bern this month. Another one is organized by Ewelina Pękała on May 27th in Katowice:. Hopefully, the situation gets normal by that time, and we will not need to cancel this one too.
Stay healthy ❤️
-Vitaliy (aka @Sygyzmundovych)
Very interesting work!
Although ML is in the article, this posting is not about Machine Learning. However since you are showing use of python, it could be interesting to see if from the. data you have and the methods that are available, one could do some forecasts.
Hi Luca Toldo
Indeed, the focus of this post was just a quick tip on loading csv data from GitHub into HANA table. And there is too little data to run any meaningful ML algorithms...
But there were blog posts on how to use
hana_mlwith Python already by Andreas Forster, Shivam Shukla or.
Thanks for adding me here in this post , will surely check this out on priority how we can forecast something using the dataset.
Thanks,
Shivam
External links on Machine Learning with COVID-19 data
COVID-19 Open Research Database
COVID-19 Kaggle dataset (daily updated)
Hi Witalij
Excellent post and I too followed Ferry's post about the tracking of the virus - I have also been following a Hungarian team responsible for the RSOE EDIS Emergency and Disaster Information System
Similar topic but they have been doing Global Disaster tracking like this for years. But now if we can use hana_ml then why not.
Do you have this post as a PDF presentation perhaps ?
Regards
Graham Hardy
Hi Graham. There is the whole collection of Covid-19 maps and visuals available at
No, I do not have it in any other form, as the purpose of this post was just to share a quick tech tip 🙂
Best regards.
FYI, the COVID-19 source files were moved and restructured on March 22. See the Readme.md here:
The new source files are in the below statements to read the files.
Hope that helps those who try this exercise later!
Doug
Thanks Doug!
I updated the code and the screenshot (which is terrifying in the comparison to the previous from just a few weeks ago).
The COVID-19 US data is also updated daily by county with FIPS codes and long/lat in 2 separate files in the repository. I've added these 2 US files to my analysis exercises alongside the global data with a few dataframe adjustments. See below. The data ties exactly to what's being reported locally and globally.
While the situation and data are grim, I've enjoyed using Jupyter, Python, pandas, HANA ML, HANA HXE and DBeaver to experiment with them all on some very real and relevant data.
Clear steps defined thanks for sharing.... | https://blogs.sap.com/2020/03/11/quickly-load-covid-19-data-with-hana_ml-and-see-with-dbeaver/ | CC-MAIN-2021-31 | refinedweb | 1,396 | 62.17 |
This is a good article
This is a good article
If you're using an MFC dialog use the code below in OnInitDialog for all the windows you want to initialize.
CWD* pw = GetDlgItem( nID );
if( pw )
{
pw->SetWindowText( "blah blah" );
}
OK I've got it now. I'm sure there's something out there but I couldn't find it so I wrote the code below. Warning I've only tested a few cases, does not work for negative.
#include <string.h>...
I'm more than happy to help but learning something quickly is not always the best way to learn it for the long run. Take a look at your manual. Then ask a specific question and I'm more than happy to...
The const on the left is a return value to be a const and right says the member function is const.
A const return value is needed for temporary object allocation by the compiler. You never see...
With the friend keyword, you can designate either the specific functions or the classes whose functions can access not only public members but also protected and private members.
Once you've used atoi to place an ascii value in an integer you have (in a signed integer) a twos complement binary value. Typically we display this value as decimal but it can be displayed anyway...
It is a kernal object used for interprocess synchronization. Its state is set to signaled when it is not owned by any thread, and nonsignaled when it is owned. Only one thread at a time can own a...
int main ()
{
printf( "%s", "Press any key to continue" );
getchar( ); //stays here until a character is type
return 0;
}
Engineer223,
I think you meant to allocate more the one char on var.
He has a legitimate guestion. If you don't like the question don't answer it, but don't tell him to read the tutorial.
Looks like its working now. Please note the changes I've made. If your serious about programming send me an email explaining what I've change. You basically had it.
#include <iostream> ...
An import library is used with a .dll.
An import library is more like a proxy that you use to resolve the link for your function. The import library contains no code. Instead, they provide the...
There are a few syntax errors in your program please post the entire program and I will show you where your problems are.
You could use open but I open the file in the ofstream constructor.
#include <iostream>
#include <fstream.h>
using std::string;
int main()
{
Its sounds to me like as long as you are getting input from the console (possible end with esc (escape key) ) you'll create a new node populate it with the data from the console and then insert it...
Initialization is done by a constructor for public, protected, and private members to insure the object has the correct initial state.
Protected, private, and public are access modifiers. They set...
int main()
{
char ch = '5';
int nNumb = ch - '0';
return 0;
}
If you use spy to watch messages you'll see that when you use end task off of the taskmanager he sends a WM_CLOSE message when he shuts down an application.
After you wait some reasonable amount...
Tomkat,
what do you think the registry is ? Of course services are started out of it. See HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
I made a few changes to the program to get it to work correctly but it will display you hit a when an a is entered. If anything else except an esc (escape) is entered it will do nothing but get the...
It just needed a little tweeking...
#include <iostream>
using namespace std;
int main()
{
int favnum;
int nxt;
The functions you've created will work but won't really do anything. They don't return anything (void) and the argument list is empty (func1( ) ).
First:
Let's change void func1( ) to int func1(...
ActiveX is based on COM, therefore it must implement IUknown and almost always IDispatch which makes the object or control available to scripting languages. All you should have to do is a...
Even though this article uses MFC I think it will give you a good start. When in doubt search.
ARTICLE on clipboard:...
You can write ActiveX controls and implement IDispatch to make methods availible to scripting languages, HTML, VB (custom as well), or ASP. | https://cboard.cprogramming.com/search.php?s=e9ed1947006ce946b6c26962da8206cd&searchid=2967537 | CC-MAIN-2020-10 | refinedweb | 744 | 73.58 |
PracProg2fileproc
Reading and Writing Files
1.
filename = input('Which file would you like to back-up? ') new_filename = filename + '.bak' backup = open(new_filename, 'w') for line in open(filename): backup.write(line) backup.close()
2.
alkaline_metals = [] for line in open('alkaline_metals.txt'): alkaline_metals.append(line.strip().split(' '))
3.
We could read the file contents into a data structure, such as a
list, and then iterate over the
list from end (last line) to beginning (first line).
4.
def process_file(reader): """ (file open for reading) -> NoneType Read and print the data from reader, which must start with a single description line, then a sequence of lines beginning with '#', then a sequence of data. """ # Find and print the first piece of data. line = skip_header(reader).strip() print(line) # Read the rest of the data. print(reader.read())
5.
6.
import time_series def smallest_value_skip(reader): """ (file open for reading) -> NoneType Read and process reader, which must start with a time_series header. Return the smallest value after the header. Skip missing values, which are indicated with a hyphen. """ line = time_series.skip_header(reader).strip() # Now line contains the first data value; this is also the smallest value # found so far, because it is the only one we have seen. smallest = int(line) for line in reader: line = line.strip() if line == '-': continue value = int(line) smallest = min(smallest, value) return smallest if __name__ == '__main__': with open('hebron.txt', 'r') as input_file: print(smallest_value_skip(input_file))
8.
def read_molecule(reader): """ (file open for reading) -> list or NoneType Read a single molecule from reader and return it, or return None to signal end of file. The first item in the result is the name of the compound; each list contains an atom type and the X, Y, and Z coordinates of that atom. """ # If there isn't another line, we're at the end of the file. line = reader.readline() if not line: return None # Name of the molecule: "COMPND name" key, name = line.split() # Other lines are either "END" or "ATOM num atom_type x y z" molecule = [name] reading = True serial_number = 1 while reading: line = reader.readline() if line.startswith('END'): reading = False else: key, num, atom_type, x, y, z = line.split() if int(num) != serial_number: print('Expected serial number {0}, but got {1}'.format( serial_number, num)) molecule.append([atom_type, x, y, z]) serial_number += 1 return molecule | http://pragprog.com/wikis/wiki/PracProg2fileproc/version/4 | CC-MAIN-2014-35 | refinedweb | 389 | 58.08 |
2019-08-13 19:28:02 8 Comments
I'm just wondering why this standard function is returning a char count of 9 for the sample code at cplusplus.com
// cin.gcount example #include <iostream> // std::cin, std::cout int main () { char str[20]; std::cout << "Please, enter a word: "; std::cin.getline(str,20); std::cout << std::cin.gcount() << " characters read: " << str << '\n'; return 0; }
Please, enter a word: simplify
9 characters read: simplify
Why is this returned as 9 characters?
Related Questions
Sponsored Content
5 Answered Questions
[SOLVED] Why does changing 0.1f to 0 slow down performance by 10x?
- 2012-02-16 15:58:39
- Dragarro
- 138026 View
- 1495 Score
- 5 Answer
- Tags: c++ performance visual-studio-2010 compilation floating-point
26 Answered Questions
[SOLVED] Why is processing a sorted array faster than processing an unsorted array?
- 2012-06-27 13:51:36
- GManNickG
- 1427673 View
- 23680 Score
- 26 Answer
- Tags: java c++ performance optimization branch-prediction
36 Answered Questions
[SOLVED] Why is "using namespace std;" considered bad practice?
- 2009-09-21 03:08:23
- akbiggs
- 782830 View
- 2491 Score
- 36 Answer
- Tags: c++ namespaces std using-directives c++-faq
21 Answered Questions
[SOLVED] Why should I use a pointer rather than the object itself?
10 Answered Questions
[SOLVED] Why is reading lines from stdin much slower in C++ than Python?
- 2012-02-21 02:17:50
- JJC
- 246291 View
- 1741 Score
- 10 Answer
- Tags: python c++ benchmarking iostream getline
8 Answered Questions
[SOLVED] How to convert a std::string to const char* or char*?
10 Answered Questions
[SOLVED] Why are elementwise additions much faster in separate loops than in a combined loop?
- 2011-12-17 20:40:52
- Johannes Gerer
- 232739 View
- 2178 Score
- 10 Answer
- Tags: c++ c performance compiler-optimization vectorization
@Remy Lebeau 2019-08-13 19:57:09
cplusplus.com is generally considered by many people to be a poor site for C++ documentation. You really should use cppreference.com instead.
For instance, cppreference's
istream::getline()documentation states the following:
cplusplus's
istream::getline()documentation states the following instead:
Which is a little misleading, as it implies that
getline()only counts characters that are stored in the user's buffer, but the delimiter is not stored in the buffer and yet it still counts towards
gcount().
@NathanOliver- Reinstate Monica 2019-08-13 19:31:39
Because of the enter key. When you press enter, a newline character (
'\n') is entered into the stream.
getlinereads up to that newline, stores the text in the array, and then reads and discards the newline. Thus, when you read
simplify, you actually read
simplify\nwhich is 9 characters. | https://tutel.me/c/programming/questions/57484191/why+does+stdistreamgcount+return+one+char+more+than+expected | CC-MAIN-2019-51 | refinedweb | 442 | 52.39 |
1.6 Software Watermarking
There are various scenarios where you would like to mark an object to indicate that you claim certain rights to it. Most famously, the government marks all their paper currency with a watermark that is deeply embedded in the paper and is therefore difficult to destroy or reproduce. For example, if you found a torn or damaged part of a note, you could hold it up to the light and use the watermark to identify the original value of the note. Also, if a forger attempts to pay you using photocopied currency, the missing watermark will alert you to the fact that the currency is not genuine.
There has been much work in the area of media watermarking, where the goal is to embed unique identifiers in digital media such as images, text, video, and audio. In a typical scenario, Doris has an online store that sells digital music. When Axel buys a copy of a song, she embeds two marks in it: a copyright notice A (the same for every object she sells) that asserts her rights to the music, and a customer identification number B (unique to Axel) that she can use to track any illegal copies he might make and sell to Carol:
If Doris gets ahold of an illegal copy of a song, she can extract the customer mark B (B is often referred to as a fingerprint), trace the copy back to Axel as the original purchaser, and then take legal action against him. If Axel were to claim “Well, I wrote and recorded this song in the first place,” Doris can extract her copyright notice A to prove him wrong.
Media watermarking algorithms typically take advantage of limitations in the human sensory systems. For example, to embed a watermark in a piece of music, you can add short—and to the human ear, imperceptible—echos. For every 0-bit of the mark, you’d add a really short echo and for a 1-bit, you would add a slightly longer echo. To mark a PDF file, you’d slightly alter the line spacing: 12 points for a 0-bit, and 12.1 points for a 1-bit. To mark an image, you’d slightly increase or decrease the brightness of (a group of) pixels. In all these cases you also need to decide where in the digital file you will make the changes. This is often done by means of a random number generator that traces out a sequence of locations in the file. The seed to the generator is the key without which you cannot extract the watermark. So a typical watermarking system consists of two functions, embed and extract:
Both functions take the secret key as input. The embed function also takes the original object (known as the cover object) and the watermark (the payload) as input, and produces a watermarked copy (the stego object) as output. The extract function, as the name implies, extracts the watermark from the stego object, given the correct key. This is just one basic watermarking system, and we’ll discuss others in Chapter 8 (Software Watermarking).
As you see from the figures above, we also have to take the adversary into account. Axel will want to make sure that before he resells Doris’ object he’s destroyed every watermark she’s embedded. More precisely, he needs to somehow disturb the watermark extraction process so that Doris can no longer extract the mark, even given the right key. In a typical attack, Axel inserts small disturbances into the watermarked object, small enough that they don’t diminish its value (so he can still resell it), but large enough that Doris can no longer extract the mark. For example, he might randomly adjust the line spacing in a PDF file, spread a large number of imperceptible echoes all over an audio file, or reset all low-order bits in an image file to 0. Media watermarking research is a game between the good guys who build marking algorithms that produce robust marks and the bad guys who build algorithms that attempt to destroy these marks. In both cases, they want the algorithms to have as little impact on a viewer’s/listener’s perception as possible.
Of course, our interest in this book is watermarking software, not media. But many of the principles are the same. Given a program P, a watermark w, and a key k, a software watermark embedder produces a new program Pw. We want Pw to be semantically equivalent to P (have the same input/output behavior), be only marginally larger and slower than P, and of course, contain the watermark w. The extractor takes Pw and the key k as input and returns w.
1.6.1 An Example
Take a look at the example in Listing 1.6▸39. How many fingerprints with the value "Bob" can you find? Actually, that’s not a fair question! As we’ve noted, we must assume that the algorithms Doris is using are public, and that the only thing she’s able to keep secret are the inputs to these algorithms. But nevertheless, have a look and see what you can find. One fingerprint stands out, of course, the string variable "fingerprint"! Not a very clever embedding, one might think, but easy to insert and extract, and if nothing else it could serve as a decoy, drawing Axel’s attention away from more subtle marks.
Listing 1.6. Watermarking example.
import java.awt.*; public class WMExample extends Frame { static String code (int e) { switch (e) { case 0 : return "voided"; case 6 : return "check"; case 5 : return "balance"; case 4 : return "overdraft"; case 2 : return "transfer"; case 1 : return "countersign"; case 3 : return "billing"; default: return "Bogus!"; } } public void init(String name) { Panel panel = new Panel(); setLayout(new FlowLayout(FlowLayout.CENTER, 10, 10)); add(new Label(name)); add ("Center", panel); pack(); show(); } public static void main(String args[]) { String fingerprint = "Bob"; if (args[0].equals("42")) new WMExample().init(code(7).substring(0,2) + code(5).charAt(0)); int x = 100; x = 1 - (3 % (1 - x)); } }
What else? Take a look at the code method. What Doris did here was to encode the string "Bob" in base 26 as bob26 = 1 · 262 +14 · 161 + 1 = 104110, using a = 0, b = 1, ..., o = 14, ..., z = 25. She then turned 1041 into a permutation of the integers 0, 1, 2, 3, 4, 5, 6, getting
- 0 → 0, 1 → 6, 2 → 5, 3 → 4, 4 → 2, 5 → 1, 6 → 3
This permutation, in turn, she used to reorder the cases of the switch statement in the code method. To extract the mark, she would have to do the process in reverse. First, she would need to find the method into which the mark is embedded (the secret key would point out the code method), extract the permutation from the switch statement, turn the permutation into 1041, and finally, decode that as the string "bob". There are many algorithms that, like this one, are based on permuting aspects of a program to embed a mark. The very first published water-marking algorithm [104,263] (a Microsoft patent), for example, permutes the basic blocks of a function’s control flow graph. In Section 8.4▸486, we will discuss this further.
Now what about the statement x=1-(3%(1-x))? Here, Doris created a translation table from letters to binary operators:
Thus, the three letters of the string "bob" turn into the operand/operator-pairs 1-, 3%, 1-, which when stitched together become x=1-(3%(1-x)). This is similar in flavor to an algorithm by Monden [252,263] et al., which we will talk about in Section 8.7.1▸505.
The three marks we’ve seen so far are all static, i.e., they’re embedded directly into the code of the program. In our example we’ve embedded into source code, but we could have used any convenient program representation, including binary code, Java bytecode, or any of the many intermediate code forms used in compilers. We will discuss static algorithms further in Chapter 8.
There is one final mark in the program, however, and this is a dynamic finger-print. What this means is that the fingerprint only appears at runtime, and only for a particular input to the program. When Doris starts the example program with the secret input key 42, the statement
new WMExample().init(code(7).substring(0,2) + code(5).charAt(0));
executes and displays the embedded fingerprint:
In Chapter 9 (Dynamic Watermarking), we will discuss these types of watermarks. In practice, of course, they are never as obvious as this: It’s too easy for Axel to find the code that would pop up a window with a string in it. Rather, the watermark is hidden somewhere in the dynamic state of the program, and this state gets built only for the special, secret input. A debugger or a special-purpose recognizer can then examine the state (registers, the stack, the heap, and so on) to find the fingerprint.
1.6.2 Attacks on Watermarking Systems
As in every security scenario, you need to consider possible attacks against the watermark. Doris has to assume, of course, that Axel will try to destroy her marks before trying to resell the program. And, unfortunately, there’s one attack that will always succeed, that will always manage to destroy the watermark. Can you think of what it is? To be absolutely sure that the program he’s distributing doesn’t contain a watermark, well, Axel can just rewrite the program from scratch, sans the mark!4 We call this a rewrite attack:
Axel can also add his own watermarks to the program. We call this an additive attack:
An additive attack might confuse Doris’ recognizer, but more important, it may help Axel to cast doubt in court as to whose watermark is the original one. A distortive attack applies semantics-preserving transformations (such as code optimizations, code obfuscations, and so on) to try to disturb Doris’ recognizer:
Finally, Axel can launch a collusive attack against a fingerprinted program by buying two differently marked copies and comparing them to discover the location of the fingerprint:
To prevent such an attack, Doris should apply a different set of obfuscations to each distributed copy, ensuring that comparing two copies of the same program will yield little information.
One clever attack that Axel may try to use is not an attack on the watermark itself. Rather, Axel could try to bring into question the validity of Doris’ watermark by pretending that the software contains his own watermark. Axel simply writes his own recognizer that “recognizes” this program as containing his mark. If he is successful, we could not tell which was the true recognizer and Doris would not be able to present a legally convincing claim on her own program.
In Chapter 8 and Chapter 9 we will describe many software watermarking algorithms. Some will be useful for watermarking entire applications, others are good for parts of applications. Some will work for binary code, others are for typed bytecode. Some will embed stealthy marks, some will embed large marks, some will embed marks that are hard to remove, and some will have low overhead. However, we know of no algorithm that satisfies all these requirements. This is exactly the challenge facing the software watermarking researcher. | https://www.informit.com/articles/article.aspx?p=1380912&seqNum=6 | CC-MAIN-2021-21 | refinedweb | 1,912 | 58.72 |
28275/how-do-i-append-one-string-to-another-in-python
Try something like this:
from turtle import Turtle, ...READ MORE
connect mysql database with python
import MySQLdb
db = ...READ MORE
For Python 3, try doing this:
import urllib.request, ...READ MORE
Python strings are immutable, you change them ...READ MORE
suppose you have a string with a ...READ MORE
You can also use the random library's ...READ MORE
You can try the following code in ...READ MORE
Syntax :
list. count(value)
Code:
colors = ['red', 'green', ...READ MORE
with open(fname) as f:
content = f.readlines()
# you ...READ MORE
Understand that every 'freezing' application for Python ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/28275/how-do-i-append-one-string-to-another-in-python?show=28277 | CC-MAIN-2020-24 | refinedweb | 116 | 79.87 |
From Documentation
Not to Use zscript for Better Performance
It is convenient to use zscript in ZUML, but it comes with a price: slower performance. The degradation varies from one application from another. It is suggested to use zsript only for fast prototyping, POC, or small projects. For large website, it is suggested to use ZK MVC instead. For example,
<window apply="foo.MyComposer"> //omitted
Then, you can handle all events and components in
foo.MyComposer. By the use of auto-wiring, it is straightforward to handle events and components.
Event Handler Is zscript
In addition to the zscript element, the event handler declared in a ZUL page is also interpreted at the runtime. For example,
<button label="OK" onClick="doSomething()"/>
where doSomething() is interpreted as zscript. Thus, for better performance, they should be replaced too.
Turn off the use of zscript
[since 5.0.8]
If you decide not to use zscript at all, you could turn on the disable-script configuration as follows, such that an exception will be thrown if zscript is used.
<system-config> <disable-zscript>true</disable-zscript> </system-config>
Use the deferred Attribute
If you still need to write zscript codes, you can specify the deferred attribute to defer the evaluation of zscript codes as follows.
<zscript deferred="true"> </zscript>
By specifying the deferred attribute, the zscript codes it contains will not be evaluated when ZK renders a page. It means that the interpreter won't be loaded when ZK renders a page. This saves memory and speeds up page rendering.
In the following example, the interpreter is loaded only when the button is clicked:
<window id="w"> <zscript deferred="true"> void addMore() { new Label("More").setParent(w); } </zscript> <button label="Add" onClick="addMore()"/> </window>
The deferred Attribute and the onCreate Event
It is worth to notice that, if the onCreate event listener is written in zscript, the deferred option mentioned in the previous second becomes useless. It is because the onCreate event is sent when the page is loaded. In other words, all deferred zscript will be evaluated when the page is loaded if the onCreate event listener is written in zscript as shown below.
<window onCreate="init()">
Rather, it is better to rewrite it as
<window use="my.MyWindow">
Then, prepare MyWindow.java as shown below.
package my; public class MyWindow extends Window { public void onCreate() { //to process the onCreate event ...
If you prefer to do the initialization right after the component (and all its children) is created, you can implement the AfterCompose interface as shown below. Note: the afterCompose method of the AfterCompose interface is evaluated at the Component Creation phase, while the onCreate event is evaluated in the Event Processing Phase.
package my; public class MyWindow extends Window implements org.zkoss.zk.ui.ext.AfterCompose { public void afterCompose() { //to initialize the window ...
Use the forward Attribute
To simplify the event flow, ZK components usually send the events to the component itself, rather than the parent or other targets. For example, when an user clicks a button, the onClick event is sent to the button. Developers usually forward the event to the window by the use of the onClick event listener as follows.
<window id="w"> <button label="OK" onClick="w.onOK"/>
As suggested in the previous sections, the performance can be improved by not using zscript at all. Thus, you can rewrite the above code snippet either with EventListener or by specifying the forward attribute as follows.
<window> <button label="OK" forward="onOK"/>
Version History
Last Update : 2012/2/13 | http://books.zkoss.org/wiki/ZK_Developer's_Reference/Performance_Tips/Use_Compiled_Java_Codes | CC-MAIN-2015-22 | refinedweb | 590 | 55.54 |
This project was written using VS2010 and comes as two units. The first part consists of two core C# files (DLNADevice.cs, SSDP.cs) that ship inside a test application with a form and can be downloaded from DLNACore.zip; these files are used to send an SSDP request to Digital Living Network Alliance (DLNA) devices as a multicast message on the LAN and then wait for any replies to come in over UDP on port 1900.
The next step needed to stream files to a media device is to request a list of services from each DLNA device over TCP and then process the XML response, so that we know which address and port each device is listening on and can stream our media to the device or TV using the correct ControlUrl.
Part two of the project is concerned with putting what we have learned about "Play To" into an application, so that we can add "Play To" to a web-site and watch movies on our smart TVs using nothing more than a mobile phone and a browser to control the TV.
Below is the code that sends out the SSDP request, using UDP to multicast the message on the Local Area Network (LAN), and then waits for any replies from DLNA devices on the network. Replies can take as long as fourteen seconds to arrive, so I have wrapped this up as a service using a worker thread that is aborted if the Stop method is called after first setting "Running" to false.
private static void SendRequestNow()
{//Uses UDP Multicast on 239.255.255.250 with port 1900 to send out invitations that are slow to be answered
IPEndPoint LocalEndPoint = new IPEndPoint(IPAddress.Any, 6000);
IPEndPoint MulticastEndPoint = new IPEndPoint(IPAddress.Parse("239.255.255.250"), 1900);//SSDP port
Socket UdpSocket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
UdpSocket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
UdpSocket.Bind(LocalEndPoint);
UdpSocket.SetSocketOption(SocketOptionLevel.IP, SocketOptionName.AddMembership, new MulticastOption(MulticastEndPoint.Address, IPAddress.Any));
UdpSocket.SetSocketOption(SocketOptionLevel.IP, SocketOptionName.MulticastTimeToLive, 2);
UdpSocket.SetSocketOption(SocketOptionLevel.IP, SocketOptionName.MulticastLoopback, true);
string SearchString = "M-SEARCH * HTTP/1.1\r\nHOST:239.255.255.250:1900\r\nMAN:\"ssdp:discover\"\r\nST:ssdp:all\r\nMX:3\r\n\r\n";
UdpSocket.SendTo(Encoding.UTF8.GetBytes(SearchString), SocketFlags.None, MulticastEndPoint);
byte[] ReceiveBuffer = new byte[4000];
int ReceivedBytes = 0;
int Count = 0;
while (Running && Count < 100)
{//Keep looping until we time out or Stop is called, but do wait for at least ten seconds
Count++;
if (UdpSocket.Available > 0)
{
ReceivedBytes = UdpSocket.Receive(ReceiveBuffer, SocketFlags.None);
if (ReceivedBytes > 0)
{
string Data = Encoding.UTF8.GetString(ReceiveBuffer, 0, ReceivedBytes);
if (Data.ToUpper().IndexOf("LOCATION: ") > -1)
{//ChopOffAfter is an extended string method added in Helper.cs
Data = Data.ChopOffBefore("LOCATION: ").ChopOffAfter(Environment.NewLine);
if (NewServer.ToLower().IndexOf(Data.ToLower()) == -1)
NewServer += " " + Data;
}
}
}
else
Thread.Sleep(100);
}
if (NewServer.Length > 0) Servers = NewServer.Trim();//Bank in our new servers nice and quick with minute risk of thread error due to not locking
UdpSocket.Close();
THSend = null;
UdpSocket = null;
}
The above SSDP function returns a space-separated string built from every DLNA client that replied to our UDP broadcast. Each entry is the LOCATION value taken from a reply, which is a device-description URL containing the address and port the client is listening on, so the whole string looks something like a list of http://<ip>:<port>/<path> entries separated by spaces.
This string should be persisted because we don't want to keep polling the network for DLNA clients; later we can simply call each client's device-description URL again to find out whether the device is still connected. The next step in the process is to split the string into an array and create our DLNADevice objects so they are ready to do some talking.
foreach (string Server in DLNA.SSDP.Servers.Split(' '))
{//Test each DLNA client to see if we can talk to them
DLNA.DLNADevice D = new DLNA.DLNADevice(Server);
if (D.IsConnected())
{//Don't worry about the HTML just if the device is connected or not
Output += "<tr><td>" + D.FriendlyName + "</td><td>" + D.IP + ":" + D.Port + "/" + D.SMP + "</td></tr>" + Environment.NewLine;
DLNAGood +=D.FriendlyName.Replace(" "," ") + "#URL#" + Server + " ";
}
}
Helper.SaveSetting("DLNAGood", DLNAGood.Trim());
Output += "<tr><td colspan='2'><a href='" + this.Request.Url.AbsoluteUri + "?Refresh=true'>Refresh DLNA</a></td></tr>" + Environment.NewLine;
The constructor for each DLNADevice is shown below.
public DLNADevice(string url)
{//Constructor takes the device-description URL returned by SSDP, in the form http://<ip>:<port>/<path>
this.IP = url.ChopOffBefore("http://").ChopOffAfter(":");
this.SMP = url.ChopOffBefore(this.IP).ChopOffBefore("/");
string StrPort = url.ChopOffBefore(this.IP).ChopOffBefore(":").ChopOffAfter("/");
int.TryParse(StrPort, out this.Port);
}
Now we need to call the device using TCP (not UDP like SSDP) on the correct address, both to test whether the device is connected and to find out what services it has to offer. The service we are looking for, and the most commonly used, is "avtransport"; we find its ControlUrl by parsing the XML that is returned from our request.
public bool IsConnected()
{//Will send a request to the DLNA client and then see if we get a valid reply
Connected = false;
try
{
Socket SocWeb = HelperDLNA.MakeSocket(this.IP, this.Port);
SocWeb.Send(UTF8Encoding.UTF8.GetBytes(HelperDLNA.MakeRequest("GET", this.SMP, 0, "", this.IP, this.Port)), SocketFlags.None);
this.HTML = HelperDLNA.ReadSocket(SocWeb, true, ref this.ReturnCode);
if (this.ReturnCode != 200) return false;
this.Services = DLNAService.ReadServices(HTML);
if (this.HTML.ToLower().IndexOf("<friendlyname>") > -1)
this.FriendlyName = this.HTML.ChopOffBefore("<friendlyName>").ChopOffAfter("</friendlyName>").Trim();
foreach (DLNAService S in this.Services.Values)
{
if (S.ServiceType.ToLower().IndexOf("avtransport:1") > -1)//avtransport is the one we will be using to control the device
{
this.ControlURL = S.controlURL;
this.Connected = true;
return true;
}
}
}
catch { ;}
return false;
}
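The snippets above and below lean on a small HelperDLNA class that is part of the downloadable code and is not listed in this article. A rough sketch of what those three helpers might look like is given here so the calls make sense (the usual System.Net.Sockets and System.Text usings are assumed); the exact header layout is my assumption based on the UPnP specification, not a copy of the code in the download.

public static class HelperDLNA
{
    public static Socket MakeSocket(string ip, int port)
    {//Open a plain TCP connection to the DLNA device
        Socket soc = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        soc.Connect(ip, port);
        return soc;
    }

    public static string MakeRequest(string method, string url, int contentLength, string soapAction, string ip, int port)
    {//Build the HTTP header block that is sent in front of the XML body
        string r = method.ToUpper() + " /" + url.TrimStart('/') + " HTTP/1.1" + Environment.NewLine;
        r += "Host: " + ip + ":" + port + Environment.NewLine;
        r += "Connection: Close" + Environment.NewLine;
        if (soapAction.Length > 0)
            r += "SOAPACTION: \"" + soapAction + "\"" + Environment.NewLine;
        if (contentLength > 0)
        {
            r += "Content-Type: text/xml; charset=\"utf-8\"" + Environment.NewLine;
            r += "Content-Length: " + contentLength + Environment.NewLine;
        }
        return r + Environment.NewLine;
    }

    public static string ReadSocket(Socket soc, bool close, ref int returnCode)
    {//Read the whole HTTP reply into one string and pull the status code out of the first line
        string reply = "";
        byte[] buffer = new byte[8000];
        int size = 0;
        while (soc.Poll(500000, SelectMode.SelectRead))
        {
            size = soc.Receive(buffer, SocketFlags.None);
            if (size == 0) break;
            reply += Encoding.UTF8.GetString(buffer, 0, size);
        }
        if (close) soc.Close();
        if (reply.StartsWith("HTTP/1.1 ") && reply.Length > 12)
            int.TryParse(reply.Substring(9, 3), out returnCode);
        return reply;
    }
}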
Now we are ready to rock and roll and play some music by posting a little bit of XML to the "ControlUrl" the device is listening on. The XML contains information about our .mp3 file, which could be hosted on another machine across the internet or, in my case, in a local virtual directory hosted by Internet Information Services (IIS 7) on a Windows machine. UploadFileToPlay is called first and then StartPlay, as shown below.
private string UploadFileToPlay(string ControlURL, string UrlToPlay)
{///Later we will send a message to the DLNA server to start the file playing
string XML = XMLHead;
XML += "<u:SetAVTransportURI xmlns:u=\"urn:schemas-upnp-org:service:AVTransport:1\">" + Environment.NewLine;
XML += "<InstanceID>0</InstanceID>" + Environment.NewLine;
XML += "<CurrentURI>" + UrlToPlay.Replace(" ", "%20") + "</CurrentURI>" + Environment.NewLine;
XML += "<CurrentURIMetaData>" + Desc() + "</CurrentURIMetaData>" + Environment.NewLine;
XML += "</u:SetAVTransportURI>" + Environment.NewLine;
XML += XMLFoot + Environment.NewLine;
Socket SocWeb = HelperDLNA.MakeSocket(this.IP, this.Port);
string Request = HelperDLNA.MakeRequest("POST", ControlURL, XML.Length, "urn:schemas-upnp-org:service:AVTransport:1#SetAVTransportURI", this.IP, this.Port) + XML;
SocWeb.Send(UTF8Encoding.UTF8.GetBytes(Request), SocketFlags.None);
return HelperDLNA.ReadSocket(SocWeb, true, ref this.ReturnCode);
}
private string StartPlay(string ControlURL, int Instance)
{//Start playing the new upload film or music track
string XML = XMLHead;
XML += "<u:Play xmlns:u=\"urn:schemas-upnp-org:service:AVTransport:1\"><InstanceID>"+ Instance + "</InstanceID><Speed>1</Speed></u:Play>" + Environment.NewLine;
XML += XMLFoot + Environment.NewLine;
Socket SocWeb = HelperDLNA.MakeSocket(this.IP, this.Port);
string Request = HelperDLNA.MakeRequest("POST", ControlURL, XML.Length, "urn:schemas-upnp-org:service:AVTransport:1#Play", this.IP, this.Port) + XML;
SocWeb.Send(UTF8Encoding.UTF8.GetBytes(Request), SocketFlags.None);
return HelperDLNA.ReadSocket(SocWeb, true, ref this.ReturnCode);
}
Note that we upload the file (well, stream it if the truth be told) and that we only call "StartPlay()" if we get a good HTTP 200 OK reply from the device, which first checks that it can see the file before sending back the 200 OK response.
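In the test application these two calls are wired together in the TryToPlayFile method that also appears in the starting code near the end of this article. The exact error handling in the download may differ, but a minimal sketch of such a wrapper looks like this:

public bool TryToPlayFile(string UrlToPlay)
{//Upload the URL first and only press play if the device answered 200 OK
    if (!this.Connected && !IsConnected()) return false;
    UploadFileToPlay(this.ControlURL, UrlToPlay);
    if (this.ReturnCode != 200) return false;
    StartPlay(this.ControlURL, 0);
    return (this.ReturnCode == 200);
}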
Pausing, stopping or starting the current play item is just as easy, as is setting the volume level: all you need to do is wrap the command up as an XML packet like the ones shown above and then post it to the DLNA client using the ControlURL. We can also use a command to request details about the position of the item currently being played, which is what the GetPosition function below does (a sketch of a Stop command follows it).
private string GetPosition(string ControlURL)
{//Returns the current position for the track that is playing on the DLNA server
string XML = XMLHead + "<m:GetPositionInfo xmlns:m=\"urn:schemas-upnp-org:service:AVTransport:1\"><InstanceID xmlns:dt=\"urn:schemas-microsoft-com:datatypes\" dt:dt=\"ui4\">0</InstanceID></m:GetPositionInfo>" + XMLFoot + Environment.NewLine;
Socket SocWeb = HelperDLNA.MakeSocket(this.IP, this.Port);
string Request = HelperDLNA.MakeRequest("POST", ControlURL, XML.Length, "urn:schemas-upnp-org:service:AVTransport:1#GetPositionInfo", this.IP, this.Port) + XML;
SocWeb.Send(UTF8Encoding.UTF8.GetBytes(Request), SocketFlags.None);
return HelperDLNA.ReadSocket(SocWeb, true, ref this.ReturnCode);
}
You now have all you need to play a movie or a single track from your music collection but if you would like to queue up tracks for a music album then things become a little more difficult because you cannot upload a play-list as such that will work on the devices that I have tested so far even if the device implements the "SetNextAVTransportURI" Interface like my Samsung and Sony "Smart TV's" because all that happens is that the current track stops playing and the new one starts or nothing happens at all.
Well that's the bad news but the good news is that I found the answer by using the above function "GetPosition" that amongst other things returned is how long the current item has left to play so by polling the DNLA device and using a collection of items in our play list it becomes possible to Upload and Start the next item to be played just at the right time.
All the C# Class files are included in the DLNACore.zip at the top of the page along with the play-list queue needed to play albums so you should not have too much trouble editing the test application to play a few movies or tracks.
Your starting code for a project might look something a bit like this.
string DLNAGood = "";
if (DLNA.SSDP.Servers.Length == 0)
{//Will send out a UDP message and then we need to wait for the relies to come in from DLNA server on the LAN
DLNA.SSDP.Start();
Thread.Sleep(12000);//Wait for our service to complete
Helper.SaveSetting("DLNA.SSDP.Servers", DLNA.SSDP.Servers);
} //Save the above values because we don't want to do this very often
foreach (string Server in DLNA.SSDP.Servers.Split(' '))
{//Test each DLNA client to see if we can talk to them
DLNA.DLNADevice D = new DLNA.DLNADevice(Server);//Should be called Client
if (D.IsConnected())
{//The TV is switched on and is ready to play
D.TryToPlayFile("");//Calls upload and start
DLNAGood +=D.FriendlyName.Replace(" "," ") + "#URL#" + Server + " ";
}
}
Helper.SaveSetting("DLNAGood", DLNAGood.Trim());
Using "Play To" from a web-page (Part II)
Using a slow wifi connection to download a 1GB Movie from your two terabyte movie collection to a laptop takes time and that becomes a waste of time if you start streaming the movie to the TV and then decide its junk and that you don't want to watch it and devices like I-Pads or mobile phone just won't have any "Play To" Apps that will work but what if you could browse to a local web-site and click a "Play To" link and have the movie pop up on the TV screen without having to first stream the movie to the mobile device ?
Forget about having to open up all them ports in the windows firewall and all the services you need to have running or access permissions for hidden DLL's running in SvcHost because now you can simply stream the files directly to the TV or download the file to a pen-stick by simply putting a few web-pages on your server and mapping your USB external hard-drive as a virtual directory.
This function is used to read all the directories on the media drive to generate the HTM needed for the project and the code to add all the file in the current directory is just about the same and as easy to write.
if (Directory.Exists(Path))
{
DirectoryInfo DRoot = new DirectoryInfo(Path);
this.Title = DRoot.Name;
bool IsLeft = true;
foreach (DirectoryInfo DInfo in DRoot.GetDirectories())
{
if (DInfo.Name.ToLower().IndexOf("vti_cnf") == -1)
{
if (IsLeft)//Rows containtans two folders, left or right
Output += "<tr><td width='350'><a href='" + RootUrl + "vid/Default.aspx?Path=" + Vids.Helper.EncodeUrl(DInfo.FullName) + "'>" +Vids.Helper.ShortString( DInfo.Name,45) + "</a></td><td><img src='images/folder.png' height='20' width='30' alt='folder' /></td>";
else
Output += "<td width='350'><a href='" + RootUrl + "vid/Default.aspx?Path=" + Vids.Helper.EncodeUrl(DInfo.FullName) + "'>" + Vids.Helper.ShortString(DInfo.Name, 45) + "</a></td><td><img src='images/folder.png' height='20' width='30' alt='folder' /></td></tr>" + Environment.NewLine;
}
IsLeft = !IsLeft;
}
}
if a movie file is clicked in the browser then it just becomes a question of passing in the URL of the movie and then calling "UploadFile" and "StartPlay" from the code behind for the .aspx page which is easy to do so here I will cover how we deal with the queued play list and that starts with a bit of javascript in the page that we used to do Ajax before Ajax or JSON was ever invented.
<script type="text/javascript">
setInterval("PollServer()", 5000);
var Count = 0;
function PollServer() {
Count++;
var I = new Image();
I.src = "PlayTo.aspx?Refresh=" + Count;
}
function Previous() {
Count++;
var I = new Image();
I.src = "PlayTo.aspx?Previous=true&Refresh=" + Count;
}
function Next() {
Count++;
var I = new Image();
I.src = "PlayTo.aspx?Next=true&Refresh=" + Count;
}
function Stop() {
Count++;
var I = new Image();
I.src = "PlayTo.aspx?Stop=true&Refresh=" + Count;
}
</script>
All the files in the albums folder were saved to an array from the folder containing the album to form our queue so in the code behind all we need to do is Something like PlayListPointer++; UploadFile(); StartPlay(); but the important part to notice here is that a timer in javascript is used (see function PollServer) to keep polling the web server which in the code behind then calls the GetPostion function which returns how long the current sound track has left to play and if it's less than a few seconds then the calling thread sleeps for a second or two and then increments the PlayListPointer before calling UploadFile(); StartPlay(); on the DNLA device to play the next track.
The server-side funtion for polling the DNLA Client TV is shown below and this function will also advance the queue if the "Force" flag is set to true because the "Next Track" button has been pressed by the user.
public int PlayNextQueue(bool Force)
{//Play the next track in our queue but only if the current track is about to end or unless we are being forced
if (Force)
{//Looks like someone has pressed the next track button
PlayListPointer++;
if (PlayListQueue.Count == 0) return 0;
if (PlayListPointer > PlayListQueue.Count)
PlayListPointer = 1;
string Url = PlayListQueue[PlayListPointer];
StopPlay(false);
TryToPlayFile(Url);//Just play it
NoPlayCount = 0;
return 310;//Just guess for now how long the track is
}
else
{
string HTMLPosition = GetPosition();
if (HTMLPosition.Length < 50) return 0;
string TrackDuration = HTMLPosition.ChopOffBefore("<TrackDuration>").ChopOffAfter("</TrackDuration>").Substring(2);
string RelTime = HTMLPosition.ChopOffBefore("<RelTime>").ChopOffAfter("</RelTime>").Substring(2);
int RTime = TotalSeconds(RelTime);
int TTime = TotalSeconds(TrackDuration);
int SecondsToPlay = TTime - RTime - 5;
if (SecondsToPlay < 0) SecondsToPlay = 0;//Just a safeguard
if (SecondsToPlay <10)
{//Current track is about to end so wait a few seconds and then force the next track in our queue to play
Thread.Sleep((SecondsToPlay * 1000) +100);
return PlayNextQueue(true);//Calls uploadFile and StartPlay
}
return SecondsToPlay;//Will have to wait to be polled again before playing the next track in our queue
}
The download for this project includes all the files you need for browsing your movie collection and using some of the code we have covered already so that covers the TV and "Play To" but what about if you want to stream and watch a movie on your laptop ? Well of course thats included in the project but you may need to install the VLC plugin for windows because HTML5 is a little behind the times when playing 20 year old .mps files using the <AUDIO> tag in some browsers
Create a new web-site using something like port 8080 and set the physical path as an external hard-drive if that is where you store your media files so that the setup looks something like this from IIS-7 Service manager
Default Web-Site (80)
Media (8080)
|Sifi
|Horror
|War
|Music
Use Visual studio to create a new application named "Vid" on the 8080 website so that the setup looks like this
Default Web-Site (80)
Media (8080)
|Sifi
|Horror
|Vid (Application)
|War
|Music
Now place the contents of the DLNAWeb.zip vid folder into the sites Vid folder and then test the web-site is working by browsing to hxxp://localhost:8080/Vid/Default.aspx to view the home screen.
If you don't have a copy of Visual studio then copy the "Vid" folder to the physical path for the web-site and then convert the folder to an application by right clicking the "Vid" folder in IIS-7 Manager (Start-Programs-Admin Tools) and then selecting "Convert to application"
By default IIS-7 does not host all the file types that you might need for your movies so click on the "Media(8080)" node on the left-hand side of IIS-7 Manager and then double click to MINE Types to view all the suported file types. If .AVI is missing then right click and add a new MINE Type that should look like this.
.avi video/avi Inherited
Thats all, Enjoy Dr Gadgit
The web-site project also contains a "Youtube.asxp" web-page that relays search requests over SSL to Youtube so that pages fit better in small android type devices and any spyware scripts are also removed but in order to do this a webhelper.cs file is included and I think you will find that the static GetWebPage() method is worth looking at since it deals with Encryption, Cookies, Chunking and Gzip and leaves you in a lot more control than using a HttPWebRequest
See for my wifi scanner or wait for my next project that is a fully working windows remote decktop application.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
PlayTo.aspx.cs(12,32,12,36): error CS0246: The type or namespace name 'DLNA' could not be found (are you missing a using directive or an assembly reference?)
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://codeproject.freetls.fastly.net/Articles/893791/DLNA-made-easy-with-Play-To-from-any-device?display=Print | CC-MAIN-2022-05 | refinedweb | 3,159 | 52.49 |
As a result of last week's research camp on robotics, and Orocos being
available as an ROS stack, we've improved the integration of Orocos components
in ros package trees. A small number of patches have been applied on the
toolchain-2.1 branch, some bigger ones on the master branch.
This is how it works for 2.1.x (toolchain-2.1) :
* When you use the orocreate-pkg command to create a package, the UseOrocos
cmake macros will detect if ROS_ROOT was set or not. If so, it switches to
rosbuild mode and finds rtt/ocl using rospack. There are two ways to find it:
either you downloaded them from a ros repository, or you have them checked out
yourself. In the latter case, they must be in your ROS_PACKAGE_PATH.
* When in rosbuild mode, components end up in package/lib/orocos/, while non-
component libraries end up in package/lib/. This is conform what the
deployer/plugin loaders expect. We assume you're not going to do a 'make
install' in the rosbuild mode, so a plain 'make' puts everything in the right
place.
* When in non-rosbuild mode, everything stays as before.
In addition, for master (upcomming toolchain 2.2.0), I have switched OCL to
use the UseOrocos cmake macros. So OCL will pick up the above settings
automatically depending on your environment. This means that ros users don't
need a 'make install' step anymore when building ocl. This solves the annoying
'two deployer-gnulinux choices' bug in rosrun. RTT still requires a 'make
install' step in the rtt/install directory for ros environments. That will
stay like that forever, for now :-)
Last but not least, in the deployer, you may now use import("packagename")
where packagename is a ros package (so findable by rospack) and which has its
orocos components installed in packagename/lib/orocos. It will also load all
dependencies of that package ! This means a huge improvement in setting up
import statements, since you can now universally import (or depend on)
typekits or components once they are packaged properly.
The standard Orocos way of loading packages is also still supported. In case
you have an lib/orocos/packagename directory in your RTT_COMPONENT_PATH, that
will be found too using import("packagename") and loaded accordingly. No
dependencies can be used in this case though, since you can't specify them,
unless we also install the manifest.xml file and start parsing it.
That all said, this is still somewhat experimental. It works in one setup, it
may be broken in another. I expect some more iterations, but any feedback
before the next release would be great. We need to catch-up with the docs/wiki
too.
Peter
PS: On master, I had to rename the rttlua component to orocos-lua to avoid
target name conflicts in the cmake file. | http://www.orocos.org/forum/rtt/rtt-dev/orocos-rosbuildrospack | CC-MAIN-2018-26 | refinedweb | 475 | 64.91 |
>> for balanced parentheses in an expression in C++
Suppose we have an expression. The expression has some parentheses; we have to check the parentheses are balanced or not. The order of the parentheses are (), {} and []. Suppose there are two strings. “()[(){()}]” this is valid, but “{[}]” is invalid.
The task is simple; we will use stack to do this. We should follow these steps to get the solution −
- Traverse through the expression until it has exhausted
- if the current character is opening bracket like (, { or [, then push into stack
- if the current character is closing bracket like ), } or ], then pop from stack, and check whether the popped bracket is corresponding starting bracket of the current character, then it is fine, otherwise that is not balanced.
- After the string is exhausted, if there are some starting bracket left into the stack, then the string is not balanced.
Example
#include <iostream> #include <stack> using namespace std; bool isBalancedExp(string exp) { stack<char> stk; char x; for (int i=0; i<exp.length(); i++) { if (exp[i]=='('||exp[i]=='['||exp[i]=='{') { stk.push(exp[i]); continue; } if (stk.empty()) return false; switch (exp[i]) { case ')': x = stk.top(); stk.pop(); if (x=='{' || x=='[') return false; break; case '}': x = stk.top(); stk.pop(); if (x=='(' || x=='[') return false; break; case ']': x = stk.top(); stk.pop(); if (x =='(' || x == '{') return false; break; } } return (stk.empty()); } int main() { string expresion = "()[(){()}]"; if (isBalancedExp(expresion)) cout << "This is Balanced Expression"; else cout << "This is Not Balanced Expression"; }
Output
This is Balanced Expression
- Related Questions & Answers
- Check for balanced parentheses in Python
- Check for balanced parentheses in an expression - O(1) space - O(N^2) time complexity in C++
- Check for balanced parentheses in an expression O(1) space O(N^2) time complexity in Python
- Program to check whether parentheses are balanced or not in Python
- Count pairs of parentheses sequences such that parentheses are balanced in C++
- C++ Remove Invalid Parentheses from an Expression
- Print all combinations of balanced parentheses in C++
- C++ Balanced expression with replacement
- How to match parentheses in Python regular expression?
- Program to find maximum number of balanced groups of parentheses in Python
- C++ Program to Check for balanced paranthesis by using Stacks
- Print the balanced bracket expression using given brackets in C Program
- In MySQL, how to check for a pattern which is not present within an expression?
- Replace the Substring for Balanced String in C++
- C++ Program to Construct an Expression Tree for a Postfix Expression
Advertisements | https://www.tutorialspoint.com/check-for-balanced-parentheses-in-an-expression-in-cplusplus | CC-MAIN-2022-27 | refinedweb | 414 | 58.21 |
Inheriting scalars
Discussion in 'C++' started by Ray Gardener, Jun 9, scalars, lists, dictionaries ..., Feb 28, 2005, in forum: Python
- Replies:
- 8
- Views:
- 336
- Paul Boddie
- Mar 1, 2005
Scipy: vectorized function does not take scalars as argumentsago, May 24, 2006, in forum: Python
- Replies:
- 3
- Views:
- 339
- ago
- May 25, 2006
TypeError: only length-1 arrays can be converted to Python scalars., Feb 28, 2008, in forum: Python
- Replies:
- 2
- Views:
- 9,197
- Steve Holden
- Feb 28, 2008
scalars and namespaceJeff Thies, Jun 27, 2003, in forum: Perl Misc
- Replies:
- 4
- Views:
- 99
- Kent Paul Dolan
- Jun 28, 2003
Why "scalars leaked" in ithread? Is my code right for doing this?niry, Aug 28, 2003, in forum: Perl Misc
- Replies:
- 5
- Views:
- 128
- niry
- Aug 30, 2003 | http://www.thecodingforums.com/threads/inheriting-scalars.283630/ | CC-MAIN-2014-49 | refinedweb | 128 | 71.48 |
curve karve wrote:How many objects are eligible for garbage collection after executing line 7?
view source print
public class Tester {
public static void main(String[] args) {
Integer x = new Integer(3000);
Integer y = new Integer(4000);
Integer z = new Integer(5000);
Object a = x;
x = y;
y = z;
z = null; //line 7
}
}here the answer is zero i don know how..anybody kindly explain..according to me the answer should be 3...
Mahesh Chitale wrote:yep right zero is right 3 is wrong everyone has a reference
well that graph thing is really great can be used to solve any GC question thanks for it W.Joe Smith | http://www.coderanch.com/t/476693/java-programmer-SCJP/certification/garbage-collection | CC-MAIN-2014-42 | refinedweb | 109 | 59.64 |
Ticket #4346 (closed Bugs: fixed)
Boost pool's not comaptible with Microsoft memory leak detection
Description
Boost pool classes not compatible with Microsoft memory leak detection.
You can include that lines in code: #ifdef _DEBUG
#define _CRTDBG_MAP_ALLOC
#endif #include <crtdbg.h> #include "boost\pool\pool_alloc.hpp"
We can solve problem using two ways:
- Rename all malloc & free methods
- Add #pragma push_macro, #undef and #pragma pop_macro lines. Also add alternative names for malloc and free methods
Attachments
Change History
Changed 4 years ago by Arkadiy Shapkin <arkadiy_s@…>
- attachment pool.patch
added
comment:1 Changed 4 years ago by steven_watanabe
- Owner set to cnewbold
- Component changed from None to pool
comment:2 Changed 4 years ago by steven_watanabe
- Status changed from new to closed
- Resolution set to fixed
Note: See TracTickets for help on using tickets. | https://svn.boost.org/trac/boost/ticket/4346 | CC-MAIN-2014-15 | refinedweb | 134 | 50.16 |
In this Tutorial, we will Explore Java Variables, Types of Variables, Java Instanceof, Scope & Lifetime of a Variable with the help of Examples:
We will also see a few frequently asked questions that would help you in understanding the topic better.
After going through this tutorial, you will be gaining insights into the Java variables, local and global variables, the instance variable, and other sub-topics related to Java variables.
=> Check ALL Java Tutorials Here.
What You Will Learn:
Java Variables
As we know a Java variable is a storage unit in a Java program. A Java variable is a combination of ‘type’, ‘identifier’, ‘identifier value’. Declaring a variable requires ‘type’ and ‘identifier’.
However, when you specify the value of a variable after declaring the variable, this process is called the initialization of a variable.
Syntax:
type identifier [ = value][, identifier [= value] ...]
Examples
// declaring three variables a, b and c. int a, b, c; // initializing variables a and c. int a = 10, b, c = 5;
Dynamic Initialization
Here, we will see the dynamic initialization of a variable. We will be calculating the discriminant of the quadratic equation of mathematics.
The mathematical formula for calculating discriminant is b²-4ac for the equation ax² +bx +c
All we have to do is to calculate the discriminant using dynamic initialization.
public class DynamicInitialization { public static void main(String[] args) { int a = 1, b = 2, c = 3; /* * d is dynamically initialized which will be the * discriminant of the quadratic equation */ double d = b*b -4*a*c; System.out.println("Discriminant is: " + d); } }
Output
Scope And Lifetime Of Variables
In this section, we will discuss the scope and lifetime of a Java variable. Here, we will take a variable ‘a’ that will be known to the entire program and then demonstrate the value of that variable which will be specific to a block inside the program.
Again we will create another variable ‘b’ inside a block that depends on the value of ‘a’. As soon as the scope ends, the scope of variable ‘b’ also ends whereas ‘a’ is still known to the program.
class VariableScope { public static void main(String args[]) { // a is known to entire program int a; a = 15; // starting new scope known only to this block if (a == 15) { int b = 20; // a and b both known here. System.out.println("a and b: " + a + " " + b); a = b / 2; } /* b is unknown here which means * if we print b, it will throw an error * whereas a is still known */ System.out.println("a is " + a); } }
Output
Java Variable Types
In this section, we will learn about the various types of Java variables mentioned below.
- Local variable
- Instance variable
- Static or Class variable
Local Variables
These variables are declared inside the body of a method. These can be used within the same method where it is being initialized.
Some of the properties of a Local Variable include:
- Local variables are declared inside a method, constructor, or block.
- No access modifiers for local variables.
- These can be used only within the same block, method, or constructor where it is initialized.
- No default value after you have declared your local variable. You need to initialize your declared local variable.
- It can’t be defined by a static keyword.
Given below is the program in which we have used local variables initialized within a method of a class. As “height” is a local variable initialized with the calculate() method, the scope of this variable will be confined within the method.
public class local { public void calculate() { // initialized a local variable "height" int height = 0; // incrementing the value of height height = height + 170; System.out.println("height is: " + height + " cm"); } public static void main(String args[]) { // a is a reference used to call calculate() method local a = new local(); a.calculate(); } }
Output
Instance Variables
Instance variables are those variables that are declared inside a class. Unlike Local variables, these variables cannot be declared within a block, method, or constructor.
Enlisted below are the properties of the Instance variable:
- They are declared within a class but outside a block, method or constructor.
- It cannot be defined by a static keyword.
- Unlike Local variables, these variables have a default value.
- The integer type has a default value ‘0’ and the boolean type has the default value ‘false’.
- Unlike Local variables, we have access modifiers for Instance variables.
Given below is the program where we have demonstrated the instance variable. We have declared variables outside the main method and then assigned the values to them by using objects keeping one variable the “number” unassigned.
Finally, we have printed the values of these Instance variables and the unassigned variable “number” has printed ‘0’ by default.
public class instance { // Declaring instance variables public int rollNum; public String name; public int totalMarks; public int number; public static void main(String[] args) { // created object instance in = new instance(); in.rollNum = 95; in.name = "Saket"; in.totalMarks = 480; // printing the created objects System.out.println(in.rollNum); System.out.println(in.name); System.out.println(in.totalMarks); /* * we did not assign the value to number so it * will print '0' by default */ System.out.println(in.number); } }
Output
Static Or Class Variable
Unlike the Local and Instance variable (where we can not use static), we have another variable type which is declared as static and is known as “Static or Class variable”.
Given below are some of the properties of the Static or Class variable:
- These variables cannot be local.
- Static variables are shared among all the instances of a class.
- The default values of Static/Class variables are the same as the Instance variables.
- Static variables can be used within a program by calling the className.variableName
- The memory allocated to store Static variables is Static memory.
In the below program, we are calculating the circumference of a circle by using a private variable radius and a constant pi. Both these variables are declared as static.
public class StaticVariable { // radius is declared as private static private static int radius; // pi is a constant of type double declared as static private static final double pi = 3.14; public static void main(String[] args) { // assigning value of radius radius = 3; // calculating and printing circumference System.out.println("Circumference of a circle is: " + 2*pi*radius); } }
Output
Java instanceof
The Java instanceof is an operator that is used to tell whether the created object is an instance of the type or not. Type can be a Class or an interface.
The return type is Boolean i.e. either “true” or “false”.
For Example, In the below program, we have created a reference variable a1 of type A and tried to find whether a1 is an instance of A or not. As a1 is an instance of A, it returned “true”.
class A { public static void main(String args[]) { A a1 = new A(); System.out.println(a1 instanceof A); } }
Output
Frequently Asked Questions
Q #1) What are Java Global Variables?
Answer: Global variables are those variables that are accessed by the entire program and it is declared at the beginning of the program.
Global variables do not belong to Java as Java is a pure Object Oriented programming language and everything belongs to the Java Class. Just to protect data and members (variables) of the Class, Java does not support Global variables.
However, we have Static variables that are globally declared and is accessible by all method, subclass of a program.
Q #2) How to clear the value of a variable in Java?
Answer: It can be done using an inbuilt method of Java that is java.DoubleAdder.reset().
The syntax of this method is
Public void reset();
This method belongs to the package “java.util.concurrent.atomic.DoubleAdder” so you need to import this package before you proceed.
In the below program, we have added a few elements into DoubleAdder and then tried resetting it and finally printed the value after the reset operation.
import java.util.concurrent.atomic.DoubleAdder; public class clearValue { public static void main(String[] args) { DoubleAdder a = new DoubleAdder(); // adding elements into DoubleAdder a.add(99); a.add(83); a.add(75); a.add(105); //Printing the value of 'a' System.out.println("Value after adding elements: " +a); // resetting the value of a a.reset(); // Printing the value of 'a' after reset System.out.println("Value after resetting: " + a); } }
Output
#3) How to check the following Variable Type in Java?
String a = “test”;
Answer: If the variable is of type String then you can use referenceVariable.getClass().getName().
class A { public static void main(String args[]) { String a = "test"; System.out.println(a.getClass().getName()); } }
Output
#4) How to update a variable in Java?
Answer: Given below is a simple program where we have updated a variable in Java.
public class updateVariable { public static void main(String[] args) { int a = 10; System.out.println(a); a = 20; System.out.println(a);}}
Output
Conclusion
In this tutorial, we have discussed Java Variables and provided an insight into the Dynamic Initialization, scope, and lifetime of a variable along with explaining the different Java variable types and Java instanceof operator.
Each major concept was explained with proper programming examples to help you understand the topic better.
Suggested reading =>> VBA Variables and Option Explicit
Towards the end, we also saw a couple of frequently asked questions that will let you know about the different questions which could be asked during Java interviews.
=> Watch Out The Simple Java Training Series Here. | https://www.softwaretestinghelp.com/java-variables-and-types/ | CC-MAIN-2021-17 | refinedweb | 1,584 | 54.73 |
/* . */
#define NEW_SELECTIONS
/* On 4.3 these lose if they come after xterm.h. */
/* On HP-UX 8.0 signal.h loses if it comes after config.h. */
/* Putting these at the beginning seems to be standard for other .c files. */
#include <std>
#include <ctype.h>
#include <errno.h>
#include <setjmp.h>
#include <sys/stat.h>
();
#endif /* USE_X_TOOLKIT */
#ifndef USE_X_TOOLKIT
#define x_any_window_to_frame x_window_to_frame
, mouse_faceQueue (););
FONT_TYPE *font = FACE_FONT (face);
GC gc = FACE_GC (face);
int gc_temporary = 0;
/* HL = 3 means use a mouse face previously chosen. */
if (hl == 3)
cf =)
xgcv.foreground = face->foreground;
/* If the glyph would be invisible,
try a different foreground. */
if (xgcv.foreground == xgcv.background)
xgcv.foreground = face->background;;;
{ | https://emba.gnu.org/emacs/emacs/-/blame/eb506b8d0ee459706a1860f35ba46e7cdf8c7edd/src/xterm.c | CC-MAIN-2021-10 | refinedweb | 110 | 63.56 |
In article <39c3a6b9.8147312 at news-server.socal.rr.com>, howard at eegsoftware.com wrote: > On Thu, 14 Sep 2000 18:42:23 GMT, howard at eegsoftware.com wrote: > > >I am getting compile errors trying to compile a SWIG 1.1 extension to > >work with 2.0b1. > > > >The errors look like: > > > >Compiling... > >pycdx3_wrap.cxx > >D:\Microsoft Visual Studio\VC98\INCLUDE\math.h(514) : error C2894: > >templates cannot be declared to have 'C' linkage > >D:\Microsoft Visual Studio\VC98\INCLUDE\wchar.h(700) : error C2733: > >second C linkage of overloaded function 'wmemchr' not allowed > > ... In your pycdc3_wrap.cxx file you need to move the #include <Python.h> to be in front of #ifdef __cplusplus extern "C" { #endif You can make this change more permanent by editing pyexp.swg in SWIG's library directory. -- Robin Dunn Software Craftsman robin at AllDunn.com Java give you jitters? Relax with wxPython! Sent via Deja.com Before you buy. | https://mail.python.org/pipermail/python-list/2000-September/061803.html | CC-MAIN-2016-44 | refinedweb | 154 | 64.88 |
Created on 2020-07-21 09:28 by ronaldoussoren, last changed 2020-07-29 21:14 by gregory.p.smith.
The code for os.link() seems to ignore follow_symlinks when the linkat(2) function is not available on the platform, which results in unexpected behaviour when "follow_symlinks" is false.
I'm trying to give os.link() and follow_symlinks the benefit of the doubt, but the implementation just seems buggy to me.
POSIX says that "[i]f path1 names a symbolic link, it is implementation-defined whether link() follows the symbolic link, or creates a new link to the symbolic link itself" [1]. In Linux, link() does not follow symlinks. One has to call linkat() with AT_SYMLINK_FOLLOW:.
The behavior is apparently the same in FreeBSD [2].
Thus the following implementation in os.link() seems buggy.
#ifdef HAVE_LINKAT
if ((src_dir_fd != DEFAULT_DIR_FD) ||
(dst_dir_fd != DEFAULT_DIR_FD) ||
(!follow_symlinks))
result = linkat(src_dir_fd, src->narrow,
dst_dir_fd, dst->narrow,
follow_symlinks ? AT_SYMLINK_FOLLOW : 0);
else
#endif /* HAVE_LINKAT */
The only way that the value of follow_symlinks matters in Linux is if src_dir_fd or dst_dir_fd is used with a real file descriptor (i.e. not DEFAULT_DIR_FD, which is AT_FDCWD). Otherwise, the default True value of follow_symlinks is an outright lie. For example:
>>> os.link in os.supports_follow_symlinks
True
>>> open('spam', 'w').close()
>>> os.symlink('spam', 'spamlink1')
>>> os.link('spamlink1', 'spamlink2')
spamlink2 was created as a hardlink to spamlink1, not its target, i.e. it's a symlink:
>>> os.lstat('spamlink1').st_ino == os.lstat('spamlink2').st_ino
True
>>> os.readlink('spamlink2')
'spam'
In contrast, if src_dir_fd is passed, then follow_symlinks=True is implemented as advertised (via AT_SYMLINK_FOLLOW):
>>> fd = os.open('.', 0)
>>> os.link('spamlink1', 'spamlink3', src_dir_fd=fd)
spamlink3 was created as a hardlink to spam, the target of spamlink1:
>>> os.lstat('spam').st_ino == os.lstat('spamlink3').st_ino
True
That the value of an unrelated parameter -- src_dir_fd -- changes the behavior of the follow_symlinks parameter is obviously a bug that should be addressed.
POSIX mandates that "[i]f both fd1 and fd2 have value AT_FDCWD, the behavior shall be identical to a call to link(), except that symbolic links shall be handled as specified by the value of flag". It's already using AT_FDCWD as a default value, so the implementation of os.link() should just unconditionally call linkat() if it's available. Then the value of follow_symlinks, true or false, will be honored, with or without passing src_dir_fd or dst_dir_fd.().
---
In Windows, CreateHardLinkW [3] is incorrectly documented as following symlinks (i.e. "[i]f the path points to a symbolic link, the function creates a hard link to the target"). Actually, it opens the file to be hard-linked with the NTAPI option FILE_OPEN_REPARSE_POINT (same as WinAPI FILE_FLAG_OPEN_REPARSE_POINT). Thus no type of reparse point is followed, including symlinks.
---
[1]:
[2]:
[3]:
>().
Isn't that a backwards-incompatible change?
> Isn't that a backwards-incompatible change?
So, do you think it should just be documented that follow_symlinks is effectively ignored with os.link() on most platforms that claim to support it, unless either src_dir_fd or dst_dir_fd is used? I'd rather it was fixed to behave consistently in 3.10, even if it's backwards incompatible with some use cases on some platforms. I think for most use cases, it's just called without arguments as os.link(src, dst), in which case on most platforms switching the default to follow_symlinks=False will preserve the existing and expected behavior.
I agree that the current implementation is wonky.
The implementation should use linkat(2) whenever it is available, that's the only portable way to honour the follow_symlinks flag as POSIX says that the behaviour for link(2) with symbolic links is implementation defined.
From a quick experiment link(2) on Linux behaves like linkat(2) without AT_SYMLINK_FOLLOW. On macOS link(2) behaves like linkat(2) *with* AT_SYMLINK_FOLLOW.
That means os.link behaviour is currently different on macOS and Linux.
I think it would be worthwhile to try to standardise the behaviour. Given the relative market sizes it I'd go for the current behaviour on Linux (with explicit tests!), even if that might not be backward compatible on macOS.
I'd also add a configure test for the behaviour of link(2) and error out when the user specifies a value for follow_symlinks that's not compatible with link(2) when linkat(2) is not available. Or maybe only do this when the user explicitly passes in a value for this argument (make it a tri-state).
Also: note that the current macOS installers on macOS don't look at the follow_symlinks flag at all, they are build on macOS 10.9 where linkat(2) is not available (unlink macOS 10.10 or later). That's why I noticed this problem.
> So, do you think it should just be documented that follow_symlinks is effectively ignored with os.link() on most platforms that claim to support it, unless either src_dir_fd or dst_dir_fd is used?
At this stage, I am just trying to understand all the possibilities in the design space and I don't have a preferred path. I just wanted to point out that we should take into account that many things may be broken if we make changes that are backwards-incompatible in a low-level function and we must have that in mind
Thanks for the analysis Eryk! I think you are right, changing the default to match the behavior that people have actually been experiencing on `os.link(src, dst)` makes sense.
Possible suggestion:
We can go one step further if anyone believes it is likely to be necessary: Preserve the exiting buggy behavior regardless of src_dir_fd= value when follow_symlinks is not explicitly provided as an argument. This way the behavior for people who did not specify it remains unchanged <= 3.9. This would be the principal of no surprise. (it'd effectively become a tri-state _UNSPECIFIED/True/False value where _UNSPECIFIED depends on the mess of conditionals described by Eryk)
Documentation up through 3.9 should be updated to note the oddity.
In 3.8 & 3.9 if it _is_ explicitly specified, fixing the bug so that it actually honors that makes sense.
In 3.10 we should honor the new =False default without this tri-state behavior bug-compatible-by-default at all.
This is more complicated to implement. I'd also be happy with the already described "just updating the default to False and fixing forward in 3.10 to actually honor True properly."
META: Regarding macOS, can we update the macOS version used to build the installers for 3.10? | https://bugs.python.org/issue41355 | CC-MAIN-2020-34 | refinedweb | 1,091 | 58.18 |
A Proven Framework for Exporting Your Medium Followers
Medium is undergoing lots of changes, it is really evolving nowadays. I love to write on the sleek interface and I love to tweak Medium. Maybe more than writing on it.
So one of my friends had a problem that was a gold one: She wanted to export her follower list from Medium to CSV. By hand this would have taken her ages for about 2000 followers.
I haven’t yet inspected the followers page (actually I didn’t even know Medium has an option to see who is following you) so this was a completely new field for me. Fortunately she knows about HTML and she could easily find out that elements have the same classes. This is a point that I could use so I started coding the solution in Python right away. It was a good exercise for me because I am learning Python and I learned like two times more from this task than from like the first 100 pages of the book I read and from the videos I have seen.
TL;DRTL;DR
Yes, I will give you the code in a second. Just let me give you some instructions. First of all, you need to scroll down to the end of your follower list by clicking on the ‘Show More’ lots of times (it depends on how many followers you have). Second, you need to save the page after you scrolled down. It is the easiest if you right-click and click ‘Save as..’ or something like that. If you can’t find it that way, you can just hit Ctrl-S or Cmd-S in case you use Mac. Anyway, try to save it into a separate folder. Now you are done with about half of the job. Congrats!
After saving the files you will see a file and a folder. If you have chosen ‘index.html’ as the name of the file, open that file in a text editor. It is important because if you just double-click it, it will open in your browser (unless you configured it to open somewhere else but in this case I guess you are a programmer and you already know what is going on). Copy the contents of the file. The easiest way is Ctrl-A and then Ctrl-C. In Mac, change Ctrl to Cmd.
Here comes the tricky part! I coded this thing in Python and it is hard for an average user to set up a Python development environment. Fortunately it is dead simple (Halloween intended) by using a free cloud editor called Cloud9.
Just sign up and create a new box with this URL:. It is very important to select Django as a template so you already have Python shipped.
After you created your your amazing virtual machine you will see a code editor popping up. COOL!
Over here create a file named ‘main.py’ by right-clicking the folder on the left that has the same name as your project name when you created it. (In my case it is ‘your-fancy-exporter-name’ from the image above.
Right now comes the super fun part! Paste this code into
main.py:
import bs4 import re txt = open("index.html") def read1k(): return txt.read(65443553) for data in iter(read1k, ''): soup = bs4.BeautifulSoup(data, "html5lib") links = [a.attrs.get('href') for a in soup.select('div.list-itemDescription > a')]; names = [a for a in soup.select('div.list-itemDescription > a')]; descriptions = [p for p in soup.select('p.list-itemDescription')]; i = 0 length = len(descriptions) while(i < len(links)): ez = links[i] + "\t" + re.sub(' +',' ',names[i].text).replace('\n', '\t').replace('\r', '').encode('ascii', 'ignore').decode('ascii') + "\n" with open("output.csv", "a") as myfile: myfile.write(ez) i+= 1 print("The function finished with %d found followers. You can download 'output.csv' now")%(len(links))
At this point if you could follow, I must say you rock!
So to the next step: Boring installation of dependencies.
Paste this command in the terminal in the bottom of the page:
sudo pip install beautifulsoup4
Right now just before running the code, you need to insert your Medium page HTML content into a new file called ‘index.html’ the same way as you created main.py. The problem with this is that it is unformatted and the script has problems selecting data from it in it’s current stage. So right now You must format it by clicking on Edit > Code Formatting > HTML while your index.html file is open in the editor.
And Here Comes the Magic!And Here Comes the Magic!
Just paste
python main.py
into the terminal and right now you have an ‘output.csv’ in your file tree on the left! Hooray! That’s your CSV data! WOW, you made it this far! You can now just right-click to that CSV and download it from the drop-down menu. That’s it!
This was a featured article on Medium, this is why I showed you this on Codementor too, in case you would like to see my code or use it! | https://www.codementor.io/noxowe/proven-framework-for-exporting-your-medium-followers-du107ufv7 | CC-MAIN-2017-30 | refinedweb | 865 | 75.81 |
Next: Using DLLs with GNAT, Previous: Windows Calling Conventions, Up: Mixed-Language Programming on Windows [Contents][Index]
A Dynamically Linked Library (DLL) is a library that can be shared by several applications running under Windows. A DLL can contain any number of routines and variables.
One advantage of DLLs is that you can change and enhance them without forcing all the applications that depend on them to be relinked or recompiled. However, you should be aware than all calls to DLL routines are slower since, as you will understand below, such calls are indirect.
To illustrate the remainder of this section, suppose that an application
wants to use the services of a DLL
API.dll. To use the services
provided by
API.dll you must statically link against the DLL or
an import library which contains a jump table with an entry for each
routine and variable exported by the DLL. In the Microsoft world this
import library is called
API.lib. When using GNAT this import
library is called either
libAPI.dll.a,
libapi.dll.a,
libAPI.a or
libapi.a (names are case insensitive).
After you have linked your application with the DLL or the import library and you run your application, here is what happens:
API.dllis mapped into the address space of your application. This means that:
libAPI.dll.aor
API.libor automatically created when linking against a DLL) which is part of your application are initialized with the addresses of the routines and variables in
API.dll.
API.dll, routines
DllMainor
DllMainCRTStartupare invoked. These routines typically contain the initialization code needed for the well-being of the routines and variables exported by the DLL.
There is an additional point which is worth mentioning. In the Windows
world there are two kind of DLLs: relocatable and non-relocatable
DLLs. Non-relocatable DLLs can only be loaded at a very specific address
in the target application address space. If the addresses of two
non-relocatable DLLs overlap and these happen to be used by the same
application, a conflict will occur and the application will run
incorrectly. Hence, when possible, it is always preferable to use and
build relocatable DLLs. Both relocatable and non-relocatable DLLs are
supported by GNAT. Note that the
-s linker option (see GNU Linker
User’s Guide) removes the debugging symbols from the DLL but the DLL can
still be relocated.
As a side note, an interesting difference between Microsoft DLLs and Unix shared libraries, is the fact that on most Unix systems all public routines are exported by default in a Unix shared library, while under Windows it is possible (but not required) to list exported routines in a definition file (see The Definition File).
Next: Using DLLs with GNAT, Previous: Windows Calling Conventions, Up: Mixed-Language Programming on Windows [Contents][Index] | https://gcc.gnu.org/onlinedocs/gnat_ugn/Introduction-to-Dynamic-Link-Libraries-DLLs.html | CC-MAIN-2018-43 | refinedweb | 471 | 54.02 |
Things used in this project
Story
The Idea
Developing a Walabot as a tool for construction and healthcare applications is a worthy contribution to society, which is why most ideas involve these two industries, but I saw the opportunity to break from the mold of standard submissions. A tracking TV stand can increase the quality of life and the standards of tailoring entertainment to an individual and group's needs. For this reason, I developed a technical demonstration of the tracking TV stand.
Description
This project is intended to develop Walabot's impact in the entertainment sector.
The tracking TV stand will use a Walabot device as the means to monitor activity in front of it. Any display can have tracking with the help of the Walabot. Homes, shopping centers, restaurants and bars all will have more immersing television viewing with the Walabot.
The Walabot will be mounted on the stand facing the viewing area to capture the environment and people in it. Using the radio frequency technology, the Walabot will track individuals' movements and respond by moving the TV to allow everyone to have great views! Whether it is in a shopping mall display to give passing customers the best view or at a house party to tailor to the guests, the Walabot is the perfect device for the task!
Use
The Walabot will work in tandem with a motor to rotate a monitor to give the audience the best view possible.
- Audience Tracking: The Walabot will track the concentration of individuals in view of the TV. It will determine the optimal angle to position the TV based on where the audience is across the viewing area.
- Individual Tracking: The Walabot will motion track individuals in the viewing area to give them the best view as they move around the environment.
All tracking will be done at a continuous rate to match the changing environment, which makes the Walabot perfect for this project.
The Project
Design the Parts
The design for a tracking TV stand is relatively simple. A rotating turn table on which the TV sits is attached on the vertical axis to the stationary stand.
Modify a Standard Servo
Using a standard servo (I used a Futaba S3003) to power the rotation of the monitor seemed to work. But, I considered that the torque required to move larger monitors would be too great for a standard service to safely manage. Because of this fact, it was necessary to modify the servo's internal to allow for continuous rotation. With continuous rotation achievable, a gearing ratio could be applied to increase the torque output of the servo, which would allow it to turn larger screens.
- Step 1: Open the servo and melt the "stopper" on the spline gear (this is the gear that you can see rotating on the outside of the servo).
- Step 2: Remove the potentiometer from on the board and solder three male-male wires of different colors in its place. Make sure to know which color is attached to each hole.
- Step 3: Cut a hole in the side of the servo case to allow the added wires to come out, and put the wires through them.
- Step 4: Solder the three added wires to the potentiometer in the opposite way that it was attached to the board. The signal wire is the same, but the ground and power voltages are switched because the plate spins the opposite direction of the servo due to a gearing interaction.
- Step 5: Close the now modified servo.
- Step 6: Attach the spline gear ordered from to the spline of the Futaba standard servo. It is important to note that a Futaba standard servo has 25 spline teeth (these are the small ones on the inside of the central hole) while other brands may have 24. A spline gear will not be compatible with a servo if it has a different number of spline teeth so use caution in your purchases.
Assemble the TV Stand
The parts were made using 3D printed PLA.
- Step 1: Take the four M5X30 bolts and assemble the eight nylon washers and 4 wheels on them to make four separate wheel hubs.
- Step 2: Place each wheel hub in the baseClamp printed parts.
- Step 3: Attach the potentiometerAdapter with the modified servo potentiometer in it to the plain bore gear with hub, and place the gear with the hub in the hole on the turnTable.
- Step 4: Pace the standard servo in the basePlate part in the designated cutout near the center and the potentiometer on the axis of rotation. Make sure the spline gear does not hit the ribs on the turnTable.
- Step 5: Attach the baseStand, baseClamp, and baseStandMagnet parts to the basePlate using super glue.
- Step 6: Place the Walabot magnet into the baseStandMagnet part, and allow the Walabot to magnetically adhere to the full assembly.
- Step 7: Place the turnTable on top of the assembly with the monitor on top. Be sure to check there is meshing between the gear of the servo and of the turnTable.
Wire the System
To connect the physical assembly to the electrical components of the tracking TV stand, use the following instructions and the diagram in schematics:
- Step 1: Solder lead wires from the ground and power plates on the battery supply. There are two wires coming from the ground plate and one from the power plate. One wire from the ground plate attaches to the ground wire of the servo and the other is on the ground pin (top row third from the left) of the Raspberry Pi. The power lead wire attaches to the power of the servo.
- Step 2: Attach the signal wire of the servo to pin to the top row sixth from the left (two pins between ground and this attachment pin) of the Raspberry Pi.
- Step 3: Power the Raspberry Pi with a 700mA outlet plugin with a USB male end (phone charger) into the Raspberry Pi.
- Step 4: Power the Walabot with the powered USB hub.
Program the Electronics
The Raspberry Pi has python support built into it, and the Walabot API is in python.
- Step 1: Download and install the Walabot API for Raspberry Pi from the Walabot website.
- Step 2: Install the wiringPi library from GitHub.
- Step 3: Configure the Raspberry Pi to run without a display. Open terminal and type
sudo raspi-config
Use the arrow keys to navigate to Boot Options, press enter. Press enter on Desktop / CLI. Select Console Autologin, press enter. Press tab 2x, then enter to Finish. Select No when asked to reboot now.
- Step 4: Download the trackingTV python script using this command.
wget -O trackingTV.py
- Step 5: Modify the /etc/profile file to run the script on boot.
sudo nano /etc/profile
Scroll to the end of the file and add this on a new line.
sudo python
trackingTV.py
Press ctrl+x, then y, then enter to exit the file and save it.
- Step 6: Now that everything is configured reboot the Raspberry Pi.
sudo reboot
Demonstration
Custom parts and enclosures
Schematics
No document.
Code
trackingTV.pyPython
from imp import load_source import wiringpi import math import time servo = 1 # The pin the servo signal wire is connected to totalDegrees = 130 # Range of the servo +-(130/2) lastAveragePosition = 0 # Previous average position of targets from Walabot lastMoveTime = time.time() # When the servo was moved last minimumAverageDelta = 10 # Amount of degrees required for servo to move maximumStillTime = 10 # Maximum amount of time servo will be still for servoSpeed = 20 # Speed of servo in degrees/second (only tells servo when to turn off after setting position) turntableAxisOffset = 10 # cm's of space between axis of turntable and Walabot origin point # Allows angles to be calculated for direct aiming of turntable # Sets up the servo pin with wiringpi library def startServo(): wiringpi.pinMode(servo, wiringpi.PWM_OUTPUT) wiringpi.pwmSetMode(wiringpi.PWM_MODE_MS) wiringpi.pwmSetClock(384) wiringpi.pwmSetRange(1000) # Sets the position of the servo (degrees) def setServo(deg): value = deg / totalDegrees / 2 + 0.5 wiringpi.pwmWrite(servo, int(123.6 * (value) + 14 * (1 - value))) # Stops the servo from moving def stopServo(): wiringpi.pwmWrite(servo, 0) wiringpi.pinMode(servo, 0) # Connect to the Walabot, if failed wait and try again def connect(): while True: try: bot.ConnectAny() except bot.WalabotError as err: time.sleep(1) else: print("Connected") return # Setup Walabot profile and scanning area def setup(): bot.SetProfile(bot.PROF_SENSOR) bot.SetArenaR(10, 300, 2) bot.SetArenaTheta(-1, 1, 1) bot.SetArenaPhi(-40, 40, 2) bot.SetThreshold(60) bot.SetDynamicImageFilter(bot.FILTER_TYPE_NONE) print("Configured") # Begin Walabot calibration def calibrate(): bot.Start() bot.StartCalibration() print("Calibrating") while bot.GetStatus()[0] == bot.STATUS_CALIBRATING: bot.Trigger() print("Ready") # Disconnect Walabot def disconnect(): bot.Stop() bot.Disconnect() print("Disconnected") # Main function # Scans for targets # Averages position # Moves servo to position def function(): global lastAveragePosition, lastMoveTime # Get targets from Walabot bot.Trigger() targets = bot.GetSensorTargets() average = 0 count = len(targets) # Find average position of all targets for target in targets: angle = math.degrees(math.atan(target.yPosCm / (target.zPosCm + turntableAxisOffset))) average += angle # If any targets found, determine if should move the servo if count != 0: average /= count delta = abs(average - lastAveragePosition) # Only move servo if targets moved at least minimumAverageDelta degrees # Or it has been >= maximumStillTime since the last servo move if delta >= minimumAverageDelta or time.time() - lastMoveTime >= maximumStillTime: print("Move: " + str(average) + "\n") lastMoveTime = time.time() startServo() setServo(average) time.sleep(delta / servoSpeed) stopServo() lastAveragePosition = average # Main code start # Setup Walabot library bot = load_source("WalabotAPI", "/usr/share/walabot/python/WalabotAPI.py") bot.Init() bot.SetSettingsFolder() # The Walabot setup process connect() setup() calibrate() # Setup WiringPi library and servo wiringpi.wiringPiSetup() startServo() # Make sure turntable starts at 0 degree position setServo(0) time.sleep(2) stopServo() # Main loop execute main function repeatedly try: while True: function() except KeyboardInterrupt: stopServo() disconnect()
Credits
Replications
Did you replicate this project? Share it!I made one
Love this project? Think it could be improved? Tell us what you think! | https://www.hackster.io/user16807/tracking-tv-stand-84c71d | CC-MAIN-2017-26 | refinedweb | 1,668 | 54.52 |
Here, we are going to see how we can communicate on slack with the help of slack-scala-client.
What is Slack?
A messaging app for teams. Slack’s channels help you focus by enabling you to separate messages, discussions and notifications by purpose, department or topic.
However, if you need privacy, Slack provides that as well with invite-only channels.
Slack integration with our application
Integration is what takes Slack from a normal online instant messaging system to a solution that enables you to centralize all your notifications, from sales to tech support, social media and more, into one searchable place where your team can discuss and take action on;
and much more.
For this we will be using scala-slack-client.
What is slack-scala-client?
A Scala library for interacting with the Slack API and real time messaging interface.
Installation:
Add SBT dependency:
libraryDependencies += "com.github.gilbertw1" %% "slack-scala-client" % "0.1.8"The steps to send a notification to your slack channel is :
- Create and regenerate an API token for your slack team.
What is a token and why do we need it?
Authentication tokens are password-like strings that users can generate that allow bots, scripts, or other programs to integrate with their Slack team.
Most of Slack’s customers will never even see an authentication token, let alone run the risk of accidentally posting it online. Remember, anyone who has access to your authentication token can perform whatever actions were scoped for that token. In other words, it can be exactly like sharing your Slack password on the internet. So never share your token in any form.
You can visit the following link to generate an access token:
Once you have generated your token, use it to create the SlackApiClient.
API
The library ships with both an asynchronous client and a blocking client; here we will be using the first one.
Creating client instance:
Creating an instance of either client simply requires passing in a Slack API token:
val token = "<Your Token Here>"
val client = SlackApiClient(token)
The async client returns futures as the result of each of its API functions.
Once you have created the SlackApiClient instance, use it to list the channels available to your team.
val res = client.listChannels() // => Future[Seq[Channel]]
This method returns all the public channels created for your team, so you can communicate with any of your public channels via the Slack API. Once the list of public channels is obtained, you can perform the desired operation on your channel.
For example, consider the following method call :
def getChannelId(channelName: String): Future[Option[String]]
Here we supply the name of the channel where we want to perform the operation, get back the channel's id, and then use it in the following call:
client.postChatMessage(channelId, msgBody)
where
channelId is the String id of the channel and
msgBody is the String message you want to post to your channel.
Here is a very simple example of sending a message to a channel. Have a look and explore what all you can do with your channel 🙂
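A sketch along these lines should work (the channel name and message text are placeholders, and depending on the library version an implicit akka ActorSystem may also need to be in scope):

import slack.api.SlackApiClient
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

object SendSlackMessage extends App {
  val token = "<Your Token Here>"
  val client = SlackApiClient(token)

  // Resolve a channel name (e.g. "general") to its channel id
  def getChannelId(channelName: String): Future[Option[String]] =
    client.listChannels().map(_.find(_.name == channelName).map(_.id))

  // Post a message once the channel id is known
  val posted = for {
    maybeId   <- getChannelId("general")
    id        =  maybeId.getOrElse(sys.error("channel not found"))
    timestamp <- client.postChatMessage(id, "Hello from slack-scala-client!")
  } yield timestamp

  posted.foreach(ts => println(s"Message posted at $ts"))
}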
22 February 2010 00:00 [Source: ICB]
USES
Polystyrene (PS) is a thermoplastic resin used in many applications, including food packaging, domestic appliances, electronic goods, toys, household goods and furniture.
SUPPLY/DEMAND
Oversupply and a lack of profitability spurred the permanent closure of some 350,000 tonnes/year of capacity in the Netherlands, Spain, France and Germany last year. Utilization rates are reported to have averaged 82% in 2009, but this figure should increase in 2010 as capacity cuts improve the structural supply balance.
Producers say overall demand in Europe fell by about 6% in 2009 versus 2008. The food packaging demand has been more resilient, but construction, household goods and electronics were hit hard by the economic crisis. Demand in early 2010 remains weak. January sales were down on 2009's level, primarily because of the massive jump in price, which forced buyers to reduce purchases and work off stocks.
Demand has dwindled in the past two years from technological changes, as well as substitution by polypropylene (PP) and polyethylene terephthalate (PET). Applications such as video and audio cassettes have disappeared, most of the CD box sector has gone and flat-screen technology has led to the loss of 75% of PS used in the TV sector.
Germany's BASF and US-based Dow Chemical have put their styrenics businesses up for sale. France's Total Petrochemicals is in talks with Italy's Polimeri Europa to take over production at the Feluy, Belgium, facilities until September 2011, following Polimeri's decision to end the operating contract.
PRICES
European prices in January jumped by at least €150/tonne because of the €172/tonne hike in styrene feedstock. Prices were edging up again in February as producers strived to recover margins.
But, with styrene slipping by €13-16/tonne, buyers were resisting and producers were struggling to implement the €50/tonne increases hoped for. Those achieved ranged from €10/tonne on higher-priced accounts to €20-30/tonne at the lower end.
The FD EU distribution indicator price for general purpose polystyrene (GPPS) was put at €1,130-1,170/tonne in early February, with many large accounts still unsettled.
Profitability is still poor and upstream volatility remains a challenge.
TECHNOLOGY
Three types of processes are generally used: suspension, solution and mass polymerization.
OUTLOOK
Consolidation is still possible, with several smaller-sized producers - who are not back-integrated - the main candidates.
In the longer term, PS in insulation will grow by at least 4%/year, especially in developing countries.
Investment plans are focused in the Middle East and
european polystyrene CAPACITY*, '000 TONNES/year
Profile last published May 7, 2007
For the latest market prices and reports on more than 120 commodity chemicals from the leading independent pricing and market intelligence service, please visit ICIS pricing | http://www.icis.com/Articles/2010/02/22/9336494/european-chemical-profile-polystyrene.html | CC-MAIN-2015-22 | refinedweb | 470 | 50.26 |
Today, in this tutorial, we are going to learn how to parse a large XML file using the XMLReader class.
The XMLReader extension was initially a PECL extension for PHP 5. It was later moved into the bundled PHP source as of PHP 5.1.0, and enabled by default as of PHP 5.1.2.
Advantages of XMLReader:
It is faster since it does not load the whole XML document into memory.
It can parse large and highly complex XML documents with many sub-trees.
XML Reader Features:
Retrieving portion of XML document based on current node.
Getting attributes based on index, name or namespace.
Parsing elements based on attribute’s index, name or namespace.
Validating XML documents
Let's see an example where we parse an XML tag from an external source file. Here we use XMLReader to get to each node, then use SimpleXML to access it. This way, you keep memory usage low because you're treating one node at a time, and you still leverage SimpleXML's ease of use.
Download link : PHP XML Reader Example
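In case the download is unavailable, here is a minimal sketch of the idea (the file name books.xml and the book/title/author structure are placeholders, not part of the original example):

<?php
// Stream a large XML file with XMLReader; expand one node at a time into SimpleXML.
$reader = new XMLReader();
$reader->open('books.xml');

while ($reader->read()) {
    // Stop only on <book> element start tags
    if ($reader->nodeType == XMLReader::ELEMENT && $reader->name == 'book') {
        // Only this single node is loaded into memory as a SimpleXML object
        $book = new SimpleXMLElement($reader->readOuterXML());
        echo $book->title . ' by ' . $book->author . PHP_EOL;
    }
}

$reader->close();
?>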
15 March 2011 09:46 [Source: ICIS news]
TOKYO (ICIS)--Cosmo Oil's 220,000 bbl/day refinery in Chiba was still in flames on Tuesday, a company source said.
“It has not been extinguished though (the fire) has become smaller compared to when it first started. We don’t have an idea when it will be put out,” the source said in Japanese.
Liquefied petroleum gas (LPG) tanks at the refinery exploded an hour after the 9.0-magnitude quake struck northeastern Japan.
“We are focusing on extinguishing the fire. After it’s put out, we’ll examine (the damages),” the source said.
All production and deliveries at and from the refinery were still suspended, he said.
But Cosmo Oil’s refineries in
Additional reporting by Nurluqman Suratman | http://www.icis.com/Articles/2011/03/15/9443858/cosmo-oils-220000-bblday-chiba-refinery-still-in-flames.html | CC-MAIN-2015-14 | refinedweb | 119 | 66.33 |
One of the newer technologies on the JavaScript scene is Flux. It's getting a lot of attention right now because it's not MVC. It seems like every other framework was doing MVC or some variant thereof. Facebook (the instigators of Flux and its associated library, React) decided there was a better way for larger applications.
There are many articles on React and Flux – I like the introduction from Fluxxor – a library that implements the Flux architecture. However, here is the obligatory Flux architecture diagram:
In the Flux Architecture, there is a Dispatcher at the center of everything. The system creates Actions that are dispatched by the Dispatcher to the Stores (all the stores). Stores hold data. Not just one type of data, mind you – they could hold many types of data. The stores emit a "store changed" event when their data changes. Views listen for those changes and re-render appropriately. Those Views are inevitably React components, whose primary purpose is to minimize the DOM replacements that go on, since DOM replacement is slow.
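To make that flow concrete, here is a bare-bones sketch of my own (the action and store names are made up; it uses Facebook's flux dispatcher and Node's EventEmitter, not code from any later part of this app):

import { Dispatcher } from 'flux';
import { EventEmitter } from 'events';

const dispatcher = new Dispatcher();

// Action creator: describes "something happened" and hands it to the dispatcher
function addTodo(text) {
    dispatcher.dispatch({ type: 'ADD_TODO', text: text });
}

// Store: owns the data, reacts to every dispatched action, emits 'change'
class TodoStore extends EventEmitter {
    constructor() {
        super();
        this.todos = [];
    }
    getAll() {
        return this.todos;
    }
}
const todoStore = new TodoStore();

dispatcher.register(action => {
    if (action.type === 'ADD_TODO') {
        todoStore.todos.push(action.text);
        todoStore.emit('change');   // React views listen for this and re-render
    }
});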
In the proper Flux architecture, the Dispatcher and Stores have specific APIs and functionalities that need to be implemented. However, one can forego a lot of the code and get a much simpler (and smaller) app by simplifying and altering the API. If you use the Reflux library, for example, they forego the Dispatcher completely – instead relying on the actions to dispatch directly to the stores. The interesting thing about an architecture is that it is theory – the libraries implement the concrete examples.
So much for theory. I want to build an app that implements React and Flux. Before I do that, I need to know how to write React components and how to build a production workflow for an app that utilizes React. So today is all about learning the basics of React and setting up a build process.
Initializing the Project
I’m using an ASP.NET5 project for this. I could have used a Node/Express or Koa or any number of other web servers. Aside from my Web API (which comes in at the very end when I talk about authentication), I only need client-side functionality.
I’ve also added a Client directory and created an initial index.html file:
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width,initial-scale=1.0">
    <title>An ES6 React/Flux Demo</title>
    <link rel="stylesheet" href="">
    <link rel="stylesheet" href="">
    <link rel="stylesheet" href="bundle.css">
  </head>
  <body>
    <div id="root">... booting ...</div>
    <script src="bundle.js"></script>
  </body>
</html>
I need a build system and I’m going to use Gulp as normal. I’ve added an NPM package.json file:
{ "version": "1.0.0", "name": "react-demo", "private": true, "devDependencies": { "gulp": "^3.9.0", "gulp-less": "^3.0.3" } }
I’ve also created a Gulpfile.js:
var gulp = require('gulp'),
    less = require('gulp-less');

var config = {
    src: './Client',
    dest: './wwwroot'
};

var files = {
    html: config.src + '/**/*.html',
    less: config.src + '/style/bundle.less'
};

gulp.task('build', ['style', 'html']);

gulp.task('html', function () {
    return gulp.src([files.html]).pipe(gulp.dest(config.dest));
});

gulp.task('style', function () {
    return gulp.src([files.less])
        .pipe(less())
        .pipe(gulp.dest(config.dest));
});
Once this was done, I bound the build task to the Before Build step in the Task Runner Explorer. Don’t worry about the bundle.js for now – I’ll get on to that later. If you run this project (just to see it working), you should get a browser window with a booting message displayed.
You can get my starting point from my GitHub Repository.
A Simple React Component
I’m going to write what is possibly the simplest react component ever. It just pushes out a HTML DOM. I’m going to use it to provide a title for my website. The file is located in Client/views/NavBrand.jsx – note the use of the JSX extension – that’s very important, as you will see in a moment. If you are using Visual Studio 2015 (and I suggest you do!), then use item template JSX File.
import React from 'react';

class NavBrand extends React.Component {
    render() {
        return (
            <div className="_navbrand">
                <div className="valign">
                    React Demo
                </div>
            </div>
        );
    }
}

export default NavBrand;
This made me do a double-take when I first saw it. A JavaScript function is returning HTML. Actually, it’s not – there are some visual clues, such as using className instead of the normal class to specify the CSS class. There are other differences:
- If you use label for="", the for becomes htmlFor
- If you have a HTML element that is not terminated, you need to use /> (e.g. <img />)
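As a quick illustration of those two points (a made-up form field, not part of the demo app):

// className instead of class, htmlFor instead of for, and
// unterminated elements such as <input> must be self-closed with />
const field = (
    <div className="form-group">
        <label htmlFor="email">Email</label>
        <input id="email" type="email" className="form-control"/>
    </div>
);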
There are also other differences, but I’ll get to them later. Unfortunately, Visual Studio 2015 still doesn’t like ES6 + JSX syntax (and it still has problems with ES6 in general). Atom is much better in this regard, so if you are following along on a Mac or using Node/Express, you may want to switch to Atom.
I’ve also got some style in Client/style/NavBrand.less and I’ve included that file in the bundle.less file. I do this for every single React component I write, so I’m not going to mention it again. Just be sure to pick up the less stylesheets from the repository.
This only defines the React component. I also need to use it. To do this, I’m going to create a Client/app.jsx file that I’ll use as a bootstrap:
import React from 'react';
import NavBrand from './views/NavBrand.jsx';

React.render(<NavBrand/>, document.getElementById('root'));
Building The System
I’ve got a lot going on here. My code is ECMAScript 6 and has JSX embedded in it. I also use modules that need to be dealt with. I could just write this for SystemJS (and I’ve blogged about that). Instead I’m going to use a production build system based on browserify. Browserify is a system for bundling all the libraries and JS code you use into one bundle file. All I have to do is load the libraries I want as dependencies in package.json and then run browserify. I still need to transform the ES6 and JSX code into ES5.1 for most browsers. Fortunately, there is a plugin called babelify that uses Babel for that purpose.
Since this is a production workflow, I want to bring eslint into the mix for pre-run checking. Eslint also has a plugin for React/JSX, so I’ll use that. It’s important to note here – Visual Studio 2015 doesn’t seem to like mixing ES6 and JSX. You get a lot of red-squiggly lines for perfectly valid code. As a result, you need eslint to ensure you don’t make syntax errors prior to publishing your code.
These are pretty much recipes, so I’m just copying from someone else (and I forget who, but everyone seems to use the same recipe). Here is the package.json:
{ "version": "1.0.0", "name": "react-demo", "private": true, "devDependencies": { "babelify": "^6.1.3", "browserify": "^11.0.0", "eslint": "^0.24.1", "eslint-plugin-react": "^3.0.0", "gulp": "^3.9.0", "gulp-eslint": "^0.15.0", "gulp-less": "^3.0.3", "gulp-rename": "^1.2.2", "vinyl-source-stream": "^1.1.0" }, "dependencies": { "react": "^0.13.3" } }
I also need a task in the gulpfile.js:
var gulp = require('gulp'),
    browserify = require('browserify'),
    eslint = require('gulp-eslint'),
    less = require('gulp-less'),
    rename = require('gulp-rename'),
    source = require('vinyl-source-stream');

// Previously discussed parts not included

gulp.task('bundle', [ 'eslint', 'copyhtml' ], function () {
    var bundler = browserify({
        extensions: ['.js', '.jsx'],
        transform: ['babelify'],
        debug: true
    });

    return bundler
        .add(files.entry)
        .bundle()
        .on('error', function(err) {
            console.error(err.toString());
        })
        .pipe(source(files.entry))
        .pipe(rename('bundle.js'))
        .pipe(gulp.dest(config.dest));
});
I also need an .eslintrc file with the following contents (just create a JSON file with the name .eslintrc):
{ "ecmaFeatures": { "modules": true, "jsx": true }, "env": { "browser": true, "es6": true }, "plugins": [ "react" ], "rules": { "quotes": [ 2, "single", "avoid-escape" ], "react/display-name": [ 1, { "acceptTranspilerName": true }], "react/jsx-boolean-value": 1, "react/jsx-no-undef": 1, "react/jsx-quotes": 1, "react/jsx-sort-prop-types": 1, "react/jsx-sort-props": 0, "react/jsx-uses-react": 1, "react/jsx-uses-vars": 1, "react/no-danger": 1, "react/no-did-mount-set-state": 1, "react/no-did-update-set-state": 1, "react/no-multi-comp": 1, "react/no-unknown-property": 1, "react/prop-types": 1, "react/react-in-jsx-scope": 1, "react/require-extension": 1, "react/self-closing-comp": 1, "react/sort-comp": 1, "react/wrap-multilines": 1 } }
The majority of the rules are ripped from an example in the eslint-plugin-react documentation. I’ve made some changes to suit my coding style and to support ES6.
Running the project will give you a simple piece of text. You no longer have the booting message. Instead you have a React Demo statement. That’s coming from the simple component I developed earlier.
You can get the build system plus this first component from my GitHub Repository.
React Components inside React Components
React is like many other web component technologies. You can embed components inside of components. For example, let’s say I want a NavBar.jsx component:
import React from 'react';
import NavBrand from './NavBrand.jsx';

class NavBar extends React.Component {
    render() {
        return (
            <header>
                <div className="_navbar">
                    <NavBrand/>
                </div>
                <div className="_navbar _navbar_grow">
                </div>
                <div className="_navbar">
                </div>
            </header>
        );
    }
}

export default NavBar;
I’m leaving two of the DIV elements blank, for later expansion. You can clearly see the NavBrand element embedded inside of the JSX that I am returned for the NavBar element. You can adjust the app.jsx file to bring in this component instead of NavBrand to see it in action. You can, of course, get the code from my GitHub Repository.
Reusable Components – Props
Let’s say I have a set of links I want to display. The links are defined (in app.jsx) like this:
var pages = [
    { name: 'welcome', title: 'Welcome', nav: true, auth: false, default: true },
    { name: 'flickr', title: 'Flickr', nav: true, auth: false },
    { name: 'spells', title: 'Spells', nav: true, auth: true }
];
var route = 'welcome';
I could then pass these variables down into a component like this (also in app.jsx):
React.render(<NavBar pages={pages} route={route}/>, document.getElementById('root'));
These are Properties and they appear in this.props in a React component. I need to adjust my NavBar.jsx file to account for these:
import React from 'react';
import NavBrand from './NavBrand.jsx';
import NavLinks from './NavLinks.jsx';

class NavBar extends React.Component {
    render() {
        return (
            <header>
                <div className="_navbar">
                    <NavBrand/>
                </div>
                <div className="_navbar _navbar_grow">
                    <NavLinks pages={this.props.pages} route={this.props.route}/>
                </div>
                <div className="_navbar">
                </div>
            </header>
        );
    }
}

export default NavBar;
I’ve added a new component called NavLinks that takes the properties that NavBar received. Now I need to write Client/views/NavLinks.jsx:
import React from 'react';

class NavLinks extends React.Component {
    render() {
        let visibleLinks = this.props.pages.filter(page => {
            return (page.nav === true && page.auth === false);
        });

        let linkComponents = visibleLinks.map(page => {
            let cssClass = (page.name === this.props.route) ? 'link active' : 'link';
            return (<li className={cssClass} key={page.name}>{page.title}</li>);
        });

        return (
            <div className="_navlinks">
                <ul>{linkComponents}</ul>
            </div>
        );
    }
}

export default NavLinks;
A few notes here. You can use any variable within JSX – just include it in curly braces. Note how I do iteration here – Aurelia has repeat.for, Angular has ng-repeat. React just uses JavaScript so I don’t have to learn a whole bunch of new syntax, nor do I need an extra plug-in just to iterate. I do, however, need to ensure I define a key on iterated elements. Normally, React will define a default key on all elements so that it can refer to a specific key. In the case of iterators, it can’t, so you have to do it instead.
If you build and run this project, you will get a nice list of all the pages that don’t require authentication, according to the object we pass into the root element (the NavBar in app.jsx). However, you will also note that there are errors from eslint:
React expects you to define the types of properties that a component can accept. The route property is a string and the pages property is a complex object. To define the property, you need to add a static object called propTypes to the class. PropTypes defines the shape of the properties that you accept. This is the way I do it:
NavLinks.propTypes = {
    pages: React.PropTypes.array.isRequired,
    route: React.PropTypes.string.isRequired
};
Put this block right above the export statement. You can do all sorts of validation and be very prescriptive with the shape of the properties coming in. Check out the docs for the full syntax.
Since the properties for NavBar are identical to the properties for NavLinks, you can cut and paste the object to the NavBar object as well to get rid of the warnings. The code, in case you got lost, is on my GitHub Repository.
State and the Welcome Page
In the Aurelia tutorial there was a Welcome page with a form and some dynamic elements. I’m going to create two things. Firstly, I’m going to create an AppView.jsx component that represents the complete page. Here is the code:
import React from 'react';
import assign from 'lodash/object/assign';

import NavBar from '../views/NavBar';
import Welcome from '../views/Welcome';

class AppView extends React.Component {
    constructor(props) {
        super(props);
        this.state = assign({}, this.props);
    }

    render() {
        let Route;
        switch (this.state.route) {
            case 'welcome':
                Route = Welcome;
                break;
            default:
                Route = Welcome;
        }

        return (
            <div id="pagehost">
                <NavBar pages={this.state.pages} route={this.state.route}/>
                <Route/>
            </div>
        );
    }
}

AppView.propTypes = {
    pages: React.PropTypes.array.isRequired,
    route: React.PropTypes.string.isRequired
};

export default AppView;
Don't forget to also alter app.jsx to render the AppView instead of the NavBar. In order to compile this, I'm using a lodash function to provide a basic object deep-copy capability, so I need to add lodash to my list of dependencies in package.json. Lodash is a collection of utility functions. The assign function just copies my properties into a new thing called state.
Aside from the state thing, I’m setting this up to be a router for my page – as can be seen in the render() method. I only have one route right now, but I can add more routes as they become available.
So, why two areas to store data? Properties are defined to be immutable – you set them once and walk away. They don’t get changed within the component. State can be changed. This means I can use it to cause changes in the rendered component. Obviously, the example above (while relevant later on) is a totally useless change. I’m using state instead of props – big deal.
However, if you remember the Welcome Page from the Aurelia Tutorial, you will know we need interactivity for the page. Here is the code for Welcome.jsx:
import React from 'react';

class Welcome extends React.Component {
    constructor(props) {
        super(props);
        this.state = { firstname: 'John', lastname: 'Doe' };
    }

    onSubmit() {
        alert(`Hello ${this.fullname}`); // eslint-disable-line no-alert
    }

    onChange() {
        this.setState({
            firstname: React.findDOMNode(this.refs.fn).value,
            lastname: React.findDOMNode(this.refs.ln).value
        });
    }

    get fullname() {
        return `${this.state.firstname} ${this.state.lastname}`;
    }

    render() {
        let submitHandler = event => { return this.onSubmit(event); };
        let changeHandler = event => { return this.onChange(event); };

        return (
            <section id="welcome">
                <h2>Welcome</h2>
                <form role="form" onSubmit={submitHandler}>
                    <div className="form-group">
                        <label htmlFor="firstname">First Name</label>
                        <input className="form-control" id="firstname" ref="fn"
                               value={this.state.firstname} onChange={changeHandler}/>
                    </div>
                    <div className="form-group">
                        <label htmlFor="lastname">Last Name</label>
                        <input className="form-control" id="lastname" ref="ln"
                               value={this.state.lastname} onChange={changeHandler}/>
                    </div>
                    <div className="form-group">
                        <label>Full Name</label>
                        <p className="help-block">{this.fullname}</p>
                    </div>
                    <button type="submit" className="btn btn-default">Submit</button>
                </form>
            </section>
        );
    }
}

export default Welcome;
Let’s go through this slowly. The constructor sets up our initial state. I’m setting a firstname and a lastname. Next, I define an event handler that will get called when I click on Submit and an event handler when I change the text in one of the text boxes. I’ll come back to the change handler later. I’ve got a computed property next – this is used in both the submit handler to alert “Hello My Name” and within the page render.
Talking of the page render, that’s actually not too complex. Just take it slow and you will see that the JSX is practically identical to the HTML code used in the Aurelia tutorial. However, there is an interesting aspect to those input boxes. The ref attribute is not normal HTML. It’s used by React to find a specific node within this component. It’s similar to the this.$.id in Polymer, but it doesn’t use the id (which can be different). When I get to the change handler, I can use React.findDOMNode to find an element by ref.
On running this project, note the following:
- Altering either text box will alter the computed field for Full Name
- Pressing Submit will pop up an alert with the full name
This is a pretty basic in-component interactivity example. You can find the code on my GitHub Repository. I’ve also added a NavToolbar component to finish off the NavBar – it’s a static component for right now, just like NavBrand, so it should be familiar territory.
Wrap Up
There are obvious parallels to draw here between React and Polymer. Both are component technologies and both can render complete pages using sub-components. Both have a level of interactivity that can be built in via private event handling. Both can encapsulate the DOM elements within the component so that they can be referenced easily. Both have a full set of lifecycle functions that can be utilized to easily handle DOM behavior.
React doesn’t require any special features from the browser – it works in any modern browser. Polymer has to polyfill for missing browser features – Shadow DOM, Custom Elements, Templates, and so on. React works fine with ES6 out of the box. Polymer isn’t really ready for ES6 yet. In my tests, React was significantly faster on large DOM changes. And that is the defining thing here. React is fast. For my simple tutorial, that’s not really important. However, transition to a larger project and you really notice the slow down in the DOM when you have to re-draw complete pages.
In the end, it’s a matter of personal preference. Both are young technologies. I suspect that if you are using an MVC architecture and you want components, you will gravitate towards Polymer since they can be easily included. If you want to use a Flux architecture, then you will use React components.
In the next post, I’ll cover a “Flux” architecture and finish off the Aurelia tutorial pages with a full Flux implementation. | https://shellmonger.com/2015/08/15/building-an-es6jsx-reactflux-app-part-1-the-views/ | CC-MAIN-2017-17 | refinedweb | 3,173 | 59.5 |
Duy Nguyen <pclo...@gmail.com> writes:

> On Thu, May 2, 2013 at 6:35 AM, Junio C Hamano <gits...@pobox.com> wrote:
>> Nguyễn
>>
>> Could you clarify what this second point means?
>>
>> "rev-list --objects --all --not $this $that" does not detect
>> "islands" but checking with the updated index-pack does?
>
> Object islands (in the new pack) by definition are not connected to
> the main DAG and so invisible to/unreachable from rev-list. index-pack
> examines all objects in the pack and checks links of each object. With
> this approach, islands are no different than reachable objects.
OK, so if you are fetching an updated tip of the main history, and a
new tip of a history that is disjoint. If we imagine that my public
repository just added the 'todo' branch and you are fetching them for
the first time. The history of 'todo' branch is an island that is not
connected anywhere from your refs namespace yet. In order to ensure
that updating the tip of fetched 'todo' is safe, you would need to
verify the island is free of dangling pointers and the only thing you
need to be sure is the tip of 'todo' is _in_ that island.

> Yes, we need to make sure the new value of our refs are existing
> objects. But it does not need to be in the new pack.

It is a bit more tricky than that. A malicious (or simply buggy) other
side can send a subset of my 'todo' branch, which is an island that is
free of dangling pointers (think: 'rev-list --objects todo~8').
Further imagine that you earlier attempted a fetch of the same history
from me over a commit walker and you happen to have partial history
near the tip of 'todo' but not connected to the island. sha1_object()
will find it, but that does not say anything useful. The tip _must_
appear in the island for your check to yield a usable result, no?

The existing "everything connected" check was designed to protect
against that kind of breakage as well. I might be reading your change
incorrectly, but I am not sure how the new code protects against such
a breakage.

> After index-pack is run, we're guaranteed that all objects in repo
> are connected and any of them could be new ref. This is also why I
> add has_sha1_file() in clone.c.
On Thu, 2007-01-11 at 16:18 +0100, Gilles Chanteperdrix wrote:
> Gilles Chanteperdrix wrote:
> > This was run on x86, but need further testing before inclusion.
>
> Here is a new version, after testing. It appears to run fine. I tested
> forking in real-time applications both before and after calling
> rt_task_shadow, and vmallocing areas of 256 Mo, and memseting them both
> from a non-realtime or real-time context and it works.
>
> The next step is to clean up the patch, but I have to admit that I need
> some help: should I keep the functions in the files where I put them ?
> in what headers should I declare them ? Should I define an empty
> ipipe_update_nofault_mms when CONFIG_IPIPE is not set in order to avoid
> a few #ifdefs ?
> diff -Naurdp -x '*~' ipipe-2.6.19/arch/i386/mm/fault.c ipipe-2.6.19-nocow/arch/i386/mm/fault.c
> --- ipipe-2.6.19/arch/i386/mm/fault.c	2007-01-10 09:44:52.000000000 +0100
> +++ ipipe-2.6.19-nocow/arch/i386/mm/fault.c	2007-01-11 09:58:49.000000000 +0100
> @@ -654,3 +654,19 @@ void vmalloc_sync_all(void)
> 	}
> }
> #endif
> +
> +#ifdef CONFIG_IPIPE
> +int ipipe_arch_map_vm_area_to_mm(struct mm_struct *mm,
> +				 unsigned long start,
> +				 unsigned long end)
> +{

__ipipe_pin_range_mapping() would better identify an internal routine
which somehow wires the mapping of a virtual address range into a
memory context.

[...]

> +
> +#if CONFIG_IPIPE
> +	struct list_head nofault;

s,nofault,pinned, ? The point is that the NOFAULT feature does not
really disable all faults, but only faults leading to lazy/ondemand
mappings. E.g. pathological faults would still raise exceptions.

> +#endif /* CONFIG_IPIPE */
> };
>
> struct sighand_struct {

> diff -Naurdp -x '*~' ipipe-2.6.19/kernel/fork.c ipipe-2.6.19-nocow/kernel/fork.c
> --- ipipe-2.6.19/kernel/fork.c	2007-01-10 09:44:53.000000000 +0100
> +++ ipipe-2.6.19-nocow/kernel/fork.c	2007-01-11 15:32:25.000000000 +0100
> @@ -385,6 +385,7 @@ void mmput(struct mm_struct *mm)
>
> 	if (atomic_dec_and_test(&mm->mm_users)) {
> 		ipipe_cleanup_notify(mm);
> +		ipipe_destroy_nofault_mm(mm);

We may want to merge both into the notification trigger. Those
nitty-gritty I-pipe details ought to be gathered; after all, removing
the mm from the pinned mm queue is also a cleanup operation. This
would also remove the need for adding a placeholder in the
!CONFIG_IPIPE case.

[...]

> -
> +#ifdef CONFIG_IPIPE
> +	ipipe_update_nofault_mms(start, end);

I'd suggest something like __ipipe_update_all_pinned_mm().

[...]

> +#ifdef CONFIG_IPIPE
> +#include <linux/vmalloc.h>	/* For vmlist */
> +#endif /* CONFIG_IPIPE */

No need for noisy conditional here. Including linux/vmalloc.h has no
undesirable side-effect in the !CONFIG_IPIPE case anyway.

[...]

> +
> +#ifdef CONFIG_IPIPE
> +static LIST_HEAD(nofault_mms);
> +static DEFINE_RWLOCK(nofault_mms_lock);
> +
> +static int ipipe_fault_pte_range(struct mm_struct *mm, pmd_t *pmd,
> +				 struct vm_area_struct *vma,
> +				 unsigned long addr, unsigned long

[...]

> +static int ipipe_fault_pmd_range(struct mm_struct *mm, pud_t *pud,
> +				 struct vm_area_struct *vma,
> +				 unsigned long addr, unsigned long end)

[...]

> +static int ipipe_fault_pud_range(struct mm_struct *mm, pgd_t *pgd,
> +				 struct vm_area_struct *vma,
> +				 unsigned long addr, unsigned long end)

[...]

Those routines are good candidates for inlining.

> +int ipipe_disable_task_faults(struct task_struct *tsk)
> +{

ipipe_disable_ondemand_mappings() would be more accurate.

[...]

> +#ifdef CONFIG_IPIPE
> +	ipipe_update_nofault_mms((unsigned long) area->addr, end);
> +#endif /* CONFIG_IPIPE */

Better define a nop placeholder for __ipipe_update_all_pinned_mm() in
the !CONFIG_IPIPE case instead of the conditional.

--
Philippe.
Opened 16 months ago
Last modified 16 months ago
#28119 new New feature
Test client cookies do not take into account server hostnames/domains
Description
A couple of issues arise in the testing framework when a Django project supports multiple hostnames.
- Cookies received don't set the domain field
- Cookies with a domain field are still included in requests to a different domain than the one in the cookie
Example of domain not being set:

from django.test import Client

client = Client()
# 1. Make a request with explicit SERVER_NAME
response = client.get('/', SERVER_NAME='foo.local')
# 2. Note that response.cookies['csrftoken']['domain'] has no value
Expected result: response.cookies['csrftoken']['domain'] was set to the value of SERVER_NAME (default would be testserver).
Rationale: Browsers do this, according to the specification: (4.3.1 Interpreting Set-Cookie: Domain Defaults to the request-host)
Example of cookies sent incorrectly to another domain:
from django.test import Client

client = Client()
# 1. Make request with explicit SERVER_NAME, receive `csrftoken` cookie
response = client.get('/', SERVER_NAME='foo.local')
# 2. Note that client.cookies['csrftoken'] now has some value (eg. "123456")
# 3. Set the domain on the cookie
client.cookies['csrftoken']['domain'] = 'bar.local'
# 4. Make request to different domain
response = client.get('/', SERVER_NAME='bar.local')
# 5. Note that client.cookies['csrftoken'] was sent with the request, re-used by the server, and still has the same value (eg. "123456")
Expected result: On step 4, the client does not include the cookie with non-matching domain name.
Rationale: Using SERVER_NAME, the client should simulate browser behaviour by not sending cookies incorrectly to different hostnames.
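For illustration only, a rough and untested sketch of this kind of domain matching done in a Client subclass (domain_matches and DomainAwareClient are hypothetical names, not existing Django APIs):

from http.cookies import SimpleCookie
from django.test import Client


def domain_matches(host, domain):
    domain = (domain or '').lstrip('.')
    return not domain or host == domain or host.endswith('.' + domain)


class DomainAwareClient(Client):
    def request(self, **request):
        host = request.get('SERVER_NAME', 'testserver')
        full_jar, self.cookies = self.cookies, SimpleCookie()
        # Only send cookies whose domain matches the target host
        for key, morsel in full_jar.items():
            if domain_matches(host, morsel['domain']):
                self.cookies[key] = morsel
        try:
            return super().request(**request)
        finally:
            # Merge back the cookies that were filtered out for this request
            for key, morsel in full_jar.items():
                if key not in self.cookies:
                    self.cookies[key] = morsel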
Change History (5)
comment:1 Changed 16 months ago by
comment:2 Changed 16 months ago by
comment:3 Changed 16 months ago by
comment:4 Changed 16 months ago by
The use case in my project is to test single sign-on functionality. In one case, we need to log the user in on one site but not the other. Then, when the user visits the second site, they should be redirected and automatically authenticated. However, in the test framework, the session cookie will be sent to both sites after authenticating on only one, because the hostnames aren't taken into consideration. So the SSO features cannot be properly tested. I agree that this is probably not a very common use case, and does add some complexity to the simple client.
I would love to put this together in a Client subclass as a proof of concept. I will try to do so when I get the free time (although that may take a while).
I'm not sure if we'd want to add this complexity to the test client (which is fairly dumb and simple). Could you elaborate on the use case? Assuming you need the functionality for your own project, perhaps you can show us how complicated it is to implement in a Client subclass.