On May 5, 2015 2:14 PM, "Guido van Rossum" <guido@python.org> wrote:
>
> In the PEP 492 world, these concepts map as follows:
>
> - Future translates to "something with an __await__ method" (and asyncio Futures are trivially made compliant by defining Future.__await__ as an alias for Future.__iter__);
>
> - "asyncio coroutine" maps to "PEP 492 coroutine object" (either defined with `async def` or a generator decorated with @types.coroutine -- note that @asyncio.coroutine incorporates the latter);
>
> - "either of the above" maps to "awaitable".
Err, aren't the first and third definitions above identical?
Surely we want to say: an async def function is a convenient shorthand for creating a custom awaitable (exactly like how generators are a convenient shorthand for creating custom iterators), and a Future is-an awaitable that also adds some extra methods.
-n
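The distinction drawn here is easy to demonstrate: an object becomes awaitable by defining __await__, while async def is the shorthand that produces an awaitable coroutine object. The Pause class in the sketch below is invented for illustration (Python 3.5+); only the __await__ protocol itself comes from PEP 492:

```python
import asyncio

# A custom awaitable: any object whose __await__ returns an iterator.
class Pause:
    def __init__(self, delay):
        self.delay = delay

    def __await__(self):
        # Delegate to asyncio.sleep's own awaitable machinery.
        return asyncio.sleep(self.delay).__await__()

# An `async def` function is the convenient shorthand: calling it produces
# a coroutine object, which is itself awaitable.
async def main():
    await Pause(0.01)   # awaiting the hand-rolled awaitable
    return "done"

print(asyncio.run(main()))
```

A Future fits the same protocol: it is an awaitable that additionally carries result/callback methods, which is the "is-an awaitable plus extra methods" framing above.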
Refactoring Automated Functional Test Scripts with iTest2
Introduction
Automated test scripts are notoriously difficult to maintain. With the wide adoption of agile methodologies in enterprise software projects, one of agile's core practices, automated functional testing, has proven its value while also posing challenges to projects. Traditional record-and-playback testing tools may help create a set of test scripts quickly, but those scripts often end up unmaintainable. The reason: the application changes.
In the programming world, 'refactoring' (a process that improves a program's internal structure without changing its behaviour) has become a frequently used word among programmers. In simple terms, during refactoring programmers make code more readable and the design more flexible. Experienced agile project managers allocate time for programmers to perform code refactoring, or make it part of the process of finishing user stories. Most Integrated Development Environments (IDEs) come with support for various refactorings.
Testers who develop or maintain automated test scripts usually do not have that kind of luxury, but they share the same need to keep test scripts readable and maintainable. It is difficult (and the more test scripts there are, the more difficult it becomes) to get test scripts back on track for releases with new features, bug fixes and software changes.
Test Refactoring
The objective and procedures of functional test refactoring are the same as for code refactoring, but it has some special characteristics:
- Target Audience
The end users of a testing tool are testers, business analysts or even customers. The fact is that these users generally do not possess programming skills, and this changes the whole paradigm.
- Script Syntax
Code refactoring is mostly supported for compiled languages such as Java and C#. Functional test scripts, however, may take the form of XML, proprietary vendor scripts, compiled languages or scripting languages (such as Ruby). The use of refactoring varies depending on the test framework.
- Refactorings specific to functional testing
While some common code refactorings, such as 'Rename', apply to functional test scripts, there are also refactorings specific to testing purposes, such as 'Move the scripts to run each test case'.
iTest2 IDE
A new functional testing tool, iTest2 IDE, is designed to let testers develop and maintain automated test scripts with ease. iTest2 was written from the ground up for web test automation; the test framework it supports is rWebUnit (an open-source extension of the popular Watir - Web App Testing in Ruby) in RSpec syntax.
The philosophy of iTest2 is: easy and simple. Trials showed that testers without programming experience could write their first automated test scripts in under 10 minutes on average, with mentoring. With iTest2, testers can develop, maintain and verify test scripts against functional requirements; developers can verify that the feature they are working on actually works; and business analysts or customers can watch test execution (in a real browser: IE or Firefox) to verify functional requirements.
The test scripts created by iTest2 can be executed from command line and integrated with continuous build servers.
Walk through
An example is worth a thousand words. We will walk through the complete steps, from creating two test cases to making them readable and maintainable using the refactoring tools in iTest2.
Test Plan
For our exercise, we develop typical yet simple web test scripts for Mercury's NewTour web site.
Create test case 001
1. Create a project
First, we create an iTest2 project and specify the site URL; a sample test script file is created, as below:
load File.dirname(__FILE__) + '/test_helper.rb'

test_suite "TODO" do
  include TestHelper

  before(:all) do
    open_browser ""
  end

  test "your test case name" do
    # add your test scripts here
  end
end
2. Use iTest2Recorder to record test scripts for Test Case 001
We use iTest2 Recorder, a Firefox add-on that records user operations in the Firefox browser into executable test scripts.
3. Paste recorded test script in a test script file, and run it.
# ...
test "[001] one way trip" do
  # ...
end
Now run the test case (right click and select 'Run [001] one way trip'), it passed!
Refactor to use page objects
The above test scripts work, and the rWebUnit syntax is quite readable. Some might question the need for refactoring, and ask: what is 'using pages'?
First of all, test scripts in their current form are not easy to maintain. Let's say we now have hundreds of automated test scripts, and newly released software changes user authentication to use the customer's email address as the username for login, which in turn means we need to use 'email' instead of 'userName' in the test scripts. Performing search-and-replace across hundreds of files does not sound like a good solution. Also, project members like to speak a common vocabulary within the project, which has a fancy name: a Domain Specific Language (DSL). It is nice to see that vocabulary used in test scripts as well.
This can be done using page objects. A page in our context represents a web page logically; it contains the operations offered to the end user on that page. For example, the home page in our example has three operations: 'enter user name', 'enter password' and 'click login button'. 'Refactor to use pages' is a process that extracts operations into specific page objects, and the refactoring support in iTest2 makes it quite easy to do.
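To see the pattern outside any particular tool, here is a minimal, library-free sketch of a page object. It is written in Python for brevity, and the browser API (FakeBrowser and its methods) is invented purely for this illustration:

```python
class FakeBrowser:
    """A stand-in for a real browser driver, so the sketch is runnable."""
    def __init__(self):
        self.calls = []

    def enter_text(self, field, value):
        self.calls.append(("enter_text", field, value))

    def click_button(self, image):
        self.calls.append(("click_button", image))


class HomePage:
    """Page object: every operation the home page offers, in one place."""
    def __init__(self, browser):
        self.browser = browser

    def login(self, user, password):
        # If the app renames 'userName' to 'email', only this method changes,
        # no matter how many test cases call login().
        self.browser.enter_text("userName", user)
        self.browser.enter_text("password", password)
        self.browser.click_button("btn_signin.gif")


browser = FakeBrowser()
HomePage(browser).login("agileway", "agileway")
print(len(browser.calls))  # -> 3
```

Hundreds of test cases can then call login() without ever mentioning field names, which is exactly what turns the rename scenario above into a one-method change.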
1. Extract to HomePage
The login function is on the home page, so that is where we will put it. As user login is a well-understood function, we turn the 3 statements (enter username, enter password and click the login button) into one operation. Select those 3 lines, then click 'Extract Page ...' under the 'Refactoring' menu (keyboard shortcut: Ctrl+Alt+G).
Figure 1. 'Refactor' menu - 'Extract Page...'
This opens a window like the one below for you to enter a page name and function name; we enter 'HomePage' and 'login' respectively.
Figure 2. 'Extract Page' dialog box
The selected statements (3 lines) are now replaced with
home_page = expect_page HomePage
home_page.login
A new file 'pages\home_page.rb' is created with the following content:
class HomePage < RWebUnit::AbstractWebPage
  def initialize(browser)
    super(browser, "") # TODO: add identity text (in quotes)
  end

  def login
    enter_text("userName", "agileway")
    enter_text("password", "agileway")
    click_button_with_image("btn_signin.gif")
  end
end
Run the test case again; it should still pass.
Note: as Martin Fowler pointed out, the rhythm of refactoring is: test, small change, test, small change, test, small change. It is that rhythm that allows refactoring to move quickly and safely.
2. Extract to SelectFlightPage
After logging in successfully, customers land on the flight selection page. Unlike the login page, every operation here is likely to be updated independently by developers, so we extract each operation into its own function. Move the caret to the line
click_radio_option("tripType", "oneway")
Perform another 'Extract to Page...' refactoring (Ctrl+Alt+G), entering "SelectFlightPage" and "select_trip_oneway" for the new page and function name.
select_flight_page = expect_page SelectFlightPage
select_flight_page.select_trip_oneway
3. Continue extract more operations into SelectFlightPage
Continue performing refactorings for the remaining operations on 'SelectFlightPage': 'select_from_new_york', 'select_to_sydney', and 'click_continue'.
test "[1] one way trip" do
  home_page = expect_page HomePage
  home_page.login
  select_flight_page = expect_page SelectFlightPage
  select_flight_page.select_trip_oneway
  select_flight_page.select_from_new_york
  select_flight_page.select_to_sydney
  select_flight_page.click_continue
  assert_text_present("New York to Sydney")
end
As always, we run the test case again.
Write test case 002
Since we now have two pages ('HomePage' and 'SelectFlightPage') from refactoring test case 001, writing test case 002 will be a lot easier (by reusing them).
1. Using existing HomePage
iTest2 IDE has built-in support for page objects: typing "ep" and pressing the 'Tab' key (a feature called 'snippets') will expand to 'expect_page' and list all known pages for selection.
Figure 3. Auto-complete pages
We get
expect_page HomePage
To use HomePage, we need to get a handle to it (in the programming world, this is called a 'variable'). Perform the "Introduce Page Variable" refactoring (Ctrl+Alt+V) to create the variable.
Figure 4. 'Refactor' menu - 'Introduce Page Variable'
home_page = expect_page HomePage
Now type "home_page." in the next statement; the functions defined in the page class will show up for you to choose from.
Figure 5. Page function lookup
2. Add dedicated operation for Test Case 2
Test Case 002 is quite similar to Test Case 001; the differences are the trip type selection and the assertions. With the help of the recorder, we can identify the new operation:
click_radio_option("tripType", "roundtrip")
Then refactor it into a new function in SelectFlightPage
select_flight_page.select_trip_round
Here it is
test "[2] round trip" do
  home_page = expect_page HomePage
  home_page.login
  # ...
end
Run the test scripts for Test Case 2 (Right click any line in test case 2, and select 'Run ...'), it passed!
Reset application to initial state
But wait, we are not quite finished yet. Test Case 1 passed and Test Case 2 passed, but running them together produced an error on Test Case 2. Why?
We did not reset the web application back to its initial state: the user remains signed in after Test Case 001 finishes executing. To make the tests independent of each other, we make sure each test execution starts with sign-in and ends with sign-off.
test "[001] one way trip" do
  home_page = expect_page HomePage
  home_page.login
  # . . .
  click_link("SIGN-OFF")
  goto_page("/")
end

test "[002] round trip" do
  home_page = expect_page HomePage
  home_page.login
  # . . .
  click_link("SIGN-OFF")
  goto_page("/")
end
Remove duplications
There are obvious duplications in the test scripts. The RSpec framework allows users to define operations that run before or after each test case execution.
Select the first two lines (the login function), then press 'Shift + F7' to perform the 'Move code' refactoring.
Figure 6. Refactoring 'Move code'
Select '2 Move to before(:each)' to move the operations into
before(:each) do
  home_page = expect_page HomePage
  home_page.login
end
As the name suggests, these two operations will be executed before each test case, so the first two statements in Test Case 002 are no longer needed. We can perform a similar refactoring to create the 'after(:each)' section.
after(:each) do
  click_link("SIGN-OFF")
  goto_page("/")
end
Final version
Here is the complete (refactored) test script for Test Case 001 and Test Case 002.
load File.dirname(__FILE__) + '/test_helper.rb'

test_suite "Complete Test Script" do
  include TestHelper

  before(:all) do
    open_browser ""
  end

  before(:each) do
    home_page = expect_page HomePage
    home_page.login
  end

  after(:each) do
    click_link("SIGN-OFF")
    goto_page("/")
  end

  test "[001] one way trip" do
    select_flight_page = expect_page SelectFlightPage
    select_flight_page.select_trip_oneway
    select_flight_page.select_from_new_york
    select_flight_page.select_to_sydney
    select_flight_page.click_continue
    assert_text_present("New York to Sydney")
  end

  test "[002] round trip" do
  end
end
Coping with changes
We are not living in a perfect world: things change frequently in software development. Fortunately, the work above makes test scripts not only more readable, but also easier to adapt to changes.
1. Customers change terminologies
As we know, it is good practice to speak the same language throughout a project, even in test scripts. For instance, suppose customers now prefer the term "Return Trip" to "Round Trip". With refactored test scripts, the change can be made in seconds.
Move the caret to the function 'select_trip_round' in 'SelectFlightPage' (pages\select_flight_page.rb) and select 'Rename ...' under the 'Refactoring' menu (Shift+F6).
Figure 7. 'Refactor' menu - 'Rename'
Then enter new function name: 'select_return_trip'.
Figure 8. 'Rename Function' dialog
The references to 'select_trip_round' in the test script file are updated to
select_flight_page.select_return_trip
2. Application Changes
Application changes (made by programmers) are more common. For instance, suppose a programmer changed the flight selection page for some reason, and the attribute identifying the departure city changed (in HTML) from
<select name="fromPort"> to <select name="departurePort">
Although there are no visible changes from the user's point of view, the test scripts (any test cases using that page) are now broken. Fixing this can be quite a tedious and error-prone job if you are using recorded scripts directly as your test scripts.
Navigate to 'select_from_new_york' in 'SelectFlightPage' (Ctrl+T, select 'select_flight_page'; Ctrl+F12, then select 'select_from_xx'), and change 'fromPort' to 'departurePort'.
def select_from_new_york
  select_option("departurePort", "New York") # from 'fromPort'
end
That's not too hard!
Summary
In this article, we introduced the use of page objects in automated functional testing to make test scripts easier to understand and maintain. Through a real example, we demonstrated various refactorings using the iTest2 IDE to improve the test scripts.
References
Fowler, Martin, et al. Refactoring: Improving the design of existing code, Reading, Mass.: Addison-Wesley, 1999
Question for a senior C++ developer. Dare you?
What must be in SR.h to achieve desired printed values?
#include <stdio.h>
#include "SR.h"

int main()
{
    int j = 1;
    int a[] = {2, 3};
    {
        SR x(j), y(a[0]), z(a[1]);
        j = a[0]; a[0] = a[1]; a[1] = j;
        printf("j = %d, a = {%d, %d}\n", j, a[0], a[1]); // j = 2, a = {3, 2} - we want the printed values to be this
    }
    printf("j = %d, a = {%d, %d}\n", j, a[0], a[1]); // j = 1, a = {2, 3} - we want the printed values to be this
}
- raven-worx Moderators
@Kofr
so you want instead of printing
j = 2, a = {3, 2}
j = 2, a = {3, 2}
it should print
j = 2, a = {3, 2}
j = 1, a = {2, 3}
right???
If so, you need to get around the scoping: create new "inner" variables that keep a copy of the "outer" variables' values.
- SGaist Lifetime Qt Champion
Hi,
To add to @raven-worx, looks like SR is just useless since it's not used anywhere.
- kshegunov Qt Champions 2017
Here:
// Remembers the referenced value at construction and restores it in the
// destructor, i.e. on scope exit.
template<class T>
class SR
{
public:
    SR(T & v) : ref(v), val(v) { }
    ~SR() { ref = val; }
private:
    T & ref;
    T val;
};
But what would be the purpose of this, besides being an example of bad programming?
On Fri, 2007-02-09 at 12:49 -0500, Len Brown wrote:
> On Friday 09 February 2007 12:14, James Morris wrote:
> > This is being disabled in the guest kernel only. The host and guest
> > kernels are expected to be the same build.
> Okay, but better to use disable_acpi()
> indeed, since this would be the first code not already inside CONFIG_ACPI
> to invoke disable_acpi(), we could define the inline as empty and you could
> then scratch the #ifdef too.

Thanks Len!

This applies on top of that series.
==
Len Brown <lenb@kernel.org> said:
> Okay, but better to use disable_acpi()
> indeed, since this would be the first code not already inside CONFIG_ACPI
> to invoke disable_acpi(), we could define the inline as empty and you could
> then scratch the #ifdef too.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

diff -r 85363b87e20b arch/i386/lguest/lguest.c
--- a/arch/i386/lguest/lguest.c	Sat Feb 10 01:52:37 2007 +1100
+++ b/arch/i386/lguest/lguest.c	Sat Feb 10 10:28:36 2007 +1100
@@ -555,10 +555,7 @@ static __attribute_used__ __init void lg
 	mce_disabled = 1;
 #endif
-#ifdef CONFIG_ACPI
-	acpi_disabled = 1;
-	acpi_ht = 0;
-#endif
+	disable_acpi();
 	if (boot->initrd_size) {
 		/* We stash this at top of memory. */
 		INITRD_START = boot->max_pfn*PAGE_SIZE - boot->initrd_size;
diff -r 85363b87e20b include/asm-i386/acpi.h
--- a/include/asm-i386/acpi.h	Sat Feb 10 01:52:37 2007 +1100
+++ b/include/asm-i386/acpi.h	Sat Feb 10 10:43:43 2007 +1100
@@ -127,6 +127,7 @@ extern int acpi_irq_balance_set(char *st
 #define acpi_ioapic 0
 static inline void acpi_noirq_set(void) { }
 static inline void acpi_disable_pci(void) { }
+static inline void disable_acpi(void) { }
 #endif /* !CONFIG_ACPI */
On Sat, Jun 09, 2007 at 04:22:24AM -0400, Anthony Bryan wrote:
> Hello, this is a repost of a message to fedora-devel, as this seems to
> be the place for it.
>
> >From: Jesse Keating <jkeating redhat com>
> >
> >On Friday 08 June 2007 14:24:47 Anthony Bryan wrote:
> >> I was hoping Fedora could investigate using Metalinks for their ISO
> >> downloads. Metalink is an XML format for listing all the ways you can
> >> get a file or collection of files (mirrors + their location, rsync,
> >> p2p) along with checksums to automatically repair a file in case of
> >> error, signatures, language, OS/arch, and other metadata. It's mainly
> >> used for large files like ISOs, where errors can be very frustrating.
> >>
> >> It's supported by about 20 programs on unix, mac, and win, including
> >> aria2 (already in the Fedora repos). It's used by openSUSE,
> >> OpenOffice.org, cURL, and many other distributions.
> >>
> >> Here's a screenshot of a Metalink download in the DownThemAll Firefox
> >> extension (nightly build). What you don't see are all the mirrors and
> >> checksums.
> >
> >This is something interesting, and I wonder if we could make use of
> >MirrorManager ( )
> >to have dynamic .metalink files created with updated mirror readiness info.
> >Certainly something that looks worth looking into.
>
> That would be quite nice, no one else has dynamic .metalinks on a
> large scale. When I got the F7 ISO, I noticed it would fit in well w/
> the download pages which tell which mirrors have which releases. I
> think it would make things less frustrating for end users trying to
> get things, and hopefully create less strain on mirrors. Certain
> metalink clients will download from domestic mirrors first, if country
> info is in there, which should hopefully be more efficient for
> everyone.
>
> What can we do to make this happen? Is this the type of thing that's
> easier for the maintainer of MirrorManager to add, or should we supply
> a patch?
I saw some articles on metalink and spent some time looking at it a few
weeks ago. Good stuff. Patches are certainly welcome.

What I picture is a new application 'generate_metalinks' along the lines
of the generate_mirrorlists application within mirrormanager that
connects to the database, finds what it wants, and generates static
.metalink text files. You'd probably only want to metalink the .ISOs, of
which there are plenty. No sense making .metalink files for every file
in the (presently) 660GB file system.

Then we also need to generate static web pages that include HTML links
to all the .metalink files. And link to those from somewhere in the
mirrors.fedoraproject.org/ namespace, maybe mirrors.fp.o/metalink/index.html
plus all the .metalink files in a dir there. Then we add it into the
update-static-content script that runs at the top of the hour, so
they're dynamically generated.

> Here's the current tools people have done, if that helps -
>
> Metalink Editor - in Python, GUI
> cURL - they use a short Perl script that makes them based on location.
> Simba/RoPkg::Metalink - Perl
> Bouncer - there's a patch for it.
> Metalink tools - CLI, C++

As mm is written in python + TurboGears, it would be great if the
metalink creator tool was also in python. :-)

Source to mm is available here:

Thanks,
Matt

--
Matt Domsch
Software Architect
Dell Linux Solutions linux.dell.com & Linux on Dell mailing lists @
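For readers unfamiliar with the format discussed in this thread: a Metalink file of that era (version 3) combined mirror lists and checksums roughly as below. This is an invented sketch; the file name, size, hash and URLs are placeholders, not real Fedora data, and the exact schema should be checked against the Metalink 3.0 specification:

```xml
<?xml version="1.0" encoding="utf-8"?>
<metalink version="3.0" xmlns="http://www.metalinker.org/">
  <files>
    <file name="Fedora-7-x86_64-DVD.iso">
      <size>3340763136</size>
      <verification>
        <hash type="sha1">0000000000000000000000000000000000000000</hash>
      </verification>
      <resources>
        <url type="http" location="us" preference="90">http://mirror.example.org/fedora/Fedora-7-x86_64-DVD.iso</url>
        <url type="ftp" location="de" preference="80">ftp://mirror.example.de/fedora/Fedora-7-x86_64-DVD.iso</url>
      </resources>
    </file>
  </files>
</metalink>
```

A client picks mirrors by location and preference, verifies the checksum, and can repair a partially corrupted download instead of refetching the whole ISO.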
4.6. Dropout
Just now, we introduced the classical approach of regularizing statistical models by penalizing the \(\ell_2\) norm of the weights. In probabilistic terms, we could justify this technique by arguing that we have assumed a prior belief that weights take values from a Gaussian distribution with mean \(0\). More intuitively, we might argue that we encouraged the model to spread out its weights among many features and rather than depending too much on a small number of potentially spurious associations.
4.6.1. Overfitting Revisited
Given many more features than examples, linear models can overfit. But when there are many more examples than features, we can generally count on linear models not to overfit. Unfortunately, the reliability with which linear models generalize comes at a cost: Linear models can’t take into account interactions among features. For every feature, a linear model must assign either a positive or a negative weight. They lack the flexibility to account for context.
In more formal text, you’ll see this fundamental tension between generalizability and flexibility discussed as the bias-variance tradeoff. Linear models have high bias (they can only represent a small class of functions), but low variance (they give similar results across different random samples of the data).
Deep neural networks take us to the opposite end of the bias-variance spectrum. Neural networks are so flexible because they aren’t confined to looking at each feature individually. Instead, they can learn interactions among groups of features. For example, they might infer that “Nigeria” and “Western Union” appearing together in an email indicates spam but that “Nigeria” without “Western Union” does not.
Even when we only have a small number of features, deep neural networks are capable of overfitting. In 2017, a group of researchers presented a now well-known demonstration of the incredible flexibility of neural networks. They presented a neural network with randomly-labeled images (there was no true pattern linking the inputs to the outputs) and found that the neural network, optimized by SGD, could label every image in the training set perfectly.
Consider what this implies. If a neural network can perfectly fit arbitrary, randomly-assigned labels, then it has the capacity to memorize the training data outright, and a low training error by itself tells us little about how the model will generalize to unseen data.
4.6.2. Robustness through Perturbations
Let’s think briefly about what we expect from a good statistical model. We want it to do well on unseen test data. One way we can accomplish this is by asking what constitutes a “simple” model? Simplicity can come in the form of a small number of dimensions, which is what we did when discussing fitting a model with monomial basis functions. Simplicity can also come in the form of a small norm for the basis functions. This led us to weight decay (\(\ell_2\) regularization). Yet a third notion of simplicity that we can impose is that the function should be robust under small changes in the input. For instance, when we classify images, we would expect that adding some random noise to the pixels should be mostly harmless.
In 1995, Christopher Bishop formalized a form of this idea when he proved that training with input noise is equivalent to Tikhonov regularization [Bishop, 1995]. In other words, he drew a clear mathematical connection between the requirement that a function be smooth (and thus simple), as we discussed in the section on weight decay, and the requirement that it be resilient to perturbations in the input.
Then in 2014, Srivastava et al. [Srivastava et al., 2014] developed a clever idea for how to apply Bishop’s idea to the internal layers of the network, too. Namely they proposed to inject noise into each layer of the network before calculating the subsequent layer during training. They realized that when training deep network with many layers, enforcing smoothness just on the input-output mapping misses out on what is happening internally in the network. Their proposed idea is called dropout, and it is now a standard technique that is widely used for training neural networks. Throughout training, on each iteration, dropout regularization consists simply of zeroing out some fraction (typically 50%) of the nodes in each layer before calculating the subsequent layer.
The key challenge then is how to inject this noise without introducing undue statistical bias. In other words, we want to perturb the inputs to each layer during training in such a way that the expected value of the layer is equal to the value it would have taken had we not introduced any noise at all.
In Bishop’s case, when we are adding Gaussian noise to a linear model, this is simple: At each training iteration, just add noise sampled from a distribution with mean zero \(\epsilon \sim \mathcal{N}(0,\sigma^2)\) to the input \(\mathbf{x}\) , yielding a perturbed point \(\mathbf{x}' = \mathbf{x} + \epsilon\). In expectation, \(E[\mathbf{x}'] = \mathbf{x}\).
In the case of dropout regularization, one can debias each layer by normalizing by the fraction of nodes that were not dropped out. In other words, dropout with drop probability \(p\) is applied as follows:

\[h' = \begin{cases} 0 & \text{with probability } p \\ \dfrac{h}{1-p} & \text{otherwise} \end{cases}\]
By design, the expectation remains unchanged, i.e., \(E[h'] = h\). Intermediate activations \(h\) are replaced by a random variable \(h'\) with matching expectation. The name “dropout” arises from the notion that some neurons “drop out” of the computation for the purpose of computing the final result. During training, we replace intermediate activations with random variables.
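The claim \(E[h'] = h\) is easy to sanity-check numerically. The following sketch uses plain NumPy (independent of the MXNet code later in this section) to estimate the mean of \(h'\) by simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.5          # drop probability
h = 2.0          # some fixed activation value
n = 200_000      # number of simulated dropout draws

# h' = 0 with probability p, h / (1 - p) otherwise
kept = rng.uniform(size=n) > p
h_prime = np.where(kept, h / (1 - p), 0.0)

# The empirical mean should be close to h itself.
print(h_prime.mean())
```

The surviving activations are scaled up by \(1/(1-p)\), which exactly cancels the mass lost to the zeroed-out draws.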
4.6.3. Dropout in Practice
Recall the multilayer perceptron (Section 4.1) with a hidden layer and 5 hidden units. Its architecture is given by

\[\begin{aligned} \mathbf{h} & = \sigma(\mathbf{W}_1 \mathbf{x} + \mathbf{b}_1), \\ \mathbf{o} & = \mathbf{W}_2 \mathbf{h} + \mathbf{b}_2, \\ \hat{\mathbf{y}} & = \mathrm{softmax}(\mathbf{o}). \end{aligned}\]
When we apply dropout to the hidden layer, we are essentially removing each hidden unit with probability \(p\) (i.e., setting its output to \(0\)). We can view the result as a network containing only a subset of the original neurons. In Fig. 4.6.1, \(h_2\) and \(h_5\) are removed, so the computation of the outputs no longer depends on them. Intuitively, deep learning researchers often explain the idea thus: we do not want the network's output to depend too precariously on the exact activation pathway through the network. The original authors of the dropout technique described their intuition as an effort to prevent the co-adaptation of feature detectors.
At test time, we typically do not use dropout. However, we note that there are some exceptions: some researchers use dropout at test time as a heuristic approach for estimating the confidence of neural network predictions: if the predictions agree across many different dropout masks, then we might say that the network is more confident. For now we will put off the advanced topic of uncertainty estimation for subsequent chapters and volumes.
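As a toy illustration of that heuristic (often called Monte Carlo dropout), the sketch below applies many independent dropout masks to the input of a hypothetical linear scorer and uses the spread of the resulting predictions as a crude confidence signal. All inputs, weights and names here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def score_with_dropout(x, w, p, rng):
    """Apply one random (inverted) dropout mask to x, then a linear score."""
    mask = rng.uniform(size=x.shape) > p
    return float(np.where(mask, x / (1 - p), 0.0) @ w)

x = np.array([0.5, -1.2, 3.0, 0.7])   # a made-up input
w = np.array([0.1, 0.4, 0.2, -0.3])   # made-up weights

scores = [score_with_dropout(x, w, 0.5, rng) for _ in range(1000)]
# Mean is near the deterministic score; the std dev is a crude
# uncertainty estimate: wide spread = predictions disagree across masks.
print(np.mean(scores), np.std(scores))
```

A real Monte Carlo dropout setup would do the same thing with a trained network kept in training mode at prediction time, averaging over many stochastic forward passes.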
4.6.4. Implementation from Scratch
To implement the dropout function for a single layer, we must draw as many samples from a Bernoulli (binary) random variable as our layer has dimensions, where the random variable takes value \(1\) (keep) with probability \(1-p\) and \(0\) (drop) with probability \(p\). One easy way to implement this is to first draw samples from the uniform distribution \(U[0, 1]\). Then we can keep those nodes for which the corresponding sample is greater than \(p\), dropping the rest.
In the following code, we implement a dropout function that drops out the elements in the ndarray input X with probability drop_prob, rescaling the remainder as described above (dividing the survivors by 1.0-drop_prob).
import d2l
from mxnet import autograd, gluon, init, np, npx
from mxnet.gluon import nn

npx.set_np()

def dropout(X, drop_prob):
    assert 0 <= drop_prob <= 1
    # In this case, all elements are dropped out
    if drop_prob == 1:
        return np.zeros_like(X)
    mask = np.random.uniform(0, 1, X.shape) > drop_prob
    return mask.astype(np.float32) * X / (1.0 - drop_prob)
We can test out the dropout function on a few examples. In the following lines of code, we pass our input X through the dropout operation, with probabilities 0, 0.5, and 1, respectively.
X = np.arange(16).reshape(2, 8)
print(dropout(X, 0))
print(dropout(X, 0.5))
print(dropout(X, 1))
[[ 0.  1.  2.  3.  4.  5.  6.  7.]
 [ 8.  9. 10. 11. 12. 13. 14. 15.]]
[[ 0.  0.  0.  0.  8. 10. 12.  0.]
 [16.  0. 20. 22.  0.  0.  0. 30.]]
[[0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0.]]
4.6.4.1. Defining Model Parameters
Again, we can use the Fashion-MNIST dataset, introduced in Section 3.6. We will define a multilayer perceptron with two hidden layers. The two hidden layers both have 256 outputs.
num_inputs, num_outputs, num_hiddens1, num_hiddens2 = 784, 10, 256, 256

W1 = np.random.normal(scale=0.01, size=(num_inputs, num_hiddens1))
b1 = np.zeros(num_hiddens1)
W2 = np.random.normal(scale=0.01, size=(num_hiddens1, num_hiddens2))
b2 = np.zeros(num_hiddens2)
W3 = np.random.normal(scale=0.01, size=(num_hiddens2, num_outputs))
b3 = np.zeros(num_outputs)

params = [W1, b1, W2, b2, W3, b3]
for param in params:
    param.attach_grad()
4.6.4.2. Defining the Model

The model below applies dropout to the output of each hidden layer (following the activation function), using a different dropout probability for each layer. By checking autograd.is_training() (described in Section 2.5), we can ensure that dropout is only active during training.
drop_prob1, drop_prob2 = 0.2, 0.5

def net(X):
    X = X.reshape(-1, num_inputs)
    H1 = npx.relu(np.dot(X, W1) + b1)
    # Use dropout only when training the model
    if autograd.is_training():
        # Add a dropout layer after the first fully connected layer
        H1 = dropout(H1, drop_prob1)
    H2 = npx.relu(np.dot(H1, W2) + b2)
    if autograd.is_training():
        # Add a dropout layer after the second fully connected layer
        H2 = dropout(H2, drop_prob2)
    return np.dot(H2, W3) + b3
4.6.4.3. Training and Testing
This is similar to the training and testing of multilayer perceptrons described previously.
num_epochs, lr, batch_size = 10, 0.5, 256
loss = gluon.loss.SoftmaxCrossEntropyLoss()
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs,
              lambda batch_size: d2l.sgd(params, lr, batch_size))
4.6.5. Concise Implementation
Using Gluon, all we need to do is add a Dropout layer (also in the nn package) after each fully-connected layer, passing in the dropout probability as the only argument to its constructor. During training, the Dropout layer will randomly drop out outputs of the previous layer (or equivalently, the inputs to the subsequent layer) according to the specified dropout probability. When MXNet is not in training mode (i.e., during testing), the Dropout layer simply passes the data through.
net = nn.Sequential()
net.add(nn.Dense(256, activation="relu"),
        # Add a dropout layer after the first fully connected layer
        nn.Dropout(drop_prob1),
        nn.Dense(256, activation="relu"),
        # Add a dropout layer after the second fully connected layer
        nn.Dropout(drop_prob2),
        nn.Dense(10))
net.initialize(init.Normal(sigma=0.01))
Next, we train and test the model.
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': lr})
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)
4.6.6. Summary¶
4.6.7. Exercises¶
All about Async/Await, System.Threading.Tasks, System.Collections.Concurrent, System.Linq, and more…
After my previous post, I received several emails and comments from folks asking why I chose to implement ForEachAsync the way I did. My goal with that post wasn’t to prescribe a particular approach to iteration, but rather to answer a question I’d received… obviously, however, I didn’t provide enough background. Let me take a step back then so as to put the post in context.
Iteration is a common development task, and there are many different variations on how iteration might be implemented. For example, a basic synchronous ForEach might be implemented as follows:
public static void ForEach<T>(this IEnumerable<T> source, Action<T> body)
{
    foreach(var item in source)
        body(item);
}
That, however, encapsulates just one particular semantic, that of looping through the source, executing the action one element at a time, and stopping if an exception is thrown. Here’s another implementation, this time continuing the processing even if an exception is thrown, propagating any exceptions only once we’re done with the whole loop:
public static void ForEach<T>(this IEnumerable<T> source, Action<T> body)
{
    List<Exception> exceptions = null;
    foreach(var item in source)
    {
        try { body(item); }
        catch(Exception exc)
        {
            if (exceptions == null) exceptions = new List<Exception>();
            exceptions.Add(exc);
        }
    }
    if (exceptions != null) throw new AggregateException(exceptions);
}
These are both synchronous examples. Once asynchrony is introduced, additional variations are possible. We can of course create asynchronous versions that match the two examples just shown, e.g.
public static async Task ForEachAsync<T>(this IEnumerable<T> source, Func<T,Task> body)
{
    foreach(var item in source)
        await body(item);
}
and:

public static async Task ForEachAsync<T>(this IEnumerable<T> source, Func<T,Task> body)
{
    List<Exception> exceptions = null;
    foreach(var item in source)
    {
        try { await body(item); }
        catch(Exception exc)
        {
            if (exceptions == null) exceptions = new List<Exception>();
            exceptions.Add(exc);
        }
    }
    if (exceptions != null) throw new AggregateException(exceptions);
}
respectively. But we can also go beyond this. Once we’re able to launch work asynchronously, we can achieve concurrency and parallelism, invoking the body for each element and waiting on them all at the end, rather than waiting for each in turn, e.g.
public static Task ForEachAsync<T>(this IEnumerable<T> source, Func<T,Task> body)
{
    return Task.WhenAll(
        from item in source
        select body(item));
}
This serially invokes all of the body delegates, but it allows any continuations used in the bodies to run concurrently (depending on whether we’re in a serializing SynchronizationContext and whether the code in the body delegate is forcing continuations back to that context). We could force more parallelism by wrapping each body invocation in a Task:
public static Task ForEachAsync<T>(this IEnumerable<T> source, Func<T, Task> body)
{
    return Task.WhenAll(
        from item in source
        select Task.Run(() => body(item)));
}
This will schedule a Task to invoke the body for each item and will then asynchronously wait for all of the async invocations to complete. Note that this also means that the code run by the body delegate won’t be forced back to the current SynchronizationContext, even if there is one, since the async invocations are occurring on ThreadPool threads where there is no SynchronizationContext set.
We could further expand on this if we wanted to limit the number of operations that are able to run in parallel. One way to achieve that is to partition the input data set into N partitions, where N is the desired maximum degree of parallelism, and schedule a separate task to begin the execution for each partition (this uses the Partitioner class from the System.Collections.Concurrent namespace):

public static Task ForEachAsync<T>(this IEnumerable<T> source, int dop, Func<T, Task> body)
{
    return Task.WhenAll(
        from partition in Partitioner.Create(source).GetPartitions(dop)
        select Task.Run(async delegate
        {
            using (partition)
                while (partition.MoveNext())
                    await body(partition.Current);
        }));
}
This last example is similar in nature to Parallel.ForEach, with the primary difference being that Parallel.ForEach is a synchronous method and uses synchronous delegates.
The point is that there are many different semantics possible for iteration, and each will result in different design choices and implementations. The ForEachAsync example from my previous post was just one more such variation, accounting for the behavior that I’d been asked about. As should now hopefully be obvious from this post, it is in no way the only way to iterate asynchronously.
Thanks for all the interest.
Perhaps some folks sensed that something was missing from, and incompatible with, the original solution -- namely, the ability to break from the loop. This new kind of loop seems like it should go by a different name, given the substantially different semantics. Might I suggest "SomewhatParallelForEach"?
These new await and async keywords really wreck my brain. At one point I think I understand them, and at another I realize that I don't.
Do you know any good resources to learn those keywords and how to use them?
Dalibor, try. There are a lot of resources there. Good luck.
I run method as async using BeginInvoke and EndInvoke like this:
void Main()
{
Random rnd = new Random();
Func<int> work = ()=> {
var delay = rnd.Next(2000);
delay = 5000;
Thread.Sleep(delay);
return delay;
};
for (int i = 0; i < 100; i++)
{
work.DoAsync(j=>Console.WriteLine("hello:{0}",j));
}
Console.WriteLine ("non-block");
Console.Read();
}
public static class Extensions
{
    public static void DoAsync<TResult>(this Func<TResult> f, Action<TResult> callback)
    {
        f.BeginInvoke(x => callback(f.EndInvoke(x)), null);
    }

    public static void DoAsync<TInput, TResult>(this Func<TInput, TResult> f, TInput arg, Action<TResult> callback)
    {
        f.BeginInvoke(arg, x => callback(f.EndInvoke(x)), null);
    }
}
What is different async ctp?
"What is different?": Several things, but most importantly, your solution is callback-based. You have to turn your control flow inside out by providing a delegate that specifies what to do when the first operation completes (note that you're also not properly handling exceptions... if the async operation fails, that exception will propagate out of the EndInvoke call and likely crash the process). In contrast, the new async support in C#/VB allows you to write your normal control flow, without using callbacks. For more info, see.
async/await is really good, but I feel like it is missing the final piece: an "async foreach" construct, maybe based on IObservable.
Can you write a post about how that can be simply simulated?
Hi Flavien-
A synchronous foreach is basically syntactic sugar for (ignoring disposability for the purposes of this discussion):
while(e.MoveNext()) Body(e.Current);
So, if you had an API which exposed a "Task<bool> MoveNextAsync()" and a "T Current;", you could do the same basic thing with await:
while(await e.MoveNextAsync()) Body(e.Current);
If you were using a BufferBlock<T> from System.Threading.Tasks.Dataflow.dll, for example, you could do that with:
while(await buffer.OutputAvailableAsync()) Body(buffer.Receive());
or a bit more efficiently as:
while(await buffer.OutputAvailableAsync())
{
    T current;
    while(buffer.TryReceive(out current)) Body(current);
}
I implemented the last ForEachAsync example in the above that uses the partitioner and I call on the ForEachAsync prefixed with an await, but the execution in the calling method continues while the Task are still being processed instead of pausing until all the tasks are complete (WhenAll). Am I doing something wrong? Below is a sample of my code...
await new DocumentChunkHelper(document).GetChunkCollection().ForEachAsync<DocumentChunk>(
DegreeOfParallelism.Low,
chunk =>
{
return this._service.UpdateServiceTemplateWriteUp(
new DocumentBuffer()
{
Id = this.ServiceTemplate.ServiceTemplateId,
Buffer = chunk.Buffer,
BufferIndex = chunk.BufferIndex,
MaxBufferSize = chunk.MaxBufferSize,
TotalLength = chunk.TotalLength
}
);
}
);
This UpdateServiceTemplateWriteUp method's return type is Task<DocumentChunk>
What does your ForEachAsync code look like... is it 100% exactly what's written above? Or did you maybe replace the Task.Run with Task.Factory.StartNew or something like that? It could also be that the task you're returning from UpdateServiceTemplateWriteUp is completing before the represented work is actually done.
Stephen,
Can you explain why nothing is happening when I use the ForEachAsync:
public static void TestAsyncP()
{
Enumerable.Range(0, 100).ToList()
.ForEachAsync<int>(
10,
task =>
{
return new Task(async () => {
await ia(task);
//Console.WriteLine(ii);
});
//return task;
});
Console.WriteLine("ForEachAsync");
}
public static async Task<long> ia(int x)
long ii = 1;
Parallel.ForEach(Partitioner.Create(1, x), range =>
{
for (var ij = range.Item1; ij <= range.Item2; ij++)
Console.WriteLine("{0} - {1} - {2} - {3}", x, ij, range.Item1, range.Item2);
}
});
await Task.WhenAll(
Enumerable.Range(1, x).ToList()
.ConvertAll(y =>
Task.Run(() =>
System.Threading.Thread.Sleep(x);
ii *= y;
})).ToArray());
Console.WriteLine("ia: {0} - {1}", x, ii);
return ii;
@Lamb: I've not run your code, but a few things that jump out at me:
1. Task's ctor creates an unstarted Task. It will never be executed (and thus never complete) unless you call its Start method, which your code does not do.
2. Task's ctor does not work as you're expecting with async lambdas. See the post at blogs.msdn.com/.../10229468.aspx for related information; it's talking about Task.Factory.StartNew, but similar issues apply to the ctor.
If I try to start the task though I get an exception. Do you have any working examples of using the ForEachAsync?
10,
return new Task(async () =>
}).Start();
Returns: Start may not be called on a promise-style task.
@BermudaLamb: The Task you're trying to start in your example is the one being returned from ForEachAsync, not the one that you're constructing. You could instead put the .Start on the Task that you're constructing, but then you'll still have the other issue I mentioned. Just get rid of the .Start and replace "new Task" with "Task.Run".
Hi, was just reviewing some of your code.
To be honest I still have no idea how it all works. Maybe a 101 dummy session if you have the time.
Anyway. I have a working sample which implements your ForEachAsync Function below with thread limiting.
When I step into the code responsible for the partitioned ForEachAsync loop,
and I hover over partition I can see 'Partition.Current' threw an exception of type 'System.InvalidOperationException'. MoveNext must be called at least once before calling Current."
Is this normal?
The code seems to run fine however.
Other than that very nice work!
@Peter H: When you hover over a property like that, the debugger is actually executing the code associated with the property's getter. Since you're hovering over it before MoveNext has been called, Current throws an exception, as would be the case if you called Current on most any enumerator before you called MoveNext. | http://blogs.msdn.com/b/pfxteam/archive/2012/03/05/10278165.aspx | CC-MAIN-2014-41 | refinedweb | 1,762 | 57.77 |
import sys
import time

def delay_print(s):
    for c in s:
        sys.stdout.write('%s' % c)
        sys.stdout.flush()
        time.sleep(0.05)
I use raw_input() as a part of the text, and I have gotten it to print out letter-by-letter as well:
q_one = raw_input(delay_print("Are you a boy, or a girl? "))
However, when I run the code, the letters print out one-by-one, like wanted, but I also get a "None" statement afterwards:
Are you a boy, or a girl? None
I would like the text to still print out letter-by-letter, however I don't want the "None" at the end. This is only for my raw_input() commands. Any help? | http://www.python-forum.org/viewtopic.php?p=10862 | CC-MAIN-2014-15 | refinedweb | 122 | 79.3 |
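One way to keep the letter-by-letter effect without the stray None (a sketch, with hypothetical helper names): delay_print returns None, and raw_input prints whatever object it is given as a prompt; so print the prompt yourself, then call the input function with no prompt at all. input_fn is injectable so the same helper works with raw_input (Python 2) or input (Python 3):

```python
import sys
import time

def delay_print(s, delay=0.05):
    for c in s:
        sys.stdout.write(c)
        sys.stdout.flush()
        time.sleep(delay)

def ask(prompt, input_fn=input, delay=0.05):
    delay_print(prompt, delay)  # print the prompt ourselves...
    return input_fn()           # ...then read with an empty prompt

# q_one = ask("Are you a boy, or a girl? ")
```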
Type Design Guidelines in .NET
From the CLR perspective, there are only two categories of types—reference types and value types—but for the purpose of framework design discussion we divide types into more logical groups, each with its own specific design rules. Figure 4-1 shows these logical groups. Extensibility and base classes are covered in Chapter 6.
Interfaces are types that can be implemented both by reference types and value types; they are covered later in this book.
Figure 4-1: The logical grouping of types
4.1. Types and Namespaces
4.1.1 Standard Subnamespace Names
Types that are rarely used should be placed in subnamespaces to avoid cluttering the main namespaces. We have identified several groups of types that should be separated from their main namespaces.
4.1.1.1. | https://www.informit.com/articles/article.aspx?p=423349&seqNum=4 | CC-MAIN-2020-34 | refinedweb | 119 | 57.67 |
Red Hat Bugzilla – Bug 948089
xorg-x11-drv-armsoc fails to build
Last modified: 2013-04-05 09:06:51 EDT
Description of problem:
xorg-x11-drv-armsoc fails to build from source in F19.
Version-Release number of selected component (if applicable):
xorg-x11-drv-armsoc-0.5.1-8.fc19
How reproducible:
always
Steps to Reproduce:
1. rpm -ivh xorg-x11-drv-armsoc-0.5.1-8.fc19.src.rpm
2. rpmbuild -bb SPECS/xorg-x11-drv-armsoc.spec
3.
Actual results:
fails to build:
drmmode_display.c:56:22: fatal error: mibstore.h: No such file or directory
#include "mibstore.h"
Expected results:
successfully builds.
Additional info:
It appears to be missing a header file from the source. I'm not sure if anything else is missing.
related info:
It appears that this file was intentionally removed, but the driver needs to be updated to not call miInitializeBackingStore:
Created attachment 731692 [details]
patch to remove MIB backing store leftovers.
Created attachment 731693 [details]
Spec file changes to apply the patch
Adding a naive (trivial) patch to remove references to mibstore.h and miInitializeBackingStore allows this package to build. I have not tested that it works.
The spec file and source patch are attached. | https://bugzilla.redhat.com/show_bug.cgi?id=948089 | CC-MAIN-2018-26 | refinedweb | 205 | 52.56 |
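The attachments themselves are not inlined in this report. Purely as an illustration of what removing the MIB backing store leftovers tends to mean in practice (hypothetical hunks, not the actual attachment 731692; file names and hunk positions are assumptions), such a patch deletes the include and the initialization call:

```diff
--- a/src/drmmode_display.c
+++ b/src/drmmode_display.c
@@
-#include "mibstore.h"
@@
-	miInitializeBackingStore(pScreen);
```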
C library function - exp()
Description
The C library function double exp(double x) returns the value of e raised to the xth power.
Declaration
Following is the declaration for exp() function.
double exp(double x)
Parameters
x − This is the floating point value.
Return Value
This function returns the exponential value of x.
Example
The following example shows the usage of exp() function.
#include <stdio.h>
#include <math.h>

int main () {
   double x = 0;

   printf("The exponential value of %lf is %lf\n", x, exp(x));
   printf("The exponential value of %lf is %lf\n", x+1, exp(x+1));
   printf("The exponential value of %lf is %lf\n", x+2, exp(x+2));

   return(0);
}
Let us compile and run the above program that will produce the following result −
The exponential value of 0.000000 is 1.000000
The exponential value of 1.000000 is 2.718282
The exponential value of 2.000000 is 7.389056
I want to control a running process/program by script in Python.
I have a program linphonec (You can install: apt-get install linphonec).
My task is:
1. Run linphonec (I'm using subprocess at the moment)
2. When linphonec is running, it has many commands to control it, and I want to e.g. use "proxy list" <- this is a command in linphonec.
Simple flow:
test@ubuntu$ > linphonec
linphonec > proxy list
There are actually 2 ways to communicate:
Run your program with myprogram.py | linphonec to pass everything you print to linphonec's standard input
Use subprocess.Popen with subprocess.PIPE in the constructor via keyword args for stdin (probably stdout and stderr, too), and then use communicate for a single command, or use stdin and stdout (stderr) as files
import subprocess

p = subprocess.Popen("linphonec",
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE,
                     universal_newlines=True)  # this is for text communication
p.stdin.write("proxy list\n")
p.stdin.flush()  # make sure the command actually reaches the process
result_first_line = p.stdout.readline()
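For the "communicate for a single command" option mentioned above, a sketch (cat is used here as a stand-in for linphonec, since communicate() closes stdin and waits for the process to exit; the stand-in command is an assumption):

```python
import subprocess

# 'cat' echoes its stdin back, standing in for an interactive program
# such as linphonec; communicate() sends the input, closes the pipe,
# and collects all output once the process exits.
p = subprocess.Popen(["cat"],
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE,
                     universal_newlines=True)
out, _ = p.communicate("proxy list\n")
```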
Scala Supertypes to Typeclasses
After a few months of writing about Cats, it is great to take a small break. This pause isn’t to start anything new, but to build foundations for the upcoming posts. If you are looking to learn about those scary FP words, you will need to understand what is below.
Chances are, if you are looking to learn about cats, you will find the start quite easy. Hopefully, I can make the end easy too.
When you write code, it is a good idea to aim for generic logic. You never know when you might need to solve another very similar problem.
The simplest way to avoid duplication is by writing functions. They allow you to execute the same logic many times. This logic should be based on input arguments, and return output values.
def maxOption(elements: List[Int]): Option[Int] = {
if(elements.isEmpty) None
else Some(elements.max)
}
The above function is quite simple. It finds the largest element in a List[Int], or returns None. It is a safe alternative to the built-in one.
Our maxOption function is a great way to avoid redefining the if statement, but it isn't very generic. It only works with List[Int].
def maxOption(elements: Array[Float]): Option[Float] = ???
def maxOption(elements: Set[String]): Option[String] = ???
def maxOption(elements: Vector[Boolean]): Option[Boolean] = ???
...
It would be silly to define the function for every combination of types. This can be avoided with abstraction.
A supertype represents functionality that is inherited by another type. This is often illustrated with animals, shapes, or vehicles.
class Bicycle(
cadence: Int,
gear: Int,
speed: Int,
)
class MountainBike(
cadence: Int,
gear: Int,
speed: Int,
seatHeight: Int,
) extends Bicycle(cadence, gear, speed)
Array, List, and Set have many supertypes in common. Picking the smallest common denominator would increase compatibility with other types.
The only need for maxOption is for the supertype to implement isEmpty and max. Those can be found in the GenTraversableOnce trait.
import scala.collection.GenTraversableOnce
def maxOption(elements: GenTraversableOnce[Int]): Option[Int] = {
if(elements.isEmpty) None
else Some(elements.max)
}
GenTraversableOnce has over 350 subclasses. By using it instead of List, we increased compatibility, but Int is still very limiting.
Int, like String, Boolean, and many other types, only extends Any and AnyVal. Those types can't be compared to identify the maximum value.
def maxOption(elements: GenTraversableOnce[Any]): Option[Any] = ???
Instead of using a supertype, Int should be implemented as a generic. This allows the caller to specify any type, but it also means the function must handle all types.
def maxOption[A](elements: GenTraversableOnce[A]): Option[A] = ???
Once again this seems like the wrong approach, until you attempt to compile the code.
scala> import scala.collection.GenTraversableOnce
import scala.collection.GenTraversableOnce
scala> def maxOption[A](elements: GenTraversableOnce[A]) = {
| if(elements.isEmpty) None
| else Some(elements.max)
| }
<console>:14: error: No implicit Ordering defined for A.
else Some(elements.max)
^
The compiler raises an error. It doesn't know how to identify a maximum A, but it could with an implicit Ordering.
Ordering is a trait used to sort elements. It allows the compiler to identify the max value.
The function can take Ordering as an extra argument
def maxOption[A](elements: GenTraversableOnce[A])
(implicit ord: Ordering[A]): Option[A] = {
if(elements.isEmpty) None
else Some(elements.max)
}
Or a type bound
def maxOption[A: Ordering](elements: GenTraversableOnce[A]) = {
if(elements.isEmpty) None
else Some(elements.max)
}
The second is just syntactic sugar for the first.
Ordering is a typeclass. Similarly to a supertype, it defines, and sometimes implements, functionality. There is more to it, but I will keep that for the next post.
Let's see how Ordering could be used for maxOption if it was written for an auction company. It would need to return the highest Bid.
case class Bid(
owner: String,
amount: Float)
The wrong approach is to remove the generic and replace it with Bid. This would work, but the function wouldn't be generic anymore. Instead, a new implementation of Ordering should be created.
implicit val bidOrdering = new Ordering[Bid] {
def compare(x: Bid, y: Bid): Int = x.amount.compare(y.amount)
}
As long as the implicit is in scope, the function can be invoked with any GenTraversableOnce[Bid].
Supertypes offer a simple hierarchy explanation that makes it easy for people to use. Typeclasses, with the implicits, aren’t as welcoming, but offer the same functionality, and more.
Next time, with the basics out of the way, I will focus on the "more" part.
Query:
How to call an external command (as if I’d typed it at the Unix shell or Windows command prompt) from within a Python script?
How to execute a program or call a system command in Python?
Answer #1:
Use the subprocess module in the standard library:
import subprocess

subprocess.run(["ls", "-l"])
The advantage of subprocess.run over os.system is that it is more flexible (you can get the stdout, stderr, the "real" status code, better error handling, etc…). Even the documentation for os.system recommends using subprocess instead:
The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function. See the Replacing Older Functions with the subprocess Module section in the subprocess documentation for some helpful recipes.
On Python 3.4 and earlier, use subprocess.call instead of .run:
subprocess.call(["ls", "-l"])
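On Python 3.7+ the run-based equivalent can also capture output directly (a sketch; capture_output requires Python 3.7+, and a POSIX echo is assumed):

```python
import subprocess

result = subprocess.run(["echo", "hello"], capture_output=True, text=True)
# result.returncode is 0 on success; result.stdout holds the output
```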
Run a system command in Python - Answer #2:
Summary of ways to call external programs, including their advantages and disadvantages:
os.system passes the command and arguments to your system's shell. This is nice because you can run multiple commands at once this way and set up pipes and input/output redirection, et cetera. On the other hand, this also lets you run commands which are simply shell commands and not actually external programs.
os.popen will do the same thing as os.system except that it gives you a file-like object that you can use to access standard input/output for that process. Example:
print(os.popen("ls -l").read())
subprocess.Popen. This is intended as a replacement for os.popen, but it is slightly more complicated by virtue of being more comprehensive.
subprocess.call. This is basically just like the Popen class and takes all of the same arguments, but it simply waits until the command completes and gives you the return code. For example:
return_code = subprocess.call("echo Hello World", shell=True)
subprocess.run. Python 3.5+ only. Similar to the above but even more flexible and returns a CompletedProcess object when the command finishes executing.
os.fork, os.exec, os.spawn are similar to their C language counterparts, but I don't recommend using them directly. Also be aware that for any method where the final command is passed to a shell as a string, you are responsible for escaping it: if any part of that string comes from untrusted input, someone could pass something like “my mama didnt love me && rm -rf /”, which could erase the whole filesystem.
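The shell-injection risk above applies when a single command string is handed to a shell (shell=True). Passing an argument list keeps untrusted input inert, as this sketch shows (the hostile filename is a deliberately fabricated example):

```python
import subprocess

hostile = "file.txt && rm -rf /"  # untrusted input
# As a single list element this is one literal argument; no shell
# parses it, so the '&&' never becomes a second command.
result = subprocess.run(["ls", hostile],
                        capture_output=True, text=True)
# 'ls' fails to find a file with that literal name; nothing is deleted
```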
Answer #3:
Typical implementation:
Answer documentation is:
import subprocess
import sys

# Some code here

pid = subprocess.Popen([sys.executable, "longtask.py"])  # Call subprocess

# Some more code here
The idea here is that you do not want to wait in the line ‘call subprocess’ until the longtask.py is finished. But it is not clear what happens after the line ‘some more code here’ from the example.
My target platform was FreeBSD, but the development was on Windows, so I faced the problem on Windows first.
On Windows (Windows XP), the parent process will not finish until the longtask.py has finished its work. It is not what you want in a CGI script. The problem is not specific to Python; in the PHP community the problems are the same.
The solution is to pass DETACHED_PROCESS Process Creation Flag to the underlying CreateProcess function in Windows.
Answer #5:
import os

os.system("your command")
Note that this is dangerous since the command isn’t cleaned. I leave it up to you to google for the relevant documentation on the ‘os’ and ‘sys’ modules. There are a bunch of functions (exec* and spawn*) that will do similar things.
Answer #6: How to execute a program or call a system command? 🙂.
Hope you learned something from this post.
Follow Programming Articles for more! | https://programming-articles.com/how-to-execute-a-program-or-call-a-system-command-in-python-answered/ | CC-MAIN-2022-40 | refinedweb | 529 | 58.99 |
so this statement

int test[5];

makes a pointer of type int and another 5 int variables (28 bytes in total on a modern system - an 8-byte pointer & 5 4-byte integers). The address of the first int variable is stored in the pointer, but since it's not holding any information about the size of the array, there's nothing stopping you from indexing out of the range of the array
Is that correct?
No, int test[5] makes a fixed array with 5 elements, and takes 20 bytes. In most cases, when test is evaluated, it will decay into a temporary pointer.
There's nothing stopping you from indexing an array out of range due to the way the subscript index works -- this is discussed in more detail in the lesson on pointer arithmetic.
Sorry Alex my error.
Alex,
After going over this lesson again. There seems to be an error with the comments on both the programs that pass an array to a function. The array is 32 bytes and the pointer is 4 bytes.
Either that, or they print the values backwards.
Which example are you referring to?
hey alex i just got back here for a review because i already reached chapter 7.8 without fully understanding the chapter. it seems i misunderstood what "decay" means. it seems like the array, when passed into the function, decays into a pointer type which points to the first element of the array. by "decay" i first thought it would only include the first element in the array, thus discarding the rest. Taking a closer look at the examples above, it is now clear to me that that wasn't the case, and in fact the passed array decays into a pointer type but can still be used just as a regular array; only this time it is the actual array and not a copy.
just one question though: pointers hold only an address, and to get the value at that address we dereference a pointer with *. When an array is passed to a function and decays into a pointer type, how come we can use the variable name of the array without *? Wouldn't that be equivalent to getting the address of the element?
example
void testarray(int array[])
{
std::cout << array[3] << std::endl; // prints 44 not the address of 4th element
}
int main()
{
int myarray[] = {11,22,33,44,55};
testarray(myarray);
}
I'll try to see if I can find a way to update the lesson to be clearer about what I mean by decay since it's used in a non-obvious sense here.
Remember that operator[] does an implicit dereference, so array[3] is the same as *(array + 3).
Thanks Alex your tutorials are very much appreciated.
Alex.
When you modified the array element above, in the intro pass by address you used *ptr = 5;. It seems like this statement actually does three operations at once?
1. Point to the address of the first element array[0].
2. Assign a new value to this element of 5.
3. Dereference the pointer.
Is that right?
How does this statement actually work?
When changeArray() is called, the value of ptr is set by the caller -- in this case, it will point to the first element of array (array[0]). But that's not specific to the line you indicate.
*ptr = 5 does two things:
1) It dereferences ptr, which gets the value at the address the pointer is pointing to (in this case, array[0]).
2) It assigns a new value of 5 to this element.
So it's essentially saying array[0] = 5.
Good Thanks for the clarification.
Hi Alex. I want to leave this for you to repost in the proper section if u desire. I can't find your section on dangling pointers.
My question is: under what circumstances should the programmer be vigilant about uncovering dangling pointers? I would think it daunting to look everywhere all the time, and I am wondering whether there is a more intelligent way of checking for them.
I cover dangling pointers in the lesson 6.9.
One major problem with pointers is that there's no way to tell whether they're dangling. The burden is on the programmer to ensure that all pointers are valid.
That means if you deallocate memory or do something that will potentially leave pointers dangling, it's on you to clean things up. One of the best ways to do that is to not have more than one pointer pointing to a given memory address (so that if you use one pointer to deallocate, you don't leave the other dangling). Another good habit is to ensure you set your pointers to 0 or nullptr (unless they're going to go out of scope anyway).
Hi Alex,
In the process of review I recently learned that an array name is a pointer. In this lesson you were teaching how to assign a pointer(ptr) to an array(array). Does that mean the pointer (ptr) is pointing to another pointer, (array), that points to the address of the first element of the array?
e.g.
An array name is not a pointer! An array name is an array that can decay into a pointer that points to the first element of the array (which you can then assign to another pointer if you desire).
So yes, in your example, ptr would point to the first element of array.
> When passing an array as an argument to a function, a fixed array decays into a pointer
Then how can the decayed pointer still use the [] syntax inside the function?
How does this work?
Via pointer arithmetic, which I cover in lesson 6.8a, Pointer arithmetic and array indexing.
Hey Alex, been sat here for about 30 minutes trying to figure out whats going on and reading the comments to see if anyone has had similar questions.
I understand what you are saying in the question above about dereferencing; the thing I don't understand is how we are able to change an array via a pointer. I thought a pointer points to the address of a variable (or variables).
So when we pass the array into the void function, are we actually passing a pointer to the array? I changed the caller to
I got the message invalid conversion from int* to int. Is array automatically set to a pointer then?
Overall just confused, maybe because it's half 10 and I've got college tomorrow :P. Cheers for your help and time again Alex 🙂
Remember that arrays decay to pointers in most situations. So when we pass the array as an argument to function changeArray(), it decays into a pointer that points into the first element of the array.
So function parameter ptr is pointing to the first element of the array.
When we dereference *ptr, we get the value at the first element of the array, which we can then assign to a different value (such as 5).
Note that this works with non-arrays too:
Does that make sense? I talk more about passing by address in lesson 7.3. If you're still having trouble, you may want to jump ahead and read that lesson.
Thanks so much! I'm actually beginning to understand pointers finally and yeah I get it now. 🙂
There's something I'm not quite getting my head around (although it is late here).
You say that when passing an array to a function:
is the same as:
So why can't I change an element in the array with:
Because both * and [] do a dereference, so you're trying to dereference variable array twice.
Hi Alex. Thanks for the tutorials.
In the second difference between the pointers and arrays, when I check the type of the pointer to an array, i.e. using typeid(&array).name(), in Visual Studio I get 'int (*) [5]' and not 'int[5] *'. Is it because of differences between my and your compilers, or is it a typo?
I see the same thing in Visual Studio 2015. typeid().name reports int (*)[5], but the visual debugger reports int[5] *. I like the "int (*)[5]" syntax better, since that matches how you'd actually declare a pointer to a fixed array. I'll update the lesson.
I think in the section "Revisiting passing fixed arrays to functions" there is a little mistake: the "// array decays into a pointer here" comment should be one row below, next to printSize(array);
Quite right. Fixed!
#include <iostream>

int main()
{
    using namespace std;
    //string myString = "Christopher";
    char myStringArray[] = "Christopher";
    int arraySize = sizeof(myStringArray) / sizeof(myStringArray[0]);
}
Alex, I am having a problem with this question... please explain?
You're asking me to explain how this works? It's using a for loop with a pointer to step through each character in the string, and if it's a vowel, it increments nVowels.
Hi Alex, thank you for this tutorial. But I wonder if there are any other books or resources that could help me practice coding, I mean a book full of exercises like your quizzes. Best regards!!!
Thanks Alex... you're a star
Suggestion:
You should say exactly what cout prints when asked to print the address of a variable. Take this for example:
cout is printing the "very first" address of the first and second elements of the array. An int takes 4 bytes, meaning 4 addresses. Those 4 memory addresses are reserved for that int variable. cout never prints all 4 addresses reserved for that variable, only the address of the first byte. In the above program, the first line of output is the (starting) address reserved for variable value. Let's say it is 0x0000. Memory from 0x0000 to 0x0003 is reserved for value. When cout is asked to print the address of value2, it gives 0x0004, again the starting address. This clears up the confusion (mine, at least :-)) about why an array's address and its first element's address are the same when printed. An array has nothing in it but its elements. Thus, an array's addresses start at the (first) address of its first element and end with the (last) address of its last element. cout only prints the starting address; that's why it gives identical output when printing the address of an array (&array) and of its first element (&array[0]). sizeof returns the total size of its operand (sizeof is an operator, not a function, even though it is written with parentheses).
The part "Arrays and Pointers in C++" is really confusing for beginners like me. I gave 5 days to this chapter to clear everything up, and today I am at least not afraid of the title "Arrays and Pointers". I couldn't ask for anything better. Your site has the simplest explanation of arrays and pointers.
Thanks for the feedback. I've rewritten parts of the article to try and make some of the points more clear. Talking about the sequential memory addresses of arrays is a good idea, but probably better for the next lesson. I'll look at adding it there.
1. "To us, the above is an array of 5 integers, but to the compiler, array is a variable of type int[5]"
array is a pointer variable of type int[5], right? Add "pointer" before the term "variable" or let me know if I am taking it wrong.
2. Alex, help me get out of this big confusion:
In this program, if array is a variable of type int[5] and name is of type char[] that points to the first element of that array, what has decayed then? The code prints what's expected: the array variable points to the first element of array[5], and dereferencing it will definitely output the first element. Hope you understand what I am asking.
3. "Taking the address of the array returns a pointer to the entire array (in the above example, int[5] *), not just the first element (int *)." I can't understand what you are trying to say here. Do you mean "&array returns the address of the entire array"? If yes, then again this program tricks me:
If &array returns the address of the entire array, then why is the address of the first element (&array[0]) the same (printing the same address as &array)?
4. How to print the address of entire array and not just the address of its first element?
Sorry for bunch of questions, but this lesson is really confusing to me.
Good questions.
1) Array has type int[5], not int*. This is why we can use sizeof() on it and get the size of the array, not the size of a pointer.
2) Arrays decay from their array types into a pointer type in most cases, including when they're sent to cout. In practical terms, this rarely matters unless you need the size information.
3) The address of the first element and the address of the entire array are the same. The only difference is the type information that is returned. A pointer to the first element would have type int*, whereas a pointer to the whole array would have type int (*)[5]. The only case where this is likely to matter is when you're doing pointer arithmetic. Incrementing a pointer to the first element will advance to the second element. Incrementing a pointer to the entire array will move to the memory address just beyond the array.
4) Again, they both point to the same address. It's only the typing that's different.
Let me know if this is clear, or how I could make the above lesson clearer to begin with.
Hi,
to question 3:
What is the correct syntax to initialize a pointer to an int array? I want to do something like this:
Additionally, why does the following not work:
Greetings from Germany,
Tobias
ptr will point at the first element of array (index 0).
One more conclusion, may be wrong:
I was testing arrays with different operators. Some operators cause an array to decay into a pointer and some don't. If I am right (not sure, I am just a beginner), I found the answer to the question of why an array decays into a pointer when passed to a function. Maybe the function call operator (()) is one of the operators that causes the array to decay into a pointer, with no side effect. Please read my previous comment; I asked a question there.
There are other operators that cause arrays to decay into pointers, such as unary operator+. So it's not just the function call operator that causes decay.
Thanks Alex, seems like you didn't noticed this in my comment:
"This effectively allows us to treats(should be treat) an array as a pointer in most cases"
Typo is in the first paragraph after the first program posted in this lesson. The last line.
I am back with a question. If we can increment or decrement a pointer by 1 address using the increment/decrement operators, why does the compiler complain when these operators are used directly on arrays to increment/decrement the address (something like (&array[0]) + 1) by 1 address? I know the question alone won't explain what I am asking. Here is the program:
If array decays into a pointer when evaluated, what is happening on line 15? Is it also a special case? If yes, can I say that an array remains an array (and doesn't decay into a pointer) when seen with a unary operator by the compiler?
I am not sure, because the dereference operator is also a unary operator.
Whatever it is, I am unable to understand why line 15 gives a compiler error. Can't it print the address of the integer that comes after the address of the last element in the array?
Thanks for the heads up about the typo.
++array is the equivalent of array = array + 1. A fixed array's value (the address it points to) is considered const (it's non-reassignable). You're trying to change it, which is illegal.
I found one more:
"In the fixed array case, the program allocates memory for a fixed array of length 5, and initializes that memory with the string “Alex\n”"
"What usually happens is that the compiler places the string “Alex\n” into read-only memory somewhere, and then sets the pointer to point to it"
If a string terminates with a null terminator and not with a newline escape sequence(\n), the "Alex\n" part should be "Alex" in both sentences. Take a look at my previous comment in this section. There are more typos.
A question...actually 2:
1. If an array decays into a pointer when evaluated, why does sizeof(array) print length * element size? Does this mean the bracketed term (array) is not evaluated in this statement?
2. When I tried to get the address of a char variable with the following program, it prints a strange character and not the address:
Why...???
Thanks for the typo notifications.
1) This is just a special case.
2) This is an interesting case you've found. You intended to print the memory address of variable value, but &value is being interpreted as a value of type char*. When you pass a value of type char* to std::cout, it prints that value as a string. Your variable value isn't null terminated, so it prints a, and then runs off into uninitialized memory and prints some garbage before randomly hitting a 0, which acts as a null terminator.
Typos:
"This effectively allows us to treats(treat) an array as a pointer in most cases"
char array variable is named "name". "szname" is undefined, so dereferencing it will cause a compile error. Remove "sz" from the pointer name in the last line.
"Note that ptr+1 does not return the memory address after pnPtr(should be ptr)"
Fixed, thanks!
Yes, that is exactly what it would do. 🙂
That's one of the many reasons you need to be careful using pointers.
I have a question regarding a simple program which indexes using pointer arithmetic in order to reverse the case of a string. The code is this:
This will output the following:
...rather than a hex address for pnPtr.
While setting up a random integer array and carrying out the same process (sans vowel counting)...
...the address is printed instead. Why?
std::cout assumes that it should treat char* as a string, so it prints objects of that type as a string. For other types of pointers, it just prints the content of the pointer (the address the pointer is holding as a value).
I really appreciate your guide! THANKS
Wow. I finally understand pointers! Yay!
That is a pretty handy way of stepping through an array.
I want pnPntr to point to the first address of szName.
Yet this works,
Depending on what is on the left of szName, szName means completely different things!
I get Mollie...
Put a pointer declaration in front and suddenly szName is an address!
The short answer to all your questions is that szName decays to a pointer to the first element of the array.
So when you say this:
You're setting pnPntr to the content of szName, which is the address of the first array element. That's likely what you'd intended.
When you say this:
You're setting pnPntr to the address of szName, which itself is pointing to the first element of the array.
Assuming the former:
This is the same as pnPntr[0], so it should be no surprise it only prints the first element.
cout is smart enough to see that pnPntr is a char pointer, so it assumes you want to print the whole string. 🙂
This is going over my head:
Could you explain it, please?
I'm sorry but would it be possible to explain? For example the 'pn:
This is creating a new pointer named pnPtr and setting it to point to the first character in the name array. After this, pnPtr and szName will be holding the same address.
But when I change the code to the below, the result becomes 4, which I do not understand. It is as if the program is counting the last whitespace character as a vowel. Can you explain this to me?
By Robert Love
In Linux, each open file is referenced by a unique, nonnegative integer (an int) called the file descriptor, abbreviated fd. File descriptors are shared with user space, and are used directly by user programs to access files. A large part of Linux system programming consists of opening, manipulating, closing, and otherwise using file descriptors.

File descriptors are represented by the C int type. Not using a special type—an fd_t, say—may look odd, but is historic Unix practice. The value −1 is often used to indicate an error from a function that would otherwise return a valid file descriptor.

Files are read and written via the read( ) and write( ) system calls. Before a file can be accessed, however, it must be opened via an open( ) or creat( ) system call. Once done using the file, it should be closed using the system call close( ).

A file is opened, and a file descriptor obtained, with the open( ) system call:
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

int open (const char *name, int flags);
int open (const char *name, int flags, mode_t mode);
The open( ) system call maps the file given by the pathname name to a file descriptor, which it returns on success. The file position is set to zero, and the file is opened for access according to the flags given by flags.

The flags argument must be one of O_RDONLY, O_WRONLY, or O_RDWR. Respectively, these arguments request that the file be opened only for reading, only for writing, or for both reading and writing.
int fd;

fd = open ("/home/kidd/madagascar", O_RDONLY);
if (fd == -1)
        /* error */
The process issuing the open( ) system call must have sufficient permissions to obtain the access requested.

The flags argument can be bitwise-ORed with one or more of the following values, modifying the behavior of the open request:

O_APPEND
The file will be opened in append mode: before each write, the file position will be updated to point to the end of the file.

O_ASYNC
A signal (SIGIO by default) will be generated when the specified file becomes readable or writable. This flag is available only for terminals and sockets, not for regular files.

O_CREAT
If the file given by name does not exist, the kernel will create it. If the file already exists, this flag has no effect unless O_EXCL is also given.

O_DIRECT
The file will be opened for direct I/O, bypassing the page cache (discussed later in this section).
The most basic mechanism for reading is the read( ) system call, defined in POSIX.1:
#include <unistd.h>

ssize_t read (int fd, void *buf, size_t len);
Each call reads up to len bytes into buf from the current file offset of the file referenced by fd. On success, the number of bytes written into buf is returned. On error, the call returns −1, and errno is set. The file position is advanced by the number of bytes read from fd. If the object represented by fd is not capable of seeking (for example, a character device file), the read always occurs from the "current" position.
The simplest use reads from fd into word. The number of bytes read is equal to the size of the unsigned long type, which is four bytes on 32-bit Linux systems, and eight bytes on 64-bit systems. On return, nr contains the number of bytes read, or −1 on error:
unsigned long word;
ssize_t nr;

/* read a couple bytes into 'word' from 'fd' */
nr = read (fd, &word, sizeof (unsigned long));
if (nr == -1)
        /* error */
This naïve usage has two problems: the call might return without reading all len bytes, and it could produce certain errors that this code does not check for and handle. Code such as this, unfortunately, is very common. Let's see how to improve it.

It is legal for read( ) to return a positive nonzero value less than len. This can happen for a number of reasons: fewer than len bytes may have been available, the system call may have been interrupted by a signal, the pipe may have broken (if fd is a pipe), and so on.

A return value of 0 is another consideration when using read( ). The read( ) system call returns 0 to indicate end-of-file (EOF); in this case, of course, no bytes were read. EOF is not considered an error (and hence is not accompanied by a −1 return value); it simply indicates that the file position has advanced past the last valid offset in the file, and thus there is nothing else to read. If, however, a call is made for len bytes, but no bytes are currently available, the call will block (sleep) until the bytes become available (assuming the file descriptor was not opened in nonblocking mode).
The most basic mechanism for writing is the write( ) system call. write( ) is the counterpart of read( ), and is also defined in POSIX.1:
#include <unistd.h>

ssize_t write (int fd, const void *buf, size_t count);
A call to write( ) writes up to count bytes starting at buf to the current file position of the file referenced by the file descriptor fd. Files backed by objects that do not support seeking (for example, character devices) always write starting at the "head."

On success, the number of bytes written is returned, and the file position is updated in kind. On error, −1 is returned, and errno is set appropriately. A call to write( ) can return 0, but this return value does not have any special meaning; it simply implies that zero bytes were written.
As with read( ), the most basic usage is simple:
const char *buf = "My ship is solid!";
ssize_t nr;

/* write the string in 'buf' to 'fd' */
nr = write (fd, buf, strlen (buf));
if (nr == -1)
        /* error */
Again, as with read( ), this usage is not quite right. Callers also need to check for the possible occurrence of a partial write:
unsigned long word = 1720;
size_t count;
ssize_t nr;

count = sizeof (word);
nr = write (fd, &word, count);
if (nr == -1)
        /* error, check errno */
else if (nr != count)
        /* possible error, but 'errno' not set */
The write( ) system call is less likely to return a partial write than a read( ) system call is to return a partial read. Also, there is no EOF condition for a write( ) system call. For regular files, write( ) is guaranteed to perform the entire requested write, unless an error occurs.

For other file types, checking for partial writes is worthwhile: a subsequent call to write( ) may return an error revealing what caused the first call to perform only a partial write (although, again, this situation is not very common). Here's an example:
ssize_t ret, nr;

while (len != 0 && (ret = write (fd, buf, len)) != 0) {
        if (ret == -1) {
                if (errno == EINTR)
                        continue;
                perror ("write");
                break;
        }

        len -= ret;
        buf += ret;
}
The simplest way for an application to guarantee that data has reached the disk is the fsync( ) system call, defined by POSIX.1b:
#include <unistd.h>

int fsync (int fd);
A call to fsync( ) ensures that all dirty data associated with the file mapped by the file descriptor fd is written back to disk. The file descriptor fd must be open for writing. The call writes back both data and metadata, such as creation timestamps, and other attributes contained in the inode. It will not return until the hard drive says that the data and metadata are on the disk.

In the case of hard drives with write caches, it is not possible for fsync( ) to know whether the data is physically on the disk. The hard drive can report that the data was written, but the data may in fact reside in the drive's write cache. Fortunately, data in a hard disk's cache should be committed to the disk in short order.
Linux also provides the system call fdatasync( ):
#include <unistd.h>

int fdatasync (int fd);
This system call does the same thing as fsync( ), except that it only flushes data. The call does not guarantee that metadata is synchronized to disk, and is therefore potentially faster. Often this is sufficient.
int ret;

ret = fsync (fd);
if (ret == -1)
        /* error */
Passing the O_DIRECT flag to open( ) instructs the kernel to minimize the presence of I/O management. When this flag is provided, I/O will initiate directly from user-space buffers to the device, bypassing the page cache. All I/O will be synchronous; operations will not return until completed.
After a program has finished working with a file descriptor, it can unmap the descriptor from the file via the close( ) system call:
#include <unistd.h>

int close (int fd);
A call to close( ) unmaps the open file descriptor fd, and disassociates the process from the file. The given file descriptor is then no longer valid, and the kernel is free to reuse it as the return value to a subsequent open( ) or creat( ) call. A call to close( ) returns 0 on success. On error, it returns −1, and sets errno appropriately. Usage is simple:
if (close (fd) == -1)
        perror ("close");
Closing the last file descriptor that refers to an unlinked file via close( ) may also result in the file finally being physically removed from the disk.

It is a common error not to check the return value of close( ). This can result in missing a crucial error condition, because errors associated with deferred operations may not manifest until later, and close( ) can report them.

There are a handful of possible errno values on failure. Other than EBADF (the given file descriptor was invalid), the most important error value is EIO, indicating a low-level I/O error probably unrelated to the actual close. Regardless of any reported error, the file descriptor, if valid, is always closed, and the associated data structures are freed.
The lseek( ) system call is provided to set the file position of a file descriptor to a given value. Other than updating the file position, it performs no other action, and initiates no I/O whatsoever:
#include <sys/types.h>
#include <unistd.h>

off_t lseek (int fd, off_t pos, int origin);
The behavior of lseek( ) depends on the origin argument, which can be one of the following:

SEEK_CUR
The file position of fd is set to its current value plus pos, which can be negative, zero, or positive. A pos of zero returns the current file position value.

SEEK_END
The file position of fd is set to the current length of the file plus pos, which can be negative, zero, or positive. A pos of zero sets the offset to the end of the file.

SEEK_SET
The file position of fd is set to pos. A pos of zero sets the offset to the beginning of the file.
The call returns the new file position on success. On error, it returns −1, and errno is set as appropriate.
For example, to set the file position of fd to 1825:
off_t ret;

ret = lseek (fd, (off_t) 1825, SEEK_SET);
if (ret == (off_t) -1)
        /* error */
Alternatively, to set the file position of fd to the end of the file:
off_t ret;

ret = lseek (fd, 0, SEEK_END);
if (ret == (off_t) -1)
        /* error */
Because lseek( ) returns the updated file position, it can be used to find the current file position via a seek of SEEK_CUR to zero:
off_t pos;

pos = lseek (fd, 0, SEEK_CUR);
if (pos == (off_t) -1)
        /* error */
else
        /* 'pos' is the current position of fd */
The most common uses of lseek( ) are seeking to the beginning, seeking to the end, or determining the current file position of a file descriptor.
It is possible to use lseek( ) to advance the file pointer past the end of a file. For example, this code seeks to 1,688 bytes beyond the end of the file mapped by fd:
off_t ret;

ret = lseek (fd, (off_t) 1688, SEEK_END);
if (ret == (off_t) -1)
        /* error */
Instead of manipulating the file position with lseek( ), Linux provides two variants of the read( ) and write( ) system calls that each take as a parameter the file position from which to read or write. Upon completion, they do not update the file position.
The read form is called pread( ):
#define _XOPEN_SOURCE 500
#include <unistd.h>

ssize_t pread (int fd, void *buf, size_t count, off_t pos);
This call reads up to count bytes into buf from the file descriptor fd at file position pos.
The write form is called pwrite( ):
#define _XOPEN_SOURCE 500
#include <unistd.h>

ssize_t pwrite (int fd, const void *buf, size_t count, off_t pos);
This call writes up to count bytes from buf to the file descriptor fd at file position pos.
These calls are almost identical in behavior to their non-p brethren, except that they completely ignore the current file position; instead of using the current position, they use the value provided by pos. Also, when done, they do not update the file position. In other words, any intermixed read( ) and write( ) calls could potentially corrupt the work done by the positional calls.
Both positional calls can be used only on seekable file descriptors. They provide semantics similar to preceding a normal read( ) or write( ) call with a call to lseek( ), with three differences. First, these calls are easier to use, especially when doing a tricky operation such as moving through a file backward or randomly. Second, they do not update the file pointer upon completion. Finally, and most importantly, they avoid any potential races that might occur when using lseek( ). As threads share file descriptors, it would be possible for a different thread in the same program to update the file position after the first thread's call to lseek( ), but before its read or write operation executed. Such race conditions can be avoided by using the pread( ) and pwrite( ) system calls.
On success, both calls return the number of bytes read or written. A return value of 0 from pread( ) indicates EOF; from pwrite( ), a return value of 0 indicates that the call did not write anything. On error, both calls return −1, and errno is set as appropriate.
#include <unistd.h>
#include <sys/types.h>

int ftruncate (int fd, off_t len);
#include <unistd.h>
#include <sys/types.h>

int truncate (const char *path, off_t len);
Both system calls truncate the given file to the length given by len. The ftruncate( ) system call operates on the file descriptor given by fd, which must be open for writing. The truncate( ) system call operates on the filename given by path, which must be writable. Both return 0 on success. On error, they return −1, and set errno as appropriate.

The most common use of these system calls is to truncate a file to a size smaller than its current length. The data previously existing between len and the old length is discarded, and no longer accessible via a read request.
For example, assume the file pirate.txt contains the following:

Edward Teach was a notorious English pirate. He was nicknamed Blackbeard.
#include <unistd.h>
#include <stdio.h>

int main( )
{
        int ret;

        ret = truncate ("./pirate.txt", 45);
        if (ret == -1) {
                perror ("truncate");
                return -1;
        }

        return 0;
}
After the program runs, the file is 45 bytes long and contains:

Edward Teach was a notorious English pirate.
Consider what happens if a read( ) system call is issued and there is not yet any data—the process will block, no longer able to service the other file descriptors. It might block for just a few seconds, making the application inefficient and annoying the user. However, if no data becomes available on the file descriptor, it could block forever. Because file descriptors' I/O is often interrelated—think pipes—it is quite possible for one file descriptor not to become ready until another is serviced. Particularly with network applications, which may have many sockets open simultaneously, this is potentially quite a problem.
When an application issues a read( ) system call, it takes an interesting journey. The C library provides definitions of the system call that are converted to the appropriate trap statements at compile-time. Once a user-space process is trapped into the kernel, passed through the system call handler, and handed to the read( ) system call, the kernel figures out what object backs the given file descriptor.
Standard I/O routines do not operate directly on file descriptors. Instead, they use their own unique identifier, the file pointer: a pointer to the FILE typedef, which is defined in <stdio.h>.
Files are opened for reading or writing via fopen( ):
#include <stdio.h>

FILE * fopen (const char *path, const char *mode);
This function opens the file path according to the given modes, and associates a new stream with it.

The mode argument describes how to open the given file. It is one of the following strings:
r
Open the file for reading. The stream is positioned at the start of the file.

r+
Open the file for both reading and writing. The stream is positioned at the start of the file.

w
Open the file for writing. If the file exists, it is truncated to zero length. If the file does not exist, it is created. The stream is positioned at the start of the file.

w+
Open the file for both reading and writing. If the file exists, it is truncated to zero length. If the file does not exist, it is created. The stream is positioned at the start of the file.

a
Open the file for writing in append mode. The file is created if it does not exist. The stream is positioned at the end of the file, and all writes will append to the file.

a+
Open the file for both reading and writing in append mode. The file is created if it does not exist. The stream is positioned at the end of the file, and all writes will append to the file.
The given mode may also contain the character b, although this value is always ignored on Linux. Some operating systems treat text and binary files differently, and the b mode instructs the file to be opened in binary mode. Linux, as with all POSIX-conforming systems, treats text and binary files identically.

Upon success, fopen( ) returns a valid FILE pointer. On failure, it returns NULL, and sets errno appropriately.
For example, the following code opens /etc/manifest for reading, and associates it with stream:
FILE *stream;

stream = fopen ("/etc/manifest", "r");
if (!stream)
        /* error */
The function fdopen( ) converts an already open file descriptor (fd) to a stream:
#include <stdio.h>

FILE * fdopen (int fd, const char *mode);
The possible modes are the same as for fopen( ), and must be compatible with the modes originally used to open the file descriptor. The modes w and w+ may be specified, but they will not cause truncation. The stream is positioned at the file position associated with the file descriptor.

Upon success, fdopen( ) returns a valid file pointer; on failure, it returns NULL.
For example, the following code opens a file via the open( ) system call, and then uses the backing file descriptor to create an associated stream:
FILE *stream;
int fd;

fd = open ("/home/kidd/map.txt", O_RDONLY);
if (fd == -1)
        /* error */

stream = fdopen (fd, "r");
if (!stream)
        /* error */
The fclose( ) function closes a given stream:
#include <stdio.h>

int fclose (FILE *stream);
Any buffered and not-yet-written data is first flushed. Upon success, fclose( ) returns 0. On failure, it returns EOF and sets errno appropriately.
The fcloseall( ) function closes all streams associated with the current process, including standard in, standard out, and standard error:
#define _GNU_SOURCE
#include <stdio.h>

int fcloseall (void);
This function always returns 0; it is Linux-specific.
The new and virtual keywords feel the same to me, since the new keyword on a method hides the same base class method, whereas the virtual keyword lets the base class method be overridden. So what is the difference between new and virtual, and how do I decide when to use each?
[CODE]
public class Base123
{
    public void add()
    {
        Console.WriteLine("Hi tis is A");
    }

    public virtual void a()
    {
        Console.WriteLine("Hi its Base");
    }
}

class Derived123 : Base123
{
    public new void add()
    {
        Console.WriteLine("Hi this B");
    }

    public override void a()
    {
        Console.WriteLine("Hi its derived");
    }

    public static void Main(string[] args)
    {
        // the original had `new Derived123(5)`, but no such constructor exists
        Derived123 d = new Derived123();
        d.add();
        d.a();
    }
}
[/CODE]
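The difference only becomes visible when the call goes through a base-class reference. A sketch using the classes from the question:

```csharp
Base123 b = new Derived123();

b.a();   // prints "Hi its derived": virtual/override dispatches on the
         // object's runtime type (Derived123)
b.add(); // prints "Hi tis is A": new only hides; through a Base123
         // reference, the base method runs

Derived123 d = new Derived123();
d.add(); // prints "Hi this B": the hiding method is used here
```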
I'm interested in pausing the world dynamics to allow objects in the scene to be manipulated (moved, rotated, etc.). While dynamics are disabled, I do want collisions to be active, so that objects can be manipulated without their bodies intersecting in space where they would collide in real-time.
My first instinct was to manually turn off gravity and call ResetDynamics on all bodies in the world. This does freeze the scene, but does not disable dynamics, and moving objects with the mouse is subject to inertia. I also considered (based on something I read on Box2D) setting the update timestep to zero. I have not investigated this route fully yet; however, it does seem that updating positions/rotations and other properties would not be reflected in the world, because those are affected by the time step as well.
Since I'm kind of wandering in the dark here, I was hoping someone might be able to provide some insight on how they think this could best be accomplished. Thanks for any help.
Joseph G.
Just an update: I'm playing with the idea of implementing a mouse joint that is more like a weld joint than a distance joint, in that the selected body's relative inertia becomes static. When the mouse button is released, the previously selected body will call ResetDynamics to prevent additional motion. I don't know if this will work, mostly because, as weld joints have two bodies, I would need to construct an arbitrary body for the cursor, which seems likely to cause problems.
Anyone with any thoughts? Any input would be most appreciated; as this has proven to be a most difficult task.
Thanks
To move objects you manually set their velocities. There are a few other threads where I answer this question. I even provided an example once. But here are the basic steps:
This also can work with rotation but that requires quite a bit more math to find all the right angles.
NOTE: I pulled all this directly from my head so I could be forgetting something. But this should get you on the right path.
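Matt's numbered steps didn't survive in this copy of the thread, but the velocity-steering approach he describes usually looks roughly like the following sketch (Body.Position, LinearVelocity, AngularVelocity, and ResetDynamics are real Farseer members; the variable names and timestep are illustrative):

```csharp
// Each Update while the mouse button is held:
Vector2 toTarget = mouseWorldPosition - grabbedBody.Position;
float dt = 1f / 60f; // your fixed timestep

// Setting velocity (instead of position) lets the solver still
// resolve collisions while the body chases the cursor.
grabbedBody.LinearVelocity = toTarget / dt;

// On mouse release: kill all motion so the body stays where it was left.
grabbedBody.LinearVelocity = Vector2.Zero;
grabbedBody.AngularVelocity = 0f;
grabbedBody.ResetDynamics();
```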
Thanks, Matt.
This seems like a somewhat cumbersome method to manipulate the scene, using the physics API, rather than explicitly setting the class properties of sprites based on mouse position. Perhaps this is what you're getting at?
Joseph
@Bludo: To keep collisions functioning properly you have to move your bodies by changing their velocity. We (the engine developers) could add this to the list of features if you post a detailed suggestion to the Issue Tracker. Make sure you set the type as Feature. If I get some time I might be able to whip up a sample.
Okay, now I see.
I don't know if it's worth putting in a full-on feature request for this. However, being able to pause dynamics to manipulate bodies could provide for some very interesting (and testbed-friendly) physics testing.
I'm guessing that I would probably be able to implement this in a fork independently sooner than it would get through the request pipeline (but maybe not).
I'd be open to suggestions for a point of entry. I assume this would take some pretty heavy refactoring of the World class for starters. As well as some reworking of most of the dynamics namespace.
There would be no refactoring at all. Just a few methods added to allow users to simply "set" the position/rotation of a body and have that translated into linear/angular velocities for them. All the mouse transforming code would have to stay as part of the sample.
So I've implemented a method as you recommended, where bodies can be moved by setting the linear velocity manually (I actually found an old demo you made and updated it to work with the latest version of Farseer to see what you were getting at).

I do have a question, however, that may be better off in a new post. I'm setting the body's linear velocity to move to the cursor position, but the body isn't always able to keep up with the speed of the cursor. The velocity appears to have a ceiling if the mouse is moved too quickly. I dug around the forums, and it appears setting the MaxTranslation value in Settings will allow bodies to be moved at higher velocities.

So my question is: why is the value a constant? I'm reluctant to modify any hard-coded constraints within the Farseer library itself. I assume there is a reason for this value to go unmodified, so are there any caveats we should be aware of before altering the settings values, particularly MaxTranslation?
Remove nodes from graph or reset entire default graph
When working with the default global graph, is it possible to remove nodes after they've been added, or alternatively to reset the default graph to empty? When working with TF interactively in IPython, I find myself having to restart the kernel repeatedly. I would like to be able to experiment with graphs more easily if possible.
Update 11/2/2016
tf.reset_default_graph()
Old stuff
There's reset_default_graph, but it's not part of the public API (I think it should be; does someone want to file an issue on GitHub?).
My work-around to reset things is this:
from tensorflow.python.framework import ops
ops.reset_default_graph()
sess = tf.InteractiveSession()
From: stackoverflow.com/q/33765336 | https://python-decompiler.com/article/2015-11/remove-nodes-from-graph-or-reset-entire-default-graph | CC-MAIN-2019-26 | refinedweb | 135 | 57.67 |
The setting: the Lands Between. The continent that dwells between the deadly land of Euraxia and the peaceful kingdom of Yggdrasil. This continent that connects the two is created from the blood and sweat of the past. Each tribe, each country, and each city holds their differences.
The continent of the Elder Ring.
Throughout the continent, there are cities and towns where the people live a simple yet noble life. The people in these cities are quiet and are laid-back, but they occasionally take part in the larger world of the continent.
The Land of Wizards.
In the middle of the continent, there is a land that brings forth its magic and has its own tribal, cultural, and familial characteristics. Here, there are wizards and mages who protect this land and the people that dwell there. Their magic and the strength of their spirit are as powerful as the ring on their finger.
The Lands Between.
The ocean, called the Lands Between, is the center of the continent. It is a land full of danger and turmoil. It is a land haunted by the monsters that walk the endless seas of the world, and it is a land of many mysteries.
The Lands Between are divided into eight regions. These regions are divided into cities and provinces that make up the inhabitants of the region. The map of the region in-game is similar to the one shown in the title screen.
i. The Eastern Lands.
The Eastern Lands is the main region of the Lands Between.
When you enter the region, the game will start from the village inhabited by the Lahn tribe. The inhabitants of this village call the fruit that grows in this region “Apple.”
As the story unfolds, you will get lost in an adventure that is brimming with wonder and thrill. You will meet the people of the region, and through them, you will earn the trust of the region.
II. The Southern Lands.
This region is inhabited by the Skall tribe who call the fruit that grows in this region “Grapes.” The population is small and the inhabitants are devoted to music and dance.
Unlike the Lahn and other tribes, the Skall do not share a language.
As the story unfolds, you will learn the harsh history of the Southern Lands as you meet the people who live there.
III. The Midian Lands.
This region is the land of the Midian tribe. They call the fruit that grows in this region “Orange.”
Features Key:
For example, there are characters from Final Fantasy I, II, and III that will allow you to enjoy satisfying playthroughs and revelations.
Card customizations that allow you to customize the elemental attributes of your cards, weapons, artifacts, and deities, and enhance them to make them more powerful.
High and Excellent elements that allow us to increase the relish element of the game for more thrilling battles.
High-level enhancements including special moves and weapon skills, and Battle Points that boost your victory battle strength.
Each character has more than 20-voice lines, and has captured the essence of their original personalities.
In this prologue, the previously unannounced emblem is revealed for each job, and the character profile can be fully established.
There are lots of twists that can be justifiably expected to enthrall you.
Elden Ring Crack + Download [Win/Mac] [2022-Latest]
“I feel that Auro is pushing ELDEN RING to keep the title”
Crush Game
“I highly recommend this game”
GRACE 100
“ELDEN RING is already a challenging game that I’m playing even after only 6 hours”
GRACE 100
“ELDEN RING review: it’s an RPG and more. It’s a game with more depth and more at stake. I’ve finally managed to understand this game and I just wish there were more like it in the world”
Falcon of Dboard
“I did fall into a short slump after completing the first story… until I found a way to make myself enjoy the story again, and now I’m all smiles”
Celtic Sage
“ELDEN RING – is a great RPG with the potential to take over the genre and carve its own path”
OGRE Magazine
“ELDEN RING is a great RPG with unique features and a deep story”
5 STARS Game Beat
“ELDEN RING review: The game is strong on several levels, whether it’s the story, the visuals, the music and the gameplay”
Beyond the Beyond
“ELDEN RING is a game for all ages.”
3DBOY
“ELDEN RING is an entertaining, gripping, and epic story that I recommend you try if you like fantasy RPGs”
5 STARS Game Beat
“ELDEN RING review: The story is very strong and engaging, the characters are well fleshed out and each of them has their own identity. The story is emotionally gripping, with some very gripping battle scenes. The art style is beautiful, with detailed character designs and animations. It’s also a very polished game; there are not many glitches and the framerate is always stable. The gameplay is very smooth, and the weapons and characters have a good balance of gameplay variety with its solid controls. There’s also a very balanced and realistic character building system, as well as a very well implemented farm, armour and clothing system.”
Elden Ring [Win/Mac]
Free Download Elden Ring [Win/Mac]!
Logo and Steam Group “Ashenha”
How to install and play the ELDEN RING game:
How to install ELDEN RING game without cracks:
1) Download the standalone installer and install it.
2) Play the game and have fun!
If you found any bug report or error, please make a request.
Thank you.
Instructions for installing in window mode:
1) Download the standalone installer and install it.
2) Play the game and have fun!
IF YOU HAVE ANY PROBLEM, PLEASE MAKE A REQUEST!
Thank you.
How To Install and Crack Elden Ring:
System Requirements:
High-definition rendering required. Minimum system requirements: 3 GHz processor, 256 MB video card, and Windows XP, Windows Vista or Windows 7.
3
The Rendering Pipeline
Written by Caroline Begbie & Marius Horga
Now that you know a bit more about 3D models and rendering, it’s time to take a drive through the rendering pipeline. In this chapter, you’ll create a Metal app that renders a red cube. As you work your way through this chapter, you’ll get a closer look at the hardware that’s responsible for turning your 3D objects into the gorgeous pixels you see onscreen. First up, the GPU and CPU.
The GPU and CPU
Every computer comes equipped with a Graphics Processing Unit (GPU) and Central Processing Unit (CPU).
The GPU is a specialized hardware component that can process images, videos and massive amounts of data really fast. This operation is known as throughput and is measured by the amount of data processed in a specific unit of time. The CPU, on the other hand, manages resources and is responsible for the computer’s operations. Although the CPU can’t process huge amounts of data like the GPU, it can process many sequential tasks (one after another) really fast. The time necessary to process a task is known as latency.
The ideal setup includes low latency and high throughput. Low latency allows for the serial execution of queued tasks, so the CPU can execute the commands without the system becoming slow or unresponsive — and high throughput lets the GPU render videos and games asynchronously without stalling the CPU. Because the GPU has a highly parallelized architecture specialized in doing the same task repeatedly and with little or no data transfers, it can process larger amounts of data.
The following diagram shows the major differences between the CPU and GPU.
The CPU has a large cache memory and a handful of Arithmetic Logic Unit (ALU) cores. In contrast, the GPU has a small cache memory and many ALU cores. The low latency cache memory on the CPU is used for fast access to temporary resources. The ALU cores on the GPU handle calculations without saving partial results to memory.
The CPU typically has only a few cores, while the GPU has hundreds — even thousands of cores. With more cores, the GPU can split the problem into many smaller parts, each running on a separate core in parallel, which helps to hide latency. At the end of processing, the partial results are combined, and the final result is returned to the CPU. But cores aren’t the only thing that matters.
Besides being slimmed down, GPU cores also have special circuitry for processing geometry and are often called shader cores. These shader cores are responsible for the beautiful colors you see onscreen. The GPU writes an entire frame at a time to fit the full rendering window; it then proceeds to rendering the next frame as quickly as possible, so it can maintain a respectable frame rate.
The CPU continues to issue commands to the GPU, ensuring that the GPU always has work to do. However, at some point, either the CPU will finish sending commands or the GPU will finish processing them. To avoid stalling, Metal on the CPU queues up multiple commands in command buffers and will issue new commands, sequentially, for the next frame without waiting for the GPU to finish the previous frame. This means that no matter who finishes the work first, there will always be more work to do.
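The queueing behavior described above is commonly implemented with a counting semaphore that caps how many frames the CPU may encode ahead of the GPU. The sketch below is not part of this chapter's project; it simulates the idea in plain Swift, with a serial dispatch queue standing in for the GPU, and the in-flight count of 3 is an assumption.

```swift
import Dispatch

// The CPU may encode up to `maxFramesInFlight` command buffers
// before it has to wait for the GPU to finish one.
let maxFramesInFlight = 3
let frameSemaphore = DispatchSemaphore(value: maxFramesInFlight)
let gpuQueue = DispatchQueue(label: "simulated-gpu")  // stands in for the GPU
var framesEncoded = 0

func encodeFrame() {
  frameSemaphore.wait()      // blocks only when the GPU is 3 frames behind
  framesEncoded += 1
  // ... encode render commands into a command buffer here ...
  gpuQueue.async {
    // Plays the role of the command buffer's completion handler.
    frameSemaphore.signal()
  }
}

for _ in 0..<6 { encodeFrame() }
gpuQueue.sync {}             // drain the simulated GPU
print(framesEncoded)         // 6
```

In a real renderer, the completion handler is registered with commandBuffer.addCompletedHandler(_:), and the wait happens at the top of draw(in:).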
The GPU part of the graphics pipeline starts after the GPU receives all of the commands and resources. To get started with the rendering pipeline, you’ll set up these commands and resources in a new project.
The Metal Project
So far, you’ve been using Playgrounds to learn about Metal. Playgrounds are great for testing and learning new concepts, but it’s also important to understand how to set up a full Metal project using SwiftUI.
➤ In Xcode, create a new project using the Multiplatform App template.
➤ Name your project Pipeline, and fill out your team and organization identifier. Leave all of the checkbox options unchecked.
➤ Choose the location for your new project.
Excellent, you now have a fancy, new SwiftUI app. ContentView.swift is the main view for the app; this is where you’ll call your Metal view.
The MetalKit framework contains an MTKView, which is a special Metal rendering view. This is a UIView on iOS and an NSView on macOS. To interface with UIKit or Cocoa UI elements, you’ll use a Representable protocol that sits between SwiftUI and your MTKView. If you want to understand how this protocol works, you can find the information in our book, SwiftUI Apprentice.
This configuration is all rather complicated, so in the resources folder for this chapter, you’ll find a pre-made MetalView.swift.
➤ Drag this file into your project, making sure that you check all of the checkboxes so that you copy the file and add it to both targets.
➤ Open MetalView.swift.
MetalView is a SwiftUI View structure that contains the MTKView property and hosts the Metal view.
➤ Open ContentView.swift, and change:

Text("Hello, world!")
  .padding()

To:

VStack {
  MetalView()
    .border(Color.black, width: 2)
  Text("Hello, Metal!")
}
.padding()

Here, you add MetalView to the view hierarchy and give it a border.
➤ Build and run your application using either the macOS target or the iOS target.
You’ll see your hosted MTKView. The advantage of using SwiftUI is that it’s relatively easy to layer UI elements, such as the “Hello Metal” text here, underneath your Metal view.

You now have a choice. You can subclass MTKView and replace the MTKView in MetalView with the subclassed one. In this case, the subclass’s draw(_:) would get called every frame, and you’d put your drawing code in that method. However, in this book, you’ll set up a Renderer class that conforms to MTKViewDelegate and set Renderer as the delegate of MTKView. MTKView calls a delegate method every frame, and this is where you’ll place the necessary drawing code.
Note: If you’re coming from a different API world, you might be looking for a game loop construct. You do have the option of using CADisplayLink for timing, but Apple introduced MetalKit with its protocols to manage the game loop more easily.
The Renderer Class
➤ Create a new Swift file named Renderer.swift, and replace its contents with the following code:
import MetalKit

class Renderer: NSObject {
  init(metalView: MTKView) {
    super.init()
  }
}

extension Renderer: MTKViewDelegate {
  func mtkView(
    _ view: MTKView,
    drawableSizeWillChange size: CGSize
  ) {
  }

  func draw(in view: MTKView) {
    print("draw")
  }
}
Here, you create an initializer and make Renderer conform to MTKViewDelegate with the two MTKView delegate methods:
mtkView(_:drawableSizeWillChange:): Called every time the size of the window changes. This allows you to update render texture sizes and camera projection.
draw(in:): Called every frame. This is where you write your render code.
➤ Open MetalView.swift, and in MetalView, add a property to hold the renderer:
@State private var renderer: Renderer?
➤ Change body to:

var body: some View {
  MetalViewRepresentable(metalView: $metalView)
    .onAppear {
      renderer = Renderer(metalView: metalView)
    }
}
Here, you initialize the renderer when the metal view first appears.
Initialization
Just as you did in the first chapter, you need to set up the Metal environment.
Metal has a major advantage over OpenGL in that you’re able to instantiate some objects up-front rather than create them during each frame. The following diagram indicates some of the Metal objects you can create at the start of the app.
MTLDevice: The software reference to the GPU hardware device.
MTLCommandQueue: Responsible for creating and organizing MTLCommandBuffers every frame.
MTLLibrary: Contains the source code from your vertex and fragment shader functions.
MTLRenderPipelineState: Sets the information for the draw — such as which shader functions to use, what depth and color settings to use and how to read the vertex data.
MTLBuffer: Holds data — such as vertex information — in a form that you can send to the GPU.
Typically, you’ll have one MTLDevice, one MTLCommandQueue and one MTLLibrary object in your app. You’ll also have several MTLRenderPipelineState objects that will define the various pipeline states, as well as several MTLBuffers to hold the data. Before you can use these objects, however, you need to initialize them.
➤ Open Renderer.swift, and add these properties to Renderer:

static var device: MTLDevice!
static var commandQueue: MTLCommandQueue!
static var library: MTLLibrary!
var mesh: MTKMesh!
var vertexBuffer: MTLBuffer!
var pipelineState: MTLRenderPipelineState!
All of these properties are currently implicitly unwrapped optionals for convenience, but you can add error-checking later if you wish.
You’re using class properties for the device, the command queue and the library to ensure that only one of each exists. In rare cases, you may require more than one, but in most apps, one is enough.
➤ Still in Renderer.swift, add the following code to init(metalView:) before super.init():

guard
  let device = MTLCreateSystemDefaultDevice(),
  let commandQueue = device.makeCommandQueue()
else {
  fatalError("GPU not available")
}
Renderer.device = device
Renderer.commandQueue = commandQueue
metalView.device = device
This code initializes the GPU and creates the command queue.
➤ Finally, after super.init(), add this:

metalView.clearColor = MTLClearColor(
  red: 1.0,
  green: 1.0,
  blue: 0.8,
  alpha: 1.0)
metalView.delegate = self
This code sets metalView.clearColor to a cream color. It also sets Renderer as the delegate for metalView so that the view will call the MTKViewDelegate drawing methods.
➤ Build and run the app to make sure everything’s set up and working. If everything is good, you’ll see the SwiftUI view as before, and in the debug console, you’ll see the word “draw” repeatedly. Use this console statement to verify that your app is calling draw(in:) for every frame.
Note: You won’t see metalView’s cream color because you’re not asking the GPU to do any drawing yet.
Create the Mesh
You’ve already created a sphere and a cone using Model I/O; now it’s time to create a cube.
➤ In init(metalView:), before calling super.init(), add this:

// create the mesh
let allocator = MTKMeshBufferAllocator(device: device)
let size: Float = 0.8
let mdlMesh = MDLMesh(
  boxWithExtent: [size, size, size],
  segments: [1, 1, 1],
  inwardNormals: false,
  geometryType: .triangles,
  allocator: allocator)
do {
  mesh = try MTKMesh(mesh: mdlMesh, device: device)
} catch let error {
  print(error.localizedDescription)
}
This code creates the cube mesh, as you did in the previous chapter.
➤ Then, set up the MTLBuffer that contains the vertex data you’ll send to the GPU.
vertexBuffer = mesh.vertexBuffers[0].buffer
This code puts the mesh data in an MTLBuffer. Next, you need to set up the pipeline state so that the GPU will know how to render the data.
Set Up the Metal Library
First, set up the MTLLibrary and ensure that the vertex and fragment shader functions are present.
➤ Continue adding code before super.init():

// create the shader function library
let library = device.makeDefaultLibrary()
Renderer.library = library
let vertexFunction = library?.makeFunction(name: "vertex_main")
let fragmentFunction = library?.makeFunction(name: "fragment_main")
Here, you set up the default library with some shader function pointers. You’ll create these shader functions later in this chapter. Unlike OpenGL shaders, these functions are compiled when you compile your project, which is more efficient than compiling your functions on the fly. The result is stored in the library.
Create the Pipeline State
To configure the GPU’s state, you create a pipeline state object (PSO). This pipeline state can be a render pipeline state for rendering vertices, or a compute pipeline state for running a compute kernel.
➤ Continue adding code before super.init():

// create the pipeline state object
let pipelineDescriptor = MTLRenderPipelineDescriptor()
pipelineDescriptor.vertexFunction = vertexFunction
pipelineDescriptor.fragmentFunction = fragmentFunction
pipelineDescriptor.colorAttachments[0].pixelFormat =
  metalView.colorPixelFormat
pipelineDescriptor.vertexDescriptor =
  MTKMetalVertexDescriptorFromModelIO(mdlMesh.vertexDescriptor)
do {
  pipelineState = try device.makeRenderPipelineState(
    descriptor: pipelineDescriptor)
} catch let error {
  fatalError(error.localizedDescription)
}
The PSO holds a potential state for the GPU. The GPU needs to know its complete state before it can start managing vertices. Here, you set the two shader functions the GPU will call and the pixel format for the texture to which the GPU will write. You also set the pipeline’s vertex descriptor; this is how the GPU will know how to interpret the vertex data that you’ll present in the mesh data MTLBuffer.
Note: If you need to use a different data buffer layout or call different vertex or fragment functions, you’ll need additional pipeline states. Creating pipeline states is relatively time-consuming — which is why you do it up-front — but switching pipeline states during frames is fast and efficient.
The initialization is complete, and your project compiles. Next up, you’ll start on drawing your model.
Render Frames
MTKView calls draw(in:) for every frame; this is where you’ll set up your GPU render commands.
➤ In draw(in:), replace the print statement with:

guard
  let commandBuffer = Renderer.commandQueue.makeCommandBuffer(),
  let descriptor = view.currentRenderPassDescriptor,
  let renderEncoder = commandBuffer.makeRenderCommandEncoder(
    descriptor: descriptor)
else { return }
You’ll send a series of commands to the GPU contained in command encoders. In one frame, you might have multiple command encoders, and the command buffer manages these.
You create a render command encoder using a render pass descriptor. This contains the render target textures that the GPU will draw into. In a complex app, you may well have multiple render passes in one frame, with multiple target textures. You’ll learn how to chain render passes together later.
➤ Continue adding this code:
// drawing code goes here

// 1
renderEncoder.endEncoding()
// 2
guard let drawable = view.currentDrawable else {
  return
}
commandBuffer.present(drawable)
// 3
commandBuffer.commit()
Here’s a closer look at the code:
- After adding the GPU commands to a command encoder, you end its encoding.
- You present the view’s drawable texture to the GPU.
- When you commit the command buffer, you send the encoded commands to the GPU for execution.
Drawing
It’s time to set up the list of commands that the GPU will need to draw your frame. In other words, you’ll:
- Set the pipeline state to configure the GPU hardware.
- Give the GPU the vertex data.
- Issue a draw call using the mesh’s submesh groups.
➤ Still in draw(in:), replace the comment:

// drawing code goes here

With:

renderEncoder.setRenderPipelineState(pipelineState)
renderEncoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)

for submesh in mesh.submeshes {
  renderEncoder.drawIndexedPrimitives(
    type: .triangle,
    indexCount: submesh.indexCount,
    indexType: submesh.indexType,
    indexBuffer: submesh.indexBuffer.buffer,
    indexBufferOffset: submesh.indexBuffer.offset)
}
Great, you set up the GPU commands to set the pipeline state and the vertex buffer, and to perform the draw calls on the mesh’s submeshes. When you commit the command buffer at the end of draw(in:), you’re telling the GPU that the data and pipeline are ready, and it’s time for the GPU to take over.
The Render Pipeline
Are you ready to investigate the GPU pipeline? Great, let’s get to it!
In the following diagram, you can see the stages of the pipeline.
The graphics pipeline takes the vertices through multiple stages, during which the vertices have their coordinates transformed between various spaces.
Note: This chapter describes immediate-mode rendering (IMR) architecture. Apple’s chips for iOS since A11, and Silicon for macOS, use tile-based rendering (TBR). New Metal features are able to take advantage of TBR. However, for simplicity, you’ll start off with a basic understanding of general GPU architecture. If you want a preview of some differences, watch Apple’s WWDC 2020 video Bring your Metal app to Apple silicon Macs.
As a Metal programmer, you’re only concerned about the Vertex and Fragment Processing stages since they’re the only two programmable stages. Later in the chapter, you’ll write both a vertex shader and a fragment shader. For all the non-programmable pipeline stages, such as Vertex Fetch, Primitive Assembly and Rasterization, the GPU has specially designed hardware units to serve those stages.
1 - Vertex Fetch
The name of this stage varies among different graphics Application Programming Interfaces (APIs). For example, DirectX calls it Input Assembler.
To start rendering 3D content, you first need a scene. A scene consists of models that have meshes of vertices. One of the simplest models is the cube, which has six faces (12 triangles). As you saw in the previous chapter, you use a vertex descriptor to define the way vertices are read in, along with their attributes, such as position, texture coordinates, normal and color. You do have the option not to use a vertex descriptor and just send an array of vertices in an MTLBuffer; however, if you decide not to use one, you’ll need to know how the vertex buffer is organized ahead of time.
When the GPU fetches the vertex buffer, the MTLRenderCommandEncoder draw call tells the GPU whether the buffer is indexed. If the buffer is not indexed, the GPU assumes the buffer is an array, and it reads in one element at a time, in order.
In the previous chapter, you saw how Model I/O imports .obj files and sets up their buffers indexed by submesh. This indexing is important because vertices are cached for reuse. For example, a cube has 12 triangles and eight vertices (at the corners). If you don’t index, you’ll have to specify the vertices for each triangle and send 36 vertices to the GPU. This may not sound like a lot, but in a model that has several thousand vertices, vertex caching is important.
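You can quantify the saving from indexing with some quick arithmetic. This is a back-of-the-envelope sketch, not project code; it assumes a position-only vertex of three 32-bit floats and 16-bit indices.

```swift
// Unindexed: every one of the cube's 12 triangles supplies 3 vertices of its own.
// Indexed: the 8 corner vertices are stored once, plus 36 small indices.
let triangles = 12
let uniqueVertices = 8
let bytesPerVertex = 12   // float3 position: 3 floats × 4 bytes
let bytesPerIndex = 2     // UInt16

let unindexed = triangles * 3 * bytesPerVertex
let indexed = uniqueVertices * bytesPerVertex + triangles * 3 * bytesPerIndex
print(unindexed, indexed)  // 432 168
```

Even for a cube, the indexed layout is well under half the size, and the gap widens as meshes share more vertices between triangles.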
There is also a second cache for shaded vertices so that vertices that are accessed multiple times are only shaded once. A shaded vertex is one to which color was already applied. But that happens in the next stage.
A special hardware unit known as the Scheduler sends the vertices and their attributes on to the Vertex Processing stage.
2 - Vertex Processing
In the Vertex Processing stage, vertices are processed individually. You write code to calculate per-vertex lighting and color. More importantly, you send vertex coordinates through various coordinate spaces to reach their position in the final framebuffer.
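Each of those coordinate-space hops is a 4×4 matrix multiply applied to the vertex position. As a rough illustration (plain Swift arrays instead of the simd types a real renderer would use), here is the arithmetic for a single uniform-scale transform:

```swift
// Multiply a 4×4 column-major matrix by a 4-component position.
func transform(_ m: [[Float]], _ v: [Float]) -> [Float] {
  (0..<4).map { row in
    (0..<4).reduce(Float(0)) { sum, col in sum + m[col][row] * v[col] }
  }
}

// A uniform scale-by-2 matrix, stored as columns,
// applied to the vertex (1, 1, 1, 1).
let scaleMatrix: [[Float]] = [
  [2, 0, 0, 0],
  [0, 2, 0, 0],
  [0, 0, 2, 0],
  [0, 0, 0, 1],
]
let vertex: [Float] = [1, 1, 1, 1]
print(transform(scaleMatrix, vertex))  // [2.0, 2.0, 2.0, 1.0]
```

Chaining model, view and projection matrices is just repeated applications of this one operation.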
You briefly learned about shader functions and about the Metal Shading Language (MSL) in Chapter 1, “Hello, Metal!”. Now it’s time to see what happens under the hood at the hardware level.
Look at this diagram of the architecture of an AMD GPU:
Going top-down, the GPU has:
- 1 Graphics Command Processor: This coordinates the work processes.
- 4 Shader Engines (SE): An SE is an organizational unit on the GPU that can serve an entire pipeline. Each SE has a geometry processor, a rasterizer and Compute Units.
- 9 Compute Units (CU): A CU is nothing more than a group of shader cores.
- 64 shader cores: A shader core is the basic building block of the GPU where all of the shading work is done.
In total, the 36 CUs have 2,304 shader cores. Compare that to the number of cores in your 8-core CPU.
For mobile devices, the story is a little different. For comparison, look at the following image showing a GPU similar to those in recent iOS devices. Instead of having SEs and CUs, the PowerVR GPU has Unified Shading Clusters (USC).
This particular GPU model has 6 USCs and 32 cores per USC for a total of only 192 cores.
Note: The iPhone X had the first mobile GPU entirely designed in-house by Apple. As it turns out, Apple has not made the GPU hardware specifications public.
So what can you do with that many cores? Since these cores are specialized in both vertex and fragment shading, one obvious thing to do is give all the cores work to do in parallel so that the processing of vertices or fragments is done faster. There are a few rules, though.
Inside a CU, you can only process either vertices or fragments at one time. (Good thing there are thirty-six of those!) Another rule is that you can only process one shader function per SE. Having four SEs lets you combine work in interesting and useful ways. For example, you can run one fragment shader on one SE and a second fragment shader on a second SE at the same time. Or you can separate your vertex shader from your fragment shader and have them run in parallel but on different SEs.
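This same-program, many-data style of execution can be mimicked on the CPU. The sketch below is purely illustrative (Dispatch worker threads standing in for shader cores): it runs one tiny per-element "shader" across a buffer in parallel.

```swift
import Dispatch

// One small kernel applied to every element in parallel,
// the way a vertex shader runs across many shader cores.
var positions = [Float](repeating: 1.0, count: 1_000)
let scale: Float = 2.0

positions.withUnsafeMutableBufferPointer { buffer in
  let ptr = buffer.baseAddress!
  DispatchQueue.concurrentPerform(iterations: buffer.count) { i in
    ptr[i] *= scale   // the "shader": same code, different data per element
  }
}
print(positions.first!, positions.last!)   // 2.0 2.0
```

On a GPU the parallelism is far wider and the scheduling is done in hardware, but the programming model — one function, many independent invocations — is the same.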
Creating a Vertex Shader
It’s time to see vertex processing in action. The vertex shader you’re about to write is minimal, but it encapsulates most of the necessary vertex shader syntax you’ll need in this and subsequent chapters.
➤ Create a new file using the Metal File template, and name it Shaders.metal. Then, add this code at the end of the file:
// 1
struct VertexIn {
  float4 position [[attribute(0)]];
};

// 2
vertex float4 vertex_main(const VertexIn vertexIn [[stage_in]]) {
  return vertexIn.position;
}
Going through the code:
- Create a struct VertexIn to describe the vertex attributes that match the vertex descriptor you set up earlier. In this case, just position.
- Implement a vertex shader, vertex_main, that takes in VertexIn structs and returns vertex positions as float4 types.
Remember that vertices are indexed in the vertex buffer. The vertex shader gets the current vertex index via the [[stage_in]] attribute and unpacks the VertexIn structure cached for the vertex at the current index.
Compute Units can process (at one time) batches of vertices up to their maximum number of shader cores. This batch can fit entirely in the CU cache and vertices can thus be reused as needed. The batch will keep the CU busy until the processing is done but other CUs should become available to process the next batch.
As soon as the vertex processing is done, the cache is cleared for the next batches of vertices. At this point, vertices are now ordered and grouped, ready to be sent to the primitive assembly stage.
To recap, the CPU sent the GPU a vertex buffer that you created from the model’s mesh. You configured the vertex buffer using a vertex descriptor that tells the GPU how the vertex data is structured. On the GPU, you created a structure to encapsulate the vertex attributes. The vertex shader takes in this structure as a function argument, and through the [[stage_in]] qualifier, acknowledges that position comes from the CPU via the [[attribute(0)]] position in the vertex buffer. The vertex shader then processes all of the vertices and returns their positions as a float4.
Note: When you use a vertex descriptor with attributes, you don’t have to match types. The MTLBuffer position is a float3, whereas VertexIn defines the position as a float4.
A special hardware unit known as the Distributer sends the grouped blocks of vertices on to the Primitive Assembly stage.
3 - Primitive Assembly
The previous stage sent processed vertices grouped into blocks of data to this stage. The important thing to keep in mind is that vertices belonging to the same geometrical shape (primitive) are always in the same block. That means that the one vertex of a point, or the two vertices of a line, or the three vertices of a triangle, will always be in the same block, hence a second block fetch isn’t necessary.
Along with vertices, the CPU also sends vertex connectivity information when it issues the draw call command, like this:
renderEncoder.drawIndexedPrimitives(
  type: .triangle,
  indexCount: submesh.indexCount,
  indexType: submesh.indexType,
  indexBuffer: submesh.indexBuffer.buffer,
  indexBufferOffset: 0)
The first argument of the draw function contains the most important information about vertex connectivity. In this case, it tells the GPU that it should draw triangles from the vertex buffer it sent.
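To make that connectivity concrete, here's a small sketch, outside of Metal and in plain Python, of how a GPU conceptually expands an index buffer into a triangle list. The function name is made up for illustration:

```python
def expand_triangles(indices):
    # Group every three consecutive indices into one triangle;
    # leftover indices that can't form a full triangle are ignored.
    return [tuple(indices[i:i + 3]) for i in range(0, len(indices) - 2, 3)]

# Four vertices (a quad) drawn as two triangles sharing an edge.
quad_indices = [0, 1, 2, 2, 1, 3]
print(expand_triangles(quad_indices))  # [(0, 1, 2), (2, 1, 3)]
```

Indexing lets the two triangles share vertices 1 and 2 instead of duplicating them in the vertex buffer.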
The Metal API provides five primitive types:
- point: For each vertex, rasterize a point. You can specify the size of a point that has the attribute [[point_size]] in the vertex shader.
- line: For each pair of vertices, rasterize a line between them. If a vertex was already included in a line, it cannot be included again in other lines. The last vertex is ignored if there are an odd number of vertices.
- lineStrip: Same as a simple line, except that the line strip connects all adjacent vertices and forms a poly-line. Each vertex (except the first) is connected to the previous vertex.
- triangle: For every sequence of three vertices, rasterize a triangle. The last vertices are ignored if they cannot form another triangle.
- triangleStrip: Same as a simple triangle, except adjacent vertices can be connected to other triangles as well.
There is one more primitive type known as a patch, but this needs special treatment. You’ll read more about patches in Chapter 19, “Tessellation & Terrains”.
As you read in the previous chapter, the pipeline specifies the winding order of the vertices. If the winding order is counter-clockwise, and the triangle vertex order is counter-clockwise, the vertices are front-faced; otherwise, the vertices are back-faced and can be culled since you can’t see their color and lighting. Primitives are culled when they’re entirely occluded by other primitives. However, if they’re only partially off-screen, they’ll be clipped.
For efficiency, you should set winding order and enable back-face culling in the pipeline state.
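The winding test itself comes down to the sign of a 2-D cross product. Here's a minimal sketch in plain Python; the function names are mine, not Metal's:

```python
def signed_area(a, b, c):
    # Twice the signed area of triangle abc; positive when the
    # vertices a -> b -> c wind counter-clockwise.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def is_front_facing(a, b, c, front_winding="ccw"):
    # A triangle is front-facing when its winding matches the
    # pipeline's configured front-face winding order.
    area = signed_area(a, b, c)
    return area > 0 if front_winding == "ccw" else area < 0

# Counter-clockwise triangle: front-facing under CCW winding.
print(is_front_facing((0, 0), (1, 0), (0, 1)))  # True
# Same triangle listed clockwise: back-facing, would be culled.
print(is_front_facing((0, 0), (0, 1), (1, 0)))  # False
```

Back-face culling simply skips any triangle for which this test fails.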
At this point, primitives are fully assembled from connected vertices and are ready to move on to the rasterizer.
4 - Rasterization
There are two modern rendering techniques currently evolving on separate paths but sometimes used together: ray tracing and rasterization. They are quite different, and both have pros and cons. Ray tracing — which you’ll read more about in Chapter 27, “Rendering With Rays” — is preferred when rendering content that is static and far away, while rasterization is preferred when the content is closer to the camera and more dynamic.
With ray tracing, for each pixel on the screen, you send a ray into the scene to see if it intersects an object. If it does, you change the pixel color to that object's color, but only if the object is closer to the screen than the previously saved object for the current pixel.
Rasterization works the other way around. For each object in the scene, send rays back into the screen and check which pixels are covered by the object. Depth information is kept the same way as for ray tracing, so it will update the pixel color if the current object is closer than the previously saved one.
At this point, all connected vertices sent from the previous stage need to be represented on a two-dimensional grid using their X and Y coordinates. This step is known as the triangle setup. Here is where the rasterizer needs to calculate the slope or steepness of the line segments between any two vertices. When the three slopes for the three vertices are known, the triangle can be formed from these three edges.
Next, a process known as scan conversion runs on each line of the screen to look for intersections and to determine what’s visible and what’s not. To draw on the screen at this point, you need only the vertices and the slopes they determine.
The scan algorithm determines if all the points on a line segment or all the points inside of a triangle are visible, in which case the triangle is filled with color entirely.
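As an illustration of the idea, and not of the GPU's actual hardware algorithm, here is a tiny scan-conversion sketch in Python that tests every pixel center of a small grid against the triangle's three edges:

```python
def edge(a, b, p):
    # Edge function: positive when p is to the left of the line a -> b.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize(tri, width, height):
    a, b, c = tri
    covered = []
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)  # sample at the pixel center
            w0, w1, w2 = edge(a, b, p), edge(b, c, p), edge(c, a, p)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:  # inside all three edges
                covered.append((x, y))
    return covered

pixels = rasterize(((0, 0), (4, 0), (0, 4)), 4, 4)
print(len(pixels))  # 10 pixels covered by this right triangle
```

Real GPUs evaluate these edge functions for many pixels at once, which is exactly the kind of small, repetitive work the shader cores are built for.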
For mobile devices, the rasterization takes advantage of the tiled architecture of PowerVR GPUs by rasterizing the primitives on a 32x32 tile grid in parallel. In this case, 32 is the number of screen pixels assigned to a tile, but this size perfectly fits the number of cores in a USC.
What if one object is behind another object? How can the rasterizer determine which object to render? This hidden surface removal problem can be solved by using stored depth information (early-Z testing) to determine whether each point is in front of other points in the scene.
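The core of that depth test is tiny. A sketch, in Python rather than GPU hardware, of a depth buffer keeping the closest fragment per pixel:

```python
def depth_test(fragments, width, height, far=1.0):
    # Each fragment is (x, y, depth, color); smaller depth = closer.
    depth = [[far] * width for _ in range(height)]
    color = [[None] * width for _ in range(height)]
    for x, y, z, c in fragments:
        if z < depth[y][x]:  # fragment is closer than what's stored
            depth[y][x] = z
            color[y][x] = c
    return color

frags = [(0, 0, 0.8, "red"), (0, 0, 0.3, "blue"), (0, 0, 0.5, "green")]
print(depth_test(frags, 1, 1)[0][0])  # blue (closest fragment wins)
```

Doing this comparison before shading (early-Z) means occluded fragments never cost any fragment shader time.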
After rasterization is finished, three more specialized hardware units take the stage:
- A buffer known as Hierarchical-Z is responsible for removing fragments that were marked for culling by the rasterizer.
- The Z and Stencil Test unit then removes non-visible fragments by comparing them against the depth and stencil buffer.
- Finally, the Interpolator unit takes the remaining visible fragments and generates fragment attributes from the assembled triangle attributes.
At this point, the Scheduler unit, again, dispatches work to the shader cores, but this time it’s the rasterized fragments sent for Fragment Processing.
5 - Fragment Processing
Time for a quick review of the pipeline.
- The Vertex Fetch unit grabs vertices from the memory and passes them to the Scheduler unit.
- The Scheduler unit knows which shader cores are available, so it dispatches work on them.
- After the work is done, the Distributer unit knows if this work was Vertex or Fragment Processing. If the work was Vertex Processing, it sends the result to the Primitive Assembly unit. This path continues to the Rasterization unit, and then back to the Scheduler unit. If the work was Fragment Processing, it sends the result to the Color Writing unit.
- Finally, the colored pixels are sent back to the memory.
The primitive processing in the previous stages is sequential because there’s only one Primitive Assembly unit and one Rasterization unit. However, as soon as fragments reach the Scheduler unit, work can be forked (divided) into many tiny parts, and each part is given to an available shader core.
Hundreds or even thousands of cores are now doing parallel processing. When the work is complete, the results will be joined (merged) and sent to the memory, again sequentially.
The fragment processing stage is another programmable stage. You create a fragment shader function that will receive the lighting, texture coordinate, depth and color information that the vertex function outputs. The fragment shader output is a single color for that fragment. Each of these fragments will contribute to the color of the final pixel in the framebuffer. All of the attributes are interpolated for each fragment.
For example, to render this triangle, the vertex function would process three vertices with the colors red, green and blue. As the diagram shows, each fragment that makes up this triangle is interpolated from these three colors. Linear interpolation simply averages the color at each point on the line between two endpoints. If one endpoint has red color, and the other has green color, the midpoint on the line between them will be yellow. And so on.
The interpolation equation is parametric and has this form, where parameter p is the percentage (or a range from 0 to 1) of a color’s presence:
newColor = p * oldColor1 + (1 - p) * oldColor2
Color is easy to visualize, but the other vertex function outputs are also similarly interpolated for each fragment.
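The formula above is easy to check in a few lines of plain Python, just for illustration:

```python
def lerp(p, c1, c2):
    # newColor = p * oldColor1 + (1 - p) * oldColor2, per channel
    return tuple(p * a + (1 - p) * b for a, b in zip(c1, c2))

red, green = (1, 0, 0), (0, 1, 0)
print(lerp(0.5, red, green))  # (0.5, 0.5, 0.0): yellow at half intensity
```

The GPU's interpolator unit applies exactly this kind of weighted average to every vertex output, not just color.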
Note: If you don’t want a vertex output to be interpolated, add the attribute [[flat]] to its definition.
Creating a Fragment Shader
➤ In Shaders.Metal, add the fragment function to the end of the file:
fragment float4 fragment_main() {
  return float4(1, 0, 0, 1);
}
This is the simplest fragment function possible. You return the interpolated color red in the form of a float4. All the fragments that make up the cube will be red. The GPU takes the fragments and does a series of post-processing tests:
- alpha-testing determines which opaque objects are drawn (and which are not) based on depth testing.
- In the case of translucent objects, alpha-blending will combine the color of the new object with that already saved in the color buffer previously.
- scissor testing checks whether a fragment is inside of a specified rectangle; this test is useful for masked rendering.
- stencil testing compares the stencil value in the framebuffer, where the fragment is to be stored, against a specified reference value that we choose.
- In the previous stage, early-Z testing ran; now, late-Z testing is done to resolve more visibility issues; stencil and depth tests are also useful for ambient occlusion and shadows.
- Finally, antialiasing is also calculated here so that final images that get to the screen do not look jagged.
You’ll learn more about post-processing tests in Chapter 20, “Fragment Post-Processing”.
6 - Framebuffer
As soon as fragments have been processed into pixels, the Distributer unit sends them to the Color Writing unit. This unit is responsible for writing the final color in a special memory location known as the framebuffer. From here, the view gets its colored pixels refreshed every frame. But does that mean the color is written to the framebuffer while being displayed on the screen?
A technique known as double-buffering is used to solve this situation. While the first buffer is being displayed on the screen, the second one is updated in the background. Then, the two buffers are swapped, and the second one is displayed on the screen while the first one is updated, and the cycle continues.
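The swap described above can be sketched in a few lines. This is illustrative Python, not the Metal API:

```python
class DoubleBuffer:
    # Draw into the back buffer while the front buffer is on screen,
    # then swap the two for the next frame.
    def __init__(self, size):
        self.front = [0] * size  # currently displayed
        self.back = [0] * size   # being rendered
    def render(self, pixels):
        self.back[:] = pixels
    def present(self):
        self.front, self.back = self.back, self.front

fb = DoubleBuffer(4)
fb.render([1, 2, 3, 4])  # screen still shows the old frame
print(fb.front)          # [0, 0, 0, 0]
fb.present()             # buffers swap
print(fb.front)          # [1, 2, 3, 4]
```

Because only pointers are swapped, the screen never shows a half-written frame.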
Whew! That was a lot of hardware information to take in. However, the code you’ve written is what every Metal renderer uses, and despite just starting out, you should begin to recognize the rendering process when you look at Apple’s sample code.
➤ Build and run the app, and you’ll see a beautifully rendered red cube:
Notice how the cube is not square. Remember that Metal uses Normalized Device Coordinates (NDC), which range from -1 to 1 on the X axis. Resize your window, and the cube will maintain a size relative to the size of the window. In Chapter 6, “Coordinate Spaces”, you’ll learn how to position objects precisely on the screen.
What an incredible journey you’ve had through the rendering pipeline. In the next chapter, you’ll explore vertex and fragment shaders in greater detail.
Challenge
Using the train.usd model in the resources folder for this project, replace the cube with this train. When importing the model, be sure to select Create Groups and remember to add the model to both targets.
Instead of changing the model’s vertical position in the SceneKit editor, change it in the vertex function using this code:
float4 position = vertexIn.position;
position.y -= 1.0;
Finally, color your train blue.
Refer to the previous chapter for asset loading and the vertex descriptor code if you need help. The finished code for this challenge is in the project challenge directory for this chapter.
Key Points
- CPUs are best for processing sequential tasks fast, whereas GPUs excel at processing many small tasks in parallel.
- SwiftUI is a great host for MTKViews, as you can layer UI elements easily.
- Separate out Metal setup tasks to the initialization phase where you can. Initialize the device, command queues, pipeline states and model data buffers once at the start of your app.
- Each frame, create a command buffer and one or more command encoders.
- GPU architecture allows for a strict pipeline. Configure this using PSOs (pipeline state objects).
- There are two programmable stages in a simple rendering GPU pipeline. You calculate vertex positions using the vertex shader, and calculate the color that appears on the screen using the fragment shader. | https://www.raywenderlich.com/books/metal-by-tutorials/v3.0/chapters/3-the-rendering-pipeline | CC-MAIN-2022-21 | refinedweb | 5,813 | 64.41 |
I'm using the following code for the signature:
def new
imgFormat = System::get_property('platform') == 'WINDOWS' ? "bmp" : "jpg"
Rho::Signature.takeFullScreen({ :imageFormat => imgFormat, :penColor => 0x0066FF, :penWidth=>5, :border => true, :bgColor => '#ffffff' }, url_for(:action => :signature_callback))
render :action => :show_signature
end
It seems that the default path of the signature image has been changed from Rho 4.0 to Rho 4.1.
Now the Path in 4.1 is:
in 4.0 it was :
data/data/com.xxx.xxx/rhodata/db/dbfiles/signature.png
The image is saved to that path, but I'm not able to display it. I tried to copy the image file as in a solution that I found on StackOverflow, but I got an error saying that the file is not found.

Any help?
Here's a more complete version. It works with
> matplotlib-0.54 now, and I've tested it with a few of the
> example scripts, though I'm sure there are bugs. It
> doesn't do images or mathtext yet.
I just tried it and actually got simple_plot to work - excellent!
I'm about to do a bug fix release of 0.54. Would you like to release
the code under the matplotlib license? If so I'll include it, but
hold off on announcing it until you are ready.
Another thing that might be useful for SVG is to add grouping
elements. Eg, in axes.draw we could add
def draw(...):
    renderer.begin_group('axes')
    # ...plot a bunch of stuff
    renderer.end_group('axes')
backend renderers that don't support groupings could just pass on
these calls, but for those who do, like svg, we could add the
appropriate grouping commands. Ditto for legends, axis, etc. This
might help the matplotlib svg output play nicely with svg editors.
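On the SVG side those calls could map straight onto nested <g> elements. A rough sketch of what a grouping-aware renderer might look like; the class and method bodies are only illustrative, not actual matplotlib API:

```python
class SVGGroupWriter:
    # Minimal sketch: begin_group/end_group emit nested <g> tags,
    # drawing commands land inside the currently open group.
    def __init__(self):
        self.parts = []
    def begin_group(self, name):
        self.parts.append('<g id="%s">' % name)
    def end_group(self, name):
        self.parts.append('</g>')
    def draw_line(self, x1, y1, x2, y2):
        self.parts.append('<line x1="%g" y1="%g" x2="%g" y2="%g"/>'
                          % (x1, y1, x2, y2))

w = SVGGroupWriter()
w.begin_group('axes')
w.draw_line(0, 0, 1, 1)
w.end_group('axes')
print(''.join(w.parts))
# <g id="axes"><line x1="0" y1="0" x2="1" y2="1"/></g>
```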
JDH | https://discourse.matplotlib.org/t/svg-backend/794 | CC-MAIN-2021-43 | refinedweb | 168 | 78.04 |
A photo of the hardware setup is attached. The white clip on the black lead is simulating a pressed button with a 10K pull-down resistor.
Another photo of the makecode is attached. Given that the "button" is attached (by clipping white clip to black lead), the neopixels should all be blue (as in the photo). But, if the program runs some (anywhere from 2 to 6 minutes), eventually pin 8 goes low and stays low, so the neopixels go to red.
I have tried this with multiple circuit playgrounds and on two different crickits.
To test the hardware and the button simulator connections, running this circuit python code works correctly and does not exhibit this behavior of working at first and then failing after a few minutes.
All help will be appreciated. In not too many days, students will start using MakeCode, and the last thing I want is to frustrate them with hardware that doesn't work. Hopefully, it is something with my button simulator.
Here is the circuit python code:
import time
from adafruit_circuitplayground.express import cpx
from adafruit_crickit import crickit
ss = crickit.seesaw
BUTTON_8 = crickit.SIGNAL8
ss.pin_mode(BUTTON_8, ss.INPUT_PULLDOWN)
while True:
button_8_connected = ss.digital_read(BUTTON_8)
if button_8_connected:
cpx.pixels.fill((0, 0, 10))
else:
cpx.pixels.fill((10, 0, 0))
time.sleep(0.25) | https://forums.adafruit.com/viewtopic.php?f=58&p=689631 | CC-MAIN-2019-13 | refinedweb | 224 | 59.19 |
SPU_RUN(2) Linux Programmer's Manual SPU_RUN(2)
NAME
       spu_run - execute an SPU context

SYNOPSIS
       #include <sys/spu.h>

       int spu_run(int fd, unsigned int *npc, unsigned int *event);

       Note: There is no glibc wrapper for this system call; see NOTES.

DESCRIPTION
       Execution of SPU code happens synchronously, meaning that spu_run() blocks while the SPU is still running. If there is a need to execute SPU code in parallel with other code on either the main CPU or other SPUs, a new thread of execution must be created first (e.g., using pthread_create(3)).

       When spu_run() returns, the current value of the SPU program counter is written to npc, so successive calls to spu_run() can use the same npc pointer.

       The event argument provides a buffer for an extended status code. If the SPU context was created with the SPU_CREATE_EVENTS_ENABLED flag, then this buffer is populated by the Linux kernel before spu_run() returns.

       The status code may be one (or more) of the following constants:
VERSIONS
       The spu_run() system call was added to Linux in kernel 2.6.16.
CONFORMING TO
       This call is Linux-specific and implemented only by the PowerPC architecture. Programs using this system call are not portable.
NOTES
       Glibc does not provide a wrapper for this system call; call it using syscall(2). Note, however, that spu_run() is meant to be used from libraries that implement a more abstract interface to SPUs, not to be used from regular applications. See ⟨⟩ for the recommended libraries.
EXAMPLE
       The following is an example of running a simple, one-instruction SPU program with the spu_run() system call.

       #include <stdlib.h>
       #include <stdint.h>
       #include <unistd.h>
       #include <stdio.h>
       #include <sys/types.h>
       #include <fcntl.h>

       #define handle_error(msg) \
           do { perror(msg); exit(EXIT_FAILURE); } while (0)

       int
       main(void)
       {
           int context, fd, spu_status;
           uint32_t instruction, npc;

           context = spu_create("/spu) handle_error("open");
           write(fd, &instruction, sizeof(instruction));

           /* set npc to the starting instruction address of the
            * SPU program. Since we wrote the instruction at the
            * start of the mem file, the entry point will be 0x0 */
           npc = 0;

           spu_status = spu_run(context, &npc, NULL);
           if (spu_status == );
       }
SEE ALSO
       close(2), spu_create(2), capabilities(7), spufs(7)
COLOPHON
       This page is part of release 4.07 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.

Linux                          2012-08-05                          SPU_RUN(2)
Yikes, can't believe i didn't catch that. Thanks!
So far, my loop will print the first element and then it errors out. Any suggestions would be great.
public class Test {
/**
* @param args
*/
public static void main(String[] args)...
If I enter "y" when prompted to play again, I get the "good bye" message. What is happening?
public static void main(String args[])
{
Scanner in = new Scanner(System.in);
...
I'll try to keep this short.
I need to access ParkedCar.minutesParked and ParkingMeter.minutesPurchased from ParkingTicket.calculateFine() and I keep getting an error that the variable isn't... | http://www.javaprogrammingforums.com/search.php?s=8c32820cfb243781aac258f86afd04f9&searchid=1515223 | CC-MAIN-2015-18 | refinedweb | 114 | 72.02 |
Background
The idea of this article originated from a contest (Petrozavodsk Summer-2016. Petr Mitrichev Contest 14), which I believe is attributed to Petr. In this contest, an interesting problem is proposed:
"Cosider this process: pick a random number ni uniformly at random between 10 and 100. Generate ni random points with integer coordinates, picking each coordinate independently and uniformly at random from all integers between 0 and 109, inclusive. Find the convex hull of those points.
Now you are given 10000 polygons generated by this program. For each polygon, you need to guess the value ni that was used for generating it.
Your answer will be accepted if the average (over all 10000 hulls) absolute difference between the natural logarithm of your guess and the natural logarithm of the true ni is below 0.2."
Unfortunately, I didn't really manage to work this one out during our 5-hour training session. After the training was over, however, I tried to read the solution program written by Petr, which looks like the following:
//...
public class h {
    static int[] splitBy = new int[] {/* 1000 seemingly-random elements */};
    static double[] splitVal = new double[] {/* another 1000 seemingly-arbitrarily-chosen elements */};
    static double[] adjYes = new double[] {/* Another 1000 seemingly-stochastically-generated elements */};
    static double[] adjNo = new double[] {/* ANOTHER 1000 seemingly-... elements, I'm really at my wit's end */};

    public static void main(String[] args) {
        /* Process the convex hull, so that
           key.data[0] is the average length of the convex hull to four sides
                       of the square border (i.e. (0, 0) - (1E9, 1E9));
           key.data[1] is the area of the hull;
           key.data[2] is the number of points on the hull. */
        double res = 0;
        for (int ti = 0; ti < splitBy.length; ++ti) {
            if (key.data[splitBy[ti]] >= splitVal[ti]) {
                res += adjYes[ti];
            } else {
                res += adjNo[ti];
            }
        }
        int guess = (int) Math.round (res);
        if (guess < 10) guess = 10;
        if (guess > 100) guess = 100;
        pw.println (guess);
    }
}
While I was struggling to understand where all the "magic numbers" came from, I did realize that the whole program is somewhat akin to a "features to output" black box, which is extensively studied in machine learning. So, I made my own attempt at building a learner that can solve the above problem.
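In other words, the solution reads like an ensemble of decision stumps: each of the 1000 entries tests one feature against a threshold and nudges the running estimate one way or the other. A toy Python re-creation of that evaluation loop, with made-up tables standing in for the real 1000-element arrays:

```python
def stump_ensemble_predict(features, split_by, split_val, adj_yes, adj_no):
    # Mirror of the Java loop: every stump looks at one feature,
    # compares it against a threshold and adds its adjustment.
    res = 0.0
    for feat, thr, yes, no in zip(split_by, split_val, adj_yes, adj_no):
        res += yes if features[feat] >= thr else no
    return min(max(round(res), 10), 100)  # clamp into [10, 100]

# Tiny made-up model with three stumps over two features.
print(stump_ensemble_predict(
    [5.0, 2.0],
    split_by=[0, 1, 0], split_val=[4.0, 3.0, 6.0],
    adj_yes=[30, 20, 25], adj_no=[10, 5, 15]))  # 30 + 5 + 15 -> 50
```

How the real thresholds and adjustments were fitted is exactly the part the magic numbers hide.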
A lightweight learner
Apparently, most online judges simply do not support scikit-learn or tensorflow, which are common machine learning libraries in Python. (Nor a 100MB model file; just imagining 500 users with a 100MB file each makes my head ache. And yes, there are even multiple submissions.) Therefore, some handcrafted code is necessary to implement a learner that is easy to use.
As a university student, I unfortunately do not know much about machine learning, especially about regression, where even fewer methods are applicable. However, after some googling I got particularly attracted to the idea of the neural network, both because of its wide application and because of its simplicity, especially the fact that its core code can be written in about 60 lines. I will introduce some of its basic mechanisms below.
Understanding neural networks in one section
A neural network, naturally, is made up of neurons. A neuron in a neural network, by definition, is something that maps an input vector x to an output y. Specifically, a neuron consists of a weight vector w with the same length as x, a bias constant b, and some mapping function f, and we compute its output with the following simple formula:

y = f(w · x + b)
In general, we tend to use the sigmoid function as f, for reasons we will see below. The other values are the "tweakable" parts of a neuron that get adjusted according to the data; that adjustment is the process of learning.
Turning back to the topic of a neural network, it will contain several layers of neurons, where each layer reads the input from the previous layer (the first layer, of course, from the input) and outputs the result as the inputs of the next layer (or the final answer, if it is the last layer). As such, it will not be very difficult to implement a neuron network if all parameters are given, since copying the formula above will suffice.
However, how do we know what these parameters are, anyway? Well, one common way is called "gradient descent". With this method, you imagine a hyperspace with each parameter of a neuron network as an axis. Then, each point in this hyperspace actually represents a neuron network. If we can give every point (neuron network) an error value that indicates how far it is away from the ideal model, then we can simply pick a random point and begin walking (descending) towards a direction where the error value is decreasing. In the end we will reach a point where the error value is very small, which represents a good neural network that we want. As you can guess, in practice we generate a random data (a random convex hull, regarding the problem above) and dictate the error value to be the square of the difference between the output of the network and the real answer, and walk one "step" towards the smaller error value. If sufficient data is generated, then this "stochastic gradient descent" method should approximately be equal to "gradient descent".
Now I claim that, because the sigmoid function is differentiable at every point, some difficult-to-understand maths lets us figure out the best direction in which the error value decreases the fastest, without actually having to explore around. This extensively studied field is named Backpropagation, and simply copying the result from Wikipedia is sufficient for us to build a neural network ourselves.
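Before the full code, here is the whole loop at its smallest scale: a single sigmoid neuron trained by stochastic gradient descent on a toy dataset. This is my own illustration, not part of the contest solution:

```python
import math, random

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Train one neuron y = sigmoid(w*x + b) to output 0 for x=0 and 1 for x=1.
random.seed(1)
w, b, eta = random.random(), random.random(), 1.0
data = [(0.0, 0.0), (1.0, 1.0)]
for _ in range(5000):
    x, t = random.choice(data)  # stochastic: one random sample per step
    y = sigmoid(w * x + b)
    # dE/dw for E = (y - t)^2 / 2, using sigmoid'(z) = y * (1 - y)
    grad = (y - t) * y * (1 - y)
    w -= eta * grad * x
    b -= eta * grad
print(sigmoid(b), sigmoid(w + b))  # close to 0.0 and 1.0 respectively
```

The full network below is the same idea, just with the gradient propagated backwards through several layers of such neurons.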
I will show my code regarding the aforementioned problem below:
#include <bits/stdc++.h>

// ft inputs, n layers, m neurons per layer.
template <int ft = 3, int n = 2, int m = 3, int MAXDATA = 100000>
struct network {
    double wp[n][m][ft /* or m, if larger */], bp[n][m], w[m], b,
           val[n][m], del[n][m], avg[ft + 1], sig[ft + 1];

    network () {
        std::mt19937_64 mt (time (0));
        std::uniform_real_distribution <double> urdn (0, 2 * sqrt (m));
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < m; ++j)
                for (int k = 0; k < (i ? m : ft); ++k)
                    wp[i][j][k] = urdn (mt);
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < m; ++j)
                bp[i][j] = urdn (mt);
        for (int i = 0; i < m; ++i)
            w[i] = urdn (mt);
        b = urdn (mt);
        for (int i = 0; i < ft + 1; ++i)
            avg[i] = sig[i] = 0;
    }

    double compute (double *x) {
        for (int j = 0; j < m; ++j) {
            val[0][j] = bp[0][j];
            for (int k = 0; k < ft; ++k)
                val[0][j] += wp[0][j][k] * x[k];
            val[0][j] = 1 / (1 + exp (-val[0][j]));
        }
        for (int i = 1; i < n; ++i)
            for (int j = 0; j < m; ++j) {
                val[i][j] = bp[i][j];
                for (int k = 0; k < m; ++k)
                    val[i][j] += wp[i][j][k] * val[i - 1][k];
                val[i][j] = 1 / (1 + exp (-val[i][j]));
            }
        double res = b;
        for (int i = 0; i < m; ++i)
            res += val[n - 1][i] * w[i];
        // return 1 / (1 + exp (-res));
        return res;
    }

    void desc (double *x, double t, double eta) {
        double o = compute (x), delo = (o - t); // * o * (1 - o)
        for (int j = 0; j < m; ++j)
            del[n - 1][j] = w[j] * delo * val[n - 1][j] * (1 - val[n - 1][j]);
        for (int i = n - 2; i >= 0; --i)
            for (int j = 0; j < m; ++j) {
                del[i][j] = 0;
                for (int k = 0; k < m; ++k)
                    del[i][j] += wp[i + 1][k][j] * del[i + 1][k] * val[i][j] * (1 - val[i][j]);
            }
        for (int j = 0; j < m; ++j)
            bp[0][j] -= eta * del[0][j];
        for (int j = 0; j < m; ++j)
            for (int k = 0; k < ft; ++k)
                wp[0][j][k] -= eta * del[0][j] * x[k];
        for (int i = 1; i < n; ++i)
            for (int j = 0; j < m; ++j)
                bp[i][j] -= eta * del[i][j];
        for (int i = 1; i < n; ++i)
            for (int j = 0; j < m; ++j)
                for (int k = 0; k < m; ++k)
                    wp[i][j][k] -= eta * del[i][j] * val[i - 1][k];
        b -= eta * delo;
        // for (int i = 0; i < m; ++i) w[i] -= eta * delo * o * (1 - o) * val[i];
        for (int i = 0; i < m; ++i)
            w[i] -= eta * delo * val[n - 1][i];
    }

    void train (double data[MAXDATA][ft + 1], int dn, int epoch, double eta) {
        for (int i = 0; i < ft + 1; ++i)
            for (int j = 0; j < dn; ++j)
                avg[i] += data[j][i];
        for (int i = 0; i < ft + 1; ++i)
            avg[i] /= dn;
        for (int i = 0; i < ft + 1; ++i)
            for (int j = 0; j < dn; ++j)
                sig[i] += (data[j][i] - avg[i]) * (data[j][i] - avg[i]);
        for (int i = 0; i < ft + 1; ++i)
            sig[i] = sqrt (sig[i] / dn);
        for (int i = 0; i < ft + 1; ++i)
            for (int j = 0; j < dn; ++j)
                data[j][i] = (data[j][i] - avg[i]) / sig[i];
        for (int cnt = 0; cnt < epoch; ++cnt)
            for (int test = 0; test < dn; ++test)
                desc (data[test], data[test][ft], eta);
    }

    double predict (double *x) {
        for (int i = 0; i < ft; ++i)
            x[i] = (x[i] - avg[i]) / sig[i];
        return compute (x) * sig[ft] + avg[ft];
    }

    std::string to_string () {
        std::ostringstream os;
        os << std::fixed << std::setprecision (16);
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < m; ++j)
                for (int k = 0; k < (i ? m : ft); ++k)
                    os << wp[i][j][k] << " ";
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < m; ++j)
                os << bp[i][j] << " ";
        for (int i = 0; i < m; ++i)
            os << w[i] << " ";
        os << b << " ";
        for (int i = 0; i < ft + 1; ++i)
            os << avg[i] << " ";
        for (int i = 0; i < ft + 1; ++i)
            os << sig[i] << " ";
        return os.str ();
    }

    void read (const std::string &str) {
        std::istringstream is (str);
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < m; ++j)
                for (int k = 0; k < (i ? m : ft); ++k)
                    is >> wp[i][j][k];
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < m; ++j)
                is >> bp[i][j];
        for (int i = 0; i < m; ++i)
            is >> w[i];
        is >> b;
        for (int i = 0; i < ft + 1; ++i)
            is >> avg[i];
        for (int i = 0; i < ft + 1; ++i)
            is >> sig[i];
    }
};

#define cd const double &
const double EPS = 1E-8, PI = acos (-1);
int sgn (cd x) { return x < -EPS ? -1 : x > EPS; }
int cmp (cd x, cd y) { return sgn (x - y); }
double sqr (cd x) { return x * x; }

#define cp const point &
struct point {
    double x, y;
    explicit point (cd x = 0, cd y = 0) : x (x), y (y) {}
    int dim () const { return sgn (y) == 0 ? sgn (x) < 0 : sgn (y) < 0; }
    point unit () const { double l = sqrt (x * x + y * y); return point (x / l, y / l); }
    // counter-clockwise
    point rot90 () const { return point (-y, x); }
    // clockwise
    point _rot90 () const { return point (y, -x); }
    point rot (cd t) const {
        double c = cos (t), s = sin (t);
        return point (x * c - y * s, x * s + y * c);
    }
};
bool operator == (cp a, cp b) { return cmp (a.x, b.x) == 0 && cmp (a.y, b.y) == 0; }
bool operator != (cp a, cp b) { return cmp (a.x, b.x) != 0 || cmp (a.y, b.y) != 0; }
bool operator < (cp a, cp b) { return (cmp (a.x, b.x) == 0) ? cmp (a.y, b.y) < 0 : cmp (a.x, b.x) < 0; }
point operator - (cp a) { return point (-a.x, -a.y); }
point operator + (cp a, cp b) { return point (a.x + b.x, a.y + b.y); }
point operator - (cp a, cp b) { return point (a.x - b.x, a.y - b.y); }
point operator * (cp a, cd b) { return point (a.x * b, a.y * b); }
point operator / (cp a, cd b) { return point (a.x / b, a.y / b); }
double dot (cp a, cp b) { return a.x * b.x + a.y * b.y; }
double det (cp a, cp b) { return a.x * b.y - a.y * b.x; }
double dis2 (cp a, cp b = point ()) { return sqr (a.x - b.x) + sqr (a.y - b.y); }
double dis (cp a, cp b = point ()) { return sqrt (dis2 (a, b)); }
bool turn_left (cp a, cp b, cp c) { return sgn (det (b - a, c - a)) >= 0; }

std::vector <point> convex_hull (std::vector <point> a) {
    int cnt = 0;
    std::sort (a.begin (), a.end ());
    std::vector <point> ret (a.size () << 1, point ());
    for (int i = 0; i < (int) a.size (); ++i) {
        while (cnt > 1 && turn_left (ret[cnt - 2], a[i], ret[cnt - 1])) --cnt;
        ret[cnt++] = a[i];
    }
    int fixed = cnt;
    for (int i = (int) a.size () - 1; i >= 0; --i) {
        while (cnt > fixed && turn_left (ret[cnt - 2], a[i], ret[cnt - 1])) --cnt;
        ret[cnt++] = a[i];
    }
    return std::vector <point> (ret.begin (), ret.begin () + cnt - 1);
}

const int FT = 10, N = 3, M = 4, DATA = 5000000;
network <FT, N, M, DATA> nt;

void process (double *data, const std::vector <point> &cv) {
    data[0] = 0; data[1] = cv.size ();
    data[2] = 1E9; data[3] = 1E9; data[4] = 1E9; data[5] = 1E9;
    data[6] = 0; data[7] = 0; data[8] = 0; data[9] = 1E18;
    for (int i = 0; i < cv.size (); ++i) {
        data[0] += det (cv[i], cv[(i + 1) % cv.size ()]);
        data[2] = std::min (data[2], cv[i].x);
        data[3] = std::min (data[3], cv[i].y);
        data[4] = std::min (data[4], 1E9 - cv[i].x);
        data[5] = std::min (data[5], 1E9 - cv[i].y);
        data[6] += dis (cv[i], cv[(i + 1) % cv.size ()]);
        data[7] += dis2 (cv[i], cv[(i + 1) % cv.size ()]);
        data[8] = std::max (data[8], dis (cv[i], cv[(i + 1) % cv.size ()]));
        data[9] = std::min (data[9], dis (cv[i], cv[(i + 1) % cv.size ()]));
    }
}

#ifdef LOCAL
double data[DATA][FT + 1];

void gen_data () {
    std::mt19937_64 mt (time (0));
    std::uniform_int_distribution <int> un (10, 100), uid (0, 1000000000);
    for (int cnt = 0; cnt < DATA; ++cnt) {
        int n = un (mt);
        std::vector <point> pp, cv;
        pp.resize (n);
        for (int i = 0; i < n; ++i)
            pp[i] = point (uid (mt), uid (mt));
        cv = convex_hull (pp);
        process (data[cnt], cv);
        data[cnt][FT] = log (n);
    }
}

void train () {
    gen_data ();
    std::cout << "Generated." << std::endl;
    nt.train (data, DATA, 10, 0.01);
    std::cout << "Trained."
<< std::endl; double err = 0; gen_data (); for (int cnt = 0; cnt < DATA; ++cnt) { int ans = std::max (std::min ((int) round (exp (nt.predict (data[cnt]))), 100), 10), real = round (exp (data[cnt][FT])); err += std::abs (log (ans) - data[cnt][FT]); } std::cout << "Error: " << std::fixed << std::setprecision (10) << err / DATA << "\n"; std::cout << nt.to_string () << "\n"; } #endif //Error: 0.1931354718 const std::string str = "-1.3935330713026979 -1.6855399067812094 0.0080830627163314 -0.0456935204722265 -0.0134075027893312 -0.0225523456635180 -0.0063385628290917 -0.0539803586714664 0.0285615302448615 -0.0494911914462167 4.8156449644606729 1.4141506946578808 0.0609360500524733 -0.0003939198727380 0.0974573799765315 -0.0237353370691478 1.8823428209722266 -0.0253047473923868 -0.4320535560737181 -0.0465107808811315 3.9087941887190509 1.1605608005526673 -0.0055668920418296 0.0435006757439702 0.0168235313881814 0.0201162039464427 0.0005391585736898 -0.0281508733844095 0.0333519586644446 0.0138573643228493 -2.2584700896151499 -1.4482223377667529 -0.0087279280693970 -0.0067154696151271 0.0931324993850075 -0.0246202499367100 0.1202642385880566 0.0742084099697175 -0.0576728791494071 0.0048743269227391 0.7245500807425441 1.1451918581258713 1.5362660419598355 2.2247337510533445 -1.3361750107632056 -0.6202098232180651 2.6126102753644820 -2.6224614441649430 -1.6239623067583704 0.0103356949589122 2.3160496100308006 0.7983423380182993 2.7402717506451184 -0.4696450730651245 -5.9647497198434687 2.5913155713930167 -0.7492492475439734 -2.6902420797953686 -1.8925204583122668 4.1461581213116920 -1.3283582467170971 1.6545505567705860 -1.8337202201037788 5.9739259421367326 -0.0514667439768578 5.4697553418504725 0.4195708475958704 -8.2367354144820037 -2.7322241686093887 1.5109443918819332 3.4841054954004966 0.4931178741138817 -5.7924156962977023 1.3728086432563174 -3.0935727146312852 -2.1233242848314902 2.7215008891274142 -2.1566137088986088 -2.4630818715655334 -1.2871925059203080 
-3.6940220894585978 -4.0749736367183296 0.4396854435207839 -0.5672193348210693 -1.8759334759019322 -1.8112273299081239 1.5777769313392334 0.6556212919805925 -0.6391849809511954 1549414420922584064.0000000000000000 10.0076592000000009 24937256.4471242018043995 24942299.7631161995232105 24933933.8955707997083664 24907107.2519301995635033 3354856368.0634231567382812 1588693346300253696.0000000000000000 716306964.7579888105392456 91336248.0137662440538406 3.8563280309383297 243867607280379392.0000000000000000 2.2982303054101396 34087367.3454539999365807 34090994.8023483082652092 34100829.3740548193454742 33993881.5109920650720596 263817168.3726958632469177 361286974354604864.0000000000000000 129441292.3767285346984863 67804188.7760041356086731 0.5986632816451091"; void run () { nt.read (str); std::ios::sync_with_stdio (0); int N; std::cin >> N; for (int i = 0; i < N; ++i) { int cnt; std::cin >> cnt; std::vector <point> cv (cnt, point ()); for (int j = 0; j < cnt; ++j) std::cin >> cv[j].x >> cv[j].y; static double test[FT]; process (test, cv); std::cout << std::max (std::min ((int) round (exp (nt.predict (test))), 100), 10) << "\n"; } } int main () { #ifdef LOCAL train (); #else run (); #endif }
As we can see, the core code of a neural network takes only about 60 lines (which I believe is the reason I can never learn to code the infamous link-cut tree: my brain's implementation just got beaten by sheer line count), and its output as parameters is significantly smaller than that of most models. These are the reasons I think the neural network may become a popular ML method for competitive programming.
Extension to other problems
I believe I have seen a few similar problems before, such as one that gives you 10 numbers generated uniformly from [1, n] and asks you to guess the value of n. However, as of now I cannot find any more examples. Some help is appreciated if you happen to know similar problems.
Conclusion
Now that you hopefully know something about machine learning, do hurry to solve all the problems... well, at least before all the "nefarious" problem setters learn the tricks and find even better features than you can.
P.S. What approach would you use to solve the "convex hull guessing" problem? Do you know any similar problems? Do let me know in the comments; your help is most appreciated!
Update: Since some of you challenged that Petr's solution is specifically made for the judge's test cases, I compiled the code and tested it against my random data. It appears that his solution achieves ~0.195 error, which is below the threshold and thus satisfies the requirement. For those interested, I have pasted the original code here. Be warned though, it is very long.
#include <fbxvertexcachedeformer.h>
This class deforms control points of a geometry using control point positions stored in the associated cache object.
Definition at line 27 of file fbxvertexcachedeformer.h.
Vertex cache deformer data type.
Definition at line 33 of file fbxvertexcachedeformer.h.
Reimplemented from FbxDeformer.
Definition at line 29 of file fbxvertexcachedeformer.h.
Assign a cache object to be used by this deformer.
Get the cache object used by this deformer.
NULL if no cache object is assigned.
Indicate if the deformer is active or not.
Definition at line 52 of file fbxvertexcachedeformer.h.
The channel name used in the cache file.
Definition at line 55 of file fbxvertexcachedeformer.h.
The cache set used by this vertex cache deformer.
Definition at line 58 of file fbxvertexcachedeformer.h.
The vertex cache deformer type.
Definition at line 61 of file fbxvertexcachedeformer.h. | https://help.autodesk.com/cloudhelp/2018/ENU/FBX-Developer-Help/cpp_ref/class_fbx_vertex_cache_deformer.html | CC-MAIN-2022-21 | refinedweb | 139 | 63.66 |
"Be the change you want to see in the world." - Gandhi
One of my friends works for a company where he has to work on a remote PC, and where he lives he does not have a good internet connection. He neither had permission to open third-party email websites such as GMail, nor could he download emails with 100 MB attachments using Outlook. He even tried a whole night to download those emails, but his poor internet connection never let him completely download even a single one from the server; he had to restart the download from the beginning every time. So he asked me to help him out, as he had less than 6 hours in hand to finish downloading those emails and reply to them. Unfortunately he is not a developer, and I did not really have the time myself either. Still, for the sake of friendship, I agreed to help him out and built an app for him in less than 10 minutes. This project has been hosted at MSDN Code Gallery.
I use many free components and tools every day that I am grateful for; they have made my life a lot easier. I wish I could write several blog posts thanking them for making this possible for me. One of them is OpenID, and its .NET implementation DotNetOpenID, which saved me a lot of coding and headache that day. My friend did not have any web server, so I built the app and hosted it on my web server. But I had already exceeded my database creation limit on that server, and I really did not want to take a chance on screwing up the other existing databases, so I could not create tables to store membership information. His requirements were that the site be secured and password protected, that he can upload from the remote PC, and that he can download to his local PC through resume-supporting HTTP downloaders such as Free Download Manager. Did I say I am a big fan of this free software too? Try it out for yourself.
From OpenID.
And DotNetOpenID is a C# library that adds OpenID 2.0 Provider and Relying Party, OAuth Consumer and Service Provider, and InfoCard Selector support to your web site, both programmatically and through convenient drop-in ASP.NET controls.
Here are the snapshots of the end product. There are two other pages, Upload and Browse, which are pretty self-explanatory: you will be able to upload new files, browse files and delete unnecessary ones. The most important part of this page is the login page, which is not a regular login box where you enter a username and password. It has an OpenID login instead: you type your OpenID URL there. If you do not have one, you can click Register to get one. The reason for an OpenID URL rather than a username is that there are many different OpenID provider websites that let you create an OpenID account. Popular providers include ClaimID, MyOpenID, MyID and so on. No matter which one you choose to open an OpenID account with, you will still be able to access all the sites that support OpenID. That is why OpenID identifies you as a URL instead of a username/ID.
When you type your OpenID URL there and hit Login, you will be redirected to the provider's page like the following, where you will have to type your password and allow the site to use your account information such as your name, email address and so on. See the highlighted key information below:
If you click on Stay signed in, you will never have to type the password again. Whenever the authenticated application (in our case, our DropZone application) redirects to the provider's login page, you will be automatically authenticated and redirected back to our app, so you will not see the login page again. Consider OpenID a global ID or passport that you can use on many giant and popular websites. The popularity of OpenID is increasing exponentially every day.
When you are done authenticating yourself to the application, you will be redirected to the Upload page where you can select a file and upload it to the server. The beauty of OpenID here is that you do not have to worry about creating tables in your database for keeping track of users, or about building change-password, forgot-password, register and login mechanisms, managing sessions, cookies and whatever other functionality you can imagine. You will also find another page named Browse, which has a simple GridView that displays the file list from the directory where files are stored. We will see shortly how we do that. You will also be able to delete files from the grid, which results in deleting the actual uploaded file from the server.
Before we get into how we set up OpenID for our application, let us take a look at some configuration that we will be using throughout its development lifecycle. You can write comma-separated values for the allowed OpenID users who can authenticate and use this application. You may want to keep this list short and filled with the OpenID URLs of friends only. You will also need to configure the store path where files will be stored and the URL prefix for the files to download. Here is the configuration you will have to place inside the web.config file:
<appSettings>
<add key="allowed_users" value="user1@myopenid.com,user2@myopenid.com"/>
<add key="store_path" value="d:\\dropzone\\"/>
<add key="download_url_prefix" value=""/>
</appSettings>
DotNetOpenID is an implementation of the OpenID API and, as the name suggests, is of course meant for .NET applications. After you have referenced the DLL, you will be able to use the OpenID Login control in your WebForm. You can use the design-time view and set properties for the control like the following:
From HTML view it may look like the following snippet:
<cc1:OpenIdLogin
You will notice the RequestEmail, RequestCountry and similar properties; these are the items that will be requested from and brought back by the OpenID provider. You will also see that event handlers are registered so that we can perform specific actions upon those events. Now that we have included the OpenID login control, it is time to implement the event handlers.
I used the same State class that ships with the samples for DotNetOpenID, which looks like the following:
public class State
{
    public static void Clear()
    {
        ProfileFields = null;
        FriendlyLoginName = null;
        PapePolicies = null;
    }

    public static ClaimsResponse ProfileFields
    {
        get { return HttpContext.Current.Session["ProfileFields"] as ClaimsResponse; }
        set { HttpContext.Current.Session["ProfileFields"] = value; }
    }

    public static string FriendlyLoginName
    {
        get { return HttpContext.Current.Session["FriendlyUsername"] as string; }
        set { HttpContext.Current.Session["FriendlyUsername"] = value; }
    }

    public static PolicyResponse PapePolicies
    {
        get { return HttpContext.Current.Session["PapePolicies"] as PolicyResponse; }
        set { HttpContext.Current.Session["PapePolicies"] = value; }
    }
}
This class holds information for the logged-on user in static properties, including the profile details as well as FriendlyLoginName, which is the OpenID URL of the logged-on user. You will also notice that the class has a Clear method, which nullifies every property to state that there is currently no logged-on user. LoggedIn is the event that gets fired when a user successfully logs in using his OpenID, so we need to write code to handle that:
protected void OpenIdLogin1_LoggedIn(object sender, OpenIdEventArgs e)
{
    if (e.Response.Status == AuthenticationStatus.Authenticated)
    {
        var isAllowed = false;
        var allowedUsers = ConfigurationManager.AppSettings["allowed_users"].Split(",".ToCharArray());
        foreach (var allowedUser in allowedUsers)
        {
            if (e.Response.FriendlyIdentifierForDisplay.Equals(allowedUser, StringComparison.InvariantCultureIgnoreCase))
            {
                isAllowed = true;
                break;
            }
        }

        if (isAllowed)
        {
            State.FriendlyLoginName = e.Response.FriendlyIdentifierForDisplay;
            State.ProfileFields = e.Response.GetExtension<ClaimsResponse>();
            State.PapePolicies = e.Response.GetExtension<PolicyResponse>();
            Response.Redirect("Upload.aspx");
        }
        else
        {
            State.Clear();
        }
    }
}
This code takes the information that is passed to this method after a successful OpenID login and iterates through the allowed OpenIDs from web.config to validate whether the user should get access to the resources. If the user is found, State is populated with the information received from OpenID. The next thing is to protect the pages from being accessed without authentication via direct links. We can do this by checking whether State has FriendlyLoginName data, e.g. string.IsNullOrEmpty(State.FriendlyLoginName).
1. Set write permissions for your Uploads folder from IIS, such as this:
2. For additional security on the Uploads folder, you can also set permissions for the Administrator or other users you like:
3. Also make sure the file being uploaded can be handled by the server. For instance, if you upload 7z files and your server does not know how to handle such a request, you can set the mime type for that particular type of file like the following:
This will ensure the response header is set to application/x-zip-compressed, which will cause the browser or file downloader software to initiate the download. However, regular extensions should work just fine without adding entries to the mime types.
4. ASP.NET by default allows you to upload files of at most 4MB. Raising that limit is easy to configure in web.config via the maxRequestLength attribute of httpRuntime. Determine the maximum limit in megabytes you are going to allow and multiply by 1024, e.g. 200MB * 1024 = 204800 kilobytes.
<httpRuntime maxRequestLength="204800" />
5. The ASP.NET default maximum timeout for each request is 30 seconds; after that, your request will be ended prematurely. If you need to upload larger files that need a longer timeout, you can also set that up in the same XML element in web.config as mentioned above. This value is in seconds. Here we are setting a 12-hour timeout.
<httpRuntime executionTimeout="43200" />
Keep in mind that this application is open source, has not gone through thorough testing, and you should use it totally at your own risk. However, I used this application myself to transfer hundreds of megabytes of large files without any problem. You will also notice that I did not put any effort into a good architecture for this tiny tool, nor have I incorporated any exception handling mechanisms.
Do you know where I got the theme for this application? FreeCssTemplates is another free website that I am grateful to, for letting me use numerous of their templates in my applications at no charge.
As you probably know from the earlier posts, or if you attended Microsoft Day, I spoke on Development in ASP.NET [WebForms, LINQ, Dynamic Data, Futures] on June 20th. Some of the enthusiasts are still communicating with me even after Microsoft Day @ Dhaka; I really appreciate your passion, guys. One of them just told me that after my session and a private Q&A with me on LINQ, he successfully convinced his supervisor to welcome a change to the existing architecture they were working on: they are going to replace their data access model with LINQ to SQL. It feels so good to hear that! I am happy because at least some people found it useful and are actually utilizing in real life the knowledge they learned from the event. This is how community contributions add value to the industry.
Not everyone at the event was an ASP.NET developer that day. There were also business decision makers, and developers who have lately been switching to the Microsoft stack of tools and technologies. Three people, owners or supervisors of their companies or engineering teams, personally met me after the session to identify the opportunities with Dynamic Data and LINQ for their businesses and/or as a prototyping tool.
I also talked in general about the event with the people attending. One really positive point they made was that it is a wonderful opportunity for them to do business networking with the industry's developers; one of them showed me the bunch of business cards he had brought with him that day! They thanked Microsoft for bringing them all under the same umbrella and said they were looking forward to such events in the future. It was also a great get-together for me with the industry experts and MVPs. I personally received a lot of good, positive and not-so-positive ;) feedback on the event. Despite the criticisms of some of the Level 100 sessions, the people I have just talked about and the success story were the answer to those. I would like to thank those who attended the event and provided really constructive feedback, whether positive or negative. It still was a huge success!!
I hope this event is just the beginning, not the end. I would feel lucky if I could present at the same event next time. It is always fun working with the community. Thanks again, guys, for attending and making it a huge success. I will take the inaugural session on that day, which starts from 9.30am.
Saturday June 20th, 2009. 9.00AM – 6.00PM
IDB Auditorium E/8-A Rokeya Sharani, Sher-e-Bangla Nagar, Agargaon, Dhaka 1207
For more information:
9:00 - 9:30: Opening Speech (Feroz Mahmood & Abhishek Kant)
9:30 - 10:30: Development in ASP.NET (Tanzim Saqib)
10:30 - 11:15: My First ASP.NET MVC App (Mehfuz Hossain MVP)
11:15 - 11:45: Unit Testing in MVC and Demo of dotnetshoutouts.com (Kazi Manzur Rashid MVP)
11:45 - 12:30: Developing in Silverlight (Faisal Hossain Khan)
12:30 - 1:30: Lunch
1:30 - 2:00: Introduction to MS SharePoint Server (MJ Ferdous)
2:00 - 2:45: Production Challenges of ASP.NET Websites (Omar Al Zabir MVP)
2:45 - 3:15: Windows Azure (Ashic Mahtab)
3:15 - 3:45: Tea Break
3:45 - 4:30: Overview of Visual Studio Team System 2010 (Mohammad Ashraful Alam MVP)
4:30 - 5:00: Features of Windows 7 (Omi Azad MVP)
5:00 - 5:30: IE8 and Windows Live Features (Irtija A. Akhter)
I have described in my prior post how you can run your ASP.NET MVC application in Visual Studio 2010. There is currently no support for ASP.NET MVC project types in Visual Studio 2010 Beta 1, so I created a starter kit for ASP.NET MVC 1.0 which you can download from here. Now follow the steps:
1. Execute the AspNetMvc2010.vsi, you will see the following dialog box. Proceed.
2. Say yes to whatever comes along.
3. Now go to File > New Project, you will find ASP.NET MVC Application in Visual C# > Web tab.
Hope this saves some time!
You probably already know that ASP.NET MVC is not included with Visual Studio 2010 Beta 1, since MVC was released while the Visual Studio release was being locked down. I hope it will be available from Beta 2. It also looks like ASP.NET MVC is not picked up if you install it after Visual Studio 2010. So currently the only way to create and work with an MVC project is to create the project in Visual Studio 2008 and then open it with Visual Studio 2010. If you do so, it will ask you to convert the project to the 2010 version, which is a little strange to me since there are not many things to convert from 2008 to 2010 as of yet. Anyway, as usual you have to go through a Conversion Wizard.
You will notice an error while it performs the conversion. It says that the project type is not supported by this installation. You will also find that the Solution Explorer is unable to load the project.
1. Now it is the same procedure as for Visual Studio 2008 when ASP.NET MVC is not installed. Right-click on the project name in the Solution Explorer and choose Edit.
2. You will see the project definition XML. Look for the line below:
>
3. Remove the first GUID. Save and close the file.
4. Reload the project.
5. Conversion Wizard will pop up again, now with an additional dialog box whether we would like to upgrade to .NET Framework 4.0
6. Choose whatever you want. Now that the conversion is complete, you will see the report is clean.
You are ready to go. Note that even though you can get it working with Visual Studio 2010 in this way, you still cannot have the functionality of adding new types from the context menu, such as "Add View..." and so on.
There are two different ways to build controls for ASP.NET MVC as of now. The most common way is via HTML Helper extension methods. You will find such methods used in numerous places inside Views. Such methods can take parameters of any complexity, yet return only standard HTML tags in string format as the response. Our Menu control will render the basic structure of a CSS menu, whose appearance can be controlled from the site's CSS file. So no matter which look & feel you would like your menu to reflect, you do not have to change your Menu control; you only have to make changes to your CSS.
The following TextBox method renders an input tag with name “username”:
<%= Html.TextBox("username") %>
This sort of extension method often becomes handy when you need to render complex or reusable controls in different Views. Today I will talk about a simple Menu control I built which renders UL/LI tags, but can take parameters ranging from strings to complex types such as MvcMenuItem. Let us see an example of how we would like the Menu control to be used:
<%
    var list = new List<MvcMenuItem>();

    list.Add(new MvcMenuItem { Text = "Home", ActionName = "Index" });
    list.Add(new MvcMenuItem("About", "About", "Home"));
    list.Add(new MvcMenuItem("Feedback", "alert('feedback');"));
%>

<%= Html.Menu()
        .ClientId("menu")
        .AddRange(list)
        .HtmlAttributes(new { style = "color: Red" })
        .Render()
%>
From the snippet you can see that the Menu control takes a list of MvcMenuItem (a class we will create later), is invoked from the Html class, and supports method calls in a chain, which is known as a Fluent Interface. Let us take a look at how many different ways we can add a menu item to the Menu control:
MvcMenu Add(MvcMenuItem item)
MvcMenu Add(string text, string actionName, string controllerName, string clientCallbackMethod)
MvcMenu Add(string text, string actionName, string controllerName)
MvcMenu Add(string text, string clientCallbackMethod)
MvcMenu Add(string text)
MvcMenu AddRange(List<MvcMenuItem> items)
I am sure you noticed the return type of every method mentioned above. It is because we have to return the current instance of the class from within every method to support the Fluent Interface. Let us also take a look at the properties of the MvcMenuItem class, which has several constructor overloads that are helpful when initializing instances. You will notice the ClientCallbackMethod property, which indicates whether this menu item is meant to call a client-side method: if you pass a JavaScript method or code block, the menu item will execute it upon the user's click.
public class MvcMenuItem
{
    public string Text { get; set; }
    public string ActionName { get; set; }
    public string ControllerName { get; set; }
    public string ClientCallbackMethod { get; set; }
}
HtmlHelper resides in a different assembly than the one we are building the MVC control in. So how do we write an extension for it? All we have to do is reference the System.Web.Mvc assembly, and we will write the class inside that namespace too:
namespace System.Web.Mvc
{
    public static class MvcMenuExtensions
    {
        public static MvcMenu Menu(this HtmlHelper helper)
        {
            return new MvcMenu(helper);
        }
    }
}
As we are writing an extension method, you will notice we followed the convention of putting it in a static class, and the static method named Menu takes the HtmlHelper, which we pass to the MvcMenu we will be building. The constructor of the MvcMenu class is important: it takes that HtmlHelper and stores it in a private variable so that it can be reused later. We need it to be reusable since we are supporting the Fluent Interface.
public MvcMenu(HtmlHelper helper)
{
    Helper = helper;
    Items = new List<MvcMenuItem>();
}
As you can understand, MvcMenu is the class we are writing, responsible for processing menu items and rendering. The next part is easy and straightforward: we will implement the Render method, which writes the menu items for us:
public MvcMenu HtmlAttributes(object dictionary)
{
    HtmlProperties = new RouteValueDictionary(dictionary);
    return this;
}

public string Render()
{
    var ulTag = new TagBuilder("ul");
    ulTag.MergeAttribute("id", Id ?? string.Empty);
    ulTag.MergeAttributes(HtmlProperties);

    foreach (var item in Items)
    {
        var liTag = new TagBuilder("li");

        if (!string.IsNullOrEmpty(item.ClientCallbackMethod))
            liTag.InnerHtml = string.Format("<a href=\"javascript:;\" onclick=\"{1}\">{0}</a>", item.Text, item.ClientCallbackMethod);
        else
            liTag.InnerHtml = Html.LinkExtensions.ActionLink(Helper, item.Text, item.ActionName, item.ControllerName ?? string.Empty);

        ulTag.InnerHtml += liTag.ToString();
    }

    return ulTag.ToString();
}
We used the ActionLink method, which helps us generate a valid URL from the Action and Controller names. We reach that point only if no ClientCallbackMethod is defined. You will also notice the TagBuilder class, which can render a standard HTML tag for you, even with a specified style. Also do not forget to add the namespace in web.config:
<namespaces>
<add namespace="TanzimSaqib.Mvc.Menu"/>
...
</namespaces>
Download the code. Enjoy.
Windows Azure was announced at PDC 2008 (Oct 27) and will hopefully be released mid next year. You probably already know about Azure by this time. If not, I would like to quote some as an intro:
Python beginner, trying to:
number1 = raw_input("Insert number1 = ")
print number1
number2 = raw_input("Insert number2 = ")
print number2
def sumfunction (number1, number2):
    print "Were summing number1 and number2 %d + %d" % (number1, number2)
    return number1 + number2
print()
Change your code to use a main function. It will help you understand the code flow better:
def sumfunction(n1, n2):
    print "Were summing %d + %d" % (n1, n2)
    return n1 + n2

def input_int(prompt):
    while True:
        try:
            return int(raw_input(prompt))
        except ValueError:
            print "Invalid input! Please try again."

def main():
    number1 = input_int("Insert number1 = ")
    print number1
    number2 = input_int("Insert number2 = ")
    print number2
    result = sumfunction(number1, number2)
    print "Result: %d" % result

if __name__ == '__main__':
    main()
This is the standard way to write Python scripts. See, when the script runs, it actually executes everything along the way. So we put the __name__ check at the end, to say "okay, the script is loaded, now actually start running it at this predefined point (main)".

I've changed your variable names so you can understand that they are scoped to the function in which they are declared.

I'm also showing how main gets the return value from sumfunction, and stores it to a variable (just like you did with raw_input). Then, I print that variable.

Finally, I've written a function called input_int. As DeepSpace's answer indicated, raw_input returns a string, but you want to work with integers. To do this, you need to call int on the string returned by raw_input. The problem with this is, it can raise a ValueError exception, if the user enters an invalid integer, like "snake". That's why I catch the error, and ask the user to try again.
Hello. I am trying to do a try..catch statement, and where there is an exception thrown, I want it to go back to the beginning. I have done the following, but I know I have made a mistake somewhere, and am stumped. Please assist me here.

Here is the code:
thank you all!

Code:
#include <iostream>
using namespace std;

void printList(short list[]);

int main()
{
    short n[10] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    int index = 0, value;

    printList(n);

    while (index != -1)
    {
        try
        {
            cout << "Enter array index (0..9) to update (-1 to quit): ";
            cin >> index;
            if (index > 10)
                throw 1;
            else if (index == -1)
                break;
        }
        catch (int index)
        {
            cout << "Exception thrown" << endl;
        }
        cout << "Enter new value: ";
        cin >> value;
        n[index] = value;
        printList(n);
    }

    return 0;
}

void printList(short list[])
{
    cout << "List: ";
    for (int i = 0; i < 10; i++)
        cout << list[i] << " ";
    cout << endl;
}
5 Best Practices for Feature Flagging
In their most basic form, feature flags are "if and else" statements that create a conditional branch in code, which allows developers to restrict some functionality to certain groups of users. If a flag or toggle is switched on, the code is executed; otherwise it is skipped. However, feature flags can go beyond simple boolean states and can be a complex logical statement, list, array and more.
Teams across the globe use feature flags (also called feature toggles) for reliable releases and experimentation. A major advantage of feature flags is that they provide a granular separation between delivery and release. Teams can carefully roll out features to a limited audience, validate their functionality and performance and gradually release the same features to a bigger audience using blue-green, canary and other phased deployment practices. While there are obvious benefits of using feature flags, they can get increasingly complex and challenging to maintain over time.
While most teams start using feature flags with in-house systems, they often lack a scalable strategy for managing those flags. As more flags are added to the code, the application becomes harder to test, more difficult for team members to understand, and more susceptible to errors. At times, testing in production can also leak issues and expose vulnerabilities to the public. This puts extra pressure on teams and can lead to increased test costs. Without the tools necessary to manage feature flags at scale, the costs and risks associated with feature flagging often outweigh its benefits.
You can easily avoid such risks by following best practices in feature flag management. Here are the top five ways to manage your feature flags better:
Flag naming convention
Adopt a naming convention that lets you infer the type or purpose of the changes a flag introduces. CloudBees Rollout provides a unique approach that enforces name uniqueness under namespaces, based on the type system of the programming language you are using. The effects of feature distribution are not restricted to the engineering team; you should log the feature status to your usage analytics, support systems, and so on. With CloudBees Rollout, we have exposed the impression handler to allow you to report to all your different platforms. We recommend using the CloudBees Rollout label system to classify every flag type.
Access control
Putting feature flags in the hands of non-technical users can be a double-edged sword. Use a granular approach to access controls. Different teams should have different levels of visibility and access to environments and flags. For example, you can consider imposing restrictions on temporary logins or those with support credentials.
Lifecycle management hygiene
Over time, feature flags can contribute to code complexity with deprecated branches and features that were never released or were replaced with something else. Current flags can conflict with previous ones. It can be a challenge to identify which flags are required and which ones are redundant or obsolete. One way to figure this out is to see if a feature flag is always on or off; if so, it has served its purpose. You can also define the flag status to help your teams distinguish between short-term and long-term flags. The status field can be dynamically updated after a defined period, allowing your team to identify which flags are safe to remove.
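The staleness check described above can be sketched as follows. The flag metadata, field names, and 90-day threshold here are all hypothetical, purely to illustrate the idea of flagging short-term flags as removal candidates after a defined period:

```python
from datetime import datetime, timedelta

# Hypothetical flag metadata; a real flag-management system stores this centrally.
flags = [
    {"name": "new_checkout_flow", "kind": "release",    "created": datetime(2019, 1, 10)},
    {"name": "admin_mode",        "kind": "permission", "created": datetime(2018, 3, 2)},
]

def stale_flags(flags, now, max_age=timedelta(days=90)):
    # Short-term (release) flags older than max_age are candidates for removal;
    # long-term flags (permissions, ops kill-switches) are exempt.
    return [f["name"] for f in flags
            if f["kind"] == "release" and now - f["created"] > max_age]
```

Running a report like this periodically keeps deprecated branches from accumulating as technical debt.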
To avoid technical debt from building up, you need to carefully manage flags with precise control and visibility into their changes, rollouts and sunsetting. However, with bootstrapped, homegrown systems, all this can be challenging. Enterprise-ready feature flag management systems can make your mission significantly easier.
Erez Rusovsky and I will be presenting more on this topic at DevOps World | Jenkins World Lisbon on Dec 4. Please join us if you'd like to learn more on this topic.
Additional Resources:
Watch this webinar about why CI/CD needs feature flagging
Learn the difference between progressive delivery and feature flags with Jenkins X and CloudBees Rollout
Check out this ultimate guide to getting started with feature flags
import "go.chromium.org/luci/server/auth/authdb/internal/graph"
Package graph implements handling of the groups graph.
Such graphs are built from list of AuthGroup proto messages that reference each other by name.
type Graph struct {
    Nodes       []Node               // all graph nodes
    NodesByName map[string]NodeIndex // name => index in Nodes
}
Graph is a static group graph optimized for traversals.
Not safe for concurrent use.
Build constructs the graph from a list of AuthGroup messages.
Ancestors returns a set with 'n' and all groups that include it.
Descendants returns a set with 'n' and all groups it includes.
NodeByName returns a node given its name or nil if not found.
func (g *Graph) ToQueryable() (*QueryableGraph, error)
ToQueryable converts the graph to a form optimized for IsMember queries.
Visit passes each node in the set to the callback (in arbitrary order).
Stops on the first error, returning it as-is. Panics if 'ns' has invalid indexes.
type Node struct {
    *protocol.AuthGroup // the original group proto

    Nested  []NodeIndex // directly nested groups
    Parents []NodeIndex // direct parent (nesting) groups
    Index   NodeIndex   // index of this node within the graph's list of nodes
    // contains filtered or unexported fields
}
Node is a node in a group graph.
NodeIndex is an index of a node within graph's list of nodes.
Used essentially as a pointer that occupies x4 less memory than the real one.
Note: when changing the type, make sure to also change SortedNodeSet.MapKey and replace math.MaxUint16 in Build(...) with another bound.
NodeSet is a set of nodes referred by their indexes.
Add adds node 'n' to 'ns'.
func (ns NodeSet) Sort() SortedNodeSet
Sort converts the NodeSet to SortedNodeSet.
Update adds all nodes in 'another' to 'ns'.
type NodeSetDedupper map[string]SortedNodeSet
NodeSetDedupper helps to find duplicate NodeSet's.
func (nsd NodeSetDedupper) Dedup(ns NodeSet) SortedNodeSet
Dedup returns a sorted version of 'ns' (perhaps reusing an existing one).
QueryableGraph is a processed Graph optimized for IsMember queries and low memory footprint.
It is built from Graph via ToQueryable method. It is static once constructed and can be queried concurrently.
TODO(vadimsh): Optimize 'memberships' to take less memory. It turns out string keys are quite expensive in terms of memory: a totally empty preallocated map[identity.Identity]SortedNodeSet (with empty keys!) is already *half* the size of the fully populated one.
func BuildQueryable(groups []*protocol.AuthGroup) (*QueryableGraph, error)
BuildQueryable constructs the queryable graph from a list of AuthGroups.
func (g *QueryableGraph) GroupIndex(group string) (idx NodeIndex, ok bool)
GroupIndex returns a NodeIndex of the group given its name.
IsMember returns true if the given identity belongs to the given group.
func (g *QueryableGraph) IsMemberOfAny(ident identity.Identity, groups SortedNodeSet) bool
IsMemberOfAny returns true if the given identity belongs to any of the given groups.
Groups are given as a sorted slice of group indexes obtained via GroupIndex.
SortedNodeSet is a compact representation of NodeSet as a sorted slice.
func (ns SortedNodeSet) Has(idx NodeIndex) bool
Has is true if 'idx' is in 'ns'.
func (a SortedNodeSet) Intersects(b SortedNodeSet) bool
Intersects is true if 'a' and 'b' have common elements.
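As an aside, a common way to implement an intersection test over two sorted index slices is a linear two-pointer scan. The sketch below is in Python purely for illustration — it is not the package's actual code:

```python
def intersects(a, b):
    # Two-pointer scan over sorted index slices;
    # returns True as soon as a common element is found.
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            return True
        if a[i] < b[j]:
            i += 1
        else:
            j += 1
    return False
```

This runs in O(len(a) + len(b)) with no allocation, which matters when the check sits on the hot path of IsMemberOfAny.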
func (ns SortedNodeSet) MapKey() string
MapKey converts 'ns' to a string that can be used as a map key.
Package graph imports 7 packages (graph) and is imported by 3 packages. Updated 2020-09-18.
NAME

bcopy - memory operations

SYNOPSIS

#include <strings.h>

void bcopy(const void *s1, void *s2, size_t n);

DESCRIPTION

The bcopy() function copies n bytes from the area pointed to by s1 to the area pointed to by s2.

The bytes are copied correctly even if the area pointed to by s1 overlaps the area pointed to by s2.

RETURN VALUE

The bcopy() function returns no value.

ERRORS

No errors are defined.

EXAMPLES

None.

APPLICATION USAGE

For portability to implementations conforming to earlier versions of this specification, memmove() is preferred over this function.
The following are approximately equivalent (note the order of the arguments):
bcopy(s1,s2,n) ~= memmove(s2,s1,n)
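The overlap-safe semantics (and the source-first argument order, the reverse of memmove()) can be illustrated with a rough Python analogue — a sketch of the behavior, not part of this specification:

```python
def bcopy(src, dst, n):
    # Mirror memmove() semantics: snapshot the source bytes first, so the
    # copy is correct even when src and dst alias overlapping memory.
    # Note bcopy's argument order: source first, destination second.
    dst[:n] = bytes(src[:n])

buf = bytearray(b"abcdef")
# Overlapping copy within the same buffer: shift bytes 0..4 into 1..5.
bcopy(memoryview(buf)[0:5], memoryview(buf)[1:6], 5)
# buf is now bytearray(b"aabcde"): the overlap did not corrupt the copy.
```

A naive byte-by-byte forward loop would have smeared the first byte across the whole destination here; snapshotting (or copying in the right direction) is what memmove()-style functions guarantee.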
FUTURE DIRECTIONS

None.

SEE ALSO

memmove(), <strings.h>.
tag:blogger.com,1999:blog-81797194088042467852021-06-22T09:34:41.081+05:30{errorception} blogRakesh Pai on Errors<p>I've just pushed a build that lets you comment on your errors. (Finally!)</p><p>I'll admit, I've been very sceptical about adding this feature to Errorception. Thing is, this moves Errorception closer to being a bug <em>management</em> tool, whereas I just want it to be a bug <em>reporting</em> tool. You already use a bug management tool internally, and there's no point trying to replicate those features in Errorception. It only creates confusion for you.</p>.</p><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><p>Building commenting systems is a complex task, and I'm certainly not considering this feature-complete. That said, the current implementation is fast, minimal, and it "just works", in the typical Errorception style. (See how I let that last bit sneak in there?)</p><p>Let me know what you think!</p><img src="" height="1" width="1" alt=""/>Rakesh Pai CORS on Amazon CloudFront with S3 as your Origin Server<p>Today I was debugging a customer's CloudFront setup to ensure that they were supporting CORS correctly. <a href="">Amazon has documented the process</a>, but the docs seem to be structured to work as a reference rather than a how-to. I thought I'd document what I had to do to get things working right, so that this serves as a starting point for others to get set up.</p><p>As you probably know, <a href="">enabling CORS is important if you want to catch cross-domain JavaScript errors</a>. If you are using CloudFront as a CDN, you are most likely using a different domain (or subdomain) to serve your files, and will need to set up CORS at CloudFront.</p><p>The bulk of the surprises with setting up CORS with CloudFront are with configuring S3 correctly. 
This is the typical setup for most people (CloudFront using S3 as their Origin Server), so you'll probably have to deal with this first.</p><h3>Configuring S3</h3><p>S3 has this unnecessarily complicated "CORS configuration" that you need to create. Here's the steps to get that right:</p><ul><li>Log into your <a href="">AWS S3 console</a>, select your bucket, and select "Properties". S3 CORS configurations seem to apply at the level of the bucket, and not the file. I have no clue why.</li><li>Expand the "Permissions" pane, and click on "Add CORS configuration" or "Edit CORS configuration" depending on what you see.</li><li>You should already be provided with a default permission configuration XML. (Seriously, Amazon? 2014? XML?) If not, use the following XML to get started. <pre><code><?xml version="1.0" encoding="UTF-8"?><br /><CORSConfiguration xmlns=""><br /> <CORSRule><br /> <AllowedOrigin>*</AllowedOrigin><br /> <AllowedMethod>GET</AllowedMethod><br /> <MaxAgeSeconds>3000</MaxAgeSeconds><br /> <AllowedHeader>Authorization</AllowedHeader><br /> </CORSRule><br /></CORSConfiguration><br /></code></pre><p>You should look at <a href="">Amazon's docs</a> to see what this configuration means.<p><p>In the course of this debugging exercise, I discovered the hard way that Amazon's XML parser cares about the <code><?xml ?></code> declaration, and the <code>xmlns</code> on the root node. If you omit these, Amazon will fail silently, showing you a happy looking green tick! (Can you imagine how hard it was to figure this out?)</p></li><li>Once you've saved the configuration, go get a coffee (or other preferred poison) while you wait for S3 to be one with your new configuration, and really internalise it's true meaning. (It takes a couple of minutes. 
Some sort of caching, I guess.)</li><li: <pre><code>$ curl -sI -H "Origin: example.com" -H "Access-Control-Request-Method: GET"<br /><br />HTTP/1.1 200 OK<br />Date: Wed, 05 Nov 2014 13:37:20 GMT<br />Access-Control-Allow-Origin: *<br />Access-Control-Allow-Methods: GET<br />Access-Control-Max-Age: 3000<br />Vary: Origin, Access-Control-Request-Headers, Access-Control-Request-Method<br />Cache-Control: max-age=604800, public<br />...snip...<br /></code></pre> You should see the "Access-Control-Allow-Origin: *" header, and the "Vary: Origin" header in the output. If you do, you're golden.</li></ul><p>With that, you are almost done! CloudFront's configuration is a piece of cake in comparison.<p><h3>Configuring CloudFront</h3><ul><li>Go to your <a href="">CloudFront console</a>, select your distribution and go to the "Behaviors" tab.</li><li>You should already have a "Default" behavior listed there. Select it and hit "Edit".</li><li>Under the "Whitelist Headers" section, navigate their clunky UI to add the "Origin" header to the whitelist.</li><li>Save, get another coffee, and wait for this to propagate through CloudFront's caches. This will take some time.</li><li>Test! Again, you will have to use the process above to make sure you are flipping the right switches within Amazon. That is, use curl (or some HTTP client), and ensure that you specify the extra headers. You should see the "Access-Control-" headers in the response.</li></ul> <p>There you go! That should get you set up. I can't believe Amazon has made it so complicated to essentially send an additional HTTP header. Well, regardless, I hope this post helps you get set up correctly.</p> <p>You will also need to modify your script tags to ensure that you catch JS errors correctly. You can read more about that <a href="">in the docs</a>.</p> <p>Not catching JS errors yet? 
You should really give <a href="">Errorception</a> a shot.</p><img src="" height="1" width="1" alt=""/>Rakesh Pai Flooding<p>Flooding in the IRC sense, that is. Not the global warming sense, of course. Because global warming isn't real, amirite?</p><p!</p><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><p>No more! Errorception now imposes a per-user rate limit. This is an arbitrary limit to try to separate the wheat from the chaff. Currently, the per-user rate limit is set to 50 errors per 250ms.</p><p.</p><p!</p><p>As always, suggestion and feedback always welcome.</p><img src="" height="1" width="1" alt=""/>Rakesh Pai Coat of Paint<p>Errorception just got a redesign! <a href="">Give it a look</a>! I'm particularly excited about the <a href="">revamped docs</a>.</p><p.</p><p>The <q>logged-in</q> area of the site hasn't been redesigned yet, but there has to be something for next time, right? ;)</p><p>Let me know what you think! Feedback welcome, as always.</p><img src="" height="1" width="1" alt=""/>Rakesh Pai Source Maps<p>Since the <a href="">launch of source maps</a>, you have been asking for a way to support source maps without having to make all your code public. Today I'm glad to announce private source map support in Errorception. </p><p>!) </p><p style="background: #FCFCE1; border: 1px solid #fc0; padding: 10px;"> <strong>A quick primer</strong>: Source map files, generated by your minifier, contains a mapping between your original source code and your minfied code. Your source map files are linked to from within your minified code, using a <code>//#sourceMappingUrl=...</code>. </p><h2>The First Cut</h2><p>. </p><p> However, everyone disliked the idea of making API calls from their deploy scripts. Every single one of them! Here's the big reasons: </p><ul> <li>It introduces a remote network dependency in the deploy script. 
This means that the deployment's success or failure depends on a third-party service (Errorception in this case), and on the network. This simply isn't a good idea.</li> <li.</li> <li>Making API calls isn't really the simplest of things to do, especially when you have to do it from a deploy script. You'd first have to prepare your <q>bundle</q> by.</li> <li.</li></ul><h2>Errorception's Crawler</h2><p>. </p><p> Errorception already has a crawler that crawls your site to get the required JavaScript files, needed both for the <a href="">code view</a> and for <a href="">source maps support</a>.!) </p><p> Let's say you have an error in one of your script files. Errorception's crawler <a href="">parses your JavaScript file</a> to look for the <code>//#sourceMappingUrl</code> pragma comment. If it finds this comment, it already has everything it needs to crawl your source map files and your source code. This is how <q>public</q> source maps work already. </p><p> However, many people would rather not have that <code>//#sourceMappingUrl</code> comment in their code. That's because this comment is the one link to all of your code, and will let anyone with a browser access the original unminified source code. </p><h2>Private Source Maps</h2><p> If this <code>//#sourceMappingUrl</code> comment is removed from your minified file, your source maps are now effectively private. This is because no one can know where you've put your files if there isn't a link pointing them to them. HTTP doesn't have any discovery mechanism built in, and a <q>secret</q> path is just as unguessable as a password, since no one else knows the secret. (This assumes that you don't have directory listing turned on.) </p><p> So, this is how <q>private</q>. 
</p><h2>Examples</h2><p> Here's how Errorception uses your secret folder to discover your source map file: Let's say an error occurred in your script at <code></code>, and you've specified your secret folder to be <code>deadbeef</code>, Errorception will look for the source map at <code></code>. That is, it looks inside a secret folder (which is expected to be a sibling of the script file), for a file that has the same name as the script file with a <code>.map</code> appended to it. </p><p> To give you another example, if the error was in <code></code> and you specify your secret folder to be <code>secret</code>, Errorception will look for the source map at <code></code>. </p><p> All of this sounds complicated, but it really isn't. In fact, in most cases, it will simply be one or two lines in your deploy script — to strip the <code>//#sourceMappingUrl</code> comment, and to copy your source map files and original source code to the secret folder. Doing stuff like copying and modifying files is exactly what deploy scripts are good at, so it plays to the strengths of the deploy script too. </p><h2>But this isn't really private at all</h2>. </p><p>. </p><p> Also, Errorception turned three last week. Drink one for Errorception! Cheers! </p><img src="" height="1" width="1" alt=""/>Rakesh Pai Maps Are Here!<p>In <a href="">the previous blog post</a> I talked about the exciting new feature of highlighting exactly where the error is, <em>in your code</em>. The fact that this is even possible to do externally, is the kind of stuff that distinguishes JavaScript from all other languages. It is why Errorception has this singular focus on JavaScript. </p><p>This post is to highlight one more such feature — source maps. </p><p>If you have errors in your minified code and Errorception's crawler discovers a <a href="">source-map</a>. 
</p> <div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div> <p style="text-align: center; font-size: 0.9em; color: #999; font-style: italic;">Mapped! Showing you the error in your un-minified source!</p> <p>Errorception shows you exactly where the error is in your original, unminified source-code. Not just that, it does all of this automatically, and across all your stack frames! Isn't that just awesome?</p><p>You just need to make your source-map file available, and Errorception will do the rest. The tweaks needed to your build script are real simple too — it's usually <a href="">just</a> a <a href="">flag</a> in <a href="">most</a> minifiers.</p> <div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><p style="text-align: center; font-size: 0.9em; color: #999; font-style: italic;">Your minified code is just a click away. Note the tabs at the top-right of your code.</p> <p. </p><p>Oh, and of course, this also means that Errorception now supports compile-to-JS languages as well. <a href="">CoffeeScript</a>, <a href="">TypeScript</a>, <a href="">ClojureScript</a> and <a href="">others</a>, welcome to Errorception! You should feel at home. <p>As always, suggestions and feedback always welcome. <a href="mailto:rakeshpai@errorception.com">Mail</a>, <a href="">Tweet</a>, or leave a comment below.</p><img src="" height="1" width="1" alt=""/>Rakesh Pai's Your Error!<div dir="ltr" style="text-align: left;" trbidi="on"><p>Today's release is a game changer! </p><p>Now, whenever possible, your code and stack-traces take center stage in your error reports. Errorception looks at your code and the data from the error, and attempts to point out where exactly in your source file the error occurred. </p><p! 
</p> <div class="separator" style="clear: both; text-align: center; margin-bottom: 20px;">Two stack frames of an error</p> <p>See that screaming out: "Here's your error!" Isn't it just amazing how accurate it is? Do you see how easily you will be able to smash bugs with this?</p> <h2>Browser support</h2><p>This feature relies somewhat heavily on stack-traces being available. All recent versions of Chrome (Desktop and Android) provide stack-traces in <code>window.onerror</code> out of the box, so you are already covered there. Stack-traces in <code>window.onerror</code> are new, <a href="">having made it into the spec</a> only recently. <a href="">Firefox just implemented this</a> about a week ago, so I expect the next release will ship with <code>window.onerror</code> stack traces. This also works for errors from IE10 for at least the first stack-frame, since IE10 provides a column number for every <code>window.onerror</code> error.</p> <p>Additionally, this works just about everywhere if you <a href=""><code>.push</code> your errors</a> to Errorception. Until recently, Firefox didn't provide column numbers in their stacks, but <a href="">that just got fixed</a> a couple of days ago, expanding browser support to all popular browsers. In older versions of Firefox, you should still be able to see the highlight for the first stack-frame for <code>.push</code>ed errors.</p> <div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div> <p style="text-align: center; font-size: 0.9em; color: #999; font-style: italic;">It even works accurately with minified code!</p> <p>If it isn't possible to highlight the offending keyword/token, Errorception will attempt to highlight the offending column. 
Errorception will definitely highlight the line of the error in all cases regardless.</p> <h2>The tech</h2> <p.</p><p>There are no settings to be configured, no knobs to be turned — It All Just Works. Best of all, this is all done without any extra work on the client-side at all, so your users don't face any performance penalty whatsoever. </p><p?)</p><p>As usual, feedback always welcome. <a href="mailto:rakeshpai@errorception.com">Mail</a>, <a href="">Tweet</a> or leave a comment below.</p></div><img src="" height="1" width="1" alt=""/>Rakesh Pai control over error posting<p>While Errorception has given you <a href="">some control</a> over error posting since a very long time, this has at best been very coarse-grained. Now, that gets fixed.</p> <p>You now have full programmatic control over which errors get posted to Errorception. You just have to define <code>_errs.allow</code> to be a function that returns <code>true</code> or <code>false</code>. Examples are the best way to demonstrate this, so here goes:</p> <p>To ignore all errors from say IE6:</p><pre><code>_errs.allow = function() {<br /> return (navigator.userAgent.indexOf("MSIE 6") == -1);<br />}</code></pre> <p>To only allow errors from yourdomain.com and its subdomains:</p><pre><code>_errs.allow = function() {<br /> return (location.hostname.indexOf("yourdomain.com") != -1);<br />}</code></pre> .</p><pre><code>_errs.allow = function(err) {<br /> return (err.url.indexOf("ad-script.js") != -1);<br />}</pre></code> <p>On a side note, this <code>indexOf</code> and <code>-1</code> business above is so ugly! <a href=""><code>String.prototype.contains</code></a> can't come soon enough!</p> <p>This was a fun feature to build, especially because of a very interesting corner-case. All of this has been well documented, so <a href="">give the docs a look</a>. 
It's very interesting how Errorception uses itself to log errors encountered in this edge-case in a way that doesn't cause the world to implode.</p><img src="" height="1" width="1" alt=""/>Rakesh Pai of Origin<p>Sometimes, when debugging client-side errors, knowing <em>where</em> the user is from can be useful. For example, I recently had a situation where I had debug an error that only occurred for users behind the <a href="">Great Firewall of China</a>. Granted these kinds of issues only crop up rarely, but at such times knowing that this error only occurs in certain geographical locations can be immensely useful when trying to debug. </p><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><p. </p><p>Happy debugging! </p><img src="" height="1" width="1" alt=""/>Rakesh Pai Everywhere<p>This has been a long time coming.</p><p>Errorception is now proudly 100% HTTPS. (Well, nearly 100% – read on.)</p><p>It turns out, migrating a site to HTTPS isn't as simple as it seems, especially if you have to do it right. I had a huge checklist to look at and verify for this launch. Here's what else has changed with this update:</p><ul><li.</li><li>Cookies are only set when using HTTPS, and have been marked as <a href="">secure cookies</a>. HTTP cookies that were set in the past are now meaningless. In fact, I've deleted the entire old session-store to ensure that there can be no <a href="">session hijacking</a>.</li><li>The encryption is end-to-end. In this case, it means that SSL doesn't just terminate at the load-balancer. The connections between the load-balancer and the app servers are also all SSL. <strong>Everything</strong> is encrypted. <a href="">Take that, NSA</a>!</li><li>Cookies will henceforth be <a href="">inaccessible to client-side code</a> to prevent a large class of XSS attacks. <li>There are several other security measures implemented. 
For example, Errorception now implements <a href="">HTTP Strict Transport Security</a>, <a href="">prevents clickjacking</a> where possible, reduces <a href="">MIME-type security risks</a> where possible, and has <a href="">force-turned on XSS-filters</a> to prevent reflected XSS attacks.</li><li>All external assets included in the site are now loaded over HTTPS as well, to prevent <a href="">mixed-content</a> scenarios. All links from communications like emails have been updated to use HTTPS URLs. Links that have been forged in the past will still work, but will be redirected to HTTPS.</li></ul><p.</p><p>I'm by no means a security expert, so if you find any lapses, please feel free to let me know. (I'm rakeshpai at errorception dot com.) Also, security is never really <q>done</q>, so I consider this as only the first step in getting to better security.</p><img src="" height="1" width="1" alt=""/>Rakesh Pai Throw When You Can Just Push?<p>I had <a href="">previously launched</a> a feature to <code>.push</code> your errors to Errorception if you liked. It also required you to <code>throw</code> your errors immediately after the push. </p><p>Turns out, many people don't like to <code>throw</code> their errors. There could be several reasons for not throwing your errors – for example, if you want to handle it gracefully, but still want it logged. Also, some frameworks such an <a href="">Angular</a> and <a href="">Ember</a> provide you with Error objects, but throwing them might not be what you want to do. </p><p>Now with Errorception, you don't need to <code>throw</code> error objects anymore. Simply <code>.push</code> them, and you are done! </p><p>I must hasten to add that I would consider this usage <q>advanced</q>. A <code>throw</code>,. </p><p>If your code was already throwing errors after pushing them, you won't have to change a thing. Don't worry – you won't get duplicate errors. 
In fact, this behaviour (<code>push</code>, with or without <code>throw</code>, without duplicates) will be supported forever, because of the reasons in the paragraph above.</p><img src="" height="1" width="1" alt=""/>Rakesh Pai Traces, window.onerror, and the future<p>One argument that many people have made (and still make) is that <code>window.onerror</code> doesn't provide sufficient information to track down client-side JavaScript errors. While there's certainly some truth to it, I've always thought of it as a case of <a href="">worse is better</a>.</p><p <a href="">family</a> including <a href="">WebSockets</a>, and <code>setTimeout</code> and <a href="">family</a>. That's something you'll just have to do. Because this requires modification of your code, it's terribly invasive. This also means that you will almost certainly miss catching errors in several cases, simply due to oversight. And that's still not saying anything about the <a href="">performance overhead</a> of working with try/catch blocks, <a href="">not just in Chrome</a>. All that, just to get some additional data.</p><p>Errorception lets you <a href="">pass your error objects to us</a> if you want to, since quite some time now. However, less than 10% of errors at Errorception are recorded using this method. It is obvious that the <code>window.onerror</code> approach works far better, either because try/catch isn't comprehensive, or because it is inconvenient to use.</p><h3><pre style="background: #eee; padding: 3px; border: 1px solid #bbb; font-weight: normal;"><code>window.onerror = function(message, fileName, lineno) { ... }</code></pre></h3><p>That's all the data you got from <code>window.onerror</code>: the error message, the URL of the script, and the line number. 
Errorception of course records much more information about the browser and its state for you automatically, so there's already a lot of context available.</p><p>But Errorception has sorely lacked a very vital piece of data to aid debugging: stack traces. Stack traces are trivial to extract from the error object you get in the <code>catch</code> block of a <code>try/catch</code> statement. Sure, there were a couple of <a href="">tricks up our sleeve</a> to get fake stack traces where possible, but those were severely limited.</p><p>Obviously this problem wasn't one that just Errorception faced. The web community went to browser vendors and standards bodies with their torches and pitchforks (ok, it wasn't quite as dramatic as that), to ask for some love for <code>window.onerror</code>. A couple of months ago, the HTML spec finally <a href="">added</a> two <a href="">new attributes for <code>window.onerror</code></a>.</p><h3><pre style="background: #eee; padding: 3px; border: 1px solid #bbb; font-weight: normal;"><code>window.onerror = function(message, fileName, lineno, <strong>colno, error</strong>) { ... }</code></pre></h3><p!</p><h2>Browser support</h2><p>As of this writing, no production browser supports these new attributes. But don't let that dishearten you – the spec is only about 3 months old after all. IE10 does support the <code>colno</code> <a href="">rolled this out in Chrome Canary</a> two months ago, so it should be be in a public release soon. <a href="">Discussions are on in Firefox's Bugzilla</a>, and I expect this to be resolved soon as well. The folk over at <a href="">WebKit seem interested too</a>, though admittedly progress has been slow.</p><h2>Errorception and window.onerror</h2><p>Needless to say, Errorception has now rolled out support for the new attributes on <code>window.onerror</code>. 
Since Errorception already uses <code>window.onerror</code> to record errors, you literally don't need to change a thing (yes, even though the attributes are <q>new</q>). Errorception will record stack traces for your errors whenever available. In fact, I've already tested this with Chrome Canary, and it works like a charm! Yes, this <a href="">works for Cross-Origin errors too</a>!</p><p>This should finally lay to rest the argument about whether try/catch blocks are better for JavaScript error logging, or if one should use <code>window.onerror</code>. ;)</p><p>As always, if you have any questions or feedback, the comments are open.</p><img src="" height="1" width="1" alt=""/>Rakesh Pai Hello To CORS<p>Errorception now uses <a href="">CORS</a> when available to send errors from the browser to the server. This makes the error POSTing process much more lightweight.</p><p.</p>. <a href="">JSON</a> and <a href="">CORS</a> are available in every browser worth their salt.</p><p>After weeks of development and extensive testing, I've released a new version of the tracking code to make use of JSON and CORS where possible, to ensure that your users see the least amount of performance degradation at any time. If CORS (or Microsoft's <code>XDomainRequest</code>) isn't available, the code falls back to working as it always has - with form fields and iframes.</p><p><strong>Upgrade if you haven't already!</strong> This new code is only released to people who are using the latest tracking snippet. The new tracking snippet was <a href="">released earlier this year</a>, and already implements tons of performance improvements over the old one. You should upgrade if you haven't already. If you signed up after May, you are already on the latest code. 
If you've upgraded already, you rock!</p><img src="" height="1" width="1" alt=""/>Rakesh Pai's i18n Strings for Error Messages<p>After <a href="">the release last week</a>, there have been some requests to both Raj and me to have access to the raw data used for powering the IE i18n de-duplication, from several people, including from some of my competitors.</p> <p>After a little bit of email back-and-forth with Raj, and ensuring that we were both ok with making this data available publicly, I'm glad to to have just pushed <a href="">all the raw data to GitHub</a>. Go ahead, give it a look!</p> <h2>What? You are giving data to your competitors?</h2> <p.</p> <p>Cheers!</p><img src="" height="1" width="1" alt=""/>Rakesh Pai Your Errors, Now In English<p>Internet Explorer has this interesting behavior that all its errors are localized to the user's preferred language. I'm not sure if that's a good thing, since <a href="">English is effectively the lingua-franca of programming</a>, and there are so few <a href="">programming languages that don't use English keywords</a>. Ah, well. One could argue, that if the user has set their browser locale to a non-English language, the browser should display errors in the user's preferred language. After all, the error message is seen by the user.</p><p!</p><p>I'm proud to announce that this problem has now been cracked in the most comprehensive and cleanest way possible. With a lot of help from <a href="">Rajasekharan Vengalil<.</p><p <a href="">raw logs</a>.</p><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><p!</p><p>Oh, and don't forget to buy Raj a beer when you meet him. :)<p><img src="" height="1" width="1" alt=""/>Rakesh Pai Script Delivery<p>I.<p><p>There were two major problems with CloudFront:</p><ul><li><strong>No CNAME support</strong>. Not for HTTPS, at least. 
As a result, I've had to give you a script with a <code>someuglytoken.cloudfront.net</code> URL in it. But the problem is beyond just ugliness. It gives me no control to change CDNs in the future. I've been living with this so far, but it's far from ideal. CloudFlare solves this problem elegantly.</li><li><strong>It's a dumb CDN</strong>..</li></ul><p>The second issue might seem obvious and trivial, but has its consequences. For example, PageSpeed recommends that every asset on the page should have a <code>Vary: Accept-Encoding</code> header. However, on CloudFront if I provide a <code>Vary</code> header, I'd be lying to the network, because I'm not able to vary the content at all. As a result, all sites that had embedded Errorception's snippet would have seen a slight reduction in their PageSpeed score. Indeed, <a href="">there have been requests to fix this</a>.</p>.</p><h3>Action needed</h3><p>Unfortunately, this change requires you to change the script snippet that you've embedded on your site. I know it's a pain to have to do this. I apologize. The <a href="">last time</a> I asked you to do this was over a year and a half ago, so I assure you that these kinds of changes don't happen frequently. Please go to your Settings > Tracking snippet to get the new snippet.<p><h2>Upgrade plan</h2><p>The current script on Amazon's CloudFront will be supported for a couple of months more. Over this period, if there are any critical bugs, I'll be making releases to both the CloudFront and CloudFlare CDNs. Feature releases will <strong>not</strong> be rolled out to CloudFront, so you will not get any new features if you are using the old snippet. 
It's highly recommended that you upgrade as soon as possible to ensure that you have the very latest features.</p><img src="" height="1" width="1" alt=""/>Rakesh Pai your JS error notifications wherever you like!<p>Just three days ago, I had launched <a href="">WebHooks support</a>.</p> <p>Firstly, using the same underlying stack I'm using for WebHooks, I'll host integrations for most popular services myself so that you don't have do a thing. I'm calling this <q>service hooks</q>, blatantly copying the name from GitHub. Simply go over to your settings, and click on <q>Service Hooks</q> to find the list of services you can already integrate with today. You don't have to write any code to implement the hook. Just fill a form, and you are ready to go.</p> <p>Currently, I'm launching <a href="">Campfire</a>, <a href="">HipChat</a> and <a href="">PagerDuty</a> integration. Integrations with these services have been asked for before, so I decided to start with these. I expect this list to expand further.</p> <div style="text-align: center"><a href="" imageanchor="1" ><img border="0" src="" /></a> <a href="" imageanchor="1" ><img border="0" src="" /></a><p style="color: #999; font-size: 11px">What an error looks like on Campfire (left) and HipChat</p></div> <p>As interesting as that is, what's even more exciting is how it's implemented, and how it matters to you.</p> <h2><3 GitHub, <3 open-source</h2> <p>All the code that makes these service hooks possible is all open-source. <a href="">Check it out on GitHub</a>. Even the documentation on Errorception for these services is driven off README.md files from GitHub! 
That's just awesome!</p> <p!</p> <p <q>small little function</q>, I mean it - Have a look at <a href="">the WebHooks implementation</a> as an example.</p> <p.</p><img src="" height="1" width="1" alt=""/>Rakesh Pai WebHooks<p>Though the Errorception <a href="">HTTP API</a> is awesome for browsing your errors, it has so far been very hard to get real-time error notifications. There was the hacky solution of polling the API of course, but that's terribly inefficient, and just feels dirty. Today, this gets fixed.</p> <p>You can now configure WebHooks in your settings, which Errorception will POST to whenever it encounters an error on your site. You can choose if you want to receive POSTs for every occurrence of every error, of the very first time an error occurs.<p> <div style="text-align: center"><a href="" imageanchor="1" ><img border="0" src="" /></a></div> <p>This has been made available to all projects in Errorception, just like every other feature. Head over to the <a href="">WebHook docs</a> to learn how to make the most of this feature.<p> <p>As usual, feedback always welcome! I can't wait to see what you'll be pulling off with this. :)</p><img src="" height="1" width="1" alt=""/>Rakesh Pai Object Compatibility Table<p>I've been spending some time lately studying JavaScript's Error object. 
These are some of my notes on Error object compatibility across browsers, in case anyone else finds this useful.</p> <style>.compatTable td { text-align: center; } .compatTable .separate th, .compatTable .separate td { border-top: 1px solid #eee; } </style> <table width="100%" class="compatTable"> <thead> <tr> <th>Property</th> <th>Google Chrome</th> <th>Safari</th> <th>Opera</th> <th>Firefox</th> <th>MSIE</th> </tr> </thead> <tbody> <tr> <th scope="row" align="left">name</th> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> </tr> <tr> <th scope="row" align="left">message</th> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> </tr> <tr> <th scope="row" align="left">stack</th> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes (IE10+)</td> </tr> <tr> <th scope="row" align="left">toString()</th> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> <td>Yes</td> </tr> <tr class="separate"> <th scope="row" align="left">type</th> <td>Yes</td> <td>No</td> <td>No</td> <td>No</td> <td>No</td> </tr> <tr> <th scope="row" align="left">columnNumber</th> <td>No</td> <td>No</td> <td>No</td> <td>Yes</td> <td>No</td> </tr> <tr class="separate"> <th scope="row" align="left">fileName</th> <td>No</td> <td>No</td> <td>No</td> <td>Yes</td> <td>No</td> </tr> <tr> <th scope="row" align="left">sourceURL</th> <td>No</td> <td>Yes</td> <td>No</td> <td>No</td> <td>No</td> </tr> <tr class="separate"> <th scope="row" align="left">line</th> <td>No</td> <td>Yes</td> <td>No</td> <td>No</td> <td>No</td> </tr> <tr> <th scope="row" align="left">lineNumber</th> <td>No</td> <td>No</td> <td>No</td> <td>Yes</td> <td>No</td> </tr> <tr class="separate"> <th scope="row" align="left">number</th> <td>No</td> <td>No</td> <td>No</td> <td>No</td> <td>Yes</td> </tr> </tbody></table><br /><h2>Notes:</h2><ul><li>Though file name is available in the <code>.stack</code> property, only Firefox and Safari provide this as an explicit property, and that too with different property 
names (FF: <code>.fileName</code>, Safari: <code>.sourceURL</code>).</li><li <a href="">this discussion</a>.</li><li>Firefox doesn't provide column numbers in the stack at all. However, it does provide a <code>.columnNumber</code> property which is only useful for the first stack frame.</li><li>The <code>.number</code> property (IE) is practically useless. It points to IE's internal representation of errors.</li><li><code>.line</code> (Safari) and <code>.lineNumber</code> (Firefox) properties give the line number of the first stack frame of the error. No one else provides a similar property, though this data is available in the <code>.stack</code> everywhere except Firefox.</li><li>The <code>.toString()</code> formatting seems consistent, and similar to the formatting of the error message in <code>window.onerror</code>. That is, it uses the format <code>name + ": " + message</code>. The only exception to this, of course, is that <code>window.onerror</code> formats errors differently <a href="">when the source file has x-domain restrictions</a>.</li><li>Column numbers in the <code>.stack</code> property are only available in IE10+ and Chrome. Opera provides a <code>.stacktrace</code> property in addition to <code>.stack</code> that has column numbers (go figure!). No other browser provides column numbers in the stack trace. As mentioned above, Firefox does provide an explicit <code>.columnNumber</code> property that's only useful for the first stack frame.</li><li>No stack support for IE<10. Nothing. Zilch.</li></ul><img src="" height="1" width="1" alt=""/>Rakesh Pai Traces and Error Objects<p>A frequently requested feature has been that of stack traces with errors. However, because <code>window.onerror</code>, doesn't give access to an error object, it has not been possible for Errorception to capture stack traces. That's set to change today.</p><p>Starting today, you will be able to pass error objects to Errorception. 
An example is probably the best way to explain this.</p><pre style="background: #f9f9f9; padding: 10px;"><code>try {<br /> var myObject = JSON.parse(jsonString); // will break if jsonString is invalid<br />} catch(e) {<br /> <strong>_errs.push(e);</strong><br /> <strong>throw e;</strong><br />}<br /></code></pre><p>When you pass such errors manually to Errorception, Errorception will now be able to record the stack trace for this error. Undoubtedly, this can be very useful for debugging.</p><h2>Important</h2><ul><li>What you <code>push</code> to <code>_errs</code> should be a valid Error object, that is, it should be an <code>instanceof Error</code>.</li><li>It is important that you <code>throw</code> the error right after passing it to <code>_errs</code>. This is for two reasons. Firstly, you really want your program's execution to stop when you encounter an error, and <code>throw</code> is a great way to do so. <strike>Secondly, Errorception internally uses both the <code>Error</code> object and the data from <code>window.onerror</code> to capture error data. If you don't <code>throw</code> the error, Errorception will ignore the error object.</strike> <strong>Update</strong>: Throwing errors <a href="">isn't required</a> anymore, but is highly recommended.</li></ul><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" width="700" src="" /></a></div><p.</p><p><strong>Bonus</strong>: This works perfectly well with the recently launched ability to <a href="">record custom information with errors</a>. 
For example, in Errorception I was recently doing this (yes, Errorception uses Errorception):</p><pre style="background: #f9f9f9; padding: 10px;"><code>try {<br /> var myObject = JSON.parse(jsonString);<br />} catch(e) {<br /> <strong>_errs.meta = {json: jsonString};</strong><br /> _errs.push(e);<br /> throw e;<br />}<br /></code></pre><img src="" height="1" width="1" alt=""/>Rakesh Pai JS Errors Are Now An API Call Away<p>It always annoys me when data gets caught in inaccessible silos, yet that's exactly what I had ended up building with Errorception. Today, that gets rectified.</p><p>Today I'm announcing the first cut of the Errorception API. As far as I know at least, it is the first API in the wild tailored specifically for JS errors in your site.</p><p>Being a huge fan of simplicity, the API is really simple to use too. In fact, I've embedded a <code>curl</code> example right here in this blog post:</p><pre style="background: #444; font: 13px Monaco, 'Courier New', 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', monospace; color: #eee; padding: 10px;"><code>$ curl -i<br />HTTP/1.1 200 OK<br />Content-Type: application/json; charset=utf-8<br />Content-Length: 77<br />Connection: keep-alive<br /><br />{"links":[{"rel":"projects","href":""}]}<br /></code></pre><p>You can view detailed API documentation on <a href="">the API docs page</a>.</p><p>I'm excited to see what you will do with this data. I'm hoping to evolve this API based on what your experience is like. As usual, feel free to mail me at rakeshpai at errorception dot com with any suggestions or feedback.</p><img src="" height="1" width="1" alt=""/>Rakesh Pai Daily Emails<p>The daily notification email, which had been turned off for some time now due to a snag, has now been turned back on.</p><p>Previously, all the emails used to be sent at the turn of the day on the server. 
This was problematic in at least two ways.</p><ul><li>The load on the DB to sift through the large volume of errors to generate emails was turning out to be a bit much. The server had begun to protest about the load.</li><li.</li></ul><p>Turns out, solving the latter problem solved the former problem as well, because the server load of sending out the email would get more-or-less evenly distributed across the entire day. So, taking inspiration from <a href="">what Stride did recently</a>, I've rolled out something similar for Errorception.</p><p>A <a href="">small JS snippet</a> <a href="">'time' module by Nathan Rajlich (TooTallNate)</a>. It just works!</p><p.</p><p><strong>Almost forgot:</strong> Wish you a great 2013! Happy debugging!</p><img src="" height="1" width="1" alt=""/>Rakesh Pai Cross-Domain JS Errors<p>As I've <a href="">mentioned before</a>, most modern browsers do not provide access to error information in <code>window.onerror</code> for scripts loaded from across domains. This is a very severe restriction, and has limited the usefulness of <a href="">Errorception</a> to some extent. </p><p>Fortunately, a couple of months ago, <a href="">Firefox landed a patch</a> to add this feature, and this has already been shipped with the latest versions of Firefox. Chrome is expected to follow suit very soon, since this has already <a href="">landed in Webkit</a>. </p><p>Unfortunately, this doesn't work out of the box, and will require some tweaking of your server and markup. Fortunately, the changes you need to make are minimal. </p><h2>On the server</h2><p>You will need to <a href="">enable CORS</a> for the external JS file you load. The most minimal way to do this is to set the following HTTP header in the response for your JS file. </p><p><code>Access-Control-Allow-Origin: *</code></p><p>That's the only server-side change you need to make!</p><h2>In the markup</h2><p>Script tags have now got a new non-standard attribute called <code>crossorigin</code>. 
The most secure value for this would be <code>anonymous</code>. So, you'll have to modify your script tags to look like the following.</p><p><code><script src="" <b>supports</a> reporting errors for cross-domain scripts. All WebKit browsers including Chrome is expected to support this very soon. This isn't a problem with IE at all, since IE already reports errors to <code>window.onerror</code> irrespective of the domain (yay, security!). Standardisation for the new attribute <a href="">has been proposed</a> though it hasn't gotten anywhere.</p><p><strong>Update</strong>: Thanks to Matthew Schulkind for pointing out <a href="">in the comments below</a>: <a href="">filed a bug with Mozilla</a> about this.<img src="" height="1" width="1" alt=""/>Rakesh Pai Custom Data With Your Errors<p>The more context you have around an error, the better it'll help you when debugging. And who understands your application's context better than you!</p> <p>Starting today, you will be able to record custom information with your errors. It's super simple too! Just create an <code>_errs.meta</code> object, and add anything you want to it!</p> <script src=""></script> <p>You can pass the <code>_errs.meta</code> object any number of properties, and the values can either be strings, numbers or booleans. Values with other types will be ignored.</p> <div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" width="681" src="" /></a></div> <p>You can even add and remove properties from the <code>_errs.meta</code> object at runtime. So, if your user changes his or her preferences about cats while using your application, you can set <code>_errs.meta.lovesCats = false;</code> when that happens. The tracking script will record the new value <code>lovesCats</code> from that point on whenever an error occurs.</p> <p>This can be a huge help when debugging your code. 
Imagine if you could record which user got the error, which action the user was performing at the time, and on which area of your page!</p> <h2>Other improvements</h2><p.</p> <p>As always, feedback welcome. I can't wait to see what you will do with this ability to record custom data.</p> <h2>Limits</h2><p>The custom data recorded is put into the same store as the one used for <a href="">raw error data</a> and shares the same limits. That is, you can currently put in up to 25 MB of data. Beyond that, older data is purged to make room for the new data.</p><img src="" height="1" width="1" alt=""/>Rakesh Pai Error Data<p>For about 2 weeks now, I've been recording information about each individual error occurrence. This data was previously being discarded. I thought, why throw away perfectly fine data, if it can be useful in any way to help in debugging?</p> <p>A couple of minutes ago, I rolled out a UI to start looking at this data. Now, every error details page will show you all the information that we've captured for every occurrence of the error.</p> <div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" width="600" src="" /></a></div> <p>Give it a spin, and let me know what you think.</p><br /><h2>Limits</h2><p>The reason I used to discard this data previously is because it grows really big really quickly. So, for now, I've decided to cap the amount of logs stored. <strong>Each project gets 25 MB of storage for these raw logs</strong>. If you end up generating more logs than the limit, older log entries are discarded to make room for the new ones.</p><p>Though 25 MB might seem small, it's actually quite a bit. Considering that each log line will realistically be much less than 0.5kb of data, you can store more than 50,000 individual log entries before older records get purged.</p><p>That said, I've not wholly decided on the 25 MB limit, and am open to change my mind.
Feedback welcome.</p><img src="" height="1" width="1" alt=""/>Rakesh Pai | https://feeds.feedburner.com/errorceptionBlog | CC-MAIN-2021-31 | refinedweb | 8,187 | 57.16 |
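Tying the posts above together, here is a minimal sketch of the kind of five-argument onerror handler the updated spec enables. This is purely illustrative and is not Errorception's actual tracking code; the names makeOnErrorHandler and report are invented for the example:

```javascript
// Build a window.onerror-compatible handler around any reporting callback.
// The last two parameters (colno, error) are the additions discussed above;
// older browsers simply invoke the handler without them.
function makeOnErrorHandler(report) {
  return function (message, fileName, lineno, colno, error) {
    report({
      message: message,
      fileName: fileName,
      lineno: lineno,
      colno: typeof colno === "number" ? colno : null, // new 4th argument
      stack: error && error.stack ? error.stack : null // new 5th argument: stack trace
    });
    return false; // don't suppress the browser's default error logging
  };
}

// In a browser you would install it with:
// window.onerror = makeOnErrorHandler(sendToYourServer);
```

Keeping the handler as a plain function, rather than assigning it directly, makes the sketch testable outside a browser, where there is no window object.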
does.
BTW: Your first link to .less is the same as to LESS for ruby - it should be :)
Greets, Gordon
Ruby's implementation generates static css files that are served up by the webserver.
So my question is - aside from runtime file generation, what other benefits exist from the approach?
blog.smoothfriction.nl/...
Summary:
It's quicker, easier to deploy for a .NET dev.
@Joel, @Steve - I can't imagine this would be too difficult with T4, or even a custom MSBuild step. I might look at it, unless they get to it first.
- Not everyone in the .NET sphere will/can install Ruby on their dev machines.
- The ruby impl has trouble parsing Less files created in VS so they need to be built in some other editor.
- We want to adhere to the Less syntax as much as possible, but we are now at a point where we can take things further.
@Joel, @Steve With regards to the HttpHandler, the DotLess project comes with an executable that allows you to create CSS from an input file, so hooking this up to a build script should be a 10 min job.
I do have to say however, that as the HttpHandler supports caching, you're not losing much performance-wise.
Additionally, we are looking to add the ability to pass query string parameters (and environmental info) to the handler to be used as variables in the Less file. This obviously wouldn't be possible as a static compilation step.
class="span-15 prepend-1 colborder"
Using Less/DotLess you can merge these into something more meaningful.
Also mix-ins can be used to bring in sections of nested rules with namespaces, i.e.:
.outer{
content:"ignore me";
.inner{
content:"mix me";
}
}
#mixer{
.outer > .inner;
}
Also mixins with variables are coming that will allow things like:
.rounded_corners (@radius: 5px) {
-moz-border-radius: @radius;
-webkit-border-radius: @radius;
border-radius: @radius;
}
#header {
.rounded_corners;
}
#footer {
.rounded_corners(10px);
}
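For reference, the CSS that the parametric mixin example above compiles to would look roughly like this. This output is sketched by hand from Less's documented behaviour rather than taken from an actual compiler run; note that a parametric mixin such as .rounded_corners does not itself appear in the compiled CSS:

```css
#header {
  -moz-border-radius: 5px;
  -webkit-border-radius: 5px;
  border-radius: 5px;
}
#footer {
  -moz-border-radius: 10px;
  -webkit-border-radius: 10px;
  border-radius: 10px;
}
```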
But I myself still don't think of Less as a complete CSS replacement, and tactics that are good with CSS are still valid. Here are my thoughts:
enginechris.wordpress.com/...
This really should be in the CSS standard so CSS editors don't panic with the @var statements.
Intellisense would probably be for mixins/variables at first, as that's what you'd like to be able to look up easily.
Unmerging and merging Excel cells are indispensable when handling Excel worksheets. This article introduces a solution for unmerging Excel cells in C# in several lines of code. We use an Excel .NET component called Spire.XLS to complete the process.
First we need to complete the preparatory work before unmerging Excel cells in C#:
- Download the Spire.XLS component and install it.
- Add Spire.XLS as a namespace.
Here is an explanation of the code:
Step 1: Create an instance of Spire.XLS.Workbook.
Workbook book = new Workbook();
Step 2: Load the file based on a specified file path.
book.LoadFromFile(@"..\..\abc.xlsx");
Step 3: Get the first worksheet.
Worksheet sheet = book.Worksheets[0];
Step 4: Unmerge the cells.
sheet.Range["A2"].UnMerge();
Step 5: Save the generated file.
book.SaveToFile(@"..\..\result.xlsx", ExcelVersion.Version2010);
Here is the whole code:
static void Main(string[] args) { Workbook book = new Workbook(); book.LoadFromFile(@"..\..\abc.xlsx"); Worksheet sheet = book.Worksheets[0]; sheet.Range["A2"].UnMerge(); book.SaveToFile(@"..\..\result.xlsx", ExcelVersion.Version2010); }
Here is a screenshot of the original file:
And here is the generated file:
| http://www.e-iceblue.com/Tutorials/Spire.XLS/Spire.XLS-Program-Guide/How-to-Unmerge-Excel-Cells-in-C.html | CC-MAIN-2014-35 | refinedweb | 177 | 53.58 |
How to escape curly braces in XAML?
April 17, 2013
If you want to display curly braces in text elements of your XAML UI, you can easily run into problems because curly braces usually indicate markup extensions like data bindings or resources.
Let’s assume we want to display the text {dev news} in a WPF TextBlock. Our first attempt will probably look something like this:
<TextBlock Text="{dev news}"/>
Unfortunately, this causes a syntax error in the XAML parser because this notation indicates a markup extension.
The easy solution is to add an extra pair of leading curly braces acting as escape sequence:
<TextBlock Text="{}{dev news}" />
If you are working in older versions of .NET and WPF or Silverlight, this escape sequence functionality is probably not available, resulting in yet another XAML parser error.
In this case a good workaround is to keep the string containing the curly braces in a view model property and bind to it from XAML.
<TextBlock Text="{Binding Path=TextWithCurlyBraces}" />
public class MyViewModel
{
public string TextWithCurlyBraces
{
get { return "{dev news}"; }
}
} | https://wolfgang-ziegler.com/Blog/how-to-escape-curly-braces-in-xaml | CC-MAIN-2021-25 | refinedweb | 177 | 58.21 |
I'm attempting to rearrange an equation from an answer on the Mathematics StackExchange.
The answer given is this equation:
$$L^2 = (-ab(t)+p)^2-\left(\frac{(-ab(t)+p).(cd(t)-ab(t))}{(cd(t)-ab(t))^2}(cd(t)-ab(t))\right)^2$$
Where $a$, $b$, $c$, $d$, and $p$ are known 2D points, $L$ is a known length, and $t$ is an unknown scalar. $ab(t)$ indicates interpolation between $a$ and $b$.
I am interested in rearranging this to solve for $t$. Here's what I've tried in Sage:
def sqr(var): return var.dot_product(var)

var('ax bx cx dx px ay by cy dy py t L')
a = vector([ax, ay])
b = vector([bx, by])
c = vector([cx, cy])
d = vector([dx, dy])
p = vector([px, py])
g = a - a*t + b*t
h = c - c*t + d*t
u = p - g
v = h - g
eq = L^2 == sqr(u) - sqr((u.dot_product(v)/sqr(v)) * v)
eq.solve(t)
At the `solve` step I have observed it to sit for quite a while without producing a result. Two questions:
Am I inputting the problem correctly?
Is there any way to know if this is likely to terminate in a reasonable time? I have no idea what the solver looks like under the hood, and wouldn't want to wait for some
O(n!) calculation to terminate :)
Hey guys,
I've been trying to convert a java executable (.jar) to a .NET executable (.exe) and the reason I'm here is because I succeeded... Nope, I wish..
My code:
public class Main {
    public static void main(String[] args) {
        System.out.println("Hello, world!");
    }
}
Easy enough right.. So when I export it into a JAR file called HelloWorld.jar and run it in a command prompt (java -jar HelloWorld.jar), it returns "Hello, world!", which is good.
Then I convert the .jar file into a .exe file using jar2exe. The exe file comes out and BOOM, when I run it I get an error saying "The main startup class can not be found".
Also, when I try to use a different program to convert my .jar into a .exe (IKVM), I get the same error, only this time during the conversion which results in nothing.
Any ideas?
PS. I know this is not technically a Java IDE issue, but I couldn't find any better place | http://www.javaprogrammingforums.com/java-ides/39057-export-jar-file-convert-exe-file-main-startup-class-could-not-found.html | CC-MAIN-2015-35 | refinedweb | 168 | 79.46 |
Dogtail creates and uses the directory /tmp/dogtail (it's configurable but only
by root editing a file in /usr) for its scratch files.
It should be possible to override that directory, in the case of frysk's test
framework, to be a directory within frysk's build tree.
----
Hmm, need a testcase for frysk-gtk/tests/ is that even possible?
Upstream bug:
------- Comment #2 from Zack Cerza 2006-06-15 18:52 UTC -------
Actually, you can change the directories that dogtail uses quite easily:
import os
from dogtail.config import config
config.scratchDir = os.path.join(os.environ['HOME'], 'dogtail')
config.logDir = os.path.join(config.scratchDir, 'logs')
config.dataDir = os.path.join(config.scratchDir, 'data')
This will put everything in ~/dogtail/. It also creates the directories.
I thought I had documented this, but it seems I neglected to. I just did,
though, in CVS. | https://sourceware.org/bugzilla/show_bug.cgi?id=2778 | CC-MAIN-2022-40 | refinedweb | 146 | 60.41 |
MQTT C Client for Posix and Windows
The Paho MQTT C Client is a fully fledged MQTT client written in ANSI standard C. It avoids C++ in order to be as portable as possible. A C++ layer over this library is also available in Paho.
In fact there are two C APIs, "synchronous" and "asynchronous", for which the API calls start with MQTTClient and MQTTAsync respectively. The synchronous API is intended to be simpler and more helpful. To this end, some of the calls will block until the operation has completed, which makes programming easier. In contrast, no calls ever block in the asynchronous API; all notifications of API call results are made through callbacks. This makes the API suitable for use in windowed environments like iOS, for instance, where the application is not the main thread of control.
Features
Source
Source tarballs are available from the Git repository, as is the source, of course.
Download
Pre-built binaries for Windows, Linux and Mac are available from the downloads page.
The Windows binaries are built with Visual Studio 2013. If you do not have this installed, you will need to install the Visual C++ Redistributable Packages for Visual Studio 2013.
Development builds can also be downloaded here.
Building from source
Linux
The C client is built for Linux/Unix/Mac with make and gcc. To build:
git clone
cd org.eclipse.paho.mqtt.c.git
make
To install:
sudo make install
Windows
The Windows build uses Visual Studio or Visual C++. Free Express versions are available. To build:
git clone
cd org.eclipse.paho.mqtt.c.git
msbuild "Windows Build\Paho C MQTT APIs.sln" /p:Configuration=Release
To set the path to find msbuild, you can run utility program vcvars32.bat, which is found in a location something like:
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin
Documentation
Reference documentation is online here.
Getting Started
These C clients connect to a broker over a TCP/IP connection, using Posix or Windows networking, threading and memory allocation calls. They cannot be used with other networking APIs. For that, look at the Embedded C client.
Here is a simple example of publishing with the C client synchronous API:
#include "stdio.h" #include "stdlib.h" #include "string.h" #include "MQTTClient.h" #define ADDRESS "tcp://localhost:1883" #define CLIENTID "ExampleClientPub" #define TOPIC "MQTT Examples" #define PAYLOAD "Hello World!" #define QOS 1 #define TIMEOUT 10000L int main(int argc, char* argv[]) { MQTTClient client; MQTTClient_connectOptions conn_opts = MQTTClient_connectOptions_initializer; MQTTClient_message pubmsg = MQTTClient_message_initializer; MQTTClient_deliveryToken token; int rc; MQTTClient_create(&client, ADDRESS, CLIENTID, MQTTCLIENT_PERSISTENCE_NONE, NULL); conn_opts.keepAliveInterval = 20; conn_opts.cleansession = 1; if ((rc = MQTTClient_connect(client, &conn_opts)) != MQTTCLIENT_SUCCESS) { printf("Failed to connect, return code %d\n", rc); exit(-1); } pubmsg.payload = PAYLOAD; pubmsg.payloadlen = strlen(PAYLOAD); pubmsg.qos = QOS; pubmsg.retained = 0; MQTTClient_publishMessage(client, TOPIC, &pubmsg, &token); printf("Waiting for up to %d seconds for publication of %s\n" "on topic %s for client with ClientID: %s\n", (int)(TIMEOUT/1000), PAYLOAD, TOPIC, CLIENTID); rc = MQTTClient_waitForCompletion(client, token, TIMEOUT); printf("Message with delivery token %d delivered\n", token); MQTTClient_disconnect(client, 10000); MQTTClient_destroy(&client); return rc; } | http://www.eclipse.org/paho/clients/c/ | CC-MAIN-2016-36 | refinedweb | 517 | 51.24 |
Football betting
The most common bet in football is the so-called 1×2 – decoded as a win of the home team, a tie, or a win of the away team. For a match, bookmakers present odds for each of the possible outcomes. These odds reflect the probabilities of each outcome as seen by the bookmaker, plus some small margin making the bookie's living. Because we used my account at Bwin, we were using odds in a decimal format with the following relation to implied probability: probability = 1/odds (so odds of 2 mean 50%, odds of 2.5 mean 40% and so on). The game is very easy: if I place a bet on an outcome with odds 2.5 and I win, I get 2.5-times the bet – and zero otherwise. Tempting, right?
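To make the bookmaker's margin concrete, here is that arithmetic in a few lines of Python. The odds below are invented for illustration; they are not quoted from Bwin.

```python
# Decimal odds imply probabilities via probability = 1 / odds.
# Summing the implied probabilities of all three 1x2 outcomes gives
# slightly more than 1 -- the excess is the bookmaker's margin.
odds = {'home win': 2.10, 'draw': 3.40, 'away win': 3.80}

implied = {outcome: 1 / o for outcome, o in odds.items()}
margin = sum(implied.values()) - 1

print(round(implied['home win'], 3))  # 0.476
print(round(margin, 3))               # 0.033
```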
Statistical approach
The simplest way to pick the right bet is by using the expected value. It works as follows: when I place a bet of 100 on a win of the home team with the odds being 4.0, and I estimate the probability that the home team actually wins as 0.3, the expected value of my bet is 120 (100 * 4.0 * 0.3). This is good because 120 is bigger than 100, so I am actually winning some money. The term expected value is a bit tricky because in fact I don't expect to win 120; I rather expect to either win 400 or lose 100. But statistically, in the long run – making hundreds of similar bets – I expect to win 120 per match. But that's not the point here.
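The worked example above, in code:

```python
# A bet of 100 on odds 4.0, with an estimated win probability of 0.3.
stake = 100
odds = 4.0
probability = 0.3

expected_value = stake * odds * probability
print(round(expected_value, 2))  # 120.0
```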
So now the task can be broken down into three parts:
- how to estimate the probability of each outcome
- how to decide how much money to place (100 or rather 10?)
- and at what threshold should I place the bet (is 120 as an expected value of a 100 bet enough?).
First we need some historical data.
Where to get the data
One of the first links in our rather unsophisticated search for “football data” was, which turned out to be just perfect for our needs. It offers weekly updated fixtures, results, odds and many other match statistics for all major European leagues for up to past 20 seasons.
The files are nicely organised, so it is very convenient to get data for specified leagues and seasons. For example the following code gets data from current season (2016/17) of the Premier League. Please note that all the coding was done in Python 3.5.
import posixpath
import pandas as pd

root_url = ''
season = '1617'
league = 'E0'  # Premier League
data_file_suffix = '.csv'

remote_file = posixpath.join(root_url, season,
                             '{0}{1}'.format(league, data_file_suffix))
data = pd.read_csv(remote_file)
print(data.shape)
Predicting match results
Once we had data, we could focus on building a predictive model for match outcomes. One of the first models for this was developed by Maher [1]. Imagine a match played by team i at home and team j away. Maher modelled the number of goals scored by the home team as a variable with a Poisson distribution (Xij), and similarly for the number of goals scored by the away team (Yij). Each of the variables is influenced by different parameters (𝜆 and 𝜇) and they are also assumed to be independent.
The number of goals scored by the home team (Xij) is driven by the attacking strength of the home team αi, the defensive strength of the away team βj, and some home field advantage 𝛾. On the other hand, the number of goals scored by the away team (Yij) is driven by the attacking strength of the away team (αj) and the defensive strength of the home team (βi). Or, in other words, by a nice formula:
P(Xij = x, Yij = y|𝜆, 𝜇) = Poisson(x|𝜆) · Poisson(y|𝜇),
where 𝜆 = 𝛾 · αiβj and 𝜇 = αjβi.
Practically, it means that we take some relevant historical results and, based on them, estimate the attacking strength α and defensive strength β of each team, as well as a general home field advantage 𝛾, using maximum likelihood.
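As a sketch of what the fitted model then computes for a single match, here is Maher's formula written out directly. The parameter values are made up; in the article they come out of the maximum-likelihood fit.

```python
from math import exp, factorial

def poisson_pmf(k, rate):
    return rate ** k * exp(-rate) / factorial(k)

alpha_i, beta_i = 1.3, 0.9   # home team: attack, defence
alpha_j, beta_j = 1.0, 1.1   # away team: attack, defence
gamma = 1.4                  # home field advantage

lam = gamma * alpha_i * beta_j   # expected goals of the home team
mu = alpha_j * beta_i            # expected goals of the away team

# probability of a 2-1 home win, using the independence assumption
p_2_1 = poisson_pmf(2, lam) * poisson_pmf(1, mu)
print(round(p_2_1, 3))
```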
Maher’s model was further improved by Dixon and Coles [2]. Firstly they realised that in reality the low score ties (0-0, 1-1) happen more often and the results 0-1 and 1-0 happen less often than they should according to Maher’s model. To correct for that they introduced a function 𝜏(x, y, 𝜆, 𝜇, 𝜌) defined as follows. Note that we get Maher’s model for 𝜌=0.
def tau_function(x, y, lambdaa, mu, rho):
    if x == 0 and y == 0:
        tau = 1 - lambdaa * mu * rho
    elif x == 0 and y == 1:
        tau = 1 + lambdaa * rho
    elif x == 1 and y == 0:
        tau = 1 + mu * rho
    elif x == 1 and y == 1:
        tau = 1 - rho
    else:
        tau = 1
    return tau
Second improvement was that they considered unrealistic that the attacking and defensive strength do not evolve in time so they introduced a time-variant version of the parameters αit and βit. Technically they used exponential forgetting so that the recent results are more important for the model fitting than the historical ones. How quickly the importance of historical results decreases or in other words how quickly they are forgotten depends on a parameter 𝜀. And again we get Maher’s time-invariant model for 𝜀=0.
import numpy as np

def get_time_weights(dates, epsilon):
    delta_days = [(max(dates) - d).days for d in dates]
    # future games not relevant
    return list(map(lambda x: 0 if x < 0 else np.exp(-1 * epsilon * x),
                    delta_days))
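To get a feel for the effect, these are the weights a result receives at one of the epsilon values used later in the article:

```python
from math import exp

epsilon = 0.002
for days_ago in (0, 30, 180, 365):
    weight = exp(-epsilon * days_ago)
    print(days_ago, round(weight, 3))
# 0 1.0
# 30 0.942
# 180 0.698
# 365 0.482
```

So with epsilon = 0.002, a match from a year ago carries roughly half the weight of yesterday's result.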
When we put all of that together we are about to calculate maximum likelihood estimates for parameters of the following model.
P(Xij = x, Yij = y|𝜆, 𝜇, 𝜌) = 𝜏(x, y, 𝜆, 𝜇, 𝜌) · Poisson(x|𝜆) · Poisson(y|𝜇)
where 𝜆 = 𝛾 · αiβj and 𝜇 = αjβi, and all α's and β's vary in time. Because we don't want to optimise a product of exponential functions, we will use the log-likelihood function, and because it's easier to minimise than maximise (using scipy.optimize.minimize), we will find the estimates by minimising the negative log-likelihood function.
The log-likelihood function looks something like this:
def time_ln_likelihood(values):
    return sum([(value['time_weights'] *
                 (np.log(tau_function(value['home_goals'], value['away_goals'],
                                      value['lambda'], value['mu'],
                                      value['rho'])) +
                  (-value['lambda']) +
                  value['home_goals'] * np.log(value['lambda']) +
                  (-value['mu']) +
                  value['away_goals'] * np.log(value['mu'])))
                for value in values])
Before the optimization we need to make sure the model will be identified by adding the following constraint. The identification condition establishes that the log-likelihood has a unique global maximum.
def norm_alphas(params, number_of_teams):
    return sum(params[:number_of_teams]) / number_of_teams - 1
And once that is done we can run the optimization, where we address teams by IDs and not names and x0 is an array with the initial set of parameter estimates.
import scipy.optimize as op  # alias assumed by the snippet below

# optimize
minimize_result = op.minimize(
    fun=lambda *args: -time_ln_likelihood(*args),
    x0=x0,
    args=(input_data_frame['HomeId'].tolist(),   # home teams
          input_data_frame['AwayId'].tolist(),   # away teams
          input_data_frame['FTHG'].tolist(),     # home goals
          input_data_frame['FTAG'].tolist(),     # away goals
          number_of_teams,
          get_time_weights(input_data_frame['Date'].tolist(), epsilon)),
    constraints=({'type': 'eq',
                  'fun': lambda *args: norm_alphas(*args),
                  'args': [number_of_teams]}))
This will give us all the parameters needed for our model, so we can easily calculate the probability of each possible score for any future match. If we have a matrix game_probabilities of all possible match results (with goals of the home team in rows and goals of the away team in columns) then the following simple aggregation gives us the probabilities of our 1×2 outcomes.
win_probability = sum(sum(np.tril(game_probabilities, -1)))   # triangle-lower for home win
draw_probability = game_probabilities.trace()                 # diagonal for draw
loss_probability = sum(sum(np.triu(game_probabilities, 1)))   # triangle-upper for home loss
Betting strategy
Having mentioned the expected value approach to betting, let me stress that a good betting strategy is not about correctly predicting the match results. It is about finding opportunities where our probability is higher than the bookie's probability implied by the odds. By definition these opportunities are typically present in outcomes that have low probability. So we do not expect to bet on favourites, where the odds are very low, but rather on outsiders.
How much money to bet and when to actually place the bet is called a betting strategy. We decided to try three different methods to calculate the size of a bet:
- Fixed: betting a fixed amount of money scale_constant on each match
- Kelly's [3]: bankroll * ((probability * odds - 1) / (odds - 1)), where bankroll is simply the overall budget, which reflects the profits and losses over time
- Variance-adjusted: min(bankroll, scale_constant / (2 * odds * (1 - probability)))
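Written out as plain functions (the function names and the example numbers are mine, not from the article's code):

```python
def fixed_bet(scale_constant):
    return scale_constant

def kelly_bet(bankroll, probability, odds):
    return bankroll * ((probability * odds - 1) / (odds - 1))

def variance_adjusted_bet(bankroll, scale_constant, probability, odds):
    return min(bankroll, scale_constant / (2 * odds * (1 - probability)))

# a 1000-unit bankroll, our earlier estimate of 0.3 against odds of 4.0
print(round(kelly_bet(1000, 0.3, 4.0), 1))  # 66.7
```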
The threshold for placing a bet determines how risky the bets we make are. The more bets we make, the closer we should theoretically get to the expected value. And because the number of accepted bets declines as the threshold grows, the final payoff becomes more volatile.
What works best
At this stage we have all the building blocks needed, so the next step is to give it a try. We decided to test our solution on the Premier League because it was the first data set on. To determine which prediction method and which betting strategy should be used, we ran a grid search, which simply tried all specified parameter settings and stored how well they performed. The grid consisted of the following 4 800 combinations.
from numpy import append, arange  # assumed by the original snippet

MODEL_FITTING_GRID = {"method": ['Dixon-Coles', 'Maher'],
                      "epsilon": append(arange(.0005, .00225, .00025).tolist(),
                                        [0.0])}
BETTING_GRID = {"betting_strategy": ['fixed_bet', 'Kelly', 'variance_adjusted'],
                "bet_threshold": arange(1.05, 1.55, .005)}
We tested this grid using a rolling prediction over the 2015/2016 season, taking data back to 2013 and also considering The Championship (the second highest league in England), so that matches of teams that had been promoted or relegated over time were included. The parameter we wanted to maximize is a simple return on investment (ROI), so in our case (total payoffs – total bets) / total bets.
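For clarity, the ROI definition applied to a toy set of settled bets (the numbers are invented):

```python
bets = [100, 100, 100, 100]    # stakes placed
payoffs = [250, 0, 0, 180]     # what each bet paid back

roi = (sum(payoffs) - sum(bets)) / sum(bets)
print(roi)  # 0.075
```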
The winning combination turned out to be the time-variant Dixon-Coles model with epsilon equal to 0.002, Kelly's betting strategy and a bet threshold of 1.1. Truth be told, the best bet threshold was higher, but we wanted to place more bets.
So how is this combination working in the current season?
We can see that we started off really well but then got hit by a series of unsuccessful rounds in which we lost almost 50% of our initial budget. A few recent rounds have put us back in the game, but we are still far from "easily making money".
When I mentioned this exercise to my college friends, who have been meddling with sports betting, they told me that 1×2 bets on the Premier League on Bwin are the hard mode of sports betting. So if you really want to give it a try, don't follow us – try to find some niche league and bet on the number of goals, for instance.
In the light of that information we tried our approach on a few other leagues, namely the German Bundesliga, French Ligue 1, Belgian Jupiler League and Portuguese Primeira Liga. Again we ran the grid search over the past and chose the combination of parameters with the highest ROI and a reasonable number of bets.
The best combination for the Bundesliga was the time-variant Maher's model with epsilon equal to 0.00075, Kelly's betting strategy and a bet threshold of 1.47. This had an ROI of 0.21. And the performance in the current season so far?
For the Ligue 1 we chose time-variant Maher’s model with epsilon being 0.00175, Kelly’s betting strategy and threshold of 1.41 with a 0.54 ROI in season 2015/2016.
For Belgian Jupiler League we decided for time-variant Maher’s model with epsilon being 0.00175, Kelly’s betting strategy and threshold of 1.44 with 0.19 ROI in season 2015/2016.
And finally the Primeira Liga with time-variant Maher, epsilon of 0.00225, Kelly’s betting strategy and threshold of 1.11, which was losing the least in 2015/2016 – ROI of -0.09.
Final thoughts
As we can see, the models can win some money, even though with huge volatility. That is positive news given that we only tried basic statistical approaches. In the future one might play with some machine learning approach and add more predictors into the model – e.g. the weather, or even data on individual players.
On the other hand, we haven't dealt with automatically placing the bets in higher volumes, which is often a very tricky part of this business if you don't want to spend hours making bets.
So, all in all, despite all the fun we had with this, we'll probably stick to our jobs for the moment. What about you, won't you give it a try? 🙂
Big thanks goes to my friends Martin and Michal from aLook Analytics who turned my simple Python scripts into a fully automated betting tool.
I would also like to mention three great online sources of information and data:
- Historical football data
- Short paper by H. Langseth with all important knowledge
- Great blog about sports betting and many more
References
[1] Mike J. Maher. Modelling association football scores. Statistica Neerlandica, 36(3):109–118, 1982
[2] Mark J. Dixon and Stuart G. Coles. Modeling association football scores and inefficiencies in the football betting market. Applied Statistics, 46:265–280, 1997
[3] John L. Kelly. A new interpretation of information rate. IRE Transactions on Information Theory, 2(3):185–189, 1956
[4] Designed by Freepik
Hey, congrats on this exciting article!
It seems that the model you have implemented does not take into account the home field advantage. How could you integrate this into the code? I tried to use your code to predict some football scores but there are some parts I missed (about the x0).
Have you published the full code on GitHub or any similar platform?
Thanks!
Hi,
thanks for your interest!
Home field advantage is included in the model. See “Number of goals scored by home team (Xij) is driven by attacking strength of the home team αi, defensive strength of the away team βj, and some home field advantage 𝛾.”
x0 is just an array with initial parameter estimates for the optimisation. You can for example start with
x0 = [1 for _ in range(3*number_of_teams + 1)], i.e. all parameters equal to 1. See scipy.optimize.minimize.
Let me know if that helped.
Adam
Thank you for this clarification. I read another paper which helped me a lot with the implementation ().
I still have several questions about the log-likelihood function. Regarding how the minimization function works, sometimes the input values can generate a negative tau weight. Thus, the log is not defined in this case. How do you deal with this?
Btw, did you try to tune the x0 vector in order to better fit the function? What about the time weight function?
Many thanks for you answer!
Hi Jean-Jacques,
So far I’ve not run into tau function returning some negative number so I’m not dealing with it in any way :). Also I use the last element of x0 (rho) set to .03, contrary to what I wrote in my earlier comment (sorry for that). Maybe that’s why it doesn’t get negative.
The role of x0 is just to give the optimisation a starting point. So if you’d be smarter about it you can get the optimisation to converge faster.
I believe that the time weight function can be improved. I simply used a standard exponential forgetting as a starting point.
Good luck with your model. Let me know how it goes :).
A.
Ok, it's very strange, because for games with low scores (0:0 for example) I can get negative values for tau. I'll investigate this. In some other articles I read that exponentiating the variables can help the solver. Are you doing sorcery like this? 🙂
In the code snippet you gave, you put a reference to value['lambda'] but you did not mention how lambda is computed. Is it only the product of 𝛾, αi and βj contained in the array argument "sent" by op.minimize?
Thanks again for your precious reply, it really helps me a lot.
Yes, 𝜆 = 𝛾 · αiβj and 𝜇 = αjβi | https://blog.alookanalytics.com/2017/01/09/beating-bookies/ | CC-MAIN-2021-04 | refinedweb | 2,743 | 64.2 |
The QDeclarativeScriptString class encapsulates a script and its context. More...
#include <QDeclarativeScriptString>
This class was introduced in Qt 4.7.
The QDeclarativeScriptString class encapsulates a script and its context.
QDeclarativeScriptString is used to create QObject properties that accept a script "assignment" from QML.
Normally, the following QML would result in a binding being established for the script property; i.e. script would be assigned the value obtained from running myObj.value = Math.max(myValue, 100)
MyType {
    script: myObj.value = Math.max(myValue, 100)
}
If instead the property had a type of QDeclarativeScriptString, the script itself -- myObj.value = Math.max(myValue, 100) -- would be passed to the script property and the class could choose how to handle it. Typically, the class will evaluate the script at some later time using a QDeclarativeExpression.
QDeclarativeExpression expr(scriptString.context(),
                            scriptString.script(),
                            scriptString.scopeObject());
expr.value();
See also QDeclarativeExpression.
Constructs an empty instance.
Copies other.
Returns the context for the script.
See also setContext().
Returns the scope object for the script.
See also setScopeObject().
Returns the script text.
Sets the context for the script.
Sets the scope object for the script.
See also scopeObject().
Sets the script text.
Assigns other to this. | https://doc.qt.io/archives/qt-4.7/qdeclarativescriptstring.html | CC-MAIN-2021-17 | refinedweb | 197 | 54.69 |
Rendering items on the DOM individually can cause a significant performance lag for users, especially when they’re scrolling through large lists. To make scrolling more efficient, we should use a virtual scroll list, which increases page load speed and prevents the web application from stuttering.
A virtual scroll list is similar to a standard scroll list, however, only the data in the user’s current view is rendered at any moment. As a user scrolls down a page, new items are rendered as older items are removed.
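The bookkeeping behind that idea can be sketched in a few lines of plain JavaScript. This is only an illustration of the windowing principle, not code from the vue-virtual-scroll-list library:

```javascript
// From the scroll offset, work out which slice of the full data set
// actually needs to be in the DOM right now.
function visibleRange(scrollTop, itemHeight, viewportHeight, totalCount) {
  const start = Math.floor(scrollTop / itemHeight);
  const visible = Math.ceil(viewportHeight / itemHeight);
  const end = Math.min(totalCount, start + visible + 1); // one buffer row
  return { start, end };
}

// 100,000 items of 40px each, a 600px viewport scrolled to 4,000px:
const range = visibleRange(4000, 40, 600, 100000);
// only items 100..115 are rendered; everything else stays virtual
console.log(range.start, range.end); // 100 116
```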
In this article, we'll explore vue-virtual-scroll-list, an amazing library for creating virtual scroll lists in Vue.js. Let's get started!
Rendering content in vue-virtual-scroll-list
The vue-virtual-scroll-list library has two primary methods for rendering a webpage's content into a list: item mode and v-for mode.
item mode is ideal for rendering static content. Once content is appended on the DOM, item mode frees up the memory that was being used. If you change the data, you'll need to call forceRender() and start the process again.
To render dynamic content, the better choice is v-for mode. In v-for mode, the data provided to the list is referenced inside the memory. Therefore, when the data changes, list items are re-rendered and the context is maintained.
Let's take a closer look at the vue-virtual-scroll-list library by comparing performance with and without using a virtual scroll list.
First, we'll set up a new Vue.js project and install vue-virtual-scroll-list. Then, we'll create a list using randomly generated data. Finally, we'll render our list with and without virtual scrolling, comparing the performance of each.
Setting up a Vue.js project
First, make sure you have Vue.js installed on your machine. Create a new Vue.js project with the following command:
vue create virtual-scroll-demo
Once the project is set up, install the vue-virtual-scroll-list library:
npm install vue-virtual-scroll-list --save
Now, our project has the following structure:
Generating a list
Now that we have the base for our project set up, let’s get started on the foundation for creating both of our lists.
Navigate to your /src folder and create a file called data.js. Let's add the following simple function, which generates random data, to data.js:
let idCounter = 0;

export function getData(count) {
  const data = [];
  for (let index = 0; index < count; index++) {
    data.push({
      id: String(idCounter++),
      text: Math.random()
        .toString(16)
        .substr(10),
    });
  }
  return data;
}
Next, we'll create a new file called Item.vue, which is the item component that we'll render. In Item.vue, we'll include the following code block, which creates a template and styling for our list, as well as props that retrieve and display the data generated above:
<template> <div class="item"> <div class="id">{{ source.id }} - {{ source.text }}</div> </div> </template> <script> export default { name: 'item', props: { source: { type: Object, default() { return {} } } } } </script> <style scoped> .item { display: flex; flex-direction: column; border-bottom: 1px solid lightgrey; padding: 1em; } </style>
Rendering a list without virtual scroll
Now that we have a list created, let's render the list items on our DOM without using the vue-virtual-scroll-list. Add the following code to App.vue:
<template> <div id="app"> <div class="wrapper"> <div class="list"> <p v- {{item}} </p> </div> <>
In the code block above, we rendered 100,000 items into our DOM. Let’s see how our list will perform with this much data and no virtual scroll. Start the project with the following npm command:
npm run serve
We’ll get the following output:
When we check the inspect element in the browser, we'll see that all of the HTML elements have been appended to the browser DOM, as seen in the image below:
Appending elements in the browser DOM will increase the DOM’s size. Therefore, the browser will require more time to append each item to the DOM, potentially causing a significant performance lag. Let’s look closely at the amount of time it took the browser to append our list to the DOM:
The event DOMContentLoaded fired after 22 seconds, meaning the browser tab required 22 seconds to load before displaying the final rendered list. Similarly, as seen in the image below, rendering our list consumed 128 MB of memory:
Rendering a list with virtual scroll
Now, let's try rendering our list using a virtual scroll. Import the vue-virtual-scroll-list package in main.js:
import Vue from "vue"; import App from "./App.vue"; Vue.config.productionTip = false; import VirtualList from "vue-virtual-scroll-list"; Vue.component("virtual-list", VirtualList); new Vue({ render: (h) => h(App), }).$mount("#app");
Next, we'll render the data for the items inside the virtual-list component. Let's change our App.vue file to look like the following code block:
<template> <div id="app"> <div class="wrapper"> <virtual-list <>
Note that the data props are required for the virtual list to render the items. Running the code block above will give us the following output:
We can see in the image below that only a few items are rendered at one time. When the user scrolls down, newer items are rendered:
Now, our DOM tree is much smaller than before! When we render our virtual scroll list, DOMContentLoaded fires much faster!
As seen in the image above, the event fired in only 563 milliseconds. Similarly, our operation consumed only 79 MB of memory, which is much less than when we didn’t use a virtual scroll.
Conclusion
Now you know how to create a virtual scrolling list in Vue.js using the vue-virtual-scroll-list library!
In this tutorial, we created a static list that uses randomly generated data, then implemented it in our Vue.js application, comparing its performance with and without using a virtual scroll.
Virtual scrolling lists are highly performant, especially when you have a large list of items on your webpage. Using a virtual scroll list can increase the page load speed and improve the user experience overall!
Its called "Fundamentals of C++" by Lambert
Published in 2001? It might be a little old.
has that much in the programming world changed since then?
No, but many books that were published only a couple years after the standard was released don't use many of the features of the standard library.
I found a table of contents that didn't look too bad, though, so it's hard to say without seeing the actual book.
Well it can't be old, cause it shows using \n rather than endl;
What does \n versus endl have to do with being old? Both options have been available for a long time.
I downloaded the student material from, and it looks worse than I thought, but I'm not sure if that code is related to the book.
Does the code use <iostream.h>? If so, it is very old. Does it use apvector? If so, you might not want to use that to learn. Does it use C style strings?
Maybe you could post a sample program from
Well that address is not the book though. Here's the address. I meant to say "Fundamentals of C++: Understanding Programming and Problem Solving". Here's a sample:
// Program file: chbook.cpp
// This program updates a checkbook.
#include <iostream.h>
#include <iomanip.h>
int main ()
{
double starting_balance, ending_balance, trans_amount;
char trans_type;
etc..............
Bad. That uses the pre-standard header files. Some modern compilers don't even provide them anymore.
All the buzzt!
CornedBeeCornedBee
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law
>> #include <iostream.h>
That's how you know the book is really out of date. That header is pre-standard and doesn't work at all on some modern compilers.
You'd learn something if you read it, but there are much better books out there that won't teach you old/bad habits that you'll have to break if you continue programming.
K well that book was borrowed from school.........btw I am wanting/thinking of going to college in computer science for computer engineering and software engineering and was wondering if I would need to learn C++?
>was wondering if I would need to learn C++?
uh let's say yes for that one!
I have a book called Practical C, published in 1991 with minor corrections in 1992. It focuses on good programming style. I wonder how much has changed since then in terms of style.
"When your work speaks for itself - don't interrupt!"
-Samantha Ingraham.
I don't care what anyone says,
C/C++ created the WORLD!!!
You do not need to learn C++ before going to college and getting one of those degrees. If you know which university you want to go to and if you know that it uses C++ in its classes, then it could help to start learning on your own. I would find out what books that college (or any local university you'd consider going to) uses.
If they teach Java, then learning C++ could be beneficial, but it might be better to try to learn Java on your own instead.
C, C++ and Java are all different languages. If you want to learn one, pick one and go for it, it can't hurt.
The style of programming is different in C and in C++, so if you're going to learn C++, don't read the book on C style. | http://cboard.cprogramming.com/cplusplus-programming/95837-wondering-if-book-any-good.html | CC-MAIN-2015-32 | refinedweb | 633 | 82.95 |
Thanks Filip,
I've never looked at that before. I'd just looked at the class files
thinking it might be in there. I'll have a good read of it now that I have
it ;-)
Ciao
Derek.
-----Original Message-----
From: Filip Hanik (lists) [mailto:devlists@hanik.com]
Sent: Friday, 27 February 2004 12:26 PM
To: Tomcat Users List
Subject: RE: Can someone direct me to the documentation.
why document it again, when you have it in the specification :)
Filip
-----Original Message-----
From: Derek Clarkson [mailto:Derek.Clarkson@lonelyplanet.com.au]
Sent: Thursday, February 26, 2004 5:20 PM
To: Tomcat Users List
Subject: Can someone direct me to the documentation.
Hi all,
I've found over time that the Apache and associated projects web site is
either extremely good, or extremely bad when it comes to finding certain
pieces of documentation. The lastest one that's been driving me nuts is
wanting to find a reference to the tags that can be set inside the <servlet>
namespace in a web.xml file. Specifically I was hunting for details on the
<load-on-startup> tag, what it did exactly and what the numbers ment. I
seached the Apache sites, the web, everything I could find and all I got was
some references in various news groups. It was enough to tell me what I
needed to know, but I would still like to know where the offical reference
is for this part of the web.xml file.
Can anyone point me to a URL ?
Ciao
Derek.
---
Incoming mail is certified Virus Free.
Checked by AVG anti-virus system ().
Version: 6.0.594 / Virus Database: 377 - Release Date: 2/24/2004
---------------------------------------------------------------------
To unsubscribe, e-mail: tomcat-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: tomcat-user-help@jakarta.apache.org | http://mail-archives.apache.org/mod_mbox/tomcat-users/200402.mbox/%3C82E30406384FFB44AFD1012BAB230B55BC39B3@shiva.au.lpint.net%3E | CC-MAIN-2014-23 | refinedweb | 327 | 64.61 |
heapq – Heap Sort Algorithm
A heap is a tree-like data structure in which the child nodes have a sort-order relationship with the parents. A max-heap ensures that the parent is larger than or equal to both of its children. A min-heap requires that the parent be less than or equal to its children. Python's heapq module implements a min-heap.
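Concretely, the heap lives in a plain Python list, with the children of the node at index k stored at indexes 2*k + 1 and 2*k + 2. A quick check of the min-heap invariant:

```python
import heapq

heap = [19, 9, 4, 10, 11]
heapq.heapify(heap)

# every parent is <= both of its children
for k in range(len(heap)):
    for child in (2 * k + 1, 2 * k + 2):
        if child < len(heap):
            assert heap[k] <= heap[child]

print(heap[0])  # the smallest value is always at index 0, here 4
```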
Example Data
The examples in this section use the data in heapq_heapdata.py. The heap output is printed using heapq_showtree.py.
import math
from io import StringIO


def show_tree(tree, total_width=36, fill=' '):
    """Pretty-print a tree."""
    output = StringIO()
    last_row = -1
    for i, n in enumerate(tree):
        if i:
            row = int(math.floor(math.log(i + 1, 2)))
        else:
            row = 0
        if row != last_row:
            output.write('\n')
        columns = 2 ** row
        col_width = int(math.floor(total_width / columns))
        output.write(str(n).center(col_width, fill))
        last_row = row
    print(output.getvalue())
    print('-' * total_width)
    print()
Creating a Heap
There are two basic ways to create a heap: heappush() and heapify().
import heapq
from heapq_showtree import show_tree
from heapq_heapdata import data

heap = []
print('random :', data)
print()

for n in data:
    print('add {:>3}:'.format(n))
    heapq.heappush(heap, n)
    show_tree(heap)
When
heappush() is used, the heap sort order of the elements is
maintained as new items are added from a data source.
$ python3 heapq_heappush.py random : [19, 9, 4, 10, 11] add 19: 19 ------------------------------------ add 9: 9 19 ------------------------------------ add 4: 4 19 9 ------------------------------------ add 10: 4 10 9 19 ------------------------------------ add 11: 4 10 9 19 11 ------------------------------------
If the data is already in memory, it is more efficient to use
heapify() to rearrange the items of the list in place.
import heapq from heapq_showtree import show_tree from heapq_heapdata import data print('random :', data) heapq.heapify(data) print('heapified :') show_tree(data)
The result of building a list in heap order one item at a time is the
same as building an unordered list and then calling
heapify().
$ python3 heapq_heapify.py random : [19, 9, 4, 10, 11] heapified : 4 9 19 10 11 ------------------------------------
Accessing the Contents of a Heap¶
Once the heap is organized correctly, use
heappop() to remove the
element with the lowest value.
import heapq from heapq_showtree import show_tree from heapq_heapdata import data print('random :', data) heapq.heapify(data) print('heapified :') show_tree(data) print() for i in range(2): smallest = heapq.heappop(data) print('pop {:>3}:'.format(smallest)) show_tree(data)
In this example, adapted from the stdlib documentation,
heapify() and
heappop() are used to sort a list of
numbers.
$ python3 heapq_heappop.py random : [19, 9, 4, 10, 11] heapified : 4 9 19 10 11 ------------------------------------ pop 4: 9 10 19 11 ------------------------------------ pop 9: 10 11 19 ------------------------------------
To remove existing elements and replace them with new values in a
single operation, use
heapreplace().
import heapq from heapq_showtree import show_tree from heapq_heapdata import data heapq.heapify(data) print('start:') show_tree(data) for n in [0, 13]: smallest = heapq.heapreplace(data, n) print('replace {:>2} with {:>2}:'.format(smallest, n)) show_tree(data)
Replacing elements in place makes it possible to maintain a fixed-size heap, such as a queue of jobs ordered by priority.
$ python3 heapq_heapreplace.py start: 4 9 19 10 11 ------------------------------------ replace 4 with 0: 0 9 19 10 11 ------------------------------------ replace 0 with 13: 9 10 19 13 11 ------------------------------------
Data Extremes from a Heap¶
heapq also includes two functions to examine an iterable and find
a range of the largest or smallest values it contains.
import heapq from heapq_heapdata import data print('all :', data) print('3 largest :', heapq.nlargest(3, data)) print('from sort :', list(reversed(sorted(data)[-3:]))) print('3 smallest:', heapq.nsmallest(3, data)) print('from sort :', sorted(data)[:3])
Using
nlargest() and
nsmallest() is efficient only for
relatively small values of n > 1, but can still come in handy in a few
cases.
$ python3 heapq_extremes.py all : [19, 9, 4, 10, 11] 3 largest : [19, 11, 10] from sort : [19, 11, 10] 3 smallest: [4, 9, 10] from sort : [4, 9, 10]
Efficiently Merging Sorted Sequences¶
Combining several sorted sequences into one new sequence is easy for small data sets.
list(sorted(itertools.chain(*data)))
For larger data sets, this technique can use a considerable amount of
memory. Instead of sorting the entire combined sequence,
merge()
uses a heap to generate a new sequence one item at a time,
determining the next item using a fixed amount of memory.
import heapq import random random.seed(2016) data = [] for i in range(4): new_data = list(random.sample(range(1, 101), 5)) new_data.sort() data.append(new_data) for i, d in enumerate(data): print('{}: {}'.format(i, d)) print('\nMerged:') for i in heapq.merge(*data): print(i, end=' ') print()
Because the implementation of
merge() uses a heap, it consumes
memory based on the number of sequences being merged, rather than the
number of items in those sequences.
$ python3 heapq_merge.py 0: [33, 58, 71, 88, 95] 1: [10, 11, 17, 38, 91] 2: [13, 18, 39, 61, 63] 3: [20, 27, 31, 42, 45] Merged: 10 11 13 17 18 20 27 31 33 38 39 42 45 58 61 63 71 88 91 95
See also
- Standard library documentation for heapq
- Wikipedia: Heap (data structure) – A general description of heap data structures.
- Priority Queue – A priority queue implementation from
Queuein the standard library. | https://pymotw.com/3/heapq/index.html | CC-MAIN-2018-51 | refinedweb | 872 | 61.77 |
?
originally javax was intended to be for extensions, and sometimes things would be promoted out of javax into java.
One issue was Netscape (and probably IE) limiting classes that could be in the java package.
When Swing was set to “graduate” to java from javax there was sort of a mini-blow up because people realized that they would have to modify all of their imports. Given that backwards compatibility is one of the primary goals of Java they changed their mind.
At that point in time, at least for the community (maybe not for Sun) the whole point of javax was lost. So now we have some things in javax that probably should be in java… but aside from the people that chose the package names I don’t know if anyone can figure out what the rationale is on a case-by-case basis.
java packages are “base”, and javax packages are extensions.
Swing was an extension because AWT was the original UI API. Swing came afterwards, in version 1.1.
The javax namespace is usually (that’s a loaded word) used for standard extensions, currently known as optional packages. The standard extensions are a subset of the non-core APIs; the other segment of the non-core APIs obviously called the non-standard extensions, occupying the namespaces like com.sun.* or com.ibm.. The core APIs take up the java. namespace.
Not everything in the Java API world starts off in core, which is why extensions are usually born out of JSR requests. They are eventually promoted to core based on ‘wise counsel’.
The interest in this nomenclature, came out of a faux pas on Sun’s part – extensions could have been promoted to core, i.e. moved from javax.* to java.* breaking the backward compatibility promise. Programmers cried hoarse, and better sense prevailed. This is why, the Swing API although part of the core, continues to remain in the javax.* namespace. And that is also how packages get promoted from extensions to core – they are simply made available for download as part of the JDK and JRE.
Javax used to be only for extensions. Yet later sun added it to the java libary forgetting to remove the x. Developers started making code with javax. Yet later on in time suns decided to change it to java. Developers didn’t like the idea because they’re code would be ruined… so javax was kept.
java.* packages are the core Java language packages, meaning that programmers using the Java language had to use them in order to make any worthwhile use of the java language.
javax.* packages are optional packages, which provides a standard, scalable way to make custom APIs available to all applications running on the Java platform. | https://exceptionshub.com/javax-vs-java-package.html | CC-MAIN-2022-05 | refinedweb | 461 | 63.19 |
XPath is an expression language for addressing parts of an XML document. You can think of XPath expressions as sort of like regular expressions for XML. They let you "pull out" parts of an XML document based on patterns. In the case of XPath, the patterns are more concerned with structural information than with character content and the values returned may be either simple text or "live" DOM nodes. With XPath, we can query an XML document for all of the elements with a certain name or in a certain parent-child relationship. We can also apply fairly sophisticated tests or predicates to the nodes, which allows us to construct complex queries such as this one: give me all of the Animals with a Weight greater than the number 400 and a Temperament of irritable whose animalClass attribute is mammal.
The full XPath specification has many features and includes both a compact and more verbose syntax. We won't try to cover it all here, but the basics are easy and it's important to know them because XPath expressions are at the core of XSL transformations and other APIs that refer to parts of XML documents. The full specification does not make great bedtime reading but can be found at.
An XPath expression addresses a Node in an XML document tree. The node may be an element (possibly with children) like <Animal>...</Animal> or it may be a lower-level document node representing an attribute (e.g., animal), a CDATA block, or even a comment. All of the structure of an XML document is accessible through the XPath syntax. Once we've addressed the node, we can either reduce the content to a text string (as we might with a simple element like Name) or we can access it as a proper DOM tree to further read or manipulate it.
Table 24-2 shows the most basic node-related syntax.
Syntax
Example
Description
/Name
/Inventory/Animal
All Animal nodes under /Inventory
//Name
//Animal
All Animal nodes anywhere in document. A FoodRecipe/Animal would also match
Name/*
/Inventory/*
All child nodes of Inventory (Animals and any other elements directly under Inventory)
@Name
//Animal/@animalClass
All animalClass attributes of Animals
.
/Inventory/Animal/.
The current node (all Animals)
..
/Inventory/Animal/..
The parent node (Inventory)
Nodes are addressed with a slash-separated path based on name (e.g., /Inventory/Animal refers to the set of all Animal nodes under the Inventory node). If we want to list the names of all Animals, we would use /Inventory/Animal/Name. The // syntax matches a node anywhere in a document, at any level of nesting, so //Name would match the name elements of Animals, FoodRecipes, and possibly many other elements. We could be more specific, using //Animal/Name to match only Name elements whose parent is an Animal element. The at sign (@) matches attributes. This becomes much more useful with predicates, which we describe next. Finally, the familiar . and .. notation can be used to "move" relative to a node; read on to see how this is used.
Predicates let us apply a test to a node. Nodes that pass the test are included in the result set or used to select other nodes (child or parent) relative to them. There are many types of tests available in XPath. Table 24-3 lists a few examples.
[n]
/Inventory/Animal[1]
Select the nth element of a set. (Starts with 1 rather than 0.) For example, select the first Animal in the Inventory.
[@name=value]
//Animal[@animal]
Match nodes with the specified attribute value. For example, Animals with the animalClass attribute "mammal".
[element=value]
//Animal[Name="Cocoa"]
Match nodes with a child node whose text value is specified. For example, match the Animal with a Name element containing the simple text "Cocoa".
=!=><
//Animal[Weight > 400]
Predicates may also test for inequality and numeric greater-/lesser-than value.
and, or
//Animal[@animalClass= "mammal" or @]]
Predicates may use logical AND and OR to test. For example, Animals whose animalClass is mammal or reptile.
Predicates can be compounded (AND'ed) using this syntax or simply by adding more predicates, like so:
//Animal[@animal][Weight > 400]
Here, we've asked for Animals with a class attribute of "mammal" and a Weight element containing a number greater than 400.
We can now also see the usefulness of the .. operator. Suppose we want to find all of the Animals with a FoodRecipe that uses Fruit as an ingredient:
//Animal/FoodRecipe[Ingredient="Fruit"]/..
The .. means that instead of returning the matching FoodRecipe node itself, we return its parentthe Animal element. The . operator is useful in other cases where we use XPath functions to manipulate values in more refined ways. We'll say a few words about functions next.
The XPath specification includes not only the basic node traversal and predicate syntax we've shown but also the ability to invoke more open-ended functions that operate on nodes and the node context. These XPath functions cover a wide range of duties and we'll just give a couple of examples here. The functions fall into a few general categories.
Some functions select node types other than an element. For example, there is no special syntax for selecting an XML comment. Instead you invoke a special method called comment( ), like this:
/Inventory/comment( )
This expression returns any XML comment nodes that are children of the Inventory element. XPath also offers functions that duplicate all of the (compact) syntax we've discussed, including methods like child( ) and parent( ) (corresponding to . and ..).
Other functions look at the context of nodes, for example, last( ) and count( ).
/Inventory/Animal[last( )]
This expression selects the last Animal child element of Inventory in the same way that [n] selects the nth.
//FoodRecipe[count(Ingredient)>2]
This expression matches all of the FoodRecipe elements with more than two ingredients. (Cool, eh?)
Finally, there are many string-related functions. Some are useful for simple tests, but others are really useful only in the context of XSL, where they help out the language (in an awkward way) with basic formatting and string manipulation. For example, the contains( ) and starts-with( )methods can be used to look at the text values inside XML documents:
//Animal[starts-with(Name,"S")]
This expression matches Animals whose Name starts with the character S (e.g., Song Fang). The contains( ) method, similarly, can be used to look for a substring in text.
Now that we've got a taste for the syntax, let's look at how to use the API. XPath has been around for a while now, but the API for using it directly in Java was just introduced in Java 5.0. The pattern is similar to that of the Java regular expression API for strings. We use a factory to create an XPath object. We can then either evaluate expressions with it or "compile" an expression down to an XPathExpression for better performance if we're going to use it more than once.
XPath xpath = XPathFactory.newInstance( ).newXPath( ); InputSource source = new InputSource( filename ); String result = xpath.evaluate( "//Animal/Name", source ); // Song Fang
We've used the simplest form of the evaluate( ) method, which returns only the first match and takes the value as a string. This method is useful for pulling simple text values from elements. However, if we want the full set of values (the names of all the Animals matched by this expression), we need to return the results as a set of Node objects instead.
The return type of evaluate( ) is controlled by identifiers of the XPathConstants class. We can get the result as one of the following: STRING, BOOLEAN, NUMBER, NODE, or NODESET. The default is STRING, which strips out child element tags and returns just the text of the matching nodes. BOOLEAN and NUMBER are conveniences for getting primitive types. NODE and NODESET return org.w3c.dom.Node and NodeList objects, respectively. We need the NodeList to get all the values.
NodeList elements = (NodeList)xpath.evaluate( expression, inputSource, XPathConstants.NODESET );
Next, let's put this together in a useful example.
This simple example can be used as a command-line utility, such as grep, for testing XPath expressions against a file. It applies an XPath expression and then prints the resulting elements as XML text using the same technique we used in our PrintDOM example. Nodes that are not elements (e.g., attributes, comments, and so on) are simply printed with their toString( ) method, which normally serves well enough to identify them, but you can expand the example to your taste. Here it is:
import org.w3c.dom.*; import org.xml.sax.InputSource; import javax.xml.xpath.*; import javax.xml.transform.*; import javax.xml.transform.dom.DOMSource; import javax.xml.transform.stream.StreamResult; public class XMLGrep { public static void printXML( Element element ) throws TransformerException { Transformer transformer = TransformerFactory.newInstance( ).newTransformer( ); transformer.setOutputProperty( OutputKeys.OMIT_XML_DECLARATION, "yes" ); Source source = new DOMSource( element ); Result output = new StreamResult( System.out ); transformer.transform( source, output ); System.out.println( ); } public static void main( String [] args ) throws Exception { if ( args.length != 2 ) { System.out.println( "usage: PrintXPath expression file.xml" ); System.exit(1); } String expression = args[0], filename = args[1]; XPath xpath = XPathFactory.newInstance( ).newXPath( ); InputSource inputSource = new InputSource( filename ); NodeList elements = (NodeList)xpath.evaluate( expression, inputSource, XPathConstants.NODESET ); for( int i=0; i<elements.getLength( ); i++ ) if ( elements.item(i) instanceof Element ) { printXML( (Element)elements.item(i) ); } else System.out.println( elements.item(i) ); } }
There are again a lot of imports in this example. The transform code in our printXML( ) method is drawn from the PrintDOM example with one addition. We've set a property on the transformer to omit the standard XML declaration that would normally be output for us at the head of our document. Since we may print more than one element, the output is not well-formed XML anyway.
Run the example by passing an XPath expression and the name of an XML file as arguments:
% java XMLGrep "//Animal[starts-with(Name,'C')]" zooinventory.xml
This example really is useful for trying out XPath. Please give it a whirl. Mastering these expressions (and learning more) will give you great power over XML documents and, again, form the basis for learning about XSL transformations. | https://flylib.com/books/en/4.122.1.211/1/ | CC-MAIN-2019-43 | refinedweb | 1,709 | 56.76 |
In order to understand the bit manipulation operators, it is first necessary to understand how integers are represented in binary. We talked a little bit about this in section 2.4 -- Integers, and will expand upon it here.
Consider a normal decimal number, such as 5623. We intuitively understand that these digits mean (5 * 1000) + (6 * 100) + (2 * 10) + (3 * 1). Because there are 10 decimal digits (0 through 9), the value of each digit increases by a factor of 10.
Binary numbers work the same way, except because there are only 2 binary digits (0 and 1), the value of each digit increases by a factor of 2. Just as commas are often used to make a large decimal number easier to read (e.g. 1,427,435), we often write binary numbers in groups of 4 bits to make them easier to read (e.g. 1101 0101).
As a reminder, in binary, we count from 0 to 15 like this:

0 = 0000    8 = 1000
1 = 0001    9 = 1001
2 = 0010   10 = 1010
3 = 0011   11 = 1011
4 = 0100   12 = 1100
5 = 0101   13 = 1101
6 = 0110   14 = 1110
7 = 0111   15 = 1111
Converting binary to decimal
In the following examples, we assume that we’re dealing with unsigned integers.
Consider the 8 bit (1 byte) binary number 0101 1110. 0101 1110 means (0 * 128) + (1 * 64) + (0 * 32) + (1 * 16) + (1 * 8) + (1 * 4) + (1 * 2) + (0 * 1). If we sum up all of these parts, we get the decimal number 64 + 16 + 8 + 4 + 2 = 94.
Here is the same process in table format. We multiply each binary digit by its digit value (determined by its position). Summing up all these values gives us the total.
Converting 0101 1110 to decimal:

Binary digit:    0     1     0     1     1     1     1     0
Digit value:   128    64    32    16     8     4     2     1
Total (94)  =    0  + 64  +  0  + 16  +  8  +  4  +  2  +  0
Let’s convert 1001 0111 to decimal:

Binary digit:    1     0     0     1     0     1     1     1
Digit value:   128    64    32    16     8     4     2     1
Total (151) =  128  +  0  +  0  + 16  +  0  +  4  +  2  +  1
1001 0111 binary = 151 in decimal.
This can easily be extended to 16 or 32 bit binary numbers simply by adding more columns. Note that it’s easiest to start on the right end, and work your way left, multiplying the digit value by 2 as you go.
Method 1 for converting decimal to binary
Converting from decimal to binary is a little more tricky, but still pretty straightforward. There are two good methods to do this.
The first method involves continually dividing by 2, and writing down the remainders. The binary number is constructed at the end from the remainders, from the bottom up.
Converting 148 from decimal to binary (using r to denote a remainder):
148 / 2 = 74 r0
74 / 2 = 37 r0
37 / 2 = 18 r1
18 / 2 = 9 r0
9 / 2 = 4 r1
4 / 2 = 2 r0
2 / 2 = 1 r0
1 / 2 = 0 r1
Writing all of the remainders from the bottom up: 1001 0100
148 decimal = 1001 0100 binary.
You can verify this answer by converting the binary back to decimal:
(1 * 128) + (0 * 64) + (0 * 32) + (1 * 16) + (0 * 8) + (1 * 4) + (0 * 2) + (0 * 1) = 148
Method 2 for converting decimal to binary
The second method involves working backwards to figure out what each of the bits must be. This method can be easier with small binary numbers.
Consider the decimal number 148 again. What’s the largest power of 2 that’s smaller than 148? 128, so we’ll start there.
Is 148 >= 128? Yes, so the 128 bit must be 1. 148 - 128 = 20, which means we need to find bits worth 20 more.
Is 20 >= 64? No, so the 64 bit must be 0.
Is 20 >= 32? No, so the 32 bit must be 0.
Is 20 >= 16? Yes, so the 16 bit must be 1. 20 - 16 = 4, which means we need to find bits worth 4 more.
Is 4 >= 8? No, so the 8 bit must be 0.
Is 4 >= 4? Yes, so the 4 bit must be 1. 4 - 4 = 0, which means all the rest of the bits must be 0.
148 = (1 * 128) + (0 * 64) + (0 * 32) + (1 * 16) + (0 * 8) + (1 * 4) + (0 * 2) + (0 * 1) = 1001 0100
In table format:

Binary digit:    1     0     0     1     0     1     0     0
Digit value:   128    64    32    16     8     4     2     1
Total (148) =  128  +  0  +  0  + 16  +  0  +  4  +  0  +  0
Another example
Let’s convert 117 to binary using method 1:
117 / 2 = 58 r1
58 / 2 = 29 r0
29 / 2 = 14 r1
14 / 2 = 7 r0
7 / 2 = 3 r1
3 / 2 = 1 r1
1 / 2 = 0 r1
Constructing the number from the remainders from the bottom up, 117 = 111 0101 binary
And using method 2:
The largest power of 2 less than 117 is 64.
Is 117 >= 64? Yes, so the 64 bit must be 1. 117 - 64 = 53.
Is 53 >= 32? Yes, so the 32 bit must be 1. 53 - 32 = 21.
Is 21 >= 16? Yes, so the 16 bit must be 1. 21 - 16 = 5.
Is 5 >= 8? No, so the 8 bit must be 0.
Is 5 >= 4? Yes, so the 4 bit must be 1. 5 - 4 = 1.
Is 1 >= 2? No, so the 2 bit must be 0.
Is 1 >= 1? Yes, so the 1 bit must be 1.
117 decimal = 111 0101 binary.
Adding in binary
In some cases (we’ll see one in just a moment), it’s useful to be able to add two binary numbers. Adding binary numbers is surprisingly easy (maybe even easier than adding decimal numbers), although it may seem odd at first because you’re not used to it.
Consider two small binary numbers:
0110 (6 in decimal) +
0111 (7 in decimal)
Let’s add these. First, line them up, as we have above. Then, starting from the right and working left, we add each column of digits, just like we do in a decimal number. However, because a binary digit can only be a 0 or a 1, there are only 4 possibilities:

0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 0, carry a 1 over to the next column
Let’s do the first column:
0110 (6 in decimal) +
0111 (7 in decimal)
----
1
0 + 1 = 1. Easy.
Second column:
1
0110 (6 in decimal) +
0111 (7 in decimal)
----
01
1 + 1 = 0, with a carried one into the next column
Third column:
11
0110 (6 in decimal) +
0111 (7 in decimal)
----
101
This one is a little trickier. Normally, 1 + 1 = 0, with a carried one into the next column. However, we already have a 1 carried from the previous column, so we need to add 1. Thus, we end up with a 1 in this column, with a 1 carried over to the next column
Last column:
11
0110 (6 in decimal) +
0111 (7 in decimal)
----
1101
0 + 0 = 0, but there’s a carried 1, so we add 1. 1101 = 13 in decimal.
Now, how do we add 1 to any given binary number (such as 1011 0011)? The same as above, only the bottom number is binary 1.
1 (carry column)
1011 0011 (original binary number)
0000 0001 (1 in binary)
---------
1011 0100
Signed numbers and two’s complement
In the above examples, we’ve dealt solely with unsigned integers. In this section, we’ll take a look at how signed numbers (which can be negative) are dealt with.
Signed integers are typically stored using a method known as two’s complement. In two’s complement, the leftmost (most significant) bit is used as the sign bit. A 0 sign bit means the number is positive, and a 1 sign bit means the number is negative.
Positive signed numbers are stored just like positive unsigned numbers (with the sign bit set to 0).
Negative signed numbers are stored as the inverse of the positive number, plus 1.
Converting integers to binary two’s complement
For example, here’s how we convert -5 to binary two’s complement:
First we figure out the binary representation for 5: 0000 0101
Then we invert all of the bits: 1111 1010
Then we add 1: 1111 1011
Converting -76 to binary:
Positive 76 in binary: 0100 1100
Invert all the bits: 1011 0011
Add 1: 1011 0100
Why do we add 1? Consider the number 0. If a negative value was simply represented as the inverse of the positive number, 0 would have two representations: 0000 0000 (positive zero) and 1111 1111 (negative zero). By adding 1, 1111 1111 intentionally overflows and becomes 0000 0000. This prevents 0 from having two representations, and simplifies some of the internal logic needed to do arithmetic with negative numbers.
Converting binary two’s complement to integers
To convert a two’s complement binary number back into decimal, first look at the sign bit.
If the sign bit is 0, just convert the number as shown for unsigned numbers above.
If the sign bit is 1, then we invert the bits, add 1, then convert to decimal, then make that decimal number negative (because the sign bit was originally negative).
For example, to convert 1001 1110 from two’s complement into a decimal number:
Given: 1001 1110
Invert the bits: 0110 0001
Add 1: 0110 0010
Convert to decimal: (0 * 128) + (1 * 64) + (1 * 32) + (0 * 16) + (0 * 8) + (0 * 4) + (1 * 2) + (0 * 1) = 64 + 32 + 2 = 98
Since the original sign bit was negative, the final value is -98.
If adding in binary is difficult for you, you can convert to decimal first, and then add 1.
Why types matter
Consider the binary value 1011 0100. What value does this represent? You’d probably say 180, and if this were a standard unsigned binary number, you’d be right.
However, if this value was stored using two’s complement, it would be -76.
And if the value were encoded some other way, it could be something else entirely.
So how does C++ know whether to print a variable containing binary 1011 0100 as 180 or -76?
Way back in section 2.1 -- Basic addressing and variable declaration, we said, “When you assign a value to a data type, the compiler and CPU takes care of the details of encoding your value into the appropriate sequence of bits for that data type. When you ask for your value back, your number is “reconstituted” from the sequence of bits in memory.”
So the answer is: it uses the type of the variable to convert the underlying binary representation back into the expected form. So if the variable type was an unsigned integer, it would know that 1011 0100 was standard binary, and should be printed as 180. If the variable was a signed integer, it would know that 1011 0100 was encoded using two’s complement (assuming that’s what it was using), and should be printed as -76.
Quiz
1) Convert 0100 1101 to decimal.
2) Convert 93 to an 8-bit unsigned binary number.
3) Convert -93 to an 8-bit signed binary number (using two’s complement).
4) Convert 1010 0010 to an unsigned decimal number.
5) Convert 1010 0010 to a signed decimal number (assume two’s complement).
6) Write a program that asks the user to input a number between 0 and 255. Print this number as an 8-bit binary number (of the form #### ####). Don’t use any bitwise operators.
Hint: Use method 2. Assume the largest power of 2 is 128.
Hint: Write a function to test whether your input number is greater than some power of 2. If so, print ‘1’ and return your number minus the power of 2.
Quiz answers
1)
The answer is 77.
2)
Using method 1:

93 / 2 = 46 r1
46 / 2 = 23 r0
23 / 2 = 11 r1
11 / 2 = 5 r1
5 / 2 = 2 r1
2 / 2 = 1 r0
1 / 2 = 0 r1

Working backwards from the remainders: 101 1101
Using method 2:
The largest power of 2 less than 93 is 64.
Is 93 >= 64? Yes, so the 64 bit is 1. 93 - 64 = 29.
Is 29 >= 32? No, so the 32 bit is 0.
Is 29 >= 16? Yes, so the 16 bit is 1. 29 - 16 = 13.
Is 13 >= 8? Yes, so the 8 bit is 1. 13 - 8 = 5.
Is 5 >= 4? Yes, so the 4 bit is 1. 5 - 4 = 1.
Is 1 >= 2? No, so the 2 bit is 0.
Is 1 >= 1? Yes, so the 1 bit is 1.
The answer is 0101 1101.
3)
We already know that 93 is 0101 1101 from the previous example.
For two’s complement, we invert the bits: 1010 0010
And add 1: 1010 0011
4)
Working right to left:
1010 0010 = (0 * 1) + (1 * 2) + (0 * 4) + (0 * 8) + (0 * 16) + (1 * 32) + (0 * 64) + (1 * 128) = 2 + 32 + 128 = 162.
The answer is 162.
5)
Since we’re told this number is in two’s complement, we can “undo” the two’s complement by inverting the bits and adding 1.
First, start with our binary number: 1010 0010
Flip the bits: 0101 1101
Add 1: 0101 1110
Convert to decimal: 64 + 16 + 8 + 4 + 2 = 94
Remember that this is a two’s complement #, and the original left bit was negative: -94
The answer is -94
6)
Thank you for the tutorial. Been following obsessively for a couple days now. It is definitively demystifying the world of programming for me.
Thought I would throw my solution into the mix as well. I think I over complicated it but it worked just the same.
umm…your code….method 2 says you do it with the highest power of 2 but less than the input…
but your code…. if for example i enter 21, it still uses 128 printing 0001 0101 rather than 1010 1000
Is 0001 0101 not the correct output?
So, I tried to use the pow function you mentioned in a previous section and ran into all kinds of problems getting my compiler to be happy with the user value (which I had as an int) and the inputs/outputs of pow (which are floats or doubles seemed to work). Then I glanced at your answer for a hint and realized that the hint should have warned me off going that route.
I wound up doing the following for the interesting part
because I was having issues with if statements (I’ve done some work with VBA recently and got used to writing code of the form if-lots of stuff-endif, but it seems C++ really just wants one thing to do after an if). It didn’t occur to me to just write two if statements with repeated conditions for printing the bit and reducing the user value.
I don’t think I saved myself any keystrokes, but it’s different so I thought I’d share it.
i recommend talking about how each digit is the base number raised to the power of 0, then 1, then 2…
it seemed kind of random how when going right to left in binary, it goes 1, 2, 4, 8, 16
maybe explain how it’s actually just 2^0 (which equals 1), 2^1, 2^2, 2^3, 2^4, etc.
Here’s an example using that approach and counting down the bit positions from L to R. I adapted it from another poster’s code:
Typo.
"However, (if) this value was stored using two’s complement, it would be -76."
Updated. Thanks!
I got it! This is a way to translate a number to binary using the first method:
I tried this as an alternative (method 1) but it wont give the correct answer. Can someone tell me why? Thanks!
Can someone help me with this? It compiles and works (almost) fine, except that the answer is raised by 1 (the binary answer). I don’t know why… and sorry I don’t know how to use the code /code thing..
#include <iostream>
int solve(int x, int pow)
{
int a = (x >= pow) ? 1:0;
std::cout << a;
if (x >= pow)
x -= pow;
return x;
}
int main()
{
using namespace std;
cout << "Choose a number to change it to binary." << endl;
int x;
cin >> x;
solve(x, 128);
solve(x, 64);
solve(x, 32);
solve(x, 16);
cout << " ";
solve(x, 8);
solve(x, 4);
solve(x, 2);
solve(x, 1);
return 0;
}
You were 99.9% of the way there. Remember, the solve function modifies x if it meets the condition x >= pow and returns x. If it does not meet the condition, x is not modified, but it still returns x. All I changed was add "x = " to the 8 calls to solve. I also added a cout << endl for formatting purposes.
To use the code highlighter, do this:
(code)just copy and paste code right from file into here(/code) (replace round brackets with square)
Anyway, your code with the small change:
Thank you, I was finding the answer for my code, it was the same as joseph’s, and now I understant the "x =" 🙂
Am I the only one using this tutorial…?…
I’m loving it and am learning so much and so quickly. Although I think at this point I’m still in the more general programming stuff, I have yet to get to the difficult C++ stuff like pointers and all that lol.
Great tutorial, thank you.
Also, I was wondering. By the time I finish this tutorial (and let’s say that I completely understand and can implement everything mentioned and taught) would I know enough C++ to actually start building applications and games that are more worthwhile? Number-guessing games aren’t exactly in my top-10 of favorite games haha.
Oh and one more thing. I look at the comments and see all these complicated codes that I can only half understand. Mine seem much simpler and don't have as much fancy stuff. Are they just more advanced or am I just slow haha.
Hey man.
First off, I’m also going through this tutorial at the moment (4.3a currently)
Once you are done with this tutorial, you will know how to use the C++ language very efficiently and understand a lot of what it has to offer. Unfortunately though you will still be a while off making games and applications, especially if you have no prior programming experience. The programming gods out there have utilized the language to create some code that will probably make your eyes bleed just looking at it, never mind trying to understand it. You will have to use that code to create apps or games. It may take a while to reach that point, but never feel discouraged and keep pushing, learning and practicing. The best way to learn how to reach god level is to get on Github and start reading other peoples’ code. If you find a project you like, start tearing through it. You will learn one hell of a lot just by reading other peoples’ code. Coding games is also very complex and platform dependent, for example an Android game will be made with Java usually, not C++ (if I am not mistaken).
As for the complexity of some of the code posted, don’t be stressed. Some, if not most, people here (me included) have some prior programming knowledge. C++ is probably one of the most difficult wide-use languages out there, so people start smaller. I learnt Delphi 7 in high school and have just applied to varsity to study software engineering, I decided to learn C++ before I start in a few months. Some of the code looks like Chinese because the tutorial, at this point, has not yet covered the topic (such as for loops). If you are able to answer the quizzes with similar code as the provided answers, it means you understand the intended lesson and you should be proud. If you don’t get it right, keep practicing =).
Learning to program is a very long shot from easy, no one expects you to be a pro tomorrow. Keep at it. My suggestion is go through a chapter (3.1 - 3.x for example) then go through it again before moving on to the next. It is a lot of information and sometimes some can be lost or new information in a later lesson can help you understand a previous lesson better.
Ah the most inspirational reply ever. Thank you haha. and it’s good to know I’m not the only one using this tutorial 7+ years late hahah
Alex, you are truly awesome! I just want to say thank you so much for these! I know it will get harder (which I am looking forward too) but the amount of detail you use as well as the way you explain each topic is very clear and precise. I searched for a couple days for C++ tutorials (textbooks, YouTube then websites) and have stuck with your site and plan on seeing it through. Again, thank you so much!
Here’s my solution for the sake of adding it:
I managed to get a working answer, it compiles, and it runs, and it works correctly, however the compiler gives me a warning and I’m not sure why.
The error is: (27): warning C4244: ‘=’ : conversion from ‘double’ to ‘int’, possible loss of data
So far as I can tell I used ints and no double so not sure why it’s throwing that up, any help/advice appreciated.
The issue is coming from this line:
The pow() function returns a double. So, looking at types, you’ve essentially said “int = (int - double)”, which resolves to “int = double”. The compiler is warning you that the double value could be truncated.
Given that you’re always taking a power of 2, this will never be a real problem, but the compiler isn’t smart enough to detect this.
You can get rid of the warning by using a static cast to tell the compiler you know that you’re converting a double to an int:
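A minimal illustration of that cast (the function and variable names are mine, not from the thread):

```cpp
#include <cmath>

// pow() returns a double, so subtracting it from an int yields a double.
// The static_cast tells the compiler the narrowing back to int is intended.
// For powers of two this is exact, since they are representable in a double.
int subtractPowerOfTwo(int x, int exponent)
{
    return x - static_cast<int>(std::pow(2, exponent));
}
```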
Thank you Alex, and thanks for the course overall, it’s superb. I really appreciate it.
Thank you, Alex.
Hi Alex,
Thanks for the tutorials, they are very well thought through.
I am a bit puzzled about the inverting all the bits into the opposite value. Why is this done? Am I right thinking it is only done for signed numbers?
Thanks.
Jana
(PS: There is an error in:
"Let’s convert 117 to binary using method 1:
117 / 2 = 58 r1
58 / 2 = 29 r0
29 / 2 = 14 r1
14 / 2 = 7 r1 this line should say: 14 / 2 = 7 r0
7 / 2 = 3 r1
3 / 2 = 1 r1
1 / 2 = 0 r1
Constructing the number from the remainders from the bottom up, 117 = 111 1101 binary" (this should be 111 0101)
Method 2 gives the correct answer.)
Thanks for catching the error.
Using two’s complement, negative signed numbers are stored as the inverse of the positive number, plus 1. Positive signed numbers use the same representation as unsigned numbers.
As for why we do this, we do this because it makes the math work out nicely, and allows us to add both positive and negative numbers without having to do anything special.
Consider: Any binary number plus its inverse is 1111. If we add 1, we get 0000 (due to overflow). So, by arbitrarily deciding that the binary representation for a negative number should be the inverse of the positive plus 1, we guarantee that that any positive signed number plus its negative signed number will equal binary 0. As it should!
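That identity is easy to check mechanically. Here is a tiny sketch (my own; the helper name twosComplement is made up) that negates an 8-bit value by inverting the bits and adding 1:

```cpp
#include <cstdint>

// Two's complement by hand: invert all the bits, then add 1.
// Working in uint8_t keeps everything to 8 bits (arithmetic wraps mod 256).
uint8_t twosComplement(uint8_t x)
{
    return static_cast<uint8_t>(~x + 1);
}
```

So twosComplement(76) yields the bit pattern 1011 0100 (180 unsigned), and adding it back to 76 overflows to 0, exactly as described above.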
I think it’s worth noting that the process behind subtracting 1 from binary numbers is not very clear.
I’ve figured it out, but it still managed to leave me puzzled for a short while.
I tried solving the problem this way.
I think it works well. I'd just like some feedback on whether it's good or whether it needs attention for any bad habits I might have picked up.
Hi there
thought I would share my attempt at this for others and to get feedback.
I had to look up arrays but that was the only new concept here, it was the only way I could think of to reverse the order of the bits. However doing it this way has the benefit that it can easily be extended to work for any positive number.
Thanks for these awesome tutorials!
There was a typo on line 23, -index should be --index. not sure how that happened as I copy pasted from codeblocks and it was correct there. The correct code should be
while (index != 0) //While loop to reverse the order of the bits
{
--index; // there was a typo here
cout << arr[index];
}
return 0;
}
I’m not sure if I’m doing these code html tags correctly ethier, if it doesn’t work properly this time can someone help me out?
Edit: it seems that a double "-" simply gets converted to a single one, not sure why that is.
I’ve stylized your code using [code] brackets. This fixes the double dash issue as well.
Thanks,
Also line 20 lost the "\" before the n, I think they’re the only bugs…
Thanks again for these awesome tutorials!
Thank you for your help. I was looking for something that implements a stack with a vector for decimal-to-binary conversion.
Hi Alex,
I discovered a little mistake in the answer of quiz question No. 6:
Instead of if (x > pow), the if-statements should be if (x >= pow). If the power of 2 is equal to the number, we should set the corresponding bit to 1 (and subtract).
Cheers
Stefan
Quite right. Thanks for the note.
The representation of -76 coincides with the one of 180. So, how to differentiate?
10110100
You can’t differentiate just by looking at the binary representation. You have to know how to interpret the representation (this is why variables have types). If 10110100 is an unsigned number, then it means 180. If it is a signed number, it means -76.
Assuming we’re talking about an 8-bit number, you’ll notice that both signed and unsigned numbers have the same binary representation for 0 - 127 (the first 7 bits). However, after that point, the meanings diverge. 1000 0000 is 128 unsigned, but -128 signed. Every additional bit adds 1 to both signed and unsigned numbers.
Or, another way to think about it:
Unsigned: 1011 0100 = 1000 0000 + 0011 0100 = 128 + 52 = 180 unsigned
Signed: 1011 0100 = 1000 0000 + 0011 0100 = -128 + 52 = -76 signed
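That "top bit is worth -128" reading can be written out directly. In this sketch (the names are mine), the top bit of a byte contributes -128 when read as signed and +128 when read as unsigned, while the low seven bits contribute the same either way:

```cpp
#include <cstdint>

// 0x80 masks the top (sign) bit; 0x7F masks the low seven bits.
int asSigned(uint8_t raw)
{
    return ((raw & 0x80) ? -128 : 0) + (raw & 0x7F);
}

int asUnsigned(uint8_t raw)
{
    return ((raw & 0x80) ? 128 : 0) + (raw & 0x7F);
}
```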
To convert a decimal number to binary, an easy method is to divide the decimal number by two, just like in the LCM method, and write the remainders (1 and 0) from the bottom, i.e. from the last remainder.
I figured out another way to make a decimal -> binary converter:
for (int i=7; i>0; i--){ (a >= pow(2,i)) ? a -= pow(2,i) + ++b * 0 : 0; b *= 10; }
b += a;
a : decimal number
b : binary result
I didn’t know that negative binary numbers use overflow :/
Alex,
I just want to say these are the best C++ (maybe even programming tutorials) I have ever come across, including books. The way you organize these tutorials combined with contributions from these comment sections put together some really impressive stuff. Thanks so much!
-Alex (haha)
There is one thing I'm not getting here. I assume our variable is signed: int x = 180; It will be stored as 1011 0100. When our program prints x to the console, how does the program decide which one (180 or -76) will be printed? (1011 0100 = -76 and 1011 0100 = 180)
If your integer is signed, it knows to print -76. If it’s unsigned, it prints 180.
Wait, so the smallest negative binary number (signed) is 127, or am I missing something?
There are 10 kinds of people in this world…
Those who know binary, and those who don’t.
(I didn’t make this one up but it’s great!)
Phil
So I tried to create a program to convert a decimal number into a binary number based on your thoughts. It doesn’t seem to work for some reason. Here it is:
what’s up with it?
I didn't understand the use of nPower… it is declared but never defined.
thanks koi
There is no ^ exponent operator in C++.
Include <cmath> and use the pow(base, power) function from it.
In this case he'd have been close had he replaced the (2^nPower) instances with, for instance:
For all Windows users, the default “Calculator” application can help with conversion: Change the mode to scientific, then, enter the decimal number, then hit “Bin” to convert to binary. Also works in reverse and with Hexadecimal and Octal.
In Win7 Home Premium it’s under ‘Programmer’ mode now.
I found a website where you could buy doormats with the word “Welcome” printed on them in binary.
I’m guessing it just took the ASCII code for each letter in “Welcome” and converted those numbers to binary, which turns out to be “01110111011001010110110001100011011011110110110101100101”
which was four rows of numbers on the doormat 😛
The caption said "Welcome your friends & family into your hi-tech culture!"
Lol.
There’s one thing I’m not quite getting here. Let’s take the last example for instance.
-76 in binary is 1011 0100
but…
180 in binary is also 1011 0100
how do you determine which one is which? Or am I right in assuming that this is why there is a need for signed and unsigned?
You are correct that 1011 0100 is both -76 and 180. As you surmise, which value you actually get is determined by whether your variable is signed or unsigned. If your variable is signed, you will get -76. If it’s unsigned, you’ll get 180.
as a practice on further learning how to write code. I made this program that converts decimal to binary. It uses the same technique as the tutorial shows. I mainly did this because most the programs I have made have had a guideline to it. This one I made up all on my own and it is effective for decimal 0-255. I plan on expanding it further include bigger numbers.
Code
#include "stdafx.h"
#include <iostream>
int main();
void DecToBin(int y)
{
using namespace std;
for(int x = 256; x /= 2;)
if(y >= x)
(cout << "1") && (y -= x);
else
cout << "0";
cout << "\n\n";
main();
}
int main()
{
using namespace std;
cout << "enter a number: ";
int y;
cin >> y;
DecToBin(y);
return 0;
}
I should prob use an unsigned int though huh?
I’d say it’s not particularly relevant in this case since the range of x falls within both the signed and unsigned int range.
This is very cool! This piece of code works like a charm! Thank you! I will use this for my gaming and simulation classes!
two int main()s, I think you only need the second int main(). You are creating a function before the int main() where your program begins.
As int main() is the program’s starting point, the forward declaration of it is pointless.
Thanks!! That's a good revision of what I had learnt in school 3 yrs ago……
But I prefer this method for decimal to binary conversion::
Short division by two with remainder
This method is much easier to understand when visualized on paper. It relies only on division by two.
1. For this example, let’s convert the decimal number 156 to binary. Write the decimal number as the dividend inside an upside-down “long division” symbol. Write the base of the destination system (in our case, “2” for binary) as the divisor outside the curve of the division symbol.
2)156
2. Write the integer answer (quotient) under the long division symbol, and write the remainder (0 or 1) to the right of the dividend.
2)156 0
78
3. Continue dividing each new quotient by two, writing each remainder to the right, until the quotient reaches 0.
4. Starting with the bottom 1, read the sequence of 1’s and 0’s upwards to the top. You should have 10011100. This is the binary equivalent of the decimal number 156.
source:
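The short-division procedure above translates almost line-for-line into code. A sketch (my own; toBinary is a made-up name) that collects the remainders and reads them bottom-up by prepending each new digit:

```cpp
#include <string>

// Method: repeatedly divide by 2; the remainders, read from last to
// first, are the binary digits. Prepending each remainder gives the
// digits in the right order without a separate reversal step.
std::string toBinary(unsigned int n)
{
    if (n == 0)
        return "0";

    std::string bits;
    while (n > 0)
    {
        bits.insert(bits.begin(), static_cast<char>('0' + n % 2));
        n /= 2;
    }
    return bits;
}
```

For the worked example, toBinary(156) produces "10011100".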
That’s definitely a faster way to do it in practice. An even faster way is to use windows calculator (or another program). 😉
(To use windows calculator in this manner, go to the View menu and choose “scientific” -- then you can access the Dec and Bin buttons, which are decimal and binary).
The wordpress spam filter does a really good job of catching stuff, but it occasionally gets false positives. I restored the version of your article that it marked as spam and told it that it wasn’t spam. It’s supposed to learn from past behavior, so hopefully next time it will be smarter about how to treat wikihow links.
Thanks…….I thought u r gonna put me in a jail for spamming!! lol
WL#11393: Implement an interface to suppress error logs of type warning or note.
Affects: Server-8.0 — Status: Complete — Priority: Medium
The MySQL server shall accept a new optional list at start-up or at runtime. Messages for error-codes on this list are not included in the error-log when thrown with a severity of WARNING or INFORMATION, but will be included when thrown with a severity of ERROR or SYSTEM. This functionality will be built-in (no need to install or load a component or plugin) and available by default.
Func-Req 1 - global configuration variable
Func-Req 1.1 A new system variable, @@global.log_error_suppression_list, shall be added to the server.
Func-Req 1.2 The variable is of string-type and may be set to an empty string, or to a comma-separated list of error-codes.
Func-Req 1.3 A comma is valid only between two error-codes, not at the start or end of the list or immediately following another comma.
Func-Req 1.4 Error-codes may be expressed in the forms understood and output by the perror tool ("MY-12345", "85555", etc.). Only error-codes registered in the error-log range (index 10,000+) and in the "global errors" range (shared between MySQL server and client applications, range [1,1000[) shall be accepted.
Func-Req 1.5 By default, the error logging pipeline @@global.log_error_services is configured to use the basic filter, log_filter_internal. If (and only if) log_filter_internal is enabled in this manner, the suppression list will be applied. This gives log_error_suppression_list the same behavior as log_error_verbosity.
Func-Req 1.6 "Application" for the purposes of this WL means that any event that was reported with a severity lower than ERROR (that is to say, a severity of WARNING or INFORMATION) and whose error-code is in the suppression list will be discarded when log_filter_internal is called. Conversely, events with a severity of SYSTEM or ERROR will not be discarded by this mechanism even if listed in this variable.
Func-Req 1.7 In the MySQL server, an event's severity is not registered alongside its message; instead, the severity is set at the time the logger is called. The same message/error-code may be used with different severities at different locations in the source code. (For messages intended for the client, severity may also depend on e.g. sql_mode.) Therefore, at the time the suppression list is assigned, we cannot know which items will be impossible to suppress later (ERROR, SYSTEM), and so we cannot throw a warning to that effect.
Moreover, we also do not throw such a warning when, at run-time, a message thrown as ERROR or SYSTEM is logged despite being in the suppression list, as that would defeat the original goal of despamming the error log.
Func-Req 1.8 The default value of the variable shall be "" (an empty list, and therefore, no suppressions by default).
Func-Req 2 - command-line option
Func-Req 2.1 The variable shall also be available as a command-line option, --log-error-suppression-list.
Func-Req 2.2 Failure to set a value passed at start-up shall be non-fatal; the default value shall be used in such cases, and an error logged.
I-1 Semantics NO CHANGE
I-2 Instrumentation NO CHANGE
I-3 Error and Warnings NO CHANGE
I-4 Install/Upgrade NO CHANGE
I-5 Commercial plugins NO CHANGE.
I-6 Replication NO CHANGE (Variable is not binlogged)
I-7 XProtocols NO CHANGE
I-8 Protocols NO CHANGE
I-9 Security NO CHANGE. Future filters may elect to obfuscate parts of plain-text messages, however, etc.
I-10 Log Files YES. [LOG01] Events with a priority of WARNING or INFORMATION whose error-code is in the suppression list will not be sent to the error log. (Specifically, the default filter "log_filter_internal" that also handles --log-error-verbosity must be configured in log_error_services, as it is by default; it marks the point in the pipeline where these events are removed. All subsequent log-services in the pipeline, usually log-sinks/log-writers, will not receive the events in question.)
I-11 MySQL clients NO CHANGE
I-12 Auth plugins NO CHANGE
I-13 Globalization NO CHANGE
I-14 Configuration YES. [CONF02] The filter engine shall use a new system variable for its configuration; log_error_suppression_list. This global system variable shall reside in the MySQL server's default namespace. [CONF01] This variable shall also be available on the command-line.
I-15 File formats NO CHANGE (see log writers for that)
I-16 Plugins/APIs NO CHANGE.
I-17 Other programs NO CHANGE
I-18 Storage engines NO CHANGE
I-19 Data types NO CHANGE
log_filter_internal implements this functionality, as well as that of log_error_verbosity. Both may be used concurrently (e.g. setting log_error_verbosity to "discard information-items", then setting log_error_suppression_list to a list of warnings to discard additionally, leaving a result set of "no info items, only warnings of interest, and all errors and system items". Both system variables serve as simplified user interfaces to the built-in filtering engine WL#9342 that is fully exposed by the filtering language implemented in WL#9651. They convert their input into rules for said engine; both variables inject their rules into the same rule-set. When updating the suppression list, all previous suppression rules are discarded (while leaving any log_error_verbosity rules in place); then rules for each error-code in the suppression list are added to the rule-set.
Copyright (c) 2000, 2018, Oracle Corporation and/or its affiliates. All rights reserved. | https://dev.mysql.com/worklog/task/?id=11393 | CC-MAIN-2018-47 | refinedweb | 939 | 52.39 |
Hi
My elementary programming skills, which I acquired not long ago, are quite rusty these days. I was practicing with the code below but I received the two errors shown about line 17 in the source code. Would you please help me?
I remember when I started posting here people would tell me that my code wasn't properly organized, like braces not aligned, etc. They used to use a certain term. What is it? Please tell me. Thank you for all the help.
Code:
//define_pi.cpp
// learning to use #define on Pi
#include <iostream>
#include <cstdlib>
#include <cmath>

using namespace std;

int main()
{
    float r, area;

    #define pi = 3.142;

    cout << "enter radius";
    cin >> r;
    area = 2*pi*r;
    cout << "area is: " << area << endl;

    system("pause");
    return 0;
}

Errors:
17|error: expected primary-expression before '=' token|
17|error: invalid type argument of 'unary *'|
TL;DR: Moving from etcd v2 to v3 is in general well documented, however there are a few gotchas you might wanna be aware of.
I'm currently working on ReShifter—a tool for backing up and restoring Kubernetes clusters—and in the context of this work I came across a few things related to etcd that cost me some cycles to sort out, so I thought I'd share them here to spare you the pain ;)
In general, the etcd v2 to etcd v3 migration story is well documented, see this blog post here as well as the official docs. Here are a couple of things to be aware of, both from a CLI perspective (i.e. when using etcdctl) as well as from an API perspective (i.e. moving from the Go client lib v2 to v3):
- The v2 data model is a tree, that is, a key identifies either a directory (potentially serving as the root for a sub-tree) or a leaf in the tree, in which case the payload can actually be a value. A key can not at the same time be a leaf node and a directory. In v3, the data model has been flattened, that is, there's no hierarchy information available anymore amongst entries. So, while you can pretend that the following is true, in v3 you really are dealing with (flat) key ranges:
/kubernetes.io/namespaces/kube-system -->
/kubernetes.io
└── namespaces
    └── kube-system
- One consequence of the data model change is that code that queries and manipulates etcd2 and etcd3 looks different. In the former case, you can, for example, utilize the hierarchy information to recursively traverse the tree; in the case of etcd3 you effectively determine the range (start and end key, pretty similar to what you'd do in HBase) and then iterate over the result set; see for example discovery.Visit2() and discovery.Visit3() in the ReShifter code base.
- There is a difference between the wire-protocol used (HTTP vs. gRPC) and the API version/data model in use. For example, you might have an etcd3 server running, but using it in an etcd2 mode. Be aware of how you’ve configured etcd and in which mode you’re communicating with it.
- One thing really caused me some pain: forgetting to set the environment variable ETCDCTL_API=3. This unremarkable switch causes etcdctl to switch from talking v2 to v3. Run etcdctl before and after setting the env variable and compare the commands you've got available, for example get/set in v2 vs. get/put in v3 (see also the screen shot at the top of this post, showing that ls is only available in the v2 API).
- In an etcd3 server, the v2 and v3 data stores exist in parallel and are independent, see also the terminal session, below.
Let’s have a look now at a simple interaction with etcd3 and how to use the v2 and v3 API. First we launch etcd3, containerized:
$ docker run --rm -p 2379:2379 --name test-etcd \
--dns 8.8.8.8 quay.io/coreos/etcd:v3.1.0 /usr/local/bin/etcd \
--advertise-client-urls \
--listen-client-urls \
--listen-peer-urls
Now, let’s put a value into etcd, using the v2 API:
curl -XPUT -d value="value for v2"
Next, we switch to the v3 API:
$ export ETCDCTL_API=3
And now, we first check if we can read the value we’ve previously set using the v2 API:
$ etcdctl --endpoints= get \
/kubernetes.io/namespaces/kube-system
Which returns empty, so no way to write to the etcd2 datastore and read it out via v3. Now, let’s put something into etcd using the v3 API and query it right after it to confirm the write:
$ etcdctl --endpoints= put \
/kubernetes.io/namespaces/kube-system "value for v3"
$ etcdctl --endpoints= get \
/kubernetes.io/namespaces/kube-system
/kubernetes.io/namespaces/kube-system
value for v3
With that I’ll wrap up this post and hope you’re successful in migrating from etcd v2 to etcd v3! If you’ve got additional insights or comments on the above, please do share them here, hit me up on Twitter (DMs are open), or come and join us on the Kubernetes Slack where I’m usually hanging out on #sig-cluster-lifecycle and #sig-apps.
Last but not least, I’d like to give Serg of CoreOS huge kudos: he patiently helped me through issues I experienced around using the v3 API. Thank you and I owe you<< | https://hackernoon.com/notes-on-moving-from-etcd2-to-etcd3-dedb26057b90 | CC-MAIN-2019-35 | refinedweb | 749 | 57.2 |
CodeGuru Forums
convert *.cs file to object
dannystommen
September 4th, 2008, 06:14 AM
I want to select a .cs file and convert it into an object.
for example the file test.cs:
using System;
namespace Test{
public class TestClass {
public TestClass() {
}
}
}
Is it possible (in a completly different solution) to convert this into an object.
I want to get propertys and stuff to generate automatically an XML file.
object o; // i need to fill the object here
Type t = o.GetType();
PropertyInfo[] info = t.GetProperties();
So is it somehow possible to dynamically generate an object of TestClass (in test.cs)??
boudino
September 4th, 2008, 06:33 AM
You have to compile it to an assembly (using VS or csc.exe). Then you can reference it in your project regardless of what the original solution was. But your term "convert .cs file" is confusing, so I'm not sure if I understood you well.
Also, I didn't understand what you mean by "dynamically generate an object". To instantiate it? Use the new keyword. To create a class at runtime? Use reflection and emitting; look e.g. here ().
dannystommen
September 4th, 2008, 08:35 AM
What I simply want to do is:
object o = new TestClass();
but the TestClass is in a file and I want to do this at runtime. For example, you select the file test.cs via an OpenDialogBox. Do I have to compile this at runtime?
Maybe an example would help me a lot.
torrud
September 4th, 2008, 09:50 AM
I believe I understood the question, but why do you want to do it? Is there any good reason to parse a .cs file, compile it at runtime and use the generated IL code? It sounds to me like you are trying to do scripting inside of a .NET application, but for that there are other ways.
The easier way is to compile the .cs file, load the created assembly, and use the classes as normal. Why can you not do that?
dannystommen
September 4th, 2008, 10:02 AM
It would be much easier if it was generated automatically.
It should be something like this in the end
object o = ConvertCSfileToObject(file CSfile); //just made the name up
CreateSqlMap("C:\\temp\\", o );
So I need the implementation of the ConvertCSfileToObject() method
MadHatter
September 4th, 2008, 10:02 AM
as boudino stated you must compile it. from there you can reflect into the dll and do what you want to do...
C# is not a scripting or an interpreted language, so you must compile it in order to use it. once you compile and load it, you can do what you are wanting to do (through reflection though).
there are (or at least were) APIs in the framework that allow you to compile a string of text, or a file, and generate an assembly on the fly, but I read somewhere that that's been deprecated.
here's an old article on it, I have no idea if it still applies:
Arjay
September 4th, 2008, 02:08 PM
Why not leverage the XmlSerializer to create these xml files for you? You could create an adapter that takes the class(es) containing the properties and use reflection and the XmlSerializer.Serialize method to generate the xml.
boudino
September 5th, 2008, 02:27 AM
Just for fun, I've done some investigation. You can utilize the compiler at run-time like this:

CompilerResults r = new CSharpCodeProvider().CompileAssemblyFromSource(
    new CompilerParameters() { GenerateInMemory = true },
    src);
Type t = r.CompiledAssembly.GetType("Test.TestClass");
object tInstance = Activator.CreateInstance(t);
dannystommen
September 8th, 2008, 03:03 AM
I found the same example and it works. But just for a class that doesn't use other classes. I added a custom attribute to the properties:
[DBColumn("t_ID", true)]
public int ID {
get { return _ID; }
set { _ID = value; }
}
the attribute 'DBColumn' is in another file and class (but in the same namespace)
public class DBColumn: Attribute {
private string _ColumnName;
private bool _IsKey;
....
}
So when I try to compile now I get the error:
"TestClass.cs(13,8): CS0246: The type or namespace name 'DBColumn' could not be found (are you missing a using directive or an assembly reference?)"
So I tried first to compile the class DBColumn and after that the TestClass. But still the same error. How can I fix this?
boudino
September 8th, 2008, 03:18 AM
You need to add reference to the assembly where DBColumn is defined into the CompilerParameters. Add ReferencedAssemblies = {"YourAssembly.dll"}, to the initialization block.
dannystommen
September 8th, 2008, 03:31 AM
string extAssembly = @"C:\Documents and Settings\Danny\Bureaublad\YieldManagerPlus\YieldManagerPlus.Domain\bin\Debug\YieldManagerPlus.Domain.dll";
compilerParams.ReferencedAssemblies.Add(extAssembly);
I did and it compiled.
But now when I do the next (after succesfull compilation):
Type[] types = result.CompiledAssembly.GetTypes();
if (types.Length == 1) {
Type t = types[0]; //debugger: 'name=TestClass fullname=Test.TestClass'
PropertyInfo[] properties = t.GetProperties();
foreach (PropertyInfo p in properties) {
//get my custom attribute 'DBColumn'
object[] attributes = p.GetCustomAttributes(false);
}
}
on the line p.GetCustomAttributes I get an FileNotFoundException:
Could not load file or assembly 'Test, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. Het systeem kan het opgegeven bestand niet vinden.'
The last sentence in Dutch means that the system could not find the file (like the exception already says)
dannystommen
September 8th, 2008, 09:14 AM
I forgot to copy the DLL to the directory where the executable runs. Another problem is solved, but a new one occurs. After I call 'object[] attributes = p.GetCustomAttributes(false);', attributes is filled with 1 object, just like I want it to be. It is filled with an object of type 'Test.DBColumn'.
The namespace where everything is compiled is called IbatisCreater. In this namespace I have an exact copy of the DBColumn class, just with a different namespace.
So when I try to do the next thing, I get an exception:
DBColumn column = (DBColumn)attributes[0]; // where DBColumn is of type IbatisCreater.DBColumn
InvalidCastException
Unable to cast object of type Test.DBColumn' to type 'IbatisCreater.DBColumn'.
Is it possible to cast this? The content of the 2 classes are exactly the same.
boudino
September 8th, 2008, 09:52 AM
InvalidCastException
Unable to cast object of type Test.DBColumn' to type 'IbatisCreater.DBColumn'.
Is it possible to cast this? The content of the 2 classes are exactly the same.
No, unless both classes are in the same hierarchy with a common predecessor.
JonnyPoet
September 11th, 2008, 05:10 PM
If you have two identic classes and want to use them both and they have the same pattern why not creating an interface with that pattern and adding this interface to both classes. Maybe you may need a wrapperclass for getting this done, but afterwards this should work.
dannystommen
September 12th, 2008, 02:32 AM
I made a DLL of the class. everything works fine now
codeguru.com | http://forums.codeguru.com/archive/index.php/t-460469.html | crawl-003 | refinedweb | 1,255 | 59.19 |
Roads, Towers, and Online Legal Help
Online legal help systems have been a major force for good. Millions of people without the luxury of a personal lawyer have benefited from them. They come in many shapes and sizes, from many different worlds:
· Legal aid programs and other nonprofit legal service organizations
· Courts and government agencies
· Law schools and universities
· Commercial providers and startups
· Private law firms and departments
We have an embarrassment of riches when it comes to tools, platforms, and methodologies for applications that address legal needs. The diversity is a sign of health. But there’s also a lot of suboptimal duplication of effort and missed opportunities around scale and synergy. The map of coverage also remains very sparse. For most people, in most situations, there’s no immediately useful online resource, free or paid.
Where is this all going? Where should it be going?
(My focus here is on not-for-profit efforts, in North America. Commercial and international developments of course make this even more interesting.)
A thought experiment
Those who venture to supply online legal applications share many challenges.
Getting content correct and keeping it current with limited human and financial resources is a big one. Applications need care and feeding. Users and developers need support. But there’s a shortage of organizational bandwidth.
Imagine if we organized a shared resource to ‘mutalize’ some of the infrastructure needed by multiple content providers. What might such a resource look like?
It would supply secure and scalable servers that application providers could use, without having to source, configure, and manage their own.
It would provide an organized and accessible content collection, optimized for distribution and sharing, and cover more than one development platform.
It would offer accounts for end users, developers, and content managers, with associated data storage and sharing (across time, apps, and people). Users’ answers to questions would be securely stored as tagged sets of data that can be accessed and edited by multiple applications.
It would support users via email and live chat, bug tracking, and ‘ticket’ management.
Such a service would provide training, continuing education, webinars, and navigable knowledge bases of relevant materials for developers and other project participants.
It would help build community by hosting discussion fora, regular online meetings, and other arrangements that facilitate collaboration.
It would offer one-on-one technical support to developers and managerial support to project teams. Custom statistical reports about usage would be available via a self-help dashboard.
This imagined resource would support multiple human languages, integrate with external systems such as case management and e-filing, and offer specialized configurations for in-person and virtual clinics at which groups can be served simultaneously or asynchronously.
You might think of it as an ecumenical collection of free legal apps with answer-saving and other valuable forms of automated assistance.
LawHelp Interactive
It turns out that we already have at least one example of a service that exemplifies all of the above qualities — LawHelp Interactive (LHI).
LHI was first envisioned over 18 years ago. Planning began in late 2001. It arose in part from the ashes of AmeriCounsel, the dot-com adventure in which my colleague Bart Earle and I first gained experience with a large scale online document assembly deployment. LHI started out as National Public ADO (Automated Documents Online. Get it?) Pro Bono Net, a national nonprofit dedicated to access to justice, assumed responsibility for the service in 2006 and soon came up with a better name.
(For more about the ambitious AmeriCounsel venture, including its Open Practice Tools initiative, check out my keynote at the 2001 CALI conference.)
The federal Legal Services Corporation has played a central role in supporting LHI. It provided seed funding for initial R&D, and later acted as a strategic investor, supporting ongoing operations and improvements, as innovations in service delivery to unrepresented litigants and others were proven out. This has been a successful public/private model; today LSC provides only about half of LHI’s budget.
LHI has been foremost — at least in terms of scale and impact — among free US resources that leverage the web and intelligent technology to advance access to justice and legal wellness. It has accumulated some 5000 modules, is used in over forty states, and offers its interface in seven languages. Millions of sessions have resulted in millions of customized documents. All without charge. (LHI is free for end users and nonprofit legal aid programs funded by LSC. Courts can subscribe for access.)
LHI has been a real catalyst for innovation. The program and community have pioneered new models of access to justice centered on online forms in many states that have dramatically improved the ability for those without lawyers to achieve justice on their own, including unbundled, limited scope services and remote services. (See and.)
A substantial percentage of LHI usage is by lawyers, paralegals, advocates, court staff, and other professionals.
From the beginning LHI was conceived as including a content commons, a collection of codified legal know-how that could be freely copied and remixed by participating contributors. A partnership with the Center for Computer-Aided Instruction (CALI) cemented an early determination to support multiple interfaces for end users. A2J ‘guided interviews’ have been part of LHI since the get go.
Driven by a spirit of continuous improvement, many enhancements have been delivered. There’s been steadily increasing geographic and functional scope, including custom integrations with legal aid and court software systems. (See e.g..)
More advanced forms of process guidance, via dynamic ‘pathfinders’ and ‘next steps,’ are in the works. A recent survey identified over thirty further potential enhancements for community prioritization.
So LHI is going strong and still growing. Yet it faces challenges. Its UI is periodically refreshed, but aspects feel old-fashioned. Part of that is due to needing to attend to an installed base, and gingerly manage legacy code. Complex services tend to accumulate technical debt. That makes re-architecting things a tall order. (For example, presently all files for each LHI module live in their own container, making it difficult to do quick global updates of textual or logical components used by more than one.)
Another concern is LHI’s primary document assembly engine.
Two months after the 9/11 attacks a white paper on online document automation commissioned by the Legal Services Corporation was circulated. It described the already mature and variegated market of products and their potential use for legal assistance purposes. Twelve technology providers were then asked to complete a detailed questionnaire about their solutions. Six responded.
The next February (2002), several dozen folks from around the country gathered in a conference room at Davis Polk in New York City to hear extended presentations from four vendors. Alternatives were reviewed and assessed. HotDocs emerged as a solid candidate and was adopted by the initial team. It has proved reliable for a couple decades. It still has a huge domestic and international customer base. But there are downsides.
HotDocs has undergone several changes in ownership since it was generously donated for use on LHI by LexisNexis. The latest acquisition, by AbacusNext, has reminded us of the vicissitudes of depending on a commercial vendor. Pricing has become more aggressive, although the company has confirmed a commitment to a 70% discount for nonprofits. And most of its development energy has been allocated to next-generation products that aren’t yet a good fit for LHI’s content and community, which depend on its ‘classic’ line.
The HotDocs authoring tools are not free, and require Windows. That’s a particular constraint for law students, many of whom use devices only running the Mac operating system. (You can run HotDocs interviews on just about any device, but need Windows to create and edit them.)
Also, while the JavaScript interface offered by HotDocs is highly functional and stable, it looks increasingly outdated.
LawHelp Interactive may seem like an incumbent surrounded by disruptors, but the future needs something like it. Like a caterpillar, LHI has morphed several times from early conceptions. What butterfly might emerge next? Where does it go from here?
(The rest of this article lays out a bigger context in which LHI will likely play a role. I don’t purport to speak for the project or Pro Bono Net.)
The spiraling ecosystem
In the meantime, others have brought new ideas and energies to this space.
Odyssey Guide & File from Tyler Technologies emerged as a commercial alternative to CALI’s A2J, aimed at courts that want to field interviews and electronic forms to simplify the filing process for self-represented litigants.
For its part, CALI has implemented native document automation features (it previously relied on HotDocs to assemble documents), and moved to independently hosting guided interviews on its own A2J.org site.
An excellent open source alternative finally arrived, in the form of Jonathan Pyle’s docassemble. (See Making Mischief With Open-Source Legal Tech for an example of its power.) And several nimble players jumped in to make available easier interfaces for building and maintaining docassemble applications, Community Lawyer and Documate. The former now boasts it own content collection —.
Legal document automation tools have long been a commodity. I’ve used dozens of them. Deep, reliable functionality combined with provider longevity is still rare. Free and easy go a long way, but maybe not far enough. HotDocs, for instance, includes so-far-unmatched facilities for automating complex sets of PDF forms. And history is littered with brilliant alternatives that failed to survive.
Spaces adjacent to document services have likewise been busy. The Civil Resolution Tribunal’s Solution Explorer, a pioneering expert system offering free legal information and tools, has been used over 100,000 times. New players are emerging regularly — see e.g..
Law schools have jumped into the game, with courses in which students build applications as part of their course work using tools like A2J Author, Community Lawyer, HotDocs, Neota Logic, and QnA Markup. (I’ve taught such courses at five different schools myself, and fellow teachers are active around the world.)
Commercial players and startups have also been making waves. Neota is seeing worthy competition from entrants like Bryter; Legal Zoom may be noticing ascent by disrupters like DoNotPay. There’s a bit of a space race going on. Major investments are happening.
Other platforms offer features and functions that LHI does not, but few have comparably robust fabrics of surrounding support. We’ve ended up in multiple camps, with mutually inconsistent tools and skills that are not readily transferred. This fragmentation is discussed more below.
New frontiers
There’s no lack of new things that LHI and related services can and should do.
One perpetual desire is to ease the authoring and maintenance of interactive content, especially by non-programmers such as domain experts. Some products and platforms have made authoring much simpler, at least for basic applications. This piece illustrates how vendors can trumpet ease-of-development advantages. Low-code and no-code are more than buzz words.
Many of these services fall short in terms of accessibility. Providing genuinely usable applications for those with serious cognitive or perceptual limitations via browser-agnostic Web sessions that need to present, elicit, and generate complex texts — even on smart phones — is an enormous challenge.
All services could expand their interoperability with each other and with common third party tools like Clio, DocuSign, Google Drive, and Legal Server. (LHI was one of the earliest to work with the latter. Other services have also done impressive work in this area.)
We should remain alert to new tools and paradigms for dynamic questioning, fact-specific guidance, and document generation. Those will include new forms of interaction — bots, text messages, and other conversational approaches. ‘Push’ — proactive communication of warnings and reminders to users — will also have a role.
And we should be on the lookout for tools that support new kinds of assistance, ones that go beyond interactive questionnaires, custom instructions, and assembled documents.
One of my candidates for a new field of endeavor has long been decision support. See A Decision Space for Legal Services and The Centrality of Choice in Legal Work. Tools that promote effective choices naturally also have usefulness in the online dispute resolution (ODR) context.
We can clearly find ways to introduce more artificial intelligence into our online legal help environments, both with ‘good old fashioned AI’ like expert systems and next generation deep learning and pattern recognition systems. We could intelligently parse documents both to help users in specific situations and to infer models for use more generally.
Open source and other forms of openness of course present great opportunities. See Opening Legal Knowledge Automation. Among other things, that could involve standardization of data elements and structures, shared ontologies, and variable namespaces.
One aspect of all this that I’ve been particularly vocal about is quality. We could use more fanatical attention to it, maybe in a six sigma, zero-defects spirit. See The High Cost of Quality in Legal Apps and Substantive Legal Software Quality: A Gathering Storm?. Quality assurance — regarding both platforms and their content — would be facilitated by greater transparency and inspectability.
The above is a very incomplete list. Which is both exhausting and encouraging. This piece from 2007 describes frontiers in legal document automation more generally, some of which remain uncrossed: Current Frontiers in Legal Drafting Systems.
Strategic choices
All of the providers face the challenge of funding and sustaining their efforts. And it’s hard enough just to keep things going; trying to rebuild while operating at capacity can be like changing engines on a plane while in flight.
One other shared concern is the specter of intensified regulatory scrutiny, e.g. by bar groups contending that assistance via software is tantamount to the unauthorized practice of law. My own view, articulated in places like Safe Harbors and Blue Oceans, is that there should be a bright line rule that making software available is not ‘practice.’ Which is not to say that we shouldn’t be concerned about the quality of some of the published content. However, we can deal with bad actors without resorting to prior restraint. Even if such constraint were constitutional it would be bad policy. That a work of authorship can be made to do useful cognitive work doesn’t deprive it of 1st Amendment protection. And freedom of expression is an empty promise without freedom to distribute what is expressed.
One key question is how we might best work together? Should we try to be less disintegrated than we presently are? What’s the right balance of centrality and distribution? What kinds of things are best accomplished together, and which happen best on the edge? Can we reap the benefits of cross-organizational thinking without losing those of autonomy?
Do we need a mother ship, or is a loosely coupled federation better? Are big shared environments good things? Is there a place for a neutral “Switzerland” of free interactive content, one that supports many application categories/types? Should content itself be platform independent?
Clearly we don’t want to put all of our eggs in one basket, and no organization should spread itself too thin or try to be all things to all people. Organizations of course should focus on their key commitments and core competencies. They also need to maintain continuity of teams.
Towering
As things now stand, centrifugal forces are in play, and lots of folks are off doing their own things, using different languages and approaches in multiple ‘towers.’ (We would resemble the tower of Babel if there was only one! I guess we’re more like a medieval Italian city-state, with wealthy families competing to have the tallest structure.) Lots of hard-won wisdom remains trapped in silos.
Some of this comes from ‘not invented here’ attitudes; some from welcome entrepreneurial zeal. But competition is generally healthy, even at the expense of wasteful duplication and reinvention.
And there are clear benefits of diversity in our ecosystems. (See Knowledge Gardening and Civil Justice Engineering.) It’s going to take a lot of villages (and villagers) to ameliorate the access to justice crisis.
But shared resources can offer economies of scale. Some of those economies can be tapped through loosely coupled arrangements. There would seem to be much positive opportunity in better connective tissue. That of course raises governance challenges. And any arrangement will naturally involve costs and compromises.
A facilitator of shared resources, collaborations, integrations, and interoperations could function as a benevolent natural monopoly. It could supply participants with valuable insight into each other’s collections, and maybe even interchangeable parts.
In the widening gyre perhaps such a center cannot hold, but let’s hope we won’t need to settle for mere anarchy.
What topology would make the most sense? No center, one center, or multiple centers? If center(s), what form could such entities take? Or is this all best left to the market?
These would be good topics for a hackathon, or at least a designathon. Perhaps this socio-technical challenge will come up at this year’s SubTech conference.
On the Road
We tend to take streets and highways for granted. They’re actually quite impressive structures, albeit not very high. Ribbons of pavement wind through our neighborhoods, cities, and countrysides. They provide the immensely valuable commodity of safe, level ground on which arbitrary kinds of vehicles can move. Heterogeneous traffic flows on the interstate highway system with relatively little friction.
Road building and maintenance don’t offer much glory, but their effective practice is critical to modern life. Commerce needs a commodious and reliable substrate.
We’re blessed with lots of bright ideas and cool tools in the access-to-justice world. Inventors and visionaries remain essential. But we also need planners and managers. Institutions can help. To revisit an old trope, we may want a cathedral as well as a bazaar. Nimble authoring systems are great, but we also need distribution systems.
Engineering a Better Tomorrow
Last spring researchers at Cambridge University announced that they had synthesized the complete genetic material of the bacterium Escherichia coli — four million base pairs of DNA — and inserted it into functioning cells, which reproduced and survived. Quite a feat of engineering. Could legal technologists accomplish something as impressive?
We’ve long had the tools and knowledge to help under-resourced people deal effectively with their legal needs. But we haven’t yet made a major dent in the problem. We could do SO much more. The need for these kinds of services is easily 100 times greater than what is presently provided. We’ve only scratched the surface of positive potential.
Unsolved legal problems cause immense suffering. With adequate help, that suffering can often be avoided, or at least minimized. Yet many folks get little or no help, even to help themselves. Many go unrepresented in formal proceedings; even more are totally ‘unhelped’ across the vast range of law-related problems and opportunities.
Vendors come and go; projects rise and fall. But aching needs remain.
How about an Apollo program to end poverty of legal help? Imagine we were taking the best emerging technologies and accumulated know-how to address this need. How might we sculpt and nurture a system of systems that stands a chance of achieving that result?
Our goal should be nothing less than the eradication of legal helplessness. A vibrant market of high-quality, reasonably priced services, supplemented by equally high-quality resources for those who can’t afford to pay. Those who need and want help should be able to get help.
That moonshot will require decent roads as well as sturdy towers. | https://medium.com/@MarcLauritsen01/roads-towers-and-online-legal-help-57957a25767 | CC-MAIN-2020-16 | refinedweb | 3,284 | 56.66 |
- Headline it:
pip install
Sample XML
Here’s a simple small XML snippet representing a person. The XML contains a persons name and address.
<persons> <person id="54321"> <name> <first>Troy</first> <middle>J</middle> <last>Grosfield</last> </name> <address> <street>123 North Lane</street> <state>CO</state> <city>Denver</city> <zip>12345</zip> </address> </person> </persons>
Read XML
Here’s a simple script to read and print the sample XML from above.
from elementtree import ElementTree as et # Load the xml content from a string content = et.fromstring(xml_from_above) # Get the person or use the .findall method to get all # people if there's more than person person = content.find("person") first_name = person.find("name/first") middle_name = person.find("name/middle") last_name = person.find("name/last") # Get the persons address address = person.find("address") street = address.find("street") state = address.find("state") city= address.find("city") zip = address.find("zip") # Print output print "id: " + person.attrib.get('id') print "first name: " + first_name.text print "middle name: " + middle_name.text print "last name: " + last_name.text print "street: " + street.text print "state: " + state.text print "city: " + city.text print "zip: " + zip.text
Output
id: 54321 first name: Troy middle name: J last name: Grosfield street: 123 North Lane state: CO city: Denver zip: 12345
This is a fairly basic XML parsing demo, but should give you insight on how to get element values as well as element attribute values from XML content.
- 5 Comments »
5 Comments
Agree with Killer, too many people get carried away with this, Keep-it-simple for the newbies!
Thanks Troy.
… [Trackback]…
[…] There you will find 83780 more Infos: blog.troygrosfield.com/2010/12/18/parsing-xml-with-python-using-elementtree/ […]…
Thanks very much for this simple example, I believe this is the best example for a beginner like me.
… [Trackback]…
[…] Read More Infos here: blog.troygrosfield.com/2010/12/18/parsing-xml-with-python-using-elementtree/ […]…
this is the best and the most simplest xml parser I saw on the net… those on the oreilly website contains lot of codes…
good job… python = simplicity :) | http://blog.troygrosfield.com/2010/12/18/parsing-xml-with-python-using-elementtree/ | CC-MAIN-2021-21 | refinedweb | 347 | 61.63 |
Cleanup: isolate UDIM parameters into a struct Passing multiple UDIM arguments into the packing function is awkward especially since the caller may not be using UDIM. Use an argument to store UDIM packing parameters which can be NULL, which operates without any UDIM support. Add a function that extracts these parameters out of the image space allowing for multiple functions to take UDIM parameters in the future.
UV: Pack to closest/active UDIM Implements T78397 Extends the functionality of pack islands operator to allow packing UVs to either the closest or active UDIM tile. This provides 2 new options for packing UVs : * Closest UDIM: Selected UVs will be packed to the UDIM tile they were placed on. If not present on a valid UDIM tile, the UVs will be packed to the closest UDIM in UV space * Active UDIM: Selected UVs will be packed to the active UDIM image tile In case, no image is present in the UV editor, then UVs will be packed to the tile on the UDIM grid where the 2D cursor is located. Reviewed By: campbellbarton Maniphest Tasks: T78397 Ref D12680
WM: only return PASS_THROUGH on PRESS for selection operators Some selection operators return (PASS_THROUGH & FINISHED) so the tweak event isn't suppressed from the PRESS event having been handled. This is now restricted to events with a PRESS action. Without this, using CLICK for selection was passing the event through which could run other actions unintentionally.
UI: enable the depend-on-cursor flag for some operators - Bend (Transform). - Extrude to Cursor. - Lasso Select (related operators such as node-cut links, mask.. etc). - Rip Mesh / UV's. - Vertex/Edge Slide.
WM: don't store selection properties typically set in the key-map While this was already the case for the most part some selection operators stored common settings for reuse such as "toggle", "extend" & "deselect". Disabling storing these settings for later execution as it means failure to set these options in the key-map re-uses the value of the shortcut that was last called. Skip saving these settings since this is a case where reusing them isn't helpful. Resolves T90275.
Cleanup: consistent use of tags: NOTE/TODO/FIXME/XXX Also use doxy style function reference `#` prefix chars when referencing identifiers.
Cleanup: Spelling Mistakes This patch fixes many minor spelling mistakes, all in comments or console output. Mostly contractions like can't, won't, don't, its/it's, etc. Differential Revision: Reviewed by Harley Acheson
Performance: Limit recounting during selection mode flushing. This patch ensures that selection mode flushing updates total selection counts internally. This reduces recounting when we are sure that the input total selection counts were up to date. For example for circle selection the total selection counts were correct. But during flushing the selection could have been changed and therefore the selection was always recounted. This increased the performance on selected system from 6.90 FPS to 8.25 FPS during circle selection operations. Before: {F10179981} After: {F10179982} Reviewed By: mano-wii Differential Revision: | https://git.blender.org/gitweb/gitweb.cgi/blender.git/atom?f=source/blender/editors/uvedit | CC-MAIN-2021-43 | refinedweb | 504 | 54.02 |
Opened 5 years ago
Closed 4 years ago
#25670 closed Bug (fixed)
`dictsort` does not work when `arg` parameter is numeric
Description
According to
dictsort documentation, it orders given list of dictionaries, using
arg as property in each dictionary:
@register.filter(is_safe=False) def dictsort(value, arg): """ Takes a list of dicts, returns that list sorted by the property given in the argument. """ try: return sorted(value, key=Variable(arg).resolve) except (TypeError, VariableDoesNotExist): return ''
However, it is not possible to order list of dictionaries by a numeric key. Let's consider the following test case:
def test_sort_list_of_tuple_like_dicts(self): data = [{'0': 'a', '1': '42'}, {'0': 'c', '1': 'string'}, {'0': 'b', '1': 'foo'}] sorted_data = dictsort(data, '0') self.assertEqual([{'0': 'a', '1': '42'}, {'0': 'b', '1': 'foo'}, {'0': 'c', '1': 'string'}], sorted_data)
This test fails with the following message:
Traceback (most recent call last): File ".../django/tests/template_tests/filter_tests/test_dictsort.py", line 50, in test_sort_list_of_tuple_like_dicts {'0': 'c', '1': 'string'}], sorted_data) AssertionError: Lists differ: [{'0': 'a', '1': '42'}, {'0': 'b', '1': 'foo'}, {'0': 'c', '1': 'string'}] != [{'0': 'a', '1': '42'}, {'0': 'c', '1': 'string'}, {'0': 'b', '1': 'foo'}] First differing element 1: {'0': 'b', '1': 'foo'} {'0': 'c', '1': 'string'} - [{'0': 'a', '1': '42'}, {'0': 'b', '1': 'foo'}, {'0': 'c', '1': 'string'}] + [{'0': 'a', '1': '42'}, {'0': 'c', '1': 'string'}, {'0': 'b', '1': 'foo'}]
The
dictsort uses
sorted function with
key=Variable(arg).resolve. When
arg is
'0',
key function should behave like
operator.itemgetter('0'), but
Variable('0').resolve(context) returns
0 regardless of given
context.
There are five usages of
dictsort with
"0" as
arg in debug.py
As mentioned by bmispelon, this may be some kind of regression in Django 1.3.
Change History (8)
comment:1 Changed 5 years ago by
comment:2 Changed 5 years ago by
comment:3 Changed 5 years ago by
comment:4 Changed 4 years ago by
I'm not sure about this. It seems like the old behavior was somewhat accidental. Having a filter called "dictsort" work on a list of lists seems a bit odd and unintuitive. Maybe we should ask for other opinions on the django-developers mailing list. Maybe there is a common third-party filter that would do the job that we could add to builtins.
comment:5 Changed 4 years ago by
After some further thought, I changed my mind given the fact that dictsort doesn't work with numeric string keys. That's definitely a bug that should be fixed.
comment:6 Changed 4 years ago by
I left comments for improvement on the pull request.
This ticket is a spinoff of #25646. | https://code.djangoproject.com/ticket/25670 | CC-MAIN-2020-29 | refinedweb | 437 | 61.56 |
For the October 2010 TAG F2F
Jonathan Rees
13 October 2010
15 and 25 October 2010:
deletions, {additions}
This document is {in very rough form and is} likely to be revised. When citing please look for the latest revision.
There has been considerable debate and confusion on the topic of what constitutes an adequate persistent reference on the Web. This memo attempts a first-principles analysis of the question in order to establish a framework in which solutions can be compared rationally.
To ensure an inclusive treatment, "reference" here means both traditional "pre-Web" references such as
Mary-Claire van Leunen.
A Handbook for Scholars.
Knopf, New York, 1985.
and Web references such as
<a href="">HTML 4.01 Specification</a>
I'll call the document referenced by a reference its target.
A reference can (but needn't) indirect through a specific catalog, as any of
K. 626
PMID:16899496
doi:10.1093/bib/bbl025
{15 Oct added 'doi:'}
The targets of these references are: a composition by Mozart that is indexed in the Köchel catalog; a scholarly article that is indexed in the US National Library of Medicine bibliographic database; and the same article, which happens to be known the handle system. Each of these forms is well recognized inside a substantial community (musicians, biologists, librarians[?]). Similarly we have
which refers to a technical specification by indirecting through a well-known catalog system known as the Web..
References vary greatly in the efficiency with which they can be chased using well-known methods. To illustrate this here are some points in this spectrum:
I'll define persistence as survival beyond events that you would expect to imply extinction, such as the survival of an author's works beyond his/her death, or the survival of a product beyond the life of the company making it. As properties go, persistence is somewhat peculiar: because it refers to the future, there is no test for it, so any assessment of persistence is speculative.
By persistent reference I mean the persistent ability to chase a reference. In order for a reference to be persistent, therefore, the reference itself and the target must be persistent, and during its lifetime there has to be some well-known apparatus, perhaps different ones at different times, competent to chase the reference.
Time scales of 10 to 100 years are typically cited in discussions of document and reference persistence. Usually what's of interest is survival beyond the particular projects or people that created the document, or the apparatus that initially enabled the reference.
The ideal reference is both fast (can be chased quickly and automatically) and persistent (can be chased over the long run). This section surveys failure modes and ways that failures can be either prevented or remedied.
Persistent reference requires that there be a working reference-chasing apparatus through time. In today's architecture the apparatus might consist of server(s), network(s), and client(s). To maintain function the apparatus needs to either not break, or it needs to be replaced or repaired when it does breaks. Since any apparatus typically has many parts, prevention and repair can be effected in many different places.
Following are major failure modes for persistent reference, and with each a grab-bag of techniques proposed or in use, intended as illustration. No endorsement of any technique is intended.
See Masinter 2006 [tbd: link, see below] for a thorough treatment of document persistence.
{That is, it exists somewhere, but the party seeking it doesn't know where.}
For example, search engines are in many ways the ideal apparatus for reference-chasing; the problem is that the results delivered are not necessarily either unique or correct, so a manual postpass is required to locate the desired document among the many 'hits' returned.
Server-side reference maintenance and mapping may be the dominant persistent-reference strategy on the Web today. A dead reference (404) is often fixed at the source, not in the apparatus connecting the source to the target. We send email to the webmaster, or the webmaster does an audit, and often the desired document can be found somewhere, and the reference is updated.
Similarly, generic tactics such as adding a prefix to convert a handle to an http: URI are widely deployed. This is fine if either the http: reference itself is durable, or if the source-side mapping can change over time to track changes in available services. (handle.net works now, but a different service may have to take over in the future.)
Unfortunately a reference that is stored in such a way that it can't be "improved" before presentation to a user is a common case. For example, the reference may be stored in a private file that does not enjoy constant maintenance, or it may be stored in an archive that by policy or license terms must not be modified.
File formats go extinct, and with them the documents that use them. - TBD.}
In choosing one form of reference over another - traditional vs. electronic, urn: vs. http:, and so on - one is placing a bet that the form you chose will be adequately persistent, or at least more persistent than the one you didn't choose. Communicating the reference is an act of faith, and an assessment of persistence includes an assessment of the interests and prospects of all institutions that would be involved in stewardship of the reference and its target.
If responsibility for different parts of a reference is spread among multiple institutions - as it is, at present, in the case of URIs - then persistence is only as good as the weakest participant.
Some common considerations when assessing potential persistence bets:
Ubiquity
Competence
Values
Safety net
Let's look at some examples of persistent reference schemes to see how they size up as persistence risks.
Köchel listing: The listing itself is small and well-known among musicians (it's even in Wikipedia), and Mozart is so popular that all of his works are well indexed and highly replicated. This system requires no particular institutional support. But the number itself (K. 626) is not automatically actionable.
MIT Dspace repository: (example: hdl:1721.1/36048) {hdl: added} Chasing these references relies on the handle system and on MIT, both of which seem pretty good bets (the details would take us too far afield I think). The competence to chase bare handle strings is not widespread, but this deficit is remedied by mapping the handle to an elaborated reference that contains a corresponding http: URI (). If http: falls into disfavor in the future, or handle.net is threatened, a different server-side mapping can be substituted.
Crossref DOIs: {The Digital Object Identifier (DOI) system
has wide penetration in academic publishing and is trusted as a
source of reliable identifiers. The primary force behind DOIs is
Crossref, which is funded by the publishing industry.
(Traditionally the publishing industry has left responsibility for
persistence up to libraries and archives, so this arrangement
is an experiment.) Because of their success, a careful
threat analysis of Crossref DOIs is warranted, examining the same
issues that affect any putatively persistent identifier system. In
particular, provisioning depends on Crossref's publisher/members,
so we should ask what happens when they withdraw from the system;
and Crossref itself may have
organizational vulnerabilities related to, say, database replica
licensing or organizational succession plan, that threaten
longevity. Of course, as with any such system, as these identifiers
become increasingly important to the community, they gain resistance
to failure, since even if Crossref suddenly
disappeared, another organization could step in to
recover the index and provide the needed services.}
{from earlier draft:} Crossref is a service provided to the
publishing industry, which by tradition, market forces, and
appropriate division of labor is not expected to be involved in
persistence. While the Crossref management very likely considers
persistence to be very important, this is not the primary purpose or
function of Crossref, so DOIs should be handled carefully as
references. However, the DOI has become so important that
it probably falls under the category of "too big to fail" - if
something went wrong with the system, there would be a scramble
among its customers to repair it.
HTTP URIs at w3.org:... is similar to the Dspace case. IETF, which nominally controls the URI scheme namespace (http:), has persistence and stability in general as a core value and has gained widespread trust; anyhow they wouldn't be respected if they tried to change the meaning of 'http:' in an unfriendly way. The weak link is probably ICANN, which does not have persistence as a core value, and a nagging feeling that W3C might lose its domain name registration, although if anyone can keep a domain name registered indefinitely then one would think W3C can. If ICANN were to unilaterally end access to W3C's document via the domain name, it is likely that the community of users of URIs would rally to recover this access by bypassing ICANN. w3.org is probably "too big to fail".
Webcite:... has earned the favor
of the publishing
industry as an archive of selected Web documents and a source of
stable URIs for them. webcitation.org likely has many of the same properties as, such as, appropriate core values (sorry, I do not have many
details at present). If its
current steward fails, it will probably be taken over by its
customers.
OCLC PURL server:... Of course the "p" in the acronym is as usual wishful thinking; most purl.org URIs have proven ephemeral. That does not mean that some of them aren't good bets. While OCLC is a stable, central institution, vulnerabilities to purl.org come from the uncertain business model for the service (distraction), from the usual discomfort of the OCLC/ICANN relationship, from the fragility of purl.org's authorization framework (what if authority to repair redirection is lost through death or bankruptcy) and of course from all the vulnerabilities of the organizations responsible for the secondary reference that is the target of the redirect.
Those not comfortable with my "too big to fail" analysis of ICANN dependence {i.e. w3.org references are not vulnerable to w3.org losing its domain name registration because the safety net would arrange for ICANN to be bypassed in that event} might be motivated to attempt to carve out a new part of the domain name system that really has institutional backing from ICANN for persistence, not just leases requiring perpetual re-registration. For example - I am not really proposing this, just conducting a thought experiment - one might convince ICANN to create a top-level 'urn' domain that has multilateral community backing to operate according to the URN scheme registrations on deposit with IETF, i.e. would by universal agreement have the same meaning as urn:foo:123, whether the author of the URN registration (or whoever) has paid their registration fees or not.
Conservative practitioners will probably not want to rely on a URI or any other short "identifier" string alone for reference - they would continue to provide references as a combination of conventional metadata (slow to act on) and machine-actionable forms such as some http: URIs. It might be nice if this practice of hybrid references were codified a little bit, perhaps using RDFa; at present the metadata is rarely parseable by machine and each publisher has its own way to present it, and you can't even tell automatically where one reference ends and the next starts.
The notion that URNs and handles are persistent references and http: URIs aren't is easily seen as simplistic when it is recognized that persistence is a function not of technology but of the intentions and competence of the institutions involved in access. Legions of URNs have already failed as persistent references, and many http: URIs are likely to succeed.
On the other hand, http: is the recommended way to "identify" something according to Web architecture, and has the obvious advantage over other kinds of references of being directly actionable using today's infrastructure. http: therefore deserves careful consideration as a persistent reference mechanism.
Today's http: apparatus relies on the ICANN-based domain name system, and therefore seems to have an inherent weakness due to ICANN's lack of commitment to stability and persistence. I've mentioned several workarounds for this, including taking ICANN out of the loop, negotiating with ICANN, and vigilance around registrations.
I've suggested that domain names that are "too big to fail", such
as w3.org, ought to be good persistence bets. But what about
smaller operators, such as minor institutional archives that are
competently managed and replicated, but which in the event of an
accessibility compromise (domain name registration loss) would not
be able to rally any of the remedies given above?
These
organizations (and their successors) are the primary market of the
handle system and URNs.
Can they can advocate simple http: URIs as
adequate references?
Well, obviously they can rely on..., which is probably another "too big to fail" domain. Another solution would be reliance on a new "persistent" DNS zone as sketched above.
{Remaining questions: Is persistent reference an appropriate use of bare http: URIs? If not, what is the recommended alternative? If so, what advice to we give to the community regarding assessment of persistence of any particular http: URI?}
TBD: Expand into a longer 'discussion' section
TAG issue 50 (URNs and registries) is not primarily about persistence, but understanding persistence seems to remain the most challenging barrier to the formulation of any kind of policy recommendation for persistent references. I hope this memo sheds some light on ISSUE-50, or at least helps to organize the problem space.
Larry Masinter and Michael Welch.
A system for long-term document preservation.
IS&T Archiving 2006 Conference.
Larry Masinter.
Problems URIs Don't Solve.
Presentation at TWIST 99, The Workshop on Internet-scale Software Technologies, Internet Scale Naming.
TBD: expand reading list
Thanks to MacKenzie Smith for insight into the minds of archivists and librarians, to Henry Thompson for his prior work on ISSUE-50, to Larry Masinter for general advice, and to Alan Ruttenberg for draft comments. | http://www.w3.org/2001/tag/doc/persistent-reference/ | CC-MAIN-2017-17 | refinedweb | 2,373 | 50.87 |
Extractive Text Summarization Using spaCy in Python
- 2599
Extractive Text Summarization Using spaCy in Python.We started off with a simple explanation of TF-IDF and the difference in our approach. Then, we moved on to install the necessary modules and language model.
Traditionally, TF-IDF (Term Frequency-Inverse Data Frequency) is often used in information retrieval and text mining to calculate the importance of a sentence for text summarization.
The TF-IDF weight is composed of two terms:
- TF: Term Frequency — Measures how frequently a term occurs in a document. Since every document is different in length, it is possible that a term would appear many more times in long documents than shorter ones. Thus, the term frequency is often divided by the document length, such as the total number of terms in the document, as a way of normalization.
TF(t) = (Number of times term t appears in a document) / (Total number of terms in the document)
- IDF: Inverse Document Frequency — Measures how important a term is. While computing the term frequency, all terms are considered equally important. However, it is known that certain terms may appear a lot of times but have little importance in the document. We usually term these words stopwords. For example: is, are, they, and so on.
IDF(t) = log_e(Total number of documents / Number of documents with term t in it)
TF-IDF is not a good choice if you are dealing with multiple domains. An unbalanced dataset tends to be biased and it will greatly affect the result.
A common term in a domain might be an important term in another domain. As the saying goes: “One man’s meat is another man’s poison.”
Let’s try to do it differently by using just the important terms. This piece focuses on identifying the top sentences in an article as follows:
- Tokenize the article using spaCy’s language model.
- Extract important keywords and calculate normalized weight.
- Calculate the importance of each sentence in the article based on keyword appearance.
- Sort the sentences based on the calculated importance.
Let’s move on to the next section to start installing the necessary modules.
1. Setup
We will be using
pip install to download the spaCy module. It is highly recommended to create a virtual environment before you proceed.
Run the terminal in administrator mode as we will need admin privilege to create a symlink when we download the language model later on.
pip install -U spacy
Once you are done with the installation, the next step is to download the language model. I will be using the large English model for this tutorial. Feel free to check the official website for the complete list of available models.
python -m spacy download en_core_web_lg
It should take some time to download the model as it is about 800MB in size. If you experienced issues while downloading the model, you can try to use the smaller language models.
When you’re done, run the following command to check whether spaCy is working properly. It also indicates the models that have been installed.
python -m spacy validate
Let’s move on to the next section. We will be writing some code in Python.
2. Implementation
Import
Add the following import declaration at the top of your Python file.
import spacy from collections import Counter from string import punctuation
Counter will be used to count the frequency while
punctuation contains the most commonly-used punctuation.
Load spaCy Model
The spaCy model can be loaded in two ways. The first one is via the built-in load function.
nlp = spacy.load("en_core_web_lg")
If you experience issues with not being able to load the model, even though it’s installed, you can load the model via the second method. You need to import the module directly and you can use it to load the model.
import en_core_web_lg nlp = en_core_web_lg.load()
Top Sentence Function
We will be writing our code inside a function. It is always a good idea to modularize your code whenever possible.
def top_sentence(text, limit):
The function accepts two input parameters:
text— The input text. Can be a short paragraph or a big chuck of text.
limit— The number of sentences to be returned.
The first part is to tokenize the input text and find out the important keywords in it.
keyword = [] pos_tag = ['PROPN', 'ADJ', 'NOUN', 'VERB'] doc = nlp(text.lower()) #1 for token in doc: #2 if(token.text in nlp.Defaults.stop_words or token.text in punctuation): continue #3 if(token.pos_ in pos_tag): keyword.append(token.text) #4
#1— Convert the input text to lower case and tokenize it with spaCy’s language model.
#2— Loop over each of the tokens.
#3— Ignore the token if it is a stopword or punctuation.
#4— Append the token to a list if it is the part-of-speech tag that we have defined.
I have covered a tutorial on extracting keywords and hashtags from text previously. Feel free to check it out.
The next step it to normalize the weightage of the keywords.
freq_word = Counter(keyword) #5 max_freq = Counter(keyword).most_common(1)[0][1] #6 for w in freq_word: freq_word[w] = (freq_word[w]/max_freq) #7
#5—
Counterwill convert the list into a dictionary with their respective frequency values.
#6— Get the frequency of the top most-common keyword.
#7— Loop over each item in the dictionary and normalize the frequency. The top most-common keyword will have frequency value of 1.
If you were to print out the
max_freq variable, you should see a
Counter object that contains a dictionary with a normalized value range from 0 to 1.
Counter({'ghosn': 1.0, 'people': 0.625, 'equipment': 0.625, 'japan': 0.625, 'yamaha': 0.5, 'musical': 0.5, 'escape': 0.5, 'cases': 0.375, 'japanese': 0.375})
Let’s proceed to calculate the importance of the sentences by identifying the occurrence of important keywords and sum up the value.
sent_strength={} for sent in doc.sents: #8 for word in sent: #9 if word.text in freq_word.keys(): #10 if sent in sent_strength.keys(): sent_strength[sent]+=freq_word[word.text]#11 else: sent_strength[sent]=freq_word[word.text]#12
sent_strength.py
#8— Loop over each sentence in the text. The sentences are split by the spaCy model based on full-stop punctuation.
#9— Loop over each word in a sentence based on spaCy’s tokenization.
#10— Determine if the word is a keyword based on the keywords that we extracted earlier.
#11— Add the normalized keyword value to the key-value pair of the sentence.
#12— Create a new key-value in the
sent_strengthdictionary using the sentence as key and the normalized keyword value as value.
You should be able to get a cumulative normalized value for each sentence. We will use this value to determine the top sentences.
summary = [] sorted_x = sorted(sent_strength.items(), key=lambda kv: kv[1], reverse=True) #13 counter = 0 for i in range(len(sorted_x)): #14 summary.append(str(sorted_x[i][0]).capitalize()) #15 counter += 1 if(counter >= limit): break #16 return ' '.join(summary) #17
#13— Sort the dictionary based on the normalized value. Set the reverse parameter to
Truefor descending order.
#14— Loop over each of the sorted items.
#15— Append the results to a list. The first letter of the sentence is capitalized since we have converted the text to lower case during the tokenization process. Please note that all other words in the sentence are still in lowercase. Kindly implement your own mapping system if you intend to keep the upper case in sentences.
#16— Break out of the loop if the counter exceeds the limit that we have set. This determines how many sentences are to be returned from the function.
#17— Return the list as a string by joining each element with a space.
Your function should be as follows.
def top_sentence(text, limit): keyword = [] pos_tag = ['PROPN', 'ADJ', 'NOUN', 'VERB'] doc = nlp(text.lower()) for token in doc: if(token.text in nlp.Defaults.stop_words or token.text in punctuation): continue if(token.pos_ in pos_tag): keyword.append(token.text) freq_word = Counter(keyword) max_freq = Counter(keyword).most_common(1)[0][1] for w in freq_word: freq_word[w] = (freq_word[w]/max_freq) sent_strength={} for sent in doc.sents: for word in sent: if word.text in freq_word.keys(): if sent in sent_strength.keys(): sent_strength[sent]+=freq_word[word.text] else: sent_strength[sent]=freq_word[word.text] summary = [] sorted_x = sorted(sent_strength.items(), key=lambda kv: kv[1], reverse=True) counter = 0 for i in range(len(sorted_x)): summary.append(str(sorted_x[i][0]).capitalize()) counter += 1 if(counter >= limit): break return ' '.join(summary)
Result
Let’s test the function that we have just created. Feel free to use any kind of text for your test. I will be using the following text as example for this tutorial.
example_text = '' (YAMCY) (NSANF).'''
Remember to pass in the number of sentences to be returned for the
top_sentence function.
print(top_sentence(example_text, 3))
I obtained the following result:
."
Future Improvement
Although we have implemented a simple extractive text summarization function, there are still a lot of improvements that can be made based on your use cases. Examples include:
- Retaining the case-sensitivity of the result.
- Connection between sentences as the order of sentences will be out of order for the final output. The result will not be good for story-based text.
- Normalization between long sentences and short sentences.
3. Conclusion
Let’s recap what we have learned today.
We started off with a simple explanation of TF-IDF and the difference in our approach. Then, we moved on to install the necessary modules and language model.
Next, we implemented a custom function to get the top sentences from a chunk of text.
During each function call, we will extract the keywords, normalize the frequency value, calculate the importance of the sentence, sort the sentence based on its importance value, and return the result based on the limit that we have passed to the function.
Finally, we explored how we can improve it further based on our own use cases.
Thanks for reading and hope to see you again in the next article! | https://geekwall.in/p/ktzyCdr8/extractive-text-summarization-using-spacy-in-python | CC-MAIN-2020-40 | refinedweb | 1,694 | 59.9 |
> Maybe this is a FAO, but I'll give it a try anyway...
>
> I'm using the <xsl:attribute> to add new namspaces to my
> definitions tag.
The spec explicitly says you can't do this. Namespace declarations are
not attributes in the XSLT data model.
It works fine when I specify the namspace
> like this: <xsl:attribute""
> </xsl:attribute>
If it works fine, then your XSLT processor has a bug.
I would be interested to know why you are trying to add namespaces
dyamically. This is partly so that I can advise you how to solve your
problem, but it is also because the XML Query working group is currently
debating the requirements for creating namespaces in the result
document.
Michael Kay
Software AG
home: Michael.H.Kay@xxxxxxxxxxxx
work: Michael.Kay@xxxxxxxxxxxxxx
>
>:
>
>
XSL-List info and archive: | http://www.oxygenxml.com/archives/xsl-list/200207/msg01486.html | crawl-002 | refinedweb | 139 | 75.5 |
Update 12/21/10 - Removed extra _FLTA call. thanks jaberwocky6669
- Added extra styles to Listview. Changed formating of Drive Labels. Thanks Yashied
- TAB stop still not working and still searching for memory leak pointed out by Yashied.
- fixed another potential bug when trying to navigate to empty media like CDROM. Thanks Melba23
*- New GUIFrame has been updated to support x64 so be sure to get the new one.
Updated 12/19/10 - fixed bug displaying empty drives in 'My Computer'. (Thanks Melba23
Ever since Melba23 released GUIFrame I have been wanting to rewrite my Explorer Listview UDF for frames. So here it is.
This UDF takes any frame created by GUIFrame and turns it into a MS-Windows like Explorer window. Each frame is populated with a Listview, address bar, back button and status bar.
These are the tasks the UDF will handle for the following Ctrls:
Listview:
- Double click navigation
- Column Header Right Click menu for selecting which columns to display
- Sorting functions for all columns
- formating of date displays
- Parent folder ( [..] ) is automatically added to each directory navigated to.
Comboboxes:
- type any directory into the address bar and press enter to navigate
- directory navigation is auto added to the combobox drop down menu
Status bars:
- Status bar folder stats are automatically updated
All history navigation is also handeled. In future I wont to work on a way to have back and fwd navigation. currently only back.
Special thanks go to Melba23 for GUIFrame. Thank You!
Zip file includes two examples and UDF. Please let me know if you have any trouble of any kind or any ideas. I love all feedback. Thanks for looking.
Previous Downloads:125
GUIFrame can be found HERE
Example 1
#include "ExpFrame.au3" ;Create GUI $hGUI_1 = GUICreate("GUI Frames", 600, 200, 100, 100, $WS_OVERLAPPEDWINDOW) ;Create Frames $iFrame_A = _GUIFrame_Create($hGUI_1) ;Set min sizes for the frames _GUIFrame_SetMin($iFrame_A, 100, 100) ;Create Explorer Listviews _GUIFrame_Switch($iFrame_A, 1) $LocalList = _ExpFrame_Create(_GUIFrame_GetHandle($iFrame_A, 1)) _GUIFrame_Switch($iFrame_A, 2) $LocalList2 = _ExpFrame_Create(_GUIFrame_GetHandle($iFrame_A, 2), 'c:\') ;Register functions for Windows Message IDs needed. GUIRegisterMsg($WM_SIZE, "_ExpFrame_WMSIZE_Handler") GUIRegisterMsg($WM_NOTIFY, "_ExpFrame_WMNotify_Handler") GUIRegisterMsg($WM_COMMAND, '_ExpFrame_WMCOMMAND_Handler') ; Set resizing flag for all created frames _GUIFrame_ResizeSet(0) GUISetState(@SW_SHOW, $hGUI_1) While 1 $nMsg = GUIGetMsg() _ExpFrame_GUIGetMsg($nMsg);<< Dont forget to pass msg to _ExpFrame_GUIGetMsg!! Switch $nMsg Case $GUI_EVENT_CLOSE Exit EndSwitch WEnd
Edited by Beege, 22 January 2011 - 12:23 AM. | http://www.autoitscript.com/forum/topic/123409-explorer-frame-udf-updated-1-21-11/ | CC-MAIN-2014-35 | refinedweb | 392 | 56.45 |
TL;DR: In this article, you will learn the basic concepts of React. After that, you will have the chance to see React in action while creating a simple Q&A (Questions & Answers) app that relies on a backend API. If needed, you can refer to this GitHub repository to see the code that supports this article. Have fun!
Prerequisites
Although not mandatory, you should know a few things about JavaScript, HTML, and CSS before diving into this React app tutorial. If you do not have previous experience with these technologies, you might not have an easy time following the instructions in this article, and it might be a good idea to step back and learn about them first. If you do have previous experience with web development, then stick around and enjoy the article.
Also, you will need to have Node.js and NPM installed in your development machine. If you don't have these tools yet, please, read and follow the instructions on the official documentation to install Node.js. NPM, which stands for Node Package Manager, comes bundled into the default Node.js installation.
Lastly, you will need access to a terminal in your operating system. If you are using MacOS or Linux, you are good to go. If you are on Windows, you will probably be able to use PowerShell without problems.
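If you want to double-check that both tools are available before moving on, you can ask each one for its version from the terminal (the exact version numbers you see will differ from machine to machine):

```shell
# Print the installed versions; any recent release is fine for this tutorial.
node --version
npm --version
```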
React Introduction
React is a JavaScript library that Facebook created to facilitate the development of Single-Page Applications (a.k.a. SPAs). Since Facebook open-sourced and announced React, the library has become extremely popular all around the world and has gained mass adoption in the developer community. Nowadays, although React is still mainly maintained by Facebook, other big companies (like Airbnb, Auth0, and Netflix) have embraced the library and are using it to build their products. If you check this page, you will find a list with more than a hundred companies that use React.
In this section, you will learn about some basic concepts that are important to keep in mind while developing apps with React. However, you have to be aware that the goal here is not to give you a complete explanation of these topics. The goal is to give you enough context so you can understand what is going on while creating your first React application.
For more information on each topic, you can always consult the official React documentation.
React and the JSX Syntax
First and foremost, you need to know that React uses a funny syntax called JSX. JSX, which stands for JavaScript XML, is a syntax extension to JavaScript that enables developers to use XML (and, as such, HTML) to describe the structure of the user interface. This section won't get into the details of how JSX really works. The idea here is to give you a heads up, so you don't get surprised when you see this syntax in the next sections.
So, when it comes to JSX, it is perfectly normal to see things like this:
function showRecipe(recipe) {
  if (!recipe) {
    return <p>Recipe not found!</p>;
  }

  return (
    <div>
      <h1>{recipe.title}</h1>
      <p>{recipe.description}</p>
    </div>
  );
}
In this case, the showRecipe function is using the JSX syntax to show the details of a recipe (i.e., if the recipe is available) or a message saying that the recipe was not found. If you are not familiar with this syntax, don't worry. You will get used to it quite soon. Then, if you are wondering why React uses JSX, you can read their official explanation here.
"React embraces the fact that rendering logic is inherently coupled with other UI logic: how events are handled, how the state changes over time, and how the data is prepared for display." - Introducing JSX
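What makes the JSX form above work is a compile step: tools like Babel rewrite every JSX tag into a plain function call before the browser runs it. The sketch below uses a toy createElement (not React's real implementation) just to make the shape of that output visible:

```javascript
// Toy stand-in for React.createElement: it only records what it was
// called with, which is enough to see what JSX compiles down to.
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

// The JSX <h1 className="title">{recipe.title}</h1> becomes, roughly:
const recipe = { title: 'Pancakes' };
const element = createElement('h1', { className: 'title' }, recipe.title);

console.log(element.type);            // 'h1'
console.log(element.props.className); // 'title'
console.log(element.children);        // [ 'Pancakes' ]
```

In other words, JSX is only convenient notation: by the time your code runs, every tag has already been turned into ordinary JavaScript.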
React Components
Components in React are the most important pieces of code. Everything you can interact with in a React application is (or is part of) a component. For example, when you load a React application, the whole thing will be handled by a root component that is usually called App. Then, if this application contains a navigation bar, you can bet that this bar is defined inside a component called NavBar or similar. Also, if this bar contains a form where you can input a value to trigger a search, you are probably dealing with another component that handles this form.
The biggest advantage of using components to define your application is that this approach lets you encapsulate different parts of your user interface into independent, reusable pieces. Having each part in its own component makes it easier to reason about, test, and reuse each piece. When you find your bearings with this approach, you will also see that having a tree of components (which is what you get when you divide everything into components) facilitates state propagation.
Defining Components in React
Now that you learned that React applications are nothing more than a tree of components, you have to learn how to create components in React. So, basically, there are two types of React components that you can create: Functional Components and Class Components.
The difference between these two types is that functional components are simply "dumb" components that do not hold any internal state (making them great to handle presentation), and class components are more complex components that can hold internal state. For example, if you are creating a component that will only show the profile of the user that is authenticated, you can create a functional component as follows:
function UserProfile(props) {
  return (
    <div className="user-profile">
      <img src={props.userProfile.picture} />
      <p>{props.userProfile.name}</p>
    </div>
  );
}
There is nothing particularly interesting about the component defined above as no internal state is handled. As you can see, this component simply uses a userProfile that was passed to it to define a div element that shows the user's picture (the img element) and their name (inside the p element).
However, if you are going to create a component to handle things that need to hold some state and perform more complex tasks, like a subscription form, you will need a class component. To create a class component in React, you would proceed as follows:
```jsx
class SubscriptionForm extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      acceptedTerms: false,
      email: '',
    };
  }

  updateCheckbox(checked) {
    this.setState({
      acceptedTerms: checked,
    });
  }

  updateEmail(value) {
    this.setState({
      email: value,
    });
  }

  submit() {
    // ... use email and acceptedTerms in an ajax request or similar ...
  }

  render() {
    return (
      <form>
        <input
          type="email"
          onChange={(event) => {this.updateEmail(event.target.value)}}
          value={this.state.email}
        />
        <input
          type="checkbox"
          checked={this.state.acceptedTerms}
          onChange={(event) => {this.updateCheckbox(event.target.checked)}}
        />
        <button onClick={() => {this.submit()}}>Submit</button>
      </form>
    );
  }
}
```
As you can see, this new component is handling way more stuff than the other one. For starters, this component defines three input elements (actually, two `input` tags and one `button`, but the button is also considered an input element). The first one enables users to input their email addresses. The second one is a checkbox where users can state whether or not they agree to some arbitrary terms. The third one is a button that users have to click to finish the subscription process.
Also, you will notice that this component defines an internal state (`this.state`) with two fields: `acceptedTerms` and `email`. In this case, the component uses the `acceptedTerms` field to represent the users' choice in relation to the fictitious terms and the `email` field to hold the email address they input.

So, basically speaking, if you need a component to handle dynamic things that depend on an internal state, like user input, you will need a class component. However, if you need a component that won't perform any logic that relies on an internal state, you can stick with a functional component.
Note: This is just a brief explanation of the different component types and how they behave. In fact, the last component created in this section, `SubscriptionForm`, could easily be transformed into a functional component too. In this case, you would have to move its internal state up in the component tree and pass down these values and the functions that trigger state changes. To learn more about React components, please check this article.
Re-Rendering React Components
Another very important concept that you have to understand is how and when React re-renders components. Luckily, this is an easy concept to learn. There are only two things that can trigger a re-render in a React component: a change to the `props` that the component receives or a change to its internal state.
Although the previous section didn't get into the details of how to change the internal state of a component, it did show how to achieve this. Whenever you use a stateful component (i.e., a class component), you can trigger a re-render by changing its state through the `setState` method. What is important to keep in mind is that you should not change the `state` field directly. You have to call the `setState` method with the new desired state:
```jsx
// this won't trigger a re-render:
updateCheckbox(checked) {
  this.state.acceptedTerms = checked;
}

// this will trigger a re-render:
this.setState({
  acceptedTerms: checked,
});
```
In other words, you have to treat `this.state` as if it were immutable.
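To see why this matters, here is a small plain-JavaScript sketch (no React involved, just an illustration): change detection that relies on reference comparison, as React's does, cannot notice an in-place mutation.

```javascript
// A stand-in for a component's previous state.
const prevState = { acceptedTerms: false };

// Mutating in place keeps the same reference, so a shallow check
// concludes nothing changed.
const mutated = prevState;
mutated.acceptedTerms = true;
console.log(mutated === prevState); // true — looks "unchanged"

// Creating a new object (conceptually what a setState update produces)
// yields a new reference, so the change is detectable.
const next = { ...prevState, acceptedTerms: true };
console.log(next === prevState); // false — change detected
```

This is why assigning to `this.state` directly leaves React unaware that anything happened, while `setState` always results in a detectable update.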
Note: To achieve better performance, React does not guarantee that `setState()` will update `this.state` immediately. The library may wait for a better opportunity when there are more things to update. So, it is not reliable to read `this.state` right after calling `setState()`. For more information, check the official documentation on `setState()`.
Now, when it comes to a stateless component (i.e., a functional component), the only way to trigger a re-render is to change the `props` that are passed to it. In the last section, you didn't have the chance to see the whole context of how a functional component is used, nor what `props` really are. Luckily again, this is another easy topic to grasp. In React, `props` are nothing more than the properties (thus the name) passed to a component.
So, in the `UserProfile` component defined in the last section, there was only one property being passed/used: `userProfile`. In that section, however, there was a missing piece responsible for passing properties (`props`) to this component: where and how you use that component. To do so, you just have to use your component as if it were an HTML element (this is a nice feature of JSX), as shown here:
```jsx
import React from 'react';
import UserProfile from './UserProfile';

class App extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      user: {
        name: 'Bruno Krebs',
        picture: '',
      },
    };
  }

  render() {
    return (
      <div>
        <UserProfile userProfile={this.state.user} />
      </div>
    );
  }
}
```
That's it. This is how you define and pass `props` to a child component. Now, if you change the `user` in the parent component (`App`), this will trigger a re-render of the whole component and, subsequently, change the `props` being passed to `UserProfile`, triggering a re-render on it as well.
Note: React will also re-render class components if their `props` are changed. This is not a particular behavior of functional components.
What You Will Build with React
All right! With the concepts described in the last sections in mind, you are ready to start developing your first React application. In the following sections, you will build a simple Q&A (Question & Answer) app that will allow users to interact with each other asking and answering questions. To make the whole process more realistic, you will use Node.js and Express to create a rough backend API. Don't worry if you are not familiar with developing backend apps with Node.js. This is going to be a very straightforward process, and you will be up and running in no time.
At the end of this tutorial, you will have a React app supported by a Node.js backend that looks like this:
Developing a Backend API with Node.js and Express
Before diving into React, you will quickly build a backend API to support your Q&A app. In this section, you will use Express alongside with Node.js to create this API. If you don't know what Express is or how it works, don't worry, you don't need to get into its details now. Express, as stated by its official documentation, is an unopinionated, minimalist web framework for Node.js. With this library, as you will see here, you can quickly build apps to run on servers (i.e., backend apps).
So, to get things started, open a terminal in your operating system, move to a directory where you create your projects, and issue the following commands:
```bash
# create a directory for your project
mkdir qa-app

# move into it
cd qa-app

# create a directory for your Express API
mkdir backend

# move into it
cd backend

# use NPM to start the project
npm init -y
```
The last command will create a file called `package.json` inside your `backend` directory. This file will hold the details (like the dependencies) of your backend API. Then, after these commands, run the following one:
```bash
npm i body-parser cors express helmet morgan
```
This command will install five dependencies in your project:

- `body-parser`: a library that you will use to convert the body of incoming requests into JSON objects.
- `cors`: a library that you will use to configure Express to add headers stating that your API accepts requests coming from other origins. This is also known as Cross-Origin Resource Sharing (CORS).
- `express`: Express itself.
- `helmet`: a library that helps secure Express apps with various HTTP headers.
- `morgan`: a library that adds logging capabilities to your Express app.
Note: As the goal of this article is to help you develop your first React application, the list above contains a very brief explanation of what each library brings to the table. You can always refer to the official web pages of these libraries to learn more about their capabilities.
After installing these libraries, you will be able to see that NPM changed your `package.json` file to include them in the `dependencies` property. Also, you will see a new file called `package-lock.json`. NPM uses this file to make sure that anyone else using your project (or even yourself in other environments) always gets versions compatible with those that you are installing now.
Then, the last thing you will need to do is to develop the backend source code. So, create a directory called `src` inside your `backend` directory and create a file called `index.js` inside this new directory. In this file, you can add the following code:
```js
// import dependencies
const express = require('express');
const bodyParser = require('body-parser');
const cors = require('cors');
const helmet = require('helmet');
const morgan = require('morgan');

// define the Express app
const app = express();

// the database
const questions = [];

// enhance your app security with Helmet
app.use(helmet());

// use bodyParser to parse application/json content-type
app.use(bodyParser.json());

// enable all CORS requests
app.use(cors());

// log HTTP requests
app.use(morgan('combined'));

// retrieve all questions
app.get('/', (req, res) => {
  const qs = questions.map(q => ({
    id: q.id,
    title: q.title,
    description: q.description,
    answers: q.answers.length,
  }));
  res.send(qs);
});

// get a specific question
app.get('/:id', (req, res) => {
  const question = questions.filter(q => (q.id === parseInt(req.params.id)));
  if (question.length > 1) return res.status(500).send();
  if (question.length === 0) return res.status(404).send();
  res.send(question[0]);
});

// insert a new question
app.post('/', (req, res) => {
  const {title, description} = req.body;
  const newQuestion = {
    id: questions.length + 1,
    title,
    description,
    answers: [],
  };
  questions.push(newQuestion);
  res.status(200).send();
});

// insert a new answer to a question
app.post('/answer/:id', (req, res) => {
  const {answer} = req.body;
  const question = questions.filter(q => (q.id === parseInt(req.params.id)));
  if (question.length > 1) return res.status(500).send();
  if (question.length === 0) return res.status(404).send();
  question[0].answers.push({
    answer,
  });
  res.status(200).send();
});

// start the server
app.listen(8081, () => {
  console.log('listening on port 8081');
});
```
To keep things short, the following list briefly explains how things work in this file (also, be sure to check the comments in the code above):
- Everything starts with five `require` statements. These statements load all the libraries you installed with NPM.
- After that, you use Express to define a new app (`const app = express();`).
- Then, you create an array that will act as your database (`const questions = [];`). In a real-world app, you would use a real database like MongoDB, PostgreSQL, MySQL, etc.
- Next, you call the `use` method of your Express app four times, each one to configure one of the libraries you installed alongside Express.
- Right after that, you define your first endpoint (`app.get('/', ...);`). This endpoint is responsible for sending the list of questions back to whoever requests it. The only thing to notice here is that, instead of sending the `answers` as well, this endpoint compiles them to send just the number of answers each question has. You will use this info in your React app.
- After your first endpoint, you define another one. In this case, this new endpoint is responsible for responding to requests with a single question (now, with all its answers).
- After this endpoint, you define your third one. This time you define an endpoint that is activated whenever someone sends a POST HTTP request to your API. The goal here is to take the message sent in the `body` of the request to insert a `newQuestion` into your database.
- Then, you have the last endpoint in your API. This endpoint is responsible for inserting answers into a specific question. In this case, you use a route parameter called `id` to identify the question to which you must add the new answer.
- Lastly, you call the `listen` function on your Express app to run your backend API.
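The answer-count transformation performed by the first endpoint can be seen in isolation. Here is a small sketch (with made-up sample data) of the `map` call it runs over the in-memory database:

```javascript
// Sample in-memory "database" mirroring the shape used by the API above.
const questions = [
  { id: 1, title: 'How do I make a sandwich?', description: '...', answers: [] },
  { id: 2, title: 'What is React?', description: '...', answers: [{ answer: 'A UI library.' }] },
];

// The list endpoint replaces each `answers` array with its length,
// so clients receive a count instead of the full answers.
const qs = questions.map(q => ({
  id: q.id,
  title: q.title,
  description: q.description,
  answers: q.answers.length,
}));

console.log(qs[0].answers); // 0
console.log(qs[1].answers); // 1
```

The full answers stay on the server; only the single-question endpoint returns them, which keeps the question list lightweight.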
With this file in place, you are good to go. To run your app, just issue the following command:
```bash
# from the qa-app/backend directory
node src
```
Then, to test if everything is really working, open a new terminal and issue the following commands:
```bash
# issue an HTTP GET request
curl localhost:8081

# issue a POST request
curl -X POST -H 'Content-Type: application/json' -d '{
  "title": "How do I make a sandwich?",
  "description": "I am trying very hard, but I do not know how to make a delicious sandwich. Can someone help me?"
}' localhost:8081

curl -X POST -H 'Content-Type: application/json' -d '{
  "title": "What is React?",
  "description": "I have been hearing a lot about React. What is it?"
}' localhost:8081

# re-issue the GET request
curl localhost:8081
```
If you don't know it, `curl` is a command-line tool that lets you issue HTTP requests with ease. In the code snippet above, you can see that, with just a few keystrokes, you can issue different types of HTTP requests, define which headers (`-H`) they need, and pass data (`-d`) to backends.
The first command will trigger an HTTP GET request that results in an empty array being printed out (`[]`). Then, the second and third commands will issue POST requests to insert two questions into your API, and the fourth command will issue another GET request to verify that these questions were properly inserted.
If you manage to get the expected results, leave your server running and move on to the next section.
Developing Applications with React
With your backend API up and running, you are finally ready to start developing your React application. Not that long ago, developers willing to create apps with React would have a hard time setting up all the tools needed (e.g., webpack) to scaffold a React application. However (and luckily), the scenario has changed after Facebook published a tool called Create React App.
With this tool, you can scaffold a new React application with just one command. As such, to create your React app, open a new terminal and go to the same directory where you created the `backend` Node.js app (i.e., the `qa-app` directory). From there, issue the following command:
```bash
# the npx command was introduced on npm@5.2.0
npx create-react-app frontend
```
This will make NPM download and run `create-react-app` in a single command, passing to it `frontend` as the desired directory for your new application. The process involved in scaffolding a new application, as you will see after running the command above, is not that simple. The tool even needs a bunch of seconds (or a couple of minutes, depending on your internet connection) to create the whole thing. However, when this tool finishes, you can issue the following commands to run your React app:
```bash
# move into the new directory
cd frontend

# start your React app
npm start
```
Note: If you have Yarn installed, the `create-react-app` tool will use it to bootstrap your project. As such, you will need to use `yarn start`, or you will have to run `npm install` before running `npm start`.
The last command issued above will start a development server that listens on port `3000` and will open the new app in your default web browser.
After seeing your app, you can stop the server by hitting `Ctrl` + `C` so you can install a couple of dependencies that you will need in your application. So, back in your terminal and after stopping the server, run the following command:
```bash
npm i react-router react-router-dom
```
This command will install two libraries to help you handle the navigation in your app. The first one, `react-router`, is the main library that enables seamless navigation. The second one, `react-router-dom`, provides DOM bindings for React Router.
Note: If you were using React Native to develop an app for mobile devices, you would install `react-router-native` instead.
Then, after installing these libraries, you can open your React project in your preferred IDE so you can start the real work.
Cleaning Up your React App
Well, actually, before starting to develop your app, you can remove a few files and clean up its code a little bit. For starters, you can remove the `./src/App.test.js` file because you won't create automated tests in this tutorial. Although this is an important topic, you will skip it for now so you can focus on learning React.
Note: After learning about React, you might get interested into learning about how to add automated tests to your app. A good resource to help you on that matter is the Testing React Applications with Jest blog post.
Besides that, you can also remove two other files, as you won't use them: `./src/logo.svg` and `./src/App.css`. Then, after removing these files, open the `./src/App.js` file and replace its code with this:
```jsx
import React, { Component } from 'react';

class App extends Component {
  render() {
    return (
      <div>
        <p>Work in progress.</p>
      </div>
    );
  }
}

export default App;
```
You won't really use this new version of your `App` component, as you will soon replace the contents of this file again. However, to avoid having code that won't compile, it is a good idea to refactor your `App` component now.
Configuring the React Router in Your App
After cleaning things up, you will need to configure React Router in your app. This will be a pretty simple step, as you will see. However, keep in mind that to master React Router you would need to read at least one other article that specifically introduces the subject and all its features.
The thing is, React Router is a very complete solution and, in your first React app, you will touch only the tip of the iceberg. If you do want to learn more about React Router, please, head to the official documentation.
"React Router is a powerful solution that can help you build amazing applications."
Having that in mind, open the `./src/index.js` file and replace its contents with this:

```jsx
import React from 'react';
import ReactDOM from 'react-dom';
import { BrowserRouter } from 'react-router-dom';
import './index.css';
import App from './App';
import registerServiceWorker from './registerServiceWorker';

ReactDOM.render(
  <BrowserRouter>
    <App />
  </BrowserRouter>,
  document.getElementById('root')
);
registerServiceWorker();
```
In the new version of this file, you are just importing `BrowserRouter` from the `react-router-dom` library and encapsulating your `App` component inside this router. That's all you need to start using React Router.
Note: If you haven't seen this file before, this is the piece of logic that makes your React app render. More specifically, `document.getElementById('root')` defines on which HTML element React must render your app. You can find this `root` element inside the `./public/index.html` file.
Configuring Bootstrap in Your React App
To make your React app more appealing from the User Interface (UI) point of view, you are going to configure Bootstrap on it. If you don't know Bootstrap, this is an extremely popular library that helps developers create good-looking, responsive web apps with ease.
There are multiple ways to integrate React and Bootstrap. However, as the requirements for your first application are quite simple and you won't need any of Bootstrap's interactive components (i.e., you are just interested in the basic styles that this library provides), you are going to follow the easiest strategy available. That is, you are simply going to open your `./public/index.html` file and update it as follows:
```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <!-- ... tags above the title stay untouched ... -->
    <title>Q&App</title>
    <link rel="stylesheet" href="">
  </head>
  <!-- ... body definition stays untouched ... -->
</html>
```
In this case, you are actually doing two things: you are changing the `title` of your React app to Q&App, and you are making your app load a variation of Bootstrap called `flatly`. If you are interested, you can use any variation available at Bootswatch, or you can also use the default flavor of Bootstrap. However, you will probably find the variations available on Bootswatch more appealing.
Creating a Navigation Bar in Your React App
Now that you have configured your app to use Bootstrap, you are ready to create your first React component. In this section, you will create a component called `NavBar` (which stands for Navigation Bar), and you will add it to your React app.
To do so, create a new directory called `NavBar` inside the `src` directory of your application and insert a new file called `NavBar.js` inside it. In this file, input the following code:
```jsx
import React from 'react';
import {Link} from 'react-router-dom';

function NavBar() {
  return (
    <nav className="navbar navbar-dark bg-primary fixed-top">
      <Link className="navbar-brand" to="/">
        Q&App
      </Link>
    </nav>
  );
}

export default NavBar;
```
As you can see, the navigation bar component that you are creating is a functional component. You can create it like a stateless (i.e., functional) component because you don't really need to hold any internal state.
Now, to use your new component, you can open your `./src/App.js` file and update it as follows:
```jsx
import React, { Component } from 'react';
import NavBar from './NavBar/NavBar';

class App extends Component {
  render() {
    return (
      <div>
        <NavBar/>
        <p>Work in progress.</p>
      </div>
    );
  }
}

export default App;
```
Then, if you run your app by issuing `npm start` from a terminal, you will see the navigation bar at the top of it. However, what you won't see is the "work in progress" message that your `App` component contains. The problem here is that the navigation bar you created uses a CSS class (`fixed-top`) provided by Bootstrap that fixes it to the top. This means this component does not take the default vertical space as it would if it were a normal `div` element.
To fix this situation, open the `./src/index.css` file and add a `margin-top` rule, as shown here:
```css
body {
  /* ... other rules ... */
  margin-top: 100px;
}

/* ... other rules ... */
```
Now, if you check your app again, you will see your navigation bar and the "work in progress" message.
Creating a Class Component with React
After creating the navigation bar, what you can do next is to create a stateful component (a class component) to fetch questions from your backend and to show it to your users. To fetch these questions, you will need the help of another library, Axios. In a few words, Axios is a promise-based HTTP client for the browser and for Node.js. In this tutorial, you will only use it in the browser (i.e., in your React app).
To install Axios, stop the React development server and issue the following command:
```bash
npm i axios
```
Then, create a new directory called `Questions` inside `src` and a new file called `Questions.js` inside it. In this file, you can insert the following code:
```jsx
import React, {Component} from 'react';
import {Link} from 'react-router-dom';
import axios from 'axios';

class Questions extends Component {
  constructor(props) {
    super(props);
    this.state = {
      questions: null,
    };
  }

  async componentDidMount() {
    const questions = (await axios.get('http://localhost:8081/')).data;
    this.setState({
      questions,
    });
  }

  render() {
    return (
      <div className="container">
        <div className="row">
          {this.state.questions === null && <p>Loading questions...</p>}
          {
            this.state.questions &&
            this.state.questions.map(question => (
              <div key={question.id}>
                <Link to={`/question/${question.id}`}>
                  <div className="card text-white bg-success mb-3">
                    <div className="card-header">Answers: {question.answers}</div>
                    <div className="card-body">
                      <h4 className="card-title">{question.title}</h4>
                      <p className="card-text">{question.description}</p>
                    </div>
                  </div>
                </Link>
              </div>
            ))
          }
        </div>
      </div>
    );
  }
}

export default Questions;
```
There are a few important things going on in this file. First, as mentioned before, you are creating a stateful component that will hold the questions available in your backend API. To do it properly, you start your component with the `questions` property set to `null` and, when React finishes mounting your component (which triggers the `componentDidMount` method), you issue a GET request (through the `axios.get` call) to your backend. In the meantime between your request and the response from the backend, React renders your component with a message saying "Loading questions..." (it does so because you instructed it to behave like that by adding `this.state.questions === null &&` before the message).
Note: This component touches a topic that was not addressed in this article: the lifecycle of React components. In this case, you are just using one of the extension points provided by React, the `componentDidMount` method. You don't really need to understand how this works to follow this tutorial but, after finishing it, make sure you learn about this topic.
Then, whenever Axios gets a response from the backend, you put the `data` returned inside a constant called `questions`, and you update the state of the component (`this.setState`) with it. This update, as you already learned, triggers a re-render and makes React show all the questions retrieved.
Now, in relation to how your questions are shown, you are using a bunch of `div` elements with CSS classes provided by Bootstrap to create a nice Card component. If you want to tweak how this card is shown, make sure to check the docs.
Besides that, note that you are using a component called `Link` (from `react-router-dom`) to redirect users to the following path when clicked: `/question/${question.id}`. In the next section, you will create a component to show the answers to a question chosen by the user.
So, as you already understand how your component behaves, the next thing you need to do is to update the code of your `App` component to use the new one:
```jsx
import React, { Component } from 'react';
import NavBar from './NavBar/NavBar';
import Questions from './Questions/Questions';

class App extends Component {
  render() {
    return (
      <div>
        <NavBar/>
        <Questions/>
      </div>
    );
  }
}

export default App;
```
Then, if you run your app again (`npm start`), you will see this nice page:
Routing Users with React Router
With all these features in place, one important step that you have to learn about is how to handle routing in your React app. In this section, you will learn about this topic while creating a component that shows the details of the questions available in your backend.
For starters, you can create a new directory called `Question` (singular now) and a file called `Question.js` (also singular) inside it. Then, you can insert the following code into this file:
```jsx
import React, {Component} from 'react';
import axios from 'axios';

class Question extends Component {
  constructor(props) {
    super(props);
    this.state = {
      question: null,
    };
  }

  async componentDidMount() {
    const { match: { params } } = this.props;
    const question = (await axios.get(`http://localhost:8081/${params.questionId}`)).data;
    this.setState({
      question,
    });
  }

  render() {
    const { question } = this.state;
    if (question === null) return <p>Loading question...</p>;
    return (
      <div className="container">
        <div className="row">
          <div className="jumbotron col-12">
            <h1 className="display-3">{question.title}</h1>
            <p className="lead">{question.description}</p>
            <hr className="my-4" />
            <p>Answers:</p>
            {
              question.answers.map((answer, idx) => (
                <p className="lead" key={idx}>{answer.answer}</p>
              ))
            }
          </div>
        </div>
      </div>
    );
  }
}

export default Question;
```
The way this new component works is actually very similar to the way the `Questions` component works. This is a stateful component that uses Axios to issue a GET request to the endpoint that retrieves the whole details of a question, and it updates the page whenever it gets a response back.
Nothing really new here. What is going to be new is the way this component gets rendered.
So, open the `App.js` file and replace its contents with this:
```jsx
import React, { Component } from 'react';
import {Route} from 'react-router-dom';
import NavBar from './NavBar/NavBar';
import Question from './Question/Question';
import Questions from './Questions/Questions';

class App extends Component {
  render() {
    return (
      <div>
        <NavBar/>
        <Route exact path='/' component={Questions}/>
        <Route exact path='/question/:questionId' component={Question}/>
      </div>
    );
  }
}

export default App;
```
In the new version of your `App` component, you are using two `Route` elements (provided by `react-router-dom`) to tell React when you want the `Questions` component rendered and when you want the `Question` component rendered. More specifically, you are telling React that if your users navigate to `/` (`exact path='/'`), you want them to see `Questions` and, if they navigate to `/question/:questionId`, you want them to see the details of a specific question.
Note that the last route defines a parameter called `questionId`. When you created the `Questions` (plural) component, you added a link that uses the `id` of the question. React Router uses this `id` to form the link and then gives it to your `Question` component (`params.questionId`). With this `id`, your component uses Axios to tell the backend exactly which question is being requested.
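To build intuition about what `:questionId` does, here is a hypothetical, heavily simplified sketch of how a path pattern can be matched against a URL to extract route params (React Router's real matching is far more capable; the `matchPath` function below is an illustration, not its API):

```javascript
// Naive matcher: splits pattern and pathname into segments, treats
// segments starting with ':' as params, and requires the rest to match.
function matchPath(pattern, pathname) {
  const patternParts = pattern.split('/');
  const pathParts = pathname.split('/');
  if (patternParts.length !== pathParts.length) return null;

  const params = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith(':')) {
      // a ':name' segment captures the corresponding URL segment
      params[patternParts[i].slice(1)] = pathParts[i];
    } else if (patternParts[i] !== pathParts[i]) {
      return null; // literal segments must match exactly
    }
  }
  return params;
}

console.log(matchPath('/question/:questionId', '/question/1')); // { questionId: '1' }
console.log(matchPath('/question/:questionId', '/about')); // null
```

This is roughly why your component receives `match.params.questionId` as a string: it is simply the URL segment that lined up with `:questionId`.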
If you check your application now, you will be able to see all your questions in the home page, and you will be able to navigate to a specific question. However, you probably won't see any answer in your new component because you never added one. For now, to add answers to your questions, you can issue requests similar to the following one:
```bash
curl -X POST -H 'Content-Type: application/json' -d '{
  "answer": "Just spread butter on the bread, and that is it."
}' localhost:8081/answer/1
```
After that, if you reload your app and navigate to the page of your first question, you will see a page similar to this:
Securing your React App
Your application has reached a state where it has almost everything it needs for prime time. There are just a few features missing. For example, right now, your users have no means of creating questions nor answering them through your app. Another example is that there is no way to log into your application. Besides that, the questions and answers do not provide information about their authors.
In this section, you will learn how to implement all these features with ease. You will start by subscribing to Auth0 to help you with the authentication feature, then you will secure your backend and, to wrap things up, you will secure your React app and refactor the `Question` component so that authenticated users can answer questions.
Configuring an Auth0 Account
For starters, you will need to sign up to Auth0 so you can integrate it in your application. If you already have an existing account, you can use it without a problem. If you do not have one, now is a good time to sign up for a free Auth0 account. With your free account, you will have access to the following features:
- Passwordless authentication
- Lock for Web, iOS & Android
- Up to 2 social identity providers (like Twitter and Facebook)
- Unlimited Serverless Rules
- Community Support
After signing up, you will have to create an Auth0 Application to represent your app. So, in your dashboard, click on the Applications section on the vertical menu and then click on Create Application.
On the dialog shown, you will have to insert a name for your application (for example, "Q&App") and then you will have to choose Single Page Application as its type. Then, when you click on the Create button, Auth0 will create your Application and redirect you to its Quick Start section. From there, you will have to click on the Settings tab to change the configuration of your Auth0 Application and to copy some values from it.
So, after heading to the Settings tab, search for the Allowed Callback URLs field and insert on it.
You are probably wondering what this URL means and why you need it. The reason why you need this URL is that, while authenticating through Auth0, your users will be redirected to its Universal Login Page and, after the authentication process (successful or not), they will be redirected back to your application. For security reasons, Auth0 will redirect your users only to URLs registered on this field.
With this value in place, you can click on the Save Changes button and leave this page open.
Securing your Backend API with Auth0
To secure your Node.js API with Auth0, you will have to install and configure only two libraries:

- `express-jwt`: a middleware that validates a JSON Web Token (JWT) and sets `req.user` with its attributes.
- `jwks-rsa`: a library to retrieve RSA public keys from a JWKS (JSON Web Key Set) endpoint.
To install these libraries, stop your backend API by hitting Ctrl + C and issue the following command:
# from the backend directory
npm i express-jwt jwks-rsa
After that, open its ./src/index.js file and import these libraries as follows:
// ... other require statements ...
const jwt = require('express-jwt');
const jwksRsa = require('jwks-rsa');
Then, still on this file, create the following constant right before the first POST endpoint (app.post):
// ... require statements ...
// ... app definitions ...
// ... app.get endpoints ...

const checkJwt = jwt({
  secret: jwksRsa.expressJwtSecret({
    cache: true,
    rateLimit: true,
    jwksRequestsPerMinute: 5,
    jwksUri: `https://<YOUR_AUTH0_DOMAIN>/.well-known/jwks.json`
  }),
  audience: '<YOUR_AUTH0_CLIENT_ID>',
  issuer: `https://<YOUR_AUTH0_DOMAIN>/`,
  algorithms: ['RS256']
});

// ... app.post endpoints ...
// ... app.listen ...
This constant is actually an Express middleware that will validate ID tokens. Note that, to make it work, you will have to replace the <YOUR_AUTH0_CLIENT_ID> placeholder with the value presented in the Client ID field of your Auth0 Application. Also, you will have to replace <YOUR_AUTH0_DOMAIN> with the value presented in the Domain field (e.g. bk-tmp.auth0.com).
Then, you will have to make your two POST endpoints use the checkJwt middleware. To do this, replace these endpoints with this:
// insert a new question
app.post('/', checkJwt, (req, res) => {
  const {title, description} = req.body;
  const newQuestion = {
    id: questions.length + 1,
    title,
    description,
    answers: [],
    author: req.user.name,
  };
  questions.push(newQuestion);
  res.status(200).send();
});

// insert a new answer to a question
app.post('/answer/:id', checkJwt, (req, res) => {
  const {answer} = req.body;

  const question = questions.filter(q => (q.id === parseInt(req.params.id)));
  if (question.length > 1) return res.status(500).send();
  if (question.length === 0) return res.status(404).send();

  question[0].answers.push({
    answer,
    author: req.user.name,
  });

  res.status(200).send();
});
Both endpoints introduce only two changes. First, both of them declare that they want to use checkJwt, which makes them unavailable to unauthenticated users. Second, both add a new property called author on questions and answers. These new properties receive the name (req.user.name) of the users issuing requests.
With these changes in place, you can start your backend API again (node src) and start refactoring your React application.
Note: You are not adding the checkJwt middleware to your GET endpoints because you want them to be publicly accessible. That is, you want unauthenticated users to be able to see questions and answers, but you don't want them to create new questions nor to answer existing ones.
Second Note: As your backend API is just holding data in memory, restarting it makes you lose all the questions and answers you inserted previously. To add new questions through curl, you would have to fetch an ID Token from Auth0. However, you can wait until you finish the whole app to add questions and answers through your app's interface.
Securing your React App with Auth0
To secure your React application with Auth0, you will have to install only one library: auth0-js. This is the official library provided by Auth0 to secure SPAs like yours. To install it, stop the development server and issue this command:
# from the frontend directory
npm install auth0-js
After that, you can create a class to help you with the authentication workflow. For that, create a new file called Auth.js inside the src directory, and insert the following code:

import auth0 from 'auth0-js';

class Auth {
  constructor() {
    this.auth0 = new auth0.WebAuth({
      // the following three lines must be updated with your own values
      domain: '<YOUR_AUTH0_DOMAIN>',
      audience: 'https://<YOUR_AUTH0_DOMAIN>/userinfo',
      clientID: '<YOUR_AUTH0_CLIENT_ID>',
      redirectUri: 'http://localhost:3000/callback',
      responseType: 'id_token',
      scope: 'openid profile'
    });

    this.getProfile = this.getProfile.bind(this);
    this.handleAuthentication = this.handleAuthentication.bind(this);
    this.isAuthenticated = this.isAuthenticated.bind(this);
    this.signIn = this.signIn.bind(this);
    this.signOut = this.signOut.bind(this);
  }

  getProfile() {
    return this.profile;
  }

  getIdToken() {
    return this.idToken;
  }

  isAuthenticated() {
    return new Date().getTime() < this.expiresAt;
  }

  signIn() {
    this.auth0.authorize();
  }

  handleAuthentication() {
    return new Promise((resolve, reject) => {
      this.auth0.parseHash((err, authResult) => {
        if (err) return reject(err);
        if (!authResult || !authResult.idToken) {
          return reject(err);
        }
        this.idToken = authResult.idToken;
        this.profile = authResult.idTokenPayload;
        // set the time that the id token will expire at
        this.expiresAt = authResult.idTokenPayload.exp * 1000;
        resolve();
      });
    })
  }

  signOut() {
    // clear id token, profile, and expiration
    this.idToken = null;
    this.profile = null;
    this.expiresAt = null;
  }
}

const auth0Client = new Auth();

export default auth0Client;
Note: Just like before, you will have to replace <YOUR_AUTH0_CLIENT_ID> and <YOUR_AUTH0_DOMAIN> with the values extracted from your Auth0 Application.
As you can see, in this file, you are creating a module that defines the Auth class with seven methods:
- constructor: Here, you create an instance of auth0.WebAuth with your Auth0 values and define some other important configurations. For example, you are defining that Auth0 will redirect users (redirectUri) to the callback URL you inserted in the Allowed Callback URLs field previously.
- getProfile: This method returns the profile of the authenticated user, if any.
- getIdToken: This method returns the idToken generated by Auth0 for the current user. This is what you will use while issuing requests to your POST endpoints.
- isAuthenticated: This method checks whether the current time is still before expiresAt, that is, whether the user still holds a valid ID token.
- signIn: This method initializes the authentication process. In other words, this method sends your users to the Auth0 login page.
- handleAuthentication: This is the method your app calls right after users are redirected back from Auth0. It extracts the user's profile and ID token from the URL hash and stores them, together with the token's expiration time.
- signOut: This method signs a user out by setting the profile, id_token, and expiresAt to null.
Lastly, this module creates an instance of the Auth class and exposes it to the world. That is, in your app, you won't have more than one instance of the Auth class.
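Note the unit conversion hidden in the expiresAt bookkeeping: JWT exp claims are expressed in seconds since the epoch, while JavaScript's Date.now() returns milliseconds. The validity check boils down to:

```javascript
// Same comparison isAuthenticated() performs, extracted for clarity.
// expSeconds comes from the token's `exp` claim (seconds since epoch).
function isTokenStillValid(expSeconds, nowMs = Date.now()) {
  return nowMs < expSeconds * 1000;
}

const oneHourFromNow = Math.floor(Date.now() / 1000) + 3600;
console.log(isTokenStillValid(oneHourFromNow)); // true
console.log(isTokenStillValid(0));              // false (expired long ago)
```

Forgetting the `* 1000` factor is a classic source of "everyone is always logged out" bugs.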
After defining this helper class, you can refactor your NavBar component to allow users to authenticate. So, open the NavBar.js file and replace its code with the following one:
import React from 'react';
import {Link, withRouter} from 'react-router-dom';
import auth0Client from '../Auth';

function NavBar(props) {
  const signOut = () => {
    auth0Client.signOut();
    props.history.replace('/');
  };

  return (
    <nav className="navbar navbar-dark bg-primary fixed-top">
      <Link className="navbar-brand" to="/">
        Q&App
      </Link>
      {
        !auth0Client.isAuthenticated() &&
        <button className="btn btn-dark" onClick={auth0Client.signIn}>Sign In</button>
      }
      {
        auth0Client.isAuthenticated() &&
        <div>
          <label className="mr-2 text-white">{auth0Client.getProfile().name}</label>
          <button className="btn btn-dark" onClick={() => {signOut()}}>Sign Out</button>
        </div>
      }
    </nav>
  );
}

export default withRouter(NavBar);
The new version of your navigation bar component imports two new elements:

- withRouter: This is a component provided by React Router to enhance your component with navigation capabilities (e.g., access to the history object).
- auth0Client: This is the singleton instance of the Auth class you just defined.
With the auth0Client instance, the NavBar decides if it must render a Sign In button (which it does for unauthenticated users) or a Sign Out button (for authenticated users). If the user is properly authenticated, this component also shows their name. And, if an authenticated user hits the Sign Out button, your component calls the signOut method of auth0Client and redirects the user to the home page.
After refactoring the NavBar component, you will have to create a component to handle the callback route. To define this component, create a new file called Callback.js inside the src directory and insert the following code into it:
import React, {Component} from 'react';
import {withRouter} from 'react-router-dom';
import auth0Client from './Auth';

class Callback extends Component {
  async componentDidMount() {
    await auth0Client.handleAuthentication();
    this.props.history.replace('/');
  }

  render() {
    return (
      <p>Loading profile...</p>
    );
  }
}

export default withRouter(Callback);
The component you just defined is responsible for two things. First, it calls the handleAuthentication method to fetch the user information sent by Auth0. Second, it redirects your users to the home page (history.replace('/')) after handleAuthentication finishes. In the meantime, this component shows the following message: "Loading profile...".
Then, to wrap up the integration with Auth0, you will have to open the App.js file and update it as follows:
// ... other import statements ...
import Callback from './Callback';

class App extends Component {
  render() {
    return (
      <div>
        {/* ... NavBar and the other two Routes ... */}
        <Route exact path='/callback' component={Callback}/>
      </div>
    );
  }
}

export default App;
Now, if you run your React app again (npm start), you will be able to authenticate yourself through Auth0. After the authentication process, you will be able to see your name on the navigation bar.
Adding Features to Authenticated Users
Now that you have finished integrating Auth0 into your React application, you can start adding features that only authenticated users will have access to. To conclude this tutorial, you will implement two features. First, you will enable authenticated users to create new questions. Then, you will refactor the Question (singular) component to show a form so authenticated users can answer these questions.
For the first feature, you will create a new route in your application, /new-question. This route will be guarded by a component that will check if the user is authenticated or not. If the user is not authenticated yet, this component will redirect them to Auth0 so they can do so. If the user is already authenticated, the component will let React render the form where new questions will be created.
So, for starters, you will create a new directory called SecuredRoute and create a file called SecuredRoute.js inside it. Then, in this file, you will insert the following code:
import React from 'react';
import {Route} from 'react-router-dom';
import auth0Client from '../Auth';

function SecuredRoute(props) {
  const {component: Component, path} = props;
  return (
    <Route path={path} render={() => {
      if (!auth0Client.isAuthenticated()) {
        auth0Client.signIn();
        return <div></div>;
      }
      return <Component />
    }} />
  );
}

export default SecuredRoute;
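Stripped of the React Router plumbing, the guard above reduces to a single conditional. A framework-free sketch of the same decision:

```javascript
// Either render the protected content, or trigger sign-in and render nothing.
function guard(isAuthenticated, renderComponent, signIn) {
  if (!isAuthenticated) {
    signIn();     // side effect: redirect to the identity provider
    return null;  // render nothing while the redirect happens
  }
  return renderComponent();
}

let signInCalls = 0;
console.log(guard(false, () => 'secret', () => { signInCalls++; })); // null
console.log(signInCalls);                                            // 1
console.log(guard(true, () => 'secret', () => { signInCalls++; }));  // secret
```

The empty div in the real component plays the role of the `null` return here: it gives React something harmless to paint during the redirect.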
The goal of this component is to restrict access to whatever route you configure on it. The implementation is quite simple. In this case, you are creating a functional component that takes two properties: another Component, so it can render it in case the user is authenticated; and a path, so it can configure the default Route component provided by React Router. However, before rendering anything, this component checks if the user isAuthenticated. If they are not, this component triggers the sign-in process.
Then, after creating the SecuredRoute component, you can create the component that will render the form where users will create questions. For that, create a new directory called NewQuestion and a file called NewQuestion.js inside it. Then, insert this code in the file:
import React, {Component} from 'react';
import {withRouter} from 'react-router-dom';
import auth0Client from '../Auth';
import axios from 'axios';

class NewQuestion extends Component {
  constructor(props) {
    super(props);
    this.state = {
      disabled: false,
      title: '',
      description: '',
    };
  }

  updateDescription(value) {
    this.setState({
      description: value,
    });
  }

  updateTitle(value) {
    this.setState({
      title: value,
    });
  }

  async submit() {
    this.setState({
      disabled: true,
    });

    await axios.post('', {
      title: this.state.title,
      description: this.state.description,
    }, {
      headers: { 'Authorization': `Bearer ${auth0Client.getIdToken()}` }
    });

    this.props.history.push('/');
  }

  render() {
    return (
      <div className="container">
        <div className="row">
          <div className="col-12">
            <div className="card border-primary">
              <div className="card-header">New Question</div>
              <div className="card-body text-left">
                <div className="form-group">
                  <label htmlFor="exampleInputEmail1">Title:</label>
                  <input disabled={this.state.disabled}
                         type="text"
                         onChange={(e) => {this.updateTitle(e.target.value)}}
                         className="form-control"/>
                </div>
                <div className="form-group">
                  <label htmlFor="exampleInputEmail1">Description:</label>
                  <input disabled={this.state.disabled}
                         type="text"
                         onChange={(e) => {this.updateDescription(e.target.value)}}
                         className="form-control"/>
                </div>
                <button disabled={this.state.disabled}
                        className="btn btn-primary"
                        onClick={() => {this.submit()}}>
                  Submit
                </button>
              </div>
            </div>
          </div>
        </div>
      </div>
    )
  }
}

export default withRouter(NewQuestion);
Although long, the code for this component is not complex. As you can see, in this case, you needed to create a class component so it can hold the following state:
- disabled: You are using this to disable the input elements after the user hits the Submit button.
- title: You are using this to let users define the title of the question being asked.
- description: You are using this to let users define the description of the question.
Also, you can see that you needed three methods besides constructor and render:
- updateDescription: This method is responsible for updating the description on the component's state.
- updateTitle: This method is responsible for updating the title on the component's state.
- submit: This method is responsible for issuing the new question to the backend and for blocking the input fields while the request is being made.
Note that, in the submit method, you are using auth0Client to get the ID Token of the current user and add it to the request. Without this token, the backend API would deny the request.
From the UI perspective, this component is using a bunch of Bootstrap classes to produce a nice form. Be sure to check this resource after finishing the tutorial if you need to learn about forms on Bootstrap.
Now, to see this working, you will have to update two files. First, you will have to register the new route in your App.js file:
// ... other import statements ...
import NewQuestion from './NewQuestion/NewQuestion';
import SecuredRoute from './SecuredRoute/SecuredRoute';

class App extends Component {
  render() {
    return (
      <div>
        {/* ... navbar and other routes ... */}
        <SecuredRoute path='/new-question' component={NewQuestion} />
      </div>
    );
  }
}

export default App;
Then, you will have to add a link to this new route in the Questions.js file. To do so, open this file and update it as follows:
// ... import statements ...

class Questions extends Component {
  // ... constructor and componentDidMount ...

  render() {
    return (
      <div className="container">
        <div className="row">
          <Link to="/new-question">
            <div className="card text-white bg-secondary mb-3">
              <div className="card-header">Need help? Ask here!</div>
              <div className="card-body">
                <h4 className="card-title">+ New Question</h4>
                <p className="card-text">Don't worry. Help is on the way!</p>
              </div>
            </div>
          </Link>
          {/* ... loading questions message ... */}
          {/* ... questions' cards ... */}
        </div>
      </div>
    )
  }
}

export default Questions;
With these changes in place, you will be able to create new questions after authenticating.
Then, to finish your app's features, you can refactor the Question component to include a form where users will be able to answer questions. To define this form, create a new file called SubmitAnswer.js inside the Question directory with the following code:
import React, {Component, Fragment} from 'react';
import {withRouter} from 'react-router-dom';
import auth0Client from '../Auth';

class SubmitAnswer extends Component {
  constructor(props) {
    super(props);
    this.state = {
      answer: '',
    };
  }

  updateAnswer(value) {
    this.setState({
      answer: value,
    });
  }

  submit() {
    this.props.submitAnswer(this.state.answer);
    this.setState({
      answer: '',
    });
  }

  render() {
    if (!auth0Client.isAuthenticated()) return null;
    return (
      <Fragment>
        <div className="form-group text-center">
          <label htmlFor="exampleInputEmail1">Answer:</label>
          <input type="text"
                 onChange={(e) => {this.updateAnswer(e.target.value)}}
                 value={this.state.answer}
                 className="form-control"/>
          <button className="btn btn-primary mt-2"
                  onClick={() => {this.submit()}}>
            Submit
          </button>
        </div>
      </Fragment>
    )
  }
}

export default withRouter(SubmitAnswer);
This component works in a similar fashion to the NewQuestion component. The difference here is that, instead of handling the POST request by itself, the component delegates it to someone else. Also, if the user is not authenticated, this component renders nothing.
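The delegation can be illustrated without React at all: the child keeps its own answer state and calls whatever submit function the parent handed down (here, a stub that just records answers):

```javascript
// Plain-object sketch of SubmitAnswer's contract with its parent.
function makeSubmitAnswerForm(submitAnswer) {
  let answer = '';
  return {
    updateAnswer(value) { answer = value; },
    submit() {
      submitAnswer(answer); // delegate: the form never talks to the backend
      answer = '';          // then clear the field, like setState({answer: ''})
    },
  };
}

const received = [];
const form = makeSubmitAnswerForm((a) => received.push(a));
form.updateAnswer('Use flexbox!');
form.submit();
console.log(received); // [ 'Use flexbox!' ]
```

Because the parent owns the actual submission, the same form component could later be reused against a different endpoint without any changes.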
To use this component, open the Question.js file and replace its contents with this:
import React, {Component} from 'react';
import axios from 'axios';
import SubmitAnswer from './SubmitAnswer';
import auth0Client from '../Auth';

class Question extends Component {
  constructor(props) {
    super(props);
    this.state = {
      question: null,
    };
    this.submitAnswer = this.submitAnswer.bind(this);
  }

  async componentDidMount() {
    await this.refreshQuestion();
  }

  async refreshQuestion() {
    const { match: { params } } = this.props;
    const question = (await axios.get(`${params.questionId}`)).data;
    this.setState({
      question,
    });
  }

  async submitAnswer(answer) {
    await axios.post(`${this.state.question.id}`, {
      answer,
    }, {
      headers: { 'Authorization': `Bearer ${auth0Client.getIdToken()}` }
    });
    await this.refreshQuestion();
  }

  render() {
    const {question} = this.state;
    if (question === null) return <p>Loading question...</p>;
    return (
      <div className="container">
        <div className="row">
          <div className="col-12">
            <h1 className="display-3">{question.title}</h1>
            <p className="lead">{question.description}</p>
            <hr className="my-4" />
            <SubmitAnswer questionId={question.id} submitAnswer={this.submitAnswer} />
            <p>Answers:</p>
            {
              question.answers.map((answer, idx) => (
                <p className="lead" key={idx}>{answer.answer}</p>
              ))
            }
          </div>
        </div>
      </div>
    )
  }
}

export default Question;
Here, you can see that you are defining the submitAnswer method that will issue the requests to the backend API (with the user's ID Token), and that you are defining a method called refreshQuestion. This method refreshes the contents of the question in two situations: the first time React renders this component (componentDidMount) and right after the backend API responds to the POST request of the submitAnswer method.
After refactoring the Question component, you will have a complete version of your app. To test it, open your app in the browser and start using it. After signing in, you will be able to ask questions, and you will be able to answer them as well. How cool is that?
Keeping Users Signed In after a Refresh
Although you have a fully-functional app, if you refresh your browser after signing into your application, you will notice that you have been signed out. Why is that? Because you are saving your tokens in memory (as you should do) and the memory is wiped out when you hit refresh. Not the best behavior, right?
Luckily, solving this problem is easy. You will have to take advantage of the Silent Authentication provided by Auth0. That is, whenever your app loads, it will silently ask Auth0 whether there is an active session; if there is one, Auth0 will return fresh tokens without requiring any interaction from your users.
To use the silent authentication, you will have to refactor two classes: Auth and App. However, before refactoring these classes, you will have to change a few configurations in your Auth0 account.
For starters, you will have to go to the Applications section in your Auth0 dashboard, open the application that represents your React app, and change two fields:
- Allowed Web Origins: As your app is going to issue an AJAX request to Auth0, you will need to add this field. Without this value there, Auth0 would deny any AJAX request coming from your app.
- Allowed Logout URLs: To enable users to end their session at Auth0, you will have to call the logout endpoint. Similarly to the authorization endpoint, the log out endpoint only redirects users to whitelisted URLs after the process. As such, you will have to add this field too.
After updating these fields, you can hit the Save Changes button. Then, the last thing you will have to do before focusing in your app's code is to replace the development keys that Auth0 is using to enable users to authenticate through Google.
You might not have noticed but, even though you didn't configure anything related to Google in your Auth0 account, the social login button is there and works just fine. The only reason this feature works out of the box is that Auth0 auto-configures all new accounts to use development keys registered at Google. However, when developers start using Auth0 more seriously, they are expected to replace these keys with their own. And, to enforce this, every time an app that is still using the development keys tries to perform a silent authentication, Auth0 responds that there is no active session (even though this may not be true).
So, to change these keys, move to the Social Connections on your dashboard, and click on Google. There, you will see two fields among other things: Client ID and Client Secret. This is where you will insert your keys. To get your keys, please, read the Connect your app to Google documentation provided by Auth0.
Note: If you don't want to use your Google keys, you can deactivate this social connection and rely only on users that sign up to your app through Auth0's Username and Password Authentication.
Now that you have finished configuring your Auth0 account, you can move back to your code. There, open the ./src/Auth.js file of your React app and update it as follows:
import auth0 from 'auth0-js';

class Auth {
  // ... constructor, getProfile, getIdToken, isAuthenticated, signIn ...

  handleAuthentication() {
    return new Promise((resolve, reject) => {
      this.auth0.parseHash((err, authResult) => {
        if (err) return reject(err);
        if (!authResult || !authResult.idToken) {
          return reject(err);
        }
        this.setSession(authResult);
        resolve();
      });
    })
  }

  setSession(authResult) {
    this.idToken = authResult.idToken;
    this.profile = authResult.idTokenPayload;
    // set the time that the id token will expire at
    this.expiresAt = authResult.idTokenPayload.exp * 1000;
  }

  signOut() {
    this.auth0.logout({
      returnTo: '',
      clientID: '<YOUR_AUTH0_CLIENT_ID>',
    });
  }

  silentAuth() {
    return new Promise((resolve, reject) => {
      this.auth0.checkSession({}, (err, authResult) => {
        if (err) return reject(err);
        this.setSession(authResult);
        resolve();
      });
    });
  }
}

// ... auth0Client and export ...
Note: You will have to replace <YOUR_AUTH0_CLIENT_ID> with the client ID of your Auth0 Application. You will have to use the same value you are using to configure the audience of the object passed to auth0.WebAuth in the constructor of this class.
In the new version of this class, you are:

- adding a method to set up users' details: setSession;
- refactoring the handleAuthentication method to use the setSession method;
- adding a method called silentAuth to call the checkSession function provided by auth0-js (this method also uses setSession);
- and refactoring the signOut function to make it call the logout endpoint at Auth0 and to inform, through the returnTo property, where users must be redirected after that.
Then, to wrap things up, you will have to open the ./src/App.js file and update it as follows:
// ... other imports ...
import {Route, withRouter} from 'react-router-dom';
import auth0Client from './Auth';

class App extends Component {
  async componentDidMount() {
    if (this.props.location.pathname === '/callback') return;

    try {
      await auth0Client.silentAuth();
      this.forceUpdate();
    } catch (err) {
      if (err.error !== 'login_required') console.log(err.error);
    }
  }

  // ... render ...
}

export default withRouter(App);
As you can see, the new version of this file defines what to do when your app loads (componentDidMount):
- If the requested route is /callback, the app does nothing. This is the correct behavior because, when users request the /callback route, they do so because they are being redirected by Auth0 after the authentication process. In this case, you can let the Callback component handle the process.
- If the requested route is anything else, the app tries a silentAuth. Then, if no error occurs, the app calls forceUpdate so the user can see whatever they asked for.
- If there is an error on the silentAuth, the app checks if the error is different than login_required. If this is the case, the app logs the problem. Otherwise, the app does nothing because it means the user is not signed in (or that you are using development keys, which you shouldn't).
By the way, you are enclosing your App class inside the withRouter function so you can check what route is being called (this.props.location.pathname). Without withRouter, you wouldn't have access to the location object.
Avoiding Redirecting Authenticated Users to Auth0
Before you call it a day, there is one last thing that you will have to do. The solution above will work smoothly if you are on any route but the protected one. If you are on the protected route (i.e., on /new-question) and you refresh your browser, you will get redirected to Auth0 to sign in again. The problem here is that the SecuredRoute component checks whether users are authenticated (if (!auth0Client.isAuthenticated())) before your app gets a response from the silent authentication process. As such, the app thinks that your users are not authenticated and redirects them to Auth0 (auth0Client.signIn();) so they can sign in.
To fix this misbehavior, you will have to open your SecuredRoute.js file and update it as follows:
// ... import statements ...

function SecuredRoute(props) {
  const {component: Component, path, checkingSession} = props;
  return (
    <Route path={path} render={() => {
      if (checkingSession) return <h3 className="text-center">Validating session...</h3>;
      // ... leave the rest untouched ...
    }} />
  );
}

export default SecuredRoute;
The difference now is that your SecuredRoute component will check a boolean called checkingSession that comes from props. If this boolean is set to true, it will show an h3 element saying that the app is validating the session. If this property is set to false, the component will behave just like before.
Now, to pass this property to SecuredRoute, you will have to open the App.js file and update it as follows:
// ... import statements ...

class App extends Component {
  constructor(props) {
    super(props);
    this.state = {
      checkingSession: true,
    }
  }

  async componentDidMount() {
    if (this.props.location.pathname === '/callback') {
      this.setState({checkingSession: false});
      return;
    }

    // ... leave try-catch untouched ...

    this.setState({checkingSession: false});
  }

  render() {
    // ... leave other routes untouched ...
    // replace SecuredRoute with this:
    <SecuredRoute path='/new-question'
                  component={NewQuestion}
                  checkingSession={this.state.checkingSession} />
  }
}

export default withRouter(App);
That's it! After these changes, you finally finished developing your React application. Now, if you sign in and refresh your browser (no matter which route you are in), you will see that you won't lose your session and that you won't have to sign in again. Hurray!
"I just built my first React application."
Conclusion
In this article, you had the chance to play with a lot of cool technologies and concepts. First, you learned about some important concepts that React introduces (like the component architecture and the JSX syntax). Then, you briefly learned how to create a backend API with Node.js and Express. After that, you learned how to create a nice React application and how to secure the whole thing with Auth0.
As the article introduced a lot of different topics, you didn't really have the chance to grasp all of them fully. For example, you barely touched the tip of the iceberg on some important concepts like the Component Lifecycle. You also didn't have the chance to learn what gives React a solid foundation when it comes to manipulating HTML elements. Unfortunately, diving deep into these topics is not possible as it would make the article massive (more than it is already).
So, now that you finished developing your first React application, be sure to check the links and references left throughout the tutorial and, to learn more about how React works, be sure to check the Virtual DOM and Internals article.
Also, if you need help, do not hesitate to leave a message on the comments section down below. Cheers!
servicemix-vfs
Overview
The ServiceMix VFS component provides support for reading from and writing to virtual file systems via the enterprise service bus by using the Apache commons-vfs library.
Namespace and xbean.xml
The namespace URI for the servicemix-vfs JBI component is declared in xbean.xml. This is an example of an xbean.xml file with a namespace definition with prefix vfs.
<beans xmlns: <!-- add vfs:poller or vfs:sender here --> </beans>
Endpoint types
The servicemix-vfs component defines two endpoint types:
vfs:poller :: Periodically polls a directory on one of the VFS-supported file systems for files and sends an exchange for every file
vfs:sender :: Writes the contents of an exchange to a file on one of the VFS-supported file systems | http://servicemix.apache.org/docs/4.4.x/jbi/components/servicemix-vfs.html | CC-MAIN-2018-05 | refinedweb | 121 | 52.9 |
Introduction
In this tutorial, we will build a Slack messaging system for a web app using WayScript. We integrate Slack in a demo web app built using Flask with Python and pushing the data to Slack using WayScript.
If you prefer to watch vs. read, check out the video at the bottom of this post.

When a message is received, we send it to WayScript:
import requests
import json

def send_to_slack(message):
    url = ''
    params = {
        'api_key': '',  # TODO: insert API key
        'program_id': 0,  # TODO: insert Program ID
        'variables': json.dumps([message])
    }
    # This sends the message to your WayScript script
    requests.post(url, params=params)
Your API Key and Program ID are available to copy and paste from the Trigger Module (see image above).
Now every time your code executes, your bot will post your message to Slack!.
Let us know your thoughts in the comments below or to our Discord channel.
Here's a video of this tutorial featuring Jesse a co-founder of WayScript.
Discussion (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/wayscript/tutorial-integrating-your-app-with-slack-in-3-minutes-c10 | CC-MAIN-2021-21 | refinedweb | 156 | 71.24 |
19 March 2009 12:22 [Source: ICIS news]
SINGAPORE (ICIS news)--India's Reliance Industries expects to start up a refurbished high density polyethylene (HDPE) plant at Gandhar, in Gujarat state, in the next two to three days, a source close to the company said on Thursday.
“The plant is now going through the stabilisation stage and should achieve commercial output once it is fully stabilised,” the source said.
Reliance officials were not available for comment.
The 60,000 tonne/year plant, which was acquired from the now-defunct National Organic Chemicals Industries Ltd (NOCIL) in 2005, was later moved to Gandhar from Thane, in Maharashtra state.
Ethylene feedstock for the HDPE plant would be sourced from Reliance's 430,000 tonne/year cracker at Gandhar, the source said.
Christine Dodrill - Blog
Before I built my current desktop, I had been using a 2013 Mac Pro for at least 7 years. This machine has seen me through living in a few cities (Bellevue, Mountain View and Montreal), but it was starting to show its age. Its 12 core Xeon is really no slouch (scoring about 5 minutes in my "compile the linux kernel" test), but with Intel security patches it was starting to get slower and slower as time went on.
So in March (just before the situation started) I ordered the parts for my new tower and built my current desktop machine. From the start, I wanted it to run Linux and have 64 GB of ram, mostly so I could write and test programs without having to worry about ram exhaustion.
When the parts were almost in, I had decided to really start digging into NixOS. Friends on IRC and Discord had been trying to get me to use it for years, and I was really impressed with a simple setup that I had in a virtual machine. So I decided to jump head-first down that rabbit hole, and I'm honestly really glad I did.
NixOS is built on a more functional approach to package management called Nix. Parts of the configuration can be easily broken off into modules that can be reused across machines in a deployment. If Ansible or other tools like it let you customize an existing Linux distribution to meet your needs, NixOS allows you to craft your own Linux distribution around your needs.
Unfortunately, the Nix and NixOS documentation is a bit more dense than most other Linux programs/distributions are, and it's a bit easy to get lost in it. I'm going to attempt to explain a lot of the guiding principles behind Nix and NixOS and how they fit into how I use NixOS on my desktop.
Earlier, I mentioned that Nix is a functional package manager. This means that Nix views packages as a combination of inputs to get an output:
This is how most package managers work (even things like Windows installer files), but Nix goes a step further by disallowing package builds to access the internet. This allows Nix packages to be a lot more reproducible; meaning if you have the same inputs (source code, build script and patches) you should always get the same output byte-for-byte every time you build the same package at the same version.
Let's consider a simple example, my gruvbox-inspired CSS file's default.nix file:
{ pkgs ? import <nixpkgs> { } }:

pkgs.stdenv.mkDerivation {
  pname = "gruvbox-css";
  version = "latest";
  src = ./.;
  phases = "installPhase";
  installPhase = ''
    mkdir -p $out
    cp -rf $src/gruvbox.css $out/gruvbox.css
  '';
}
This creates a package named gruvbox-css with the version latest. Let's break down its default.nix line by line:
{ pkgs ? import <nixpkgs> { } }:
This creates a function that either takes in the pkgs object or tells Nix to import the standard package library nixpkgs as pkgs. nixpkgs includes a lot of utilities like a standard packaging environment, special builders for things like snaps and Docker images, as well as one of the largest package sets out there.
pkgs.stdenv.mkDerivation { # ... }
This runs the stdenv.mkDerivation function with some arguments in an object. The "standard environment" comes with tools like GCC, bash, coreutils, find, sed, grep, awk, tar, make, patch and all of the major compression tools. This means that our package builds can build C/C++ programs, copy files to the output, and extract downloaded source files by default. You can add other inputs to this environment if you need to, but for now it works as-is.
Let's specify the name and version of this package:
pname = "gruvbox-css"; version = "latest";
pname stands for "package name". It is combined with the version to create the resulting package name. In this case it would be gruvbox-css-latest.
Let's tell Nix how to build this package:
src = ./.;
phases = "installPhase";
installPhase = ''
  mkdir -p $out
  cp -rf $src/gruvbox.css $out/gruvbox.css
'';
The src attribute tells Nix where the source code of the package is stored. Sometimes this can be a URL to a compressed archive on the internet, sometimes it can be a git repo, but for now it's the current working directory ./..
This is a CSS file, it doesn't make sense to have to build these, so we skip the build phase and tell Nix to directly install the package to its output folder:
mkdir -p $out
cp -rf $src/gruvbox.css $out/gruvbox.css
This two-liner shell script creates the output directory (usually exposed as $out) and then copies gruvbox.css into it. When we run this through Nix with nix-build, we get output that looks something like this:
$ nix-build ./default.nix
these derivations will be built:
  /nix/store/c99n4ixraigf4jb0jfjxbkzicd79scpj-gruvbox-css.drv
building '/nix/store/c99n4ixraigf4jb0jfjxbkzicd79scpj-gruvbox-css.drv'...
installing
/nix/store/ng5qnhwyrk9zaidjv00arhx787r0412s-gruvbox-css
And /nix/store/ng5qnhwyrk9zaidjv00arhx787r0412s-gruvbox-css is the output package. Looking at its contents with ls, we see this:
$ ls /nix/store/ng5qnhwyrk9zaidjv00arhx787r0412s-gruvbox-css
gruvbox.css
For a more complicated package, let's look at the build directions of the website you are reading right now:
{ pkgs ? import (import ./nix/sources.nix).nixpkgs }:

with pkgs;

assert lib.versionAtLeast go.version "1.13";

buildGoPackage rec {
  pname = "christinewebsite";
  version = "latest";

  goPackagePath = "christine.website";
  src = ./.;
  goDeps = ./nix/deps.nix;
  allowGoReference = false;

  preBuild = ''
    export CGO_ENABLED=0
    buildFlagsArray+=(-pkgdir "$TMPDIR")
  '';
}
Breaking it down, we see some similarities to the gruvbox-css package from above, but there are a few more interesting lines I want to point out:
{ pkgs ? import (import ./nix/sources.nix).nixpkgs }:
My website uses a pinned or fixed version of nixpkgs. This allows my website's deployment to be stable even if nixpkgs changes something that could cause it to break.
with pkgs;
With expressions are one of the more interesting parts of Nix. Essentially, they let you say "everything in this object should be put into scope". So if you have an expression that does this:
let
  foo = { ponies = "awesome"; };
in
  with foo; "ponies are ${ponies}!"
You get the result "ponies are awesome!". I use with pkgs here to use things directly from nixpkgs without having to say pkgs. in front of a lot of things.
assert lib.versionAtLeast go.version "1.13";
This line will make the build fail if Nix is using any Go version less than 1.13. I'm pretty sure my website's code could function on older versions of Go, but the runtime improvements are important to it, so let's fail loudly just in case.
buildGoPackage { # ... }
buildGoPackage builds a Go package into a Nix package. It takes in the Go package path, the list of dependencies and whether the resulting package is allowed to depend on the Go compiler or not.
It will then compile the Go program (and all of its dependencies) into a binary and put that in the resulting package. This website is more than just the source code; it's also got assets like CSS files and the image earlier in the post. Those files are copied in the postInstall phase.
This results in all of the files that my website needs to run existing in the right places.
For more kinds of packages that you can build, see the Languages and Frameworks chapter of the nixpkgs documentation.
If your favorite language isn't shown there, you can make your own build script and do it more manually. See here for more information on how to do that.
nix-env and Friends
Building your own packages is nice and all, but what about using packages defined in nixpkgs? Nix includes a few tools that help you find, install, upgrade and remove packages, as well as nix-build to build new ones.
nix search
When looking for a package to install, use $ nix search name to see if it's already packaged. For example, let's look for graphviz, a popular piece of diagramming software:
$ nix search graphviz
* nixos.graphviz (graphviz)
  Graph visualization tools
* nixos.graphviz-nox (graphviz)
  Graph visualization tools
* nixos.graphviz_2_32 (graphviz)
  Graph visualization tools
There are several results here! These are different because sometimes you may want some features of graphviz, but not all of them. For example, a server installation of graphviz wouldn't need X windows support.
The first line of the output is the attribute. This is the attribute that the package is imported to inside nixpkgs. This allows multiple packages in different contexts to exist in nixpkgs at the same time, for example with python 2 and python 3 versions of a library.
The second line is a description of the package from its metadata section.
The nix tool allows you to do a lot more than just this, but for now this is the most important thing.
nix-env -i
nix-env is a rather big tool that does a lot of things (similar to pacman in Arch Linux), so I'm going to break things down into separate sections. Let's pick the graphviz instance from before and install it using nix-env:
$ nix-env -iA nixos.graphviz
installing 'graphviz-2.42.2'
these paths will be fetched (5.00 MiB download, 13.74 MiB unpacked):
  /nix/store/980jk7qbcfrlnx8jsmdx92q96wsai8mx-gts-0.7.6
  /nix/store/fij1p8f0yjpv35n342ii9pwfahj8rlbb-graphviz-2.42.2
  /nix/store/jy35xihlnb3az0vdksyg9rd2f38q2c01-libdevil-1.7.8
  /nix/store/s895dnwlprwpfp75pzq70qzfdn8mwfzc-lcms-1.19
copying path '/nix/store/980jk7qbcfrlnx8jsmdx92q96wsai8mx-gts-0.7.6' from ''...
copying path '/nix/store/s895dnwlprwpfp75pzq70qzfdn8mwfzc-lcms-1.19' from ''...
copying path '/nix/store/jy35xihlnb3az0vdksyg9rd2f38q2c01-libdevil-1.7.8' from ''...
copying path '/nix/store/fij1p8f0yjpv35n342ii9pwfahj8rlbb-graphviz-2.42.2' from ''...
building '/nix/store/r4fqdwpicqjpa97biis1jlxzb4ywi92b-user-environment.drv'...
created 664 symlinks in user environment
And now let's see where the dot tool from graphviz is installed to:
$ which dot
/home/cadey/.nix-profile/bin/dot
$ readlink /home/cadey/.nix-profile/bin/dot
/nix/store/fij1p8f0yjpv35n342ii9pwfahj8rlbb-graphviz-2.42.2/bin/dot
This lets you install tools into the system-level Nix store without affecting other users' environments, even if they depend on a different version of graphviz.
nix-env -e
nix-env -e lets you uninstall packages installed with nix-env -i. Let's uninstall graphviz:
$ nix-env -e graphviz
Now the dot tool will be gone from your shell:
$ which dot
which: no dot in (/run/wrappers/bin:/home/cadey/.nix-profile/bin:/etc/profiles/per-user/cadey/bin:/nix/var/nix/profiles/default/bin:/run/current-system/sw/bin)
And it's like graphviz was never installed.
Notice that these package management commands are done at the user level because they are only affecting the currently logged-in user. This allows users to install their own editors or other tools without having to get admins involved.
NixOS builds on top of Nix and its command line tools to make an entire Linux distribution that can be perfectly crafted to your needs. NixOS machines are configured using a configuration.nix file.
At a high level, machines are configured by setting options like this:
# basic-lxc-image.nix
{ config, pkgs, ... }:

{
  networking.hostName = "example-for-blog";
  environment.systemPackages = with pkgs; [ wget vim ];
}
This would specify a simple NixOS machine with the hostname example-for-blog and with wget and vim installed. This is nowhere near enough to boot an entire system, but is good enough for describing the base layout of a basic LXC image.
For a more complete example of NixOS configurations, see here or repositories on this handy NixOS wiki page.
The main configuration.nix file (usually at /etc/nixos/configuration.nix) can also import other NixOS modules using the imports attribute:
# better-vm.nix
{ config, pkgs, ... }:

{
  imports = [ ./basic-lxc-image.nix ];

  networking.hostName = "better-vm";
  services.nginx.enable = true;
}
And the better-vm.nix file would describe a machine with the hostname better-vm that has wget and vim installed, but is also running nginx with its default configuration.
Internally, every one of these options will be fed into auto-generated Nix packages that will describe the system configuration bit by bit.
nixos-rebuild
One of the handy features about Nix is that every package exists in its own part of the Nix store. This allows you to leave the older versions of a package laying around so you can roll back to them if you need to. nixos-rebuild is the tool that helps you commit configuration changes to the system as well as roll them back.
If you want to upgrade your entire system:
$ sudo nixos-rebuild switch --upgrade
This tells nixos-rebuild to upgrade the package channels, use those to create a new base system description, switch the running system to it and start/restart/stop any services that were added/upgraded/removed during the upgrade. Every time you rebuild the configuration, you create a new "generation" of configuration that you can roll back to just as easily:
$ sudo nixos-rebuild switch --rollback
As upgrades happen and old generations pile up, this may end up taking up a lot of unwanted disk (and boot menu) space. To free up this space, you can use nix-collect-garbage:
$ sudo nix-collect-garbage
< cleans up packages not referenced by anything >
$ sudo nix-collect-garbage -d
< deletes old generations and then cleans up packages not referenced by anything >
The latter is a fairly powerful command and can wipe out older system states. Only run this if you are sure you don't want to go back to an older setup.
Each of these things builds on top of each other to make the base platform that I built my desktop environment on. I have the configuration for my shell, emacs, my window manager and just about every program I use on a regular basis defined in their own NixOS modules so I can pick and choose things for new machines.
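As a sketch of what that pick-and-choose setup looks like in practice (the file names here are hypothetical, not the actual layout), each machine's configuration boils down to an imports list of shared modules:

```nix
# machines/desktop/configuration.nix (hypothetical layout)
{ config, pkgs, ... }:

{
  imports = [
    ../../modules/shell.nix
    ../../modules/emacs.nix
    ../../modules/window-manager.nix
  ];
}
```

A new machine then only needs its own file with a different subset of modules imported.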
When I want to change part of my config, I edit the files responsible for that part of the config and then rebuild the system to test it. If things work properly, I commit those changes and then continue using the system like normal.
This is a little bit more work in the short term, but as a result I get a setup that is easier to recreate on more machines in the future. It took me a half hour or so to get the configuration for zathura right, but now I have a zathura module that lets me get exactly the setup I want every time.
Nix and NixOS ruined me. It's hard to go back.
This article was posted on April 25.
I'm supposed to create a program to read in word by word into a vector. And print out the words connected with '-'... so if the input was hello world the output would be hello-world
This is the code I've made so far... right now my input can be hello world but my output would be

hello-
world-

I don't want the - after world and I want it printed out on one line... any help would be appreciated.
#include <iostream>
#include <iomanip>
#include <vector>

using namespace std;
using std::vector;

int main() {
    vector<string> svect;
    string word;

    while (cin >> word) {
        word += '-';
        svect.push_back(word);
    }

    for (int i = 0; i < svect.size(); i++)
        cout << svect[i] << endl;

    return 0;
}
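One way to get the desired output (a sketch of an answer, not from the original thread) is to store the words without the dash and only print a separator between elements, so nothing trails after the last word:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Join words with a separator; no separator is appended after the last word.
std::string join(const std::vector<std::string>& words, const std::string& sep) {
    std::ostringstream out;
    for (size_t i = 0; i < words.size(); i++) {
        if (i > 0)
            out << sep;  // separator only between elements
        out << words[i];
    }
    return out.str();
}
```

In the original program, push back word without appending '-', then print cout << join(svect, "-") << endl; once after the loop, giving hello-world on a single line.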
9 thoughts on “ArcMap Field Calculator: Create a Unique ID”
Nice article, thanks for posting the link to those field calculator examples.
great post. exactly what i needed. here is a code snippet for use in a python script (just output from tool results but might help somewhere)
arcpy.CalculateField_management(zonePoly, "ZONE_ID", "uniqueID()", "PYTHON_9.3",
    "counter = 0\ndef uniqueID():\n  global counter\n  counter += 1\n  return counter")
A small detail I uncovered while trying this – you can’t name your field – ex. UniqueID, the same thing as the python function, ex. UniqueID(). Throw a random character in front of the function name – ex. zUniqueID(). Hope that helps people who tried the same thing I did at first and couldn’t figure out why it wouldn’t run.
Field would be: UniqueID

Codeblock would be:

counter = 0
def zUniqueID():
    global counter
    counter += 1
    return counter

Formula would be: UniqueID = zUniqueID()
Thanks for sharing that tip, Bryon. I can see why that would be problematic but wouldn't have thought of it until I accidentally "discovered" it.
This is not completely random; it is ordered in the same sequence as the FID. Is there any way to create a unique ID while still preserving the selected order of records?
Nice idea, I haven’t had a need to do so yet so I’m not sure if there is directly. If you’re sorting by a field that has a unique value, I can see a work-around: summarize by that field, creating a new table. Add the new field to this new table, use the method described and join the new table to the existing one based on the unique field & copy the values over. Make sense?
This thread answers most questions on this topic. See especially Post #58 and the link there for a 10.x toolbox that works great.
Thank you so much! I’ve been searching the web for a practical solution and the only threads were from before version 10, and for some reason all of those scripts were returning errors whenever I ran them. Something so simple like this should definitely be a built-in function or a geoprocessing tool.
You are a GOD! I had Googled my ass trying to find this simple function until I bumped here. Other sites with a “solution” say “ArcGIS makes an OBJECTID field. Its values are unique values for you”. Well, they’re bloody not. I mean they are unique, but they don’t take into account the features you deleted during or after digitizing. So, I might have 5 features with 1, 10, 15, 30, 40 for their OBJECTID…
Is there a way to add leading zeroes? I might want to list my unique values in a program which alphabetizes correctly only with leading zeroes. For example I wouldn’t want the list to go “1, 10, 2” and lead zeroes should fix that (i.e. “01, 02, 03 … 10”).
Thanks again! | http://milesgis.com/2011/07/28/using-arcpy-in-arcgis-10-field-calculater-to-create-a-unique-id/ | CC-MAIN-2017-51 | refinedweb | 493 | 71.44 |
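Regarding the leading-zeroes question above: a small variation on the counter code block (a hypothetical tweak, not from the original comments) can pad the value with str.zfill, provided the target field is a text field rather than a numeric one:

```python
counter = 0

def zUniqueIDPadded(width=3):
    """Return a sequential ID padded with leading zeroes, e.g. '001', '002'."""
    global counter
    counter += 1
    return str(counter).zfill(width)
```

The formula would then be UniqueID = zUniqueIDPadded(), and alphabetical sorting orders "001, 002, ..., 010" correctly.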
Next, we open three databases ("color" and "fruit" and "cats") in the database environment. Again, our DB database handles are declared to be free-threaded using the DB_THREAD flag, and so may be used by any number of threads we subsequently create.

	if (db_open(dbenv, &db_fruit, "fruit", 0))
		return (1);

	/* Open database: Key is a color; Data is an integer. */
	if (db_open(dbenv, &db_color, "color", 0))
		return (1);

	/*
	 * Open database:
	 *	Key is a name; Data is: company name, cat breeds.
	 */
	if (db_open(dbenv, &db_cats, "cats", 1))
		return (1);
	return (0);
}

int
db_open(DB_ENV *dbenv, DB **dbp, char *name, int dups)
{
	DB *db;
	int ret;

	/* Create the database handle. */
	if ((ret = db_create(&db, dbenv, 0)) != 0) {
		dbenv->err(dbenv, ret, "db_create");
		return (1);
	}

	/* Optionally, turn on duplicate data items. */
	if (dups && (ret = db->set_flags(db, DB_DUP)) != 0) {
		(void)db->close(db, 0);
		dbenv->err(dbenv, ret, "db->set_flags: DB_DUP");
		return (1);
	}

	/*
	 * Open a database in the environment:
	 *	create if it doesn't exist
	 *	free-threaded handle
	 *	read/write owner only
	 */
	if ((ret = db->open(db, NULL, name, NULL, DB_BTREE,
	    DB_AUTO_COMMIT | DB_CREATE | DB_THREAD, S_IRUSR | S_IWUSR)) != 0) {
		(void)db->close(db, 0);
		dbenv->err(dbenv, ret, "db->open: %s", name);
		return (1);
	}
	*dbp = db;
	return (0);
}
After opening the database, we can use the db_stat utility to display information about a database we have created:
prompt> db_stat -h TXNAPP -d color
53162	Btree magic number.
8	Btree version number.
Flags:
2	Minimum keys per-page.
8192	Underlying database page size.
1	Number of levels in the tree.
0	Number of unique keys in the tree.
0	Number of data items in the tree.
0	Number of tree internal pages.
0	Number of bytes free in tree internal pages (0% ff).
1	Number of tree leaf pages.
8166	Number of bytes free in tree leaf pages (0.% ff).
0	Number of tree duplicate pages.
0	Number of bytes free in tree duplicate pages (0% ff).
0	Number of tree overflow pages.
0	Number of bytes free in tree overflow pages (0% ff).
0	Number of pages on the free list.
The database open must be enclosed within a transaction in order to be recoverable. The transaction will ensure that created files are re-created in recovered environments (or do not appear at all). Additional database operations or operations on other databases can be included in the same transaction, of course. In the simple case, where the open is the only operation in the transaction, an application can set the DB_AUTO_COMMIT flag instead of creating and managing its own transaction handle. The DB_AUTO_COMMIT flag will internally wrap the operation in a transaction, simplifying application code.
The previous example is the simplest case of transaction protection for database open. Obviously, additional database operations can be done in the scope of the same transaction. For example, an application maintaining a list of the databases in a database environment in a well-known file might include an update of the list in the same transaction in which the database is created. Or, an application might create both a primary and secondary database in a single transaction.
DB handles that will later be used for transactionally protected database operations must be opened within a transaction. Specifying a transaction handle to database operations using DB handles not opened within a transaction will return an error. Similarly, not specifying a transaction handle to database operations that will modify the database, using handles that were opened within a transaction, will also return an error. | http://docs.oracle.com/cd/E17275_01/html/programmer_reference/transapp_data_open.html | CC-MAIN-2017-09 | refinedweb | 582 | 56.35 |
What will we cover in this tutorial?
In this tutorial we will look into how you can track an object with a specific color and replace it with a new object. The inserted new object will be scaled to the size of the object tracked. This will be done on a live stream from the webcam.
Understand the process of feeding the webcam stream to a window
The first thing to understand is that when processing a live stream from a webcam, you are actually processing it frame by frame.
Hence, the base code is as follows.
import cv2

# Get the webcam
cap = cv2.VideoCapture(0)

while True:
    # Step 1: Capture the frame
    _, frame = cap.read()

    # Step 2: Show the frame
    cv2.imshow("Webcam", frame)

    # If q is pressed terminate
    if cv2.waitKey(1) == ord('q'):
        break

# Release and destroy all windows
cap.release()
cv2.destroyAllWindows()
First we import the OpenCV library cv2. If you need help to install it read this tutorial. Then you capture the webcam by calling the cv2.VideoCapture(0), where we assume you have 1 webcam and it is the first one (0).
Then comes the while-loop, where you capture the video stream frame by frame. This is done by calling cap.read(), which returns a return code and the frame (we ignore the return code _).
To show the frame we read from the webcam, we call the cv2.imshow(“Webcam”, frame), which will create a window with the frame (image from your webcam).
The final part of the while-loop is checking if the key q has been pressed, if so, break out of the while-loop and release webcam and destroy all windows.
That is how processing works for webcam flow. The processing will be between step 1 and step 2 in the above code. Pro-processing and setup is most often done before the while-loop.
The process flow to identify and track object to insert scaled logo
In the last section we looked at how a webcam stream is processed. Then in this section we will explain the process for how to identify a object by color, scale the object we want to insert, and how to insert it into the frame.
The process is depicted in the image below followed by an explanation of all the steps.
The steps are described here.
- This is the step where we capture the raw frame from the webcam.
- To easier identify a specific color object in the frame, we convert the image to the HSV color model. It contains of Hue, Saturation, and Volume.
- Make a mask with all object of the specific color. This is where the HSV color model makes it easy.
- To make it more visible and easier for detection, we dilate the mask.
- Then we find all the contours in the mask.
- We loop over all the contours found. Ideally we only find one, but there might be small objects, which we will discard.
- Based on the contour found, get the size of it, which we use to scale (resize) the logo we want to insert.
- Resize the logo to fit the size of the contour.
- As the logo is not square, we need to create a mask to insert it.
- To insert it easily, we create a ROI (region of image) where the contour is. This is not strictly needed; it just makes it easier to avoid a lot of extra calculations. If you know NumPy, it is a view into the frame.
- Then we insert the logo using the mask.
- Finally, time to show the frame.
The implementation
The code following the steps described in the previous section is found here.
import cv2
import time
import imutils
import numpy as np

# Get the webcam
cap = cv2.VideoCapture(0)

# Setup the width and the height (your cam might not support these settings)
width = 640
height = 480
cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)

# Read the logo to use later
logo_org = cv2.imread('logo.png')

last_time = time.time()
while True:
    # Step 1: Capture the frame
    _, frame = cap.read()

    # Step 2: Convert the frame to the HSV color model
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Step 3: Create a mask of all objects with the specific color
    # - Hue 8-10 is about orange, which we will use
    # - These values can be changed (the lower ones) to fit your environment
    mask = cv2.inRange(hsv, (8, 180, 180), (10, 255, 255))

    # Step 4: This dilates with two iterations (makes it more visible)
    thresh = cv2.dilate(mask, None, iterations=2)

    # Step 5: Finds contours and converts it to a list
    contours = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)
    contours = imutils.grab_contours(contours)

    # Step 6: Loops over all objects found
    for contour in contours:
        # Skip if contour is small (can be adjusted)
        if cv2.contourArea(contour) < 750:
            continue

        # Step 7: Get the box boundaries
        (x, y, w, h) = cv2.boundingRect(contour)

        # Compute size
        size = (h + w) // 2

        # Check if logo will be inside frame
        if y + size < height and x + size < width:
            # Step 8: Resize logo
            logo = cv2.resize(logo_org, (size, size))

            # Step 9: Create a mask of logo
            img2gray = cv2.cvtColor(logo, cv2.COLOR_BGR2GRAY)
            _, logo_mask = cv2.threshold(img2gray, 1, 255, cv2.THRESH_BINARY)

            # Step 10: Region of Image (ROI), where we want to insert logo
            roi = frame[y:y + size, x:x + size]

            # Step 11: Mask out logo region and insert
            roi[np.where(logo_mask)] = 0
            roi += logo

    # (Extra) Add a FPS label to image
    text = f"FPS: {int(1 / (time.time() - last_time))}"
    last_time = time.time()
    cv2.putText(frame, text, (10, 20), cv2.FONT_HERSHEY_PLAIN, 2, (0, 255, 0), 2)

    # Step 12: Show the frame
    cv2.imshow("Webcam", frame)

    # If q is pressed terminate
    if cv2.waitKey(1) == ord('q'):
        break

# Release and destroy all windows
cap.release()
cv2.destroyAllWindows()
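The masking trick in step 11 can be seen in isolation on tiny NumPy arrays (the values here are made up for illustration):

```python
import numpy as np

# A 2x2 region of the "frame" and a 2x2 "logo"; the mask marks logo pixels.
roi = np.array([[10, 20], [30, 40]], dtype=np.uint8)
logo = np.array([[0, 5], [7, 0]], dtype=np.uint8)
logo_mask = np.array([[0, 255], [255, 0]], dtype=np.uint8)

# Zero out the frame wherever the mask is set, then add the logo on top.
roi[np.where(logo_mask)] = 0
roi += logo

print(roi.tolist())  # [[10, 5], [7, 40]]
```

Because roi is a view into the frame, the same two lines in the loop write the logo pixels straight into the captured image.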
Time to test it.
Testing the code
When using your webcam, you might need to change the colors. I used the following setting for the blue marker in my video.
mask = cv2.inRange(hsv, (110, 120, 120), (130, 255, 255))
The two 3-tuples are HSV color space representations. The first item of each tuple sets the Hue; here it is 110 and 130. That means the color range we want to mask out is from 110-130, which you can see is in the blue range (image below). The other two are Saturation from 120-255 and Value from 120-255. To fit your camera and light settings, you need to change that range.
You can see the HSV color spectrum here.
You might need to choose different values. | https://www.learnpythonwithrune.org/opencv-python-webcam-how-to-track-and-replace-object/ | CC-MAIN-2021-25 | refinedweb | 1,060 | 75.71 |
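To pick a hue range for your own object, it can help to know roughly where a reference color lands on OpenCV's 0-179 hue scale. This helper (an illustration, not part of the original tutorial) mirrors the standard BGR-to-HSV hue formula:

```python
def bgr_to_hue(b, g, r):
    """Approximate OpenCV hue (0-179) for a BGR color."""
    b, g, r = b / 255.0, g / 255.0, r / 255.0
    mx, mn = max(b, g, r), min(b, g, r)
    if mx == mn:
        return 0  # gray has no defined hue
    d = mx - mn
    if mx == r:
        h = (60 * ((g - b) / d) + 360) % 360
    elif mx == g:
        h = 60 * ((b - r) / d) + 120
    else:
        h = 60 * ((r - g) / d) + 240
    return int(h / 2)  # OpenCV stores hue halved to fit 0-179

print(bgr_to_hue(255, 0, 0))  # 120 -- pure blue, inside the 110-130 range used above
```

Center your lower/upper bounds around the value this returns for your marker's color.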
Playing with OpenCV
• Mark Eschbach
I am investigating OpenCV for my aging server as an alternative to Tensorflow for facial recognition and hopefully GPU accelerated image down sampling. Tensorflow is a fine library however my server doesn’t have the AVX or AVX2 instruction sets and the GTX 570 only supports CUDA Compute Capability 2.0, both of which are required for Tensorflow. My approach is to first look at scaling the images, then see how to move it onto the GPU, then finally start looking at facial recognition.
First step is getting it installed. Although the most recent version is the 4 series it appears as though most of the material out there is still for the 3 series. On the release page there is not an OSX release unfortunately. Consulting the general internet, people have installed it via Homebrew, which I am still scared of after watching machines get bricked by it. So to the source!
The package is built using CMake. There was a rather old version of CMake on my laptop. Easy to update. Language bindings to Python were intentionally disabled as well as compiling examples.
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=$HOME/tools/opencv-4.0.1 -D INSTALL_PYTHON_EXAMPLES=OFF -D INSTALL_C_EXAMPLES=OFF -D OPENCV_ENABLE_NONFREE=ON -D BUILD_EXAMPLES=OFF ..
While that is compiling I was wondering what Ubuntu 18.10 had available for OpenCV. Looks like the most recent version available through the package management system is 3.2. Looks like the library dates back to 2016, so it is fairly old. I will probably elect to compile from source to ensure I have reasonable parity between my laptop and server. Of course the server won the compilation race by a long shot, being able to compile up to 12 units concurrently.
Building Against the installed OpenCV library
I elected to use CLion since I have a license and I was hoping it would reduce the
time to implement with it’s project templates. The project template produces a
CMake compatible
environment with C++17. Out of the box I had the following file:
cmake_minimum_required(VERSION 3.12)
project(opencv_play)

set(CMAKE_CXX_STANDARD 17)

add_executable(opencv_play main.cpp)
The main.cpp file contained a Hello World example in C++. Not bad. The target can be configured with cmake . -B build, which will produce the relative directory build. A make within the build directory will produce the executable artifact opencv_play.
Next step was to get the OpenCV project properly linked in. The failing test case should look something like the following example:
#include <iostream>
#include <opencv2/opencv.hpp>

using namespace std;

int main(int argc, char** argv) {
    cout << "Hello World" << endl;
    return 0;
}
This produces an error like the following on OSX.
opencv-play/main.cpp:6:10: fatal error: 'opencv2/opencv.hpp' file not found
#include <opencv2/opencv.hpp>
         ^~~~~~~~~~~~~~~~~~~~
Since I am not familiar with the CMake system I had to do a bit of web searching. It's Time To Do CMake Right was a great article pointing towards a way to properly implement a CMake dependency. I added the following stanzas to the CMakeLists.txt file.
cmake_minimum_required(VERSION 3.12)
project(opencv_play)

set(CMAKE_CXX_STANDARD 17)

find_package(OpenCV REQUIRED
    HINTS "/home/user/tools/opencv-4.0.1/lib/cmake/opencv4")

add_executable(opencv_play main.cpp)
target_link_libraries(opencv_play ${OpenCV_LIBS})
At this time although I am sure there is a better way to promote the discovery of the library I hard coded the path since I am exploring the library. This allows for correct linking against the OpenCV libraries.
Image Scaling on the CPU
The following code sample will produce a CPU down sampled image. This uses the the LANCZOS4
algorithm since it appears to be the best available implementation for the output image. The output image will forced
into a 256 pixel square, distorting the image to fit. The
waitkey(0) function will block until the window produced by
imshow(string, Mat) receives the Escape character.
#include <iostream>
#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

int cpuImageResize(const string fileName) {
    auto image = imread(fileName, IMREAD_COLOR);
    if (!image.data) {
        cerr << "Unable to load image " << fileName << endl;
        return -1;
    }

    Mat result;
    Size size(256, 256);
    // Interpolation is the sixth argument of resize(); fx/fy are left at 0.
    resize(image, result, size, 0, 0, INTER_LANCZOS4);

    namedWindow("Display Image", WINDOW_AUTOSIZE);
    imshow("Display Image", result);
    waitKey(0);
    return 0;
}

int main(int argc, char** argv) {
    auto fileName = "test.jpg";
    return cpuImageResize(fileName);
}
To get the test image copied to the build directory the following stanza needs to be added to the
CMakeLists.txt file:
file(COPY test.jpg DESTINATION ${CMAKE_BINARY_DIR})
Image Scaling on the GPU?
Many of the examples available are for the OpenCV version 3 branch. Part of the major version change was the underlying the architecture of the platform to split a the processing pipeline description and application. This feels similar to the limited amount of experience I have with the Tensorflow API. As a result the tutorials and community posts were not any help in figuring out how to build against the API, resulting in linking errors.
From what I had read, the changes were to prevent arbitrary writes back to the CPU and reduce the cost of implementing backend to perform the computations. As a result the application client code is portable between underlying computational platforms as long as you do not create additional operations for a specific backend.
A majority of the functions are under cv::gapi in opencv2/gapi.hpp. To get high level operations such as resize, the header opencv2/gapi/core.hpp needs to be included. The resize operation takes a Size object during the pipeline description, or optionally a scale parameter. Since sizes are described during pipeline creation, the pipeline must be tailored to each aspect ratio. Here is the minimal example:
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/gapi.hpp>
#include <opencv2/gapi/core.hpp>

using namespace std;
using namespace cv;
using namespace cv::gapi;

int gpuImageResize( const string fileName ){
    auto image = imread( fileName, IMREAD_COLOR );
    if ( !image.data ){
        cerr << "Unable to load image " << fileName << endl;
        return -1;
    }
    GMat in;
    Size size(256,256);
    // as with cv::resize, interpolation comes after the fx/fy scale factors
    auto dest = resize( in, size, 0, 0, INTER_LANCZOS4 );
    GComputation computation(GIn(in), GOut(dest));

    Mat result;
    computation.apply(gin(image), gout(result));

    namedWindow("Display Image", WINDOW_AUTOSIZE );
    imshow("Display Image", result);
    waitKey(0);
    return 0;
}

int main( int argc, char** argv){
    auto fileName = "test.jpg";
    return gpuImageResize( fileName );
}
Computationally Accelerated Platforms
Despite a performance benefit of 20% when reusing the pipeline with the
gapi implementation, I fear this may
still be executing on the CPU: approximately 1250 images a second with the non-pipelined
implementation versus 1500 images a second with the pipeline. I was unable to verify which backend was performing the
processing at the time.
A future project will be building a diagnostic tool to verify the expected backends are being used, such as OpenCL or CUDA. | https://meschbach.com/stream-of-consciousness/programming/2019/03/12-opencv/ | CC-MAIN-2020-40 | refinedweb | 1,138 | 57.06 |
IRC log of mediaann on 2010-06-29
Timestamps are in UTC.
10:57:25 [RRSAgent]
RRSAgent has joined #mediaann
10:57:25 [RRSAgent]
logging to
10:57:30 [fsasaki]
meeting: MAWG
10:57:35 [fsasaki]
chair: joakim
10:57:39 [fsasaki]
agenda:
10:57:54 [fsasaki]
regrets: John
10:59:40 [tobias]
are the European bridges working again?
10:59:52 [wbailer]
wbailer has joined #mediaann
11:00:14 [Zakim]
+ +329331aaaa
11:00:52 [Zakim]
+ +33.4.93.00.aabb
11:01:14 [Zakim]
+[IPcaller]
11:01:21 [Zakim]
+wbailer
11:01:33 [pchampin]
pchampin has joined #mediaann
11:01:54 [Zakim]
+florian
11:03:14 [Zakim]
+ +43.662.228.aacc
11:03:23 [tobias]
Zakim, aacc is me
11:03:23 [Zakim]
+tobias; got it
11:08:29 [Zakim]
-Felix
11:09:21 [fsasaki]
fsasaki has joined #mediaann
11:09:51 [Zakim]
+[IPcaller]
11:11:19 [fsasaki]
topic: namespace decision
11:11:28 [tobias]
Zakim, mutem
11:11:28 [Zakim]
I don't understand 'mutem', tobias
11:11:34 [tobias]
Zakim, mute me
11:11:34 [Zakim]
tobias should now be muted
11:11:42 [fsasaki]
raphael: on the mailling list many people want to have namespace with "#"
11:11:56 [raphael]
hash option would mean:
11:12:04 [fsasaki]
.. want to know whether we can switch the current version to "#"
11:12:07 [wbailer]
+1 for #
11:12:11 [raphael]
+1 for H
11:12:13 [fsasaki]
+ felix
11:12:16 [tobias]
+1 for #
11:12:16 [raphael]
s/H/#
11:12:50 [fsasaki]
felix: anybody against the solution with "#"?
11:12:53 [fsasaki]
no objection
11:13:13 [fsasaki]
RESOLUTION: people at the call for
as the namespace
11:13:50 [fsasaki]
action: Raphael to announce the namespace decision
on the list
11:13:50 [trackbot]
Sorry, couldn't find user - Raphael
11:14:24 [fsasaki]
raphael describing data sets using the media annotation ontology
11:14:46 [tobias]
Zakim, unmute me
11:14:46 [Zakim]
tobias should no longer be muted
11:14:54 [fsasaki]
data set has been accepted for triplefication challenge this year
11:15:08 [fsasaki]
tobias: sounds great, which ontology did you use?
11:15:31 [fsasaki]
raphael: tried to look at your and jp-version
11:15:45 [fsasaki]
.. will write to the mailing list with comments on the current version of the ontology
11:16:11 [fsasaki]
tobias: met jean-pieere last week and discussed changes, will discuss that with you
11:16:33 [fsasaki]
raphael: disagree with "modeling as class" paradigm
11:16:51 [fsasaki]
.. adding more classes has more costs than adding more properties because you need to change the signature of properties
11:16:56 [fsasaki]
.. that is why I disagree
11:17:50 [fsasaki]
topic: f2f meeting
11:18:09 [raphael]
A huge dataset of 30 million triples of flickr photos descriptions and youtube videos descriptions according to the Media Ontology is available at
11:18:12 [fsasaki]
11:18:27 [fsasaki]
"So, we would like to fix the date (September 8-10) for the next F2F based on the results of poll."
11:18:49 [fsasaki]
felix: september 8-10 fine with everybody?
11:18:56 [raphael]
I will make the f2f at these dates
11:18:59 [tobias]
sounds good for me
11:19:08 [wbailer]
for me too
11:19:15 [fsasaki]
RESOLUTION: f2f dates September 8-10 agreed
11:19:18 [wonsuk]
good to me.
11:20:00 [tobias]
Zakim, mute me
11:20:00 [Zakim]
tobias should now be muted
11:20:09 [fsasaki]
topic: LC comments
11:20:29 [fsasaki]
raphael: July 11th is deadline for LC - should we extend the date for feedback?
11:20:37 [florian]
+1
11:20:41 [fsasaki]
.. until the end of the summer
11:20:43 [fsasaki]
+1
11:20:44 [wonsuk]
+1
11:20:47 [raphael]
I would suggest extend to the end of the summer to get more feedback
11:20:48 [raphael]
+1
11:20:54 [tobias]
yes, this sounds like a good idea given that we will most probably have no calls during summer; so +1
11:21:25 [fsasaki]
RESOLUTION: extend the LC period until the end of August
11:21:41 [fsasaki]
raphael: media fragments has 25th of August as the end date for LC
11:21:57 [fsasaki]
topic: media fragments spec
11:22:09 [fsasaki]
raphael: document is in LC until the end of August
11:22:19 [fsasaki]
.. have requested feedback from groups, including mawg
11:22:30 [fsasaki]
.. would appreciate a review from the group
11:22:41 [fsasaki]
.. any volunteer to make a review on behalf of mawg?
11:23:02 [tobias]
Zakim, unmute me
11:23:02 [Zakim]
tobias should no longer be muted
11:23:04 [RRSAgent]
I have made the request to generate
raphael
11:23:13 [fsasaki]
tobias: I can make a review
11:23:22 [RRSAgent]
I have made the request to generate
raphael
11:23:28 [fsasaki]
action: Tobias to make a review of media fragments LC doc
11:23:29 [trackbot]
Created ACTION-265 - Make a review of media fragments LC doc [on Tobias Bürger - due 2010-07-06].
11:23:50 [fsasaki]
raphael: need this until the end of August
11:23:56 [fsasaki]
tobias: fine by me
11:24:33 [fsasaki]
action: thierry to make change of the namespace document, with the "#" URI
11:24:33 [trackbot]
Created ACTION-266 - Make change of the namespace document, with the "#" URI [on Thierry Michel - due 2010-07-06].
11:24:40 [fsasaki]
above action is for the record
11:24:51 [fsasaki]
topic: review from media fragments wg
11:25:05 [fsasaki]
raphael: fragments folks will make individual comments on mawg LC
11:25:17 [fsasaki]
.. you will also get one official review on behalf of the group
11:25:52 [fsasaki]
topic: call schedule
11:26:08 [tobias]
Zakim, mute me
11:26:08 [Zakim]
tobias should now be muted
11:26:14 [fsasaki]
chris: is there an official summer break for W3C?
11:26:17 [fsasaki]
raphael: no
11:26:22 [fsasaki]
chris: will there be a call next week?
11:26:30 [fsasaki]
felix: depends on our chairs, but would be good
11:26:47 [fsasaki]
action: Joakim to organize a call asap, e.g. next week
11:26:47 [trackbot]
Created ACTION-267 - Organize a call asap, e.g. next week [on Joakim Söderberg - due 2010-07-06].
11:27:17 [fsasaki]
s/to organize /to organize with Daniel /
11:27:34 [tobias]
Zakim, unmute me
11:27:34 [Zakim]
tobias should no longer be muted
11:27:37 [tobias]
bye
11:27:38 [Zakim]
-florian
11:27:40 [Zakim]
-wbailer
11:27:41 [Zakim]
-tobias
11:27:42 [Zakim]
-chris
11:27:42 [Zakim]
-raphael
11:27:44 [Zakim]
-Felix
11:27:47 [wonsuk]
wonsuk has left #mediaann
11:27:52 [RRSAgent]
I have made the request to generate
fsasaki
11:28:21 [Zakim]
IA_MAWG()7:00AM has ended
11:28:23 [Zakim]
Attendees were wonsuk, +329331aaaa, +33.4.93.00.aabb, raphael, [IPcaller], Felix, wbailer, florian, +43.662.228.aacc, tobias, chris
11:29:02 [fsasaki]
present: raphael, Felix, wbailer, florian, tobias, chris, wonsuk
11:29:04 [RRSAgent]
I have made the request to generate
fsasaki
11:34:05 [tmichel]
tmichel has joined #mediaann
11:35:14 [Zakim]
IA_MAWG()7:00AM has now started
11:35:22 [Zakim]
+ +49.238.aaaa
11:36:06 [Zakim]
- +49.238.aaaa
11:36:07 [Zakim]
IA_MAWG()7:00AM has ended
11:36:07 [Zakim]
Attendees were +49.238.aaaa
13:31:56 [Zakim]
Zakim has left #mediaann | http://www.w3.org/2010/06/29-mediaann-irc | CC-MAIN-2014-52 | refinedweb | 1,275 | 58.25 |
import re
from lxml import etree
from lxml.html import defs
from lxml.html import fromstring, tostring

try:
    set
except NameError:
    from sets import Set as set

__all__ = ['clean_html', 'clean', 'Cleaner', 'autolink', 'autolink_html',
           'word_break', 'word_break_html']

# Look at
# Particularly the CSS cleaning; most of the tag cleaning is integrated now
# I have multiple kinds of schemes searched; but should schemes be
# whitelisted instead?
# max height?
# remove images?  Also in CSS?  background attribute?
# Some way to whitelist object, iframe, etc (e.g., if you want to
# allow *just* embedded YouTube movies)
# Log what was deleted and why?
# style="behavior: ..." might be bad in IE?
# Should we have something for just ?  That's the worst of the
# metas.
# UTF-7 detections?  Example:
# +ADw-SCRIPT+AD4-alert('XSS');+ADw-/SCRIPT+AD4-
# you don't always have to have the charset set, if the page has no charset
# and there's UTF7-like code in it.

# This is an IE-specific construct you can have in a stylesheet to
# run some Javascript:
_css_javascript_re = re.compile(
    r'expression\s*\(.*?\)', re.S|re.I)

# Do I have to worry about @\nimport?
_css_import_re = re.compile(
    r'@\s*import', re.I)

# All kinds of schemes besides just javascript: that can cause
# execution:
_javascript_scheme_re = re.compile(
    r'\s*(?:javascript|jscript|livescript|vbscript|about|mocha):', re.I)
_whitespace_re = re.compile(r'\s+')

# FIXME: should data: be blocked?

# FIXME: check against:
_conditional_comment_re = re.compile(
    r'\[if[\s\n\r]+.*?][\s\n\r]*>', re.I|re.S)

_find_styled_elements = etree.XPath(
    "descendant-or-self::*[@style]")

_find_external_links = etree.XPath(
    "descendant-or-self::a[normalize-space(@href) and substring(normalize-space(@href),1,1) != '#']")

def clean_html(html, **kw):
    """
    Like clean(), but takes a text input document, and returns a text document.
    """
    doc = fromstring(html)
    clean(doc, **kw)
    return tostring(doc)

class Cleaner(object):
    """
    Instances cleans the document of each of the possible offending
    elements.  The cleaning is controlled by attributes; you can
    override attributes in a subclass, or set them in the constructor.

    ``scripts``:
        Removes any ``
Operations on immutable arrays create new arrays and don't modify the original one. This makes it possible to use Arrays in pure functional code along with lists. "Boxed" means that array elements are just ordinary Haskell (lazy) values, which are evaluated on demand, and can even contain bottom (undefined) values. You can learn how to use these arrays at and I'd recommend that you read this before proceeding to the rest of this page
Nowadays the main Haskell compilers, GHC and Hugs, ship with the same set of Hierarchical Libraries, and these libraries contain a new implementation of arrays which is backward compatible with the Haskell'98 one, but which has far more features. Suffice it to say that these libraries support 9 types of array constructors: Array, UArray, IOArray, IOUArray, STArray, STUArray, DiffArray, DiffUArray and StorableArray. Unlike examples, real programs rarely need such declarations.
4 Mutable arrays in ST monad (module Data.Array.ST)
In the same way that IORef has its more general cousin STRef, IOArray has a more general version STArray (and similarly, IOUArray corresponds to STUArray).

DiffArray implements the IArray interface and therefore can be used in a purely functional way, but internally it uses the efficient update of MArrays.

How does this trick work? DiffArray has a pure external interface, but internally it is represented as a mutable array: updates are applied in place, and older versions of the array are kept as lists of differences against the current contents.
Usage of DiffArray doesn't differ from that of Array, the only difference is memory consumption and speed:
import Data.Array.Diff

main = do
  let arr = listArray (1,1000) [1..1000] :: DiffArray Int Int
      a = arr ! 1
      arr2 = arr // [(1,37)]
      b = arr2 ! 1
  print (a,b)
You can use 'seq' to force evaluation of array elements prior to updating an array:
import Data.Array.Diff

main = do
  let arr = listArray (1,1000) [1..1000] :: DiffArray Int Int
      a = arr ! 1
      b = arr ! 2
      arr2 = a `seq` b `seq` (arr // [(1,37),(2,64)])
      c = arr2 ! 1
  print (a,b,c)

GHC 6.6 made access to 'StorableArray' as fast as to any other unboxed arrays. The only difference between 'StorableArray' and 'UArray' is an 'unsafeForeignPtrToStorableArray' operation that allows the use of any Ptr as the address of a 'StorableArray' and in particular works with arrays returned by C routines. Here is an example that allocates memory for 10 Ints (which emulates an array returned by some C function), then converts the returned 'Ptr Int' to 'ForeignPtr Int' and the 'ForeignPtr Int' to a 'StorableArray Int Int'. It then writes and reads the first element of the array. At the end, the memory used by the array is deallocated by 'free', which again emulates deallocation by C routines. We can also enable the automatic freeing of the allocated block by replacing "newForeignPtr_ ptr" with "newForeignPtr finalizerFree ptr". In this case memory will be automatically freed after the last array usage, as for any other Haskell objects.
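The example described in the paragraph above is not shown here; a sketch of it might look like this (module names assume the GHC 6.6-era libraries — in newer GHCs 'unsafeForeignPtrToStorableArray' lives in Data.Array.Unsafe):

```haskell
import Data.Array.Storable
import Foreign.Ptr
import Foreign.ForeignPtr
import Foreign.Marshal.Alloc (free)
import Foreign.Marshal.Array (mallocArray)

main :: IO ()
main = do
  -- emulate an array returned by some C routine
  ptr <- mallocArray 10 :: IO (Ptr Int)
  fptr <- newForeignPtr_ ptr
  -- use the raw pointer as the address of a StorableArray
  arr <- unsafeForeignPtrToStorableArray fptr (1,10)
           :: IO (StorableArray Int Int)
  writeArray arr 1 64
  x <- readArray arr 1
  print x
  -- emulate deallocation by C routines
  free ptr
```

Replacing "newForeignPtr_ ptr" with "newForeignPtr finalizerFree ptr" would make the garbage collector free the block automatically, as described above.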
8 The Haskell Array Preprocessor (STPP)
Using mutable (IO and ST) arrays in Haskell is not very handy. But there is one tool which adds syntactic sugar to make the use of such arrays very close to that of imperative languages. It is written by Hal Daume III and you can get it at
Using this tool, you can index array elements in arbitrarily complex expressions with the notation "arr[|i|]", and the preprocessor will automatically convert this syntactic sugar into ordinary array-access code, which simplifies array usage. Although not as elegant as STPP, it is implemented entirely inside the Haskell language without requiring any preprocessors.
10 Unsafe indexing, freezing/thawing, running over array elements
There are operations that convert between mutable and immutable arrays of the same type, namely 'freeze' (mutable->immutable) and 'thaw' (immutable->mutable). They make a new copy of the array. If you are sure that a mutable array will not be modified, or that an immutable array will not be used after the conversion, you can use unsafeFreeze/unsafeThaw. These operations convert the array in place if the input and resulting arrays have the same memory representation (i.e. the same type and boxing). Please note that the "unsafe*" operations modify memory - they set/clear a flag in the array header which specifies array mutability. So these operations can't be used together with multi-threaded access to arrays (using threads or some form of coroutines).
There are also operations that convert unboxed arrays to another element type, namely castIOUArray and castSTUArray. These operations rely on the actual type representation in memory and therefore there are no guarantees on their results. In particular, these operations can be used to convert any unboxable value to a sequence of bytes and vice versa. For example, they are used in the AltBinary library to serialize floating-point values. Please note that these operations don't recompute array bounds to reflect any changes in element size. You need to do that yourself using the 'sizeOf' operation.
While arrays can have any type of index, the internal representation only accepts Ints for indexing. The array libraries first use the Ix class to translate the polymorphic index into an Int. An internal indexing function is then called on this Int index. The internal functions are: unsafeAt, unsafeRead and unsafeWrite, found in the Data.Array.Base module. You can use these operations yourself in order to speed up your program by avoiding bounds checking. These functions are marked "unsafe" for a good reason -- they allow the programmer to access and overwrite arbitrary addresses in memory. These operations are especially useful if you need to walk through an entire array:
import Data.Array.Base (unsafeAt)

-- | Returns a list of all the elements of an array, in the same order
-- as their indices.
elems arr = [ unsafeAt arr i | i <- [0 .. rangeSize (bounds arr) - 1] ]
"unsafe*" operations in such loops are really safe because 'i' loops only through positions of existing array elements.
11 GHC-specific topics
11.1 Parallel arrays (module GHC.PArr)
As we already mentioned, the array library supports two array varieties - lazy boxed arrays and strict unboxed ones. A parallel array implements something intermediate: it's a strict boxed immutable array. This keeps the flexibility of using any data type as an array element while making both creation of and access to such arrays much faster. Array creation is implemented as one imperative loop that fills all the array elements, while accesses to array elements don't need to check the "box". It should be obvious that parallel arrays are not efficient in cases where the calculation of array elements is relatively complex and most elements will not be used. One more practical drawback is that parallel arrays don't support the IArray interface, which means that you can't write generic algorithms which work with both Array and the parallel array constructor.
Like many GHC extensions, this is described in a paper: An Approach to Fast Arrays in Haskell, by Manuel M. T. Chakravarty and Gabriele Keller.
You can also look at the sources of GHC.PArr module, which contains a lot of comments.
The special syntax for parallel arrays is enabled by "ghc -fparr" or "ghci -fparr" which is undocumented in the GHC 6.4.1 user manual.
11.2 Welcome to the machine: Array#, MutableArray#, ByteArray#, MutableByteArray#, pinned and moveable byte arrays
The GHC heap contains two kinds of objects. Some are just byte sequences, while the others are pointers to other objects (so-called "boxes"). This segregation allows the system to find chains of references when performing garbage collection and to update these pointers when memory used by the heap is compacted and objects are moved to new places. The internal (raw) GHC type Array# represents a sequence of object pointers (boxes). There is a low-level operation in the ST monad which allocates an array of specified size in the heap. Its type is something like (Int -> ST s Array#). The Array# type is used inside the Array type which represents boxed immutable arrays.
There is a different type for mutable boxed arrays (IOArray/STArray), namely MutableArray#. A separate type for mutable arrays is required because of the 2-stage garbage collection mechanism. The internal representations of Array# and MutableArray# are the same apart from some flags in the header, and this makes it possible to perform in-place conversion between MutableArray# and Array# (this is what the unsafeFreeze and unsafeThaw operations do).
Unboxed arrays are represented by the ByteArray# type. This is just a plain memory area in the Haskell heap, like a C array. There are two primitive operations that create a ByteArray# of specified size. One allocates memory in the normal heap and so this byte array can be moved when garbage collection occurs. This prevents the conversion of a ByteArray# to a plain memory pointer that can be used in C procedures (although it's still possible to pass a current ByteArray# pointer to an "unsafe foreign" procedure if the latter doesn't try to store this pointer somewhere). The second primitive allocates a ByteArray# of a specified size in the "pinned" heap area, which contains objects with a fixed location. Such a byte array will never be moved by garbage collection, so its address can be used as a plain Ptr and shared with the C world. The first way to create ByteArray# is used inside the implementation of all UArray types, while the second way is used in StorableArray (although StorableArray can also point to data allocated by C malloc). Pinned ByteArray# also used in ByteString.
There is also a MutableByteArray# type which is very similar to ByteArray#, but GHC's primitives support only monadic read/write operations for MutableByteArray#, and only pure reads for ByteArray#, as well as the unsafeFreeze/unsafeThaw operations which change the appropriate fields in the headers of these arrays. This differentiation doesn't make much sense except for additional safety checks.
So, pinned MutableByteArray# or C malloced memory is used inside StorableArray, pinned ByteArray# or C malloced memory - inside ByteString, unpinned MutableByteArray# - inside IOUArray and STUArray, and unpinned ByteArray# is used inside UArray.
The APIs of boxed and unboxed arrays are almost identical:
marr <- alloc n - allocates a mutable array of the given size
arr <- unsafeFreeze marr - converts a mutable array to an immutable one
marr <- unsafeThaw arr - converts an immutable array to a mutable one
x <- unsafeRead marr i - monadic reading of the value with the given index from a mutable array
unsafeWrite marr i x - monadic writing of the value with the given index to a mutable array
let x = unsafeAt arr i - pure reading of the value with the given index from an immutable array
(all indices are counted from 0)
Based on these primitive operations, the array library implements indexing with any type and with any lower bound, bounds checking and all other high-level operations. Operations that create immutable arrays just create them as mutable arrays in the ST monad, make all required updates on this array, and then use unsafeFreeze before returning the array from runST. Operations on IO arrays are implemented via operations on ST arrays using the stToIO operation.
11.3 Mutable arrays and GC
GHC implements 2-stage GC which is very fast. Minor GC occurs after each 256 kb allocated and scans only this area (plus recent stack frames) when searching for "live" data. This solution uses the fact that normal Haskell data are immutable and therefore any data structures created before the previous minor GC can't point to data structures created after it, since due to immutability, data can contain only "backward" references.
But this simplicity breaks down when we add to the language mutable boxed references (IORef/STRef) and arrays (IOArray/STArray). On each GC, including minor ones, each element in a mutable data structure has to be scanned, because it may have been updated since the last GC to point to data allocated since then.
For programs that contain a lot of data in mutable boxed arrays/references, GC times may easily outweigh the useful computation time. Ironically, one such program is GHC itself. The solution for such programs is to add a command-line option like "+RTS -A10m", which increases the size of minor GC chunks from 256 kb to 10 mb, making minor GC 40 times less frequent. You can see the effect of this change by using the "+RTS -sstderr" option: "%GC time" should significantly decrease.
There is a way to include this option in your executable so that it will be used automatically on each execution - you should just add the following line to your project's C source file:
char *ghc_rts_opts = "-A10m";
Of course, you can increase or decrease this value according to your needs.
Increasing the "-A" value doesn't come for free. Aside from the obvious increase in memory usage, execution times (of useful code) will also grow. The default "-A" value is tuned to be close to modern CPU cache sizes, so that most memory references fall inside the cache. When 10 mb of memory are allocated before doing GC, this data locality no longer holds. So increasing "-A" can either increase or decrease program speed. You should try various settings between 64 kb and 16 mb while running the program with "typical" parameters, and try to select the best setting for your specific program and CPU combination.
There is also another way to avoid increasing GC times: use either unboxed or immutable arrays. Also note that immutable arrays are built as mutable ones and then "frozen", so during the construction time GC will also scan their contents.
Fortunately, GHC 6.6 fixed the problem - it remembers which references/arrays were updated since the last GC and scans only them. You can suffer from the old problems only if you use very large arrays.
Further information:
- RTS options to control the garbage collector
- Problem description by Simon Marlow and report about GHC 6.6 improvements in this area
- Notes about GHC garbage collector
- Papers about GHC garbage collector :-) | http://haskell.org/haskellwiki/Arrays | crawl-002 | refinedweb | 2,312 | 51.68 |
I have this big operating system's project where I have to basically write my own mini operating system. I have everything broken into pieces and am taking it step by step. Starting with the loader. Prior to this class, I had no idea how intricate an OS system was and everything is fairly new to me. I've decided to do this in java for various reason mainly b/c it'll be a good refresher for me. Coding isn't my strong point...actually its probably a weak point and it's been 2 yrs since I last worked with java.
Can any1 with the patience and expertise help me through this ordeal? Ok so i understand the basic concept of the loader and i've started something here....the reading the input part. I have yet to get to implementing the part where it loads it into memory [I'll be using an array].
Can someone help me double check this?
import java.io.*;

public class Loader {
    public static void main(String args[]) {
        String line = null;
        int count = 0;
        BufferedReader buffRead = null;
        try {
            // args[0] is the program file to load
            buffRead = new BufferedReader(new FileReader(args[0]));
            while ((line = buffRead.readLine()) != null) {
                System.out.println(count + ": " + line);
                count++;
            }
        } catch (IOException e) {
            // catch possible io errors from readLine()
            System.out.println("Got an IOException error!");
            e.printStackTrace();
        } finally {
            try {
                if (buffRead != null) buffRead.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
Thank you! Whoever decides to help me will be getting a lot of this! | https://www.daniweb.com/programming/software-development/threads/146425/help-with-implementing-a-loader-operating-systems | CC-MAIN-2017-51 | refinedweb | 244 | 59.19 |
Python Stream processing.
Python Stream Processing
# Python Streams
# Forever scalable event processing & in-memory durable K/V store;
# as a library w/ asyncio & static typing.
import faust
Faust is a stream processing library, porting the ideas from Kafka Streams to Python.
It is used at Robinhood.
Here’s an example processing a stream of incoming orders:
app = faust.App('myapp', broker='kafka://localhost')

# Models describe how messages are serialized:
# {"account_id": "3fae-...", "amount": 3}
class Order(faust.Record):
    account_id: str
    amount: int

@app.agent(value_type=Order)
async def order(orders):
    async for order in orders:
        # process infinite stream of orders.
        print(f'Order for {order.account_id}: {order.amount}')
The Agent decorator defines a “stream processor” that essentially consumes from a Kafka topic and does something for every event it receives.
The agent is an async def function, so can also perform other operations asynchronously, such as web requests.
This system can persist state, acting like a database. Tables are named distributed key/value stores you can use as regular Python dictionaries.
Tables are stored locally on each machine using a superfast embedded database written in C++, called RocksDB.
Tables can also store aggregate counts that are optionally “windowed” so you can keep track of “number of clicks from the last day,” or “number of clicks in the last hour.” for example. Like Kafka Streams, we support tumbling, hopping and sliding windows of time, and old windows can be expired to stop data from filling up.
For reliability we use a Kafka topic as “write-ahead-log”. Whenever a key is changed we publish to the changelog. Standby nodes consume from this changelog to keep an exact replica of the data and enables instant recovery should any of the nodes fail.
To the user a table is just a dictionary, but data is persisted between restarts and replicated across nodes so on failover other nodes can take over automatically.
You can count page views by URL:
# data sent to 'clicks' topic sharded by URL key.
# e.g. key="" value="1"
click_topic = app.topic('clicks', key_type=str, value_type=int)

# default value for missing URL will be 0 with `default=int`
counts = app.Table('click_counts', default=int)

@app.agent(click_topic)
async def count_click(clicks):
    async for url, count in clicks.items():
        counts[url] += count
The data sent to the Kafka topic is partitioned, which means the clicks will be sharded by URL in such a way that every count for the same URL will be delivered to the same Faust worker instance.
Faust supports any type of stream data: bytes, Unicode and serialized structures, but also comes with “Models” that use modern Python syntax to describe how keys and values in streams are serialized:
# Order is a json serialized dictionary,
# having these fields:
class Order(faust.Record):
    account_id: str
    product_id: str
    price: float
    quantity: float = 1.0

orders_topic = app.topic('orders', key_type=str, value_type=Order)

@app.agent(orders_topic)
async def process_order(orders):
    async for order in orders:
        # process each order using regular Python
        total_price = order.price * order.quantity
        await send_order_received_email(order.account_id, order)
Faust is statically typed, using the mypy type checker, so you can take advantage of static types when writing applications.
The Faust source code is small, well organized, and serves as a good resource for learning the implementation of Kafka Streams.
- Learn more about Faust in the introduction introduction page
- to read more about Faust, system requirements, installation instructions, community resources, and more.
- or go directly to the quickstart tutorial
- to see Faust in action by programming a streaming application.
- then explore the User Guide
- for in-depth information organized by topic.

Faust is built on asyncio and works with the Python libraries you already use: scikit, TensorFlow, etc.
Installation
You can install Faust either via the Python Package Index (PyPI) or from source.
To install using pip:
$ pip install -U faust
Bundles
Faust also defines a group of setuptools extensions that can be used to install Faust and the dependencies for a given feature.
You can specify these in your requirements or on the pip command-line by using brackets. Separate multiple bundles using the comma:
$ pip install "faust[rocksdb]" $ pip install "faust[rocksdb,uvloop,fast]"
The following bundles are available:
Stores
Optimization
Sensors
Event Loops
Debugging
Downloading and installing from source
Download the latest version of Faust from
You can install it by doing:
$ tar xvfz faust-0.0.0.tar.gz
$ cd faust-0.0.0
$ python setup.py build
# python setup.py install
The last command must be executed as a privileged user if you are not currently using a virtualenv.
Using the development version
With pip
You can install the latest snapshot of Faust using the following pip command:
$ pip install
Can I use Faust with Django/Flask/etc.?
Yes! Use gevent or eventlet as a bridge to integrate with asyncio.
Using gevent
This approach works with any blocking Python library that can work with gevent.
Using gevent requires you to install the aiogevent module, and you can install this as a bundle with Faust:
$ pip install -U faust[gevent]
Then to actually use gevent as the event loop you have to either use the -L <faust --loop> option to the faust program:
$ faust -L gevent -A myproj worker -l info
or add import mode.loop.gevent at the top of your entry point script:
#!/usr/bin/env python3
import mode.loop.gevent
REMEMBER: It’s very important that this is at the very top of the module, and that it executes before you import libraries.
Using eventlet <faust --loop>?
Yes! Use the tornado.platform.asyncio bridge:
Can I use Faust with Twisted?
Yes! Use the asyncio reactor implementation:
Will you support Python 3.5 or earlier??
You may need to increase the limit for the maximum number of open files. The following post explains how to do so on OS X:
Getting Help
Mailing list
For discussions about the usage, development, and future of Faust, please join the faust-users mailing list.
Resources
Bug tracker
If you have any suggestions, bug reports, or annoyances please report them to our issue tracker at
License
This software is licensed under the New BSD License. See the LICENSE file in the top distribution directory for the full license text.
Contributing
Development of Faust happens at GitHub:
You’re highly encouraged to participate in the development of Faust.
Be sure to also read the Contributing to Faust section in the documentation.
Code of Conduct
Everyone interacting in the project’s codebases,.4.6 available at.
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/faust/ | CC-MAIN-2019-09 | refinedweb | 1,102 | 56.45 |
18 August 2008 09:54 [Source: ICIS news]
MUMBAI (ICIS News)--?xml:namespace>
The company said the net income was down because the previous year's figure was boosted by substantial one-off factors, especially gains from the divestment of the mining technology company DBT.
For the period, the company’s sales rose 11% to €7.9bn and its earnings before interest and tax (EBIT) rose 17% to €869m, on higher earnings from all business units.
“The good figures underscore the strength of our portfolio. Our three business areas are very resilient," Werner Müller, the company CEO said.
For the full year 2008, the company expected a sales growth in the high single digit range and also aimed to post a slight rise in EBIT, as compared with the year-ago period.
( | http://www.icis.com/Articles/2008/08/18/9149485/germanys-evonik-posts-27-fall-in-h1-net-income.html | CC-MAIN-2014-15 | refinedweb | 132 | 61.97 |
Django Models and Migrations
In my last two articles, I looked at the Django Web application framework, written in Python. Django's documentation describes it as an MTV framework, in which the acronym stands for model, template and views.
When a request comes in to a Django application, the application's URL patterns determine which view method will be invoked. The view method can then, as I mentioned in previous articles, directly return content to the user or send the contents of a template. The template typically contains not only HTML, but also directives, unique to Django, which allow you to pass along variable values, execute loops and display text conditionally.
You can create lots of interesting Web applications with just views and templates. However, most Web applications also use a database, and in many cases, that means a relational database. Indeed, it's a rare Web application that doesn't use a database of some sort.
For many years, Web applications typically spoke directly with the database, sending SQL via text strings. Thus, you would say something like:
s = "SELECT first_name, last_name FROM Users where id = 1"
You then would send that SQL to the server via a database client library and retrieve the results using that library. Although this approach does allow you to harness the power of SQL directly, it means that your application code now contains text strings with another language. This mix of (for example) Python and SQL can become difficult to maintain and work with. Besides, in Python, you're used to working with objects, attributes and methods. Why can't you access the database that way?
The answer, of course, is that you can't, because relational databases eventually do need to receive SQL in order to function correctly. Thus, many programs use an ORM (object-relational mapper), which translates method calls and object attributes into SQL. There is a well established ORM in the Python world known as SQLAlchemy. However, Django has opted to use its own ORM, with which you define your database tables, as well as insert, update and retrieve information in those tables.
So in this article, I cover how you create models in Django, how you can create and apply migrations based on those model definitions, and how you can interact with your models from within a Django application.
Models
A "model" in the Django world is a Python class that represents a table in the database. If you are creating an appointment calendar, your database likely will have at least two different tables: People and Appointments. To represent these in Django, you create two Python classes: Person and Appointment. Each of these models is defined in the models.py file inside your application.
This is a good place to point out that models are specific to a particular Django application. Each Django project contains one or more applications, and it is assumed that you can and will reuse applications within different projects.
In the Django project I have created for this article ("atfproject"), I have a single application ("atfapp"). Thus, I can define my model classes in atfproject/atfapp/models.py. That file, by default, contains a single line:
from django.db import models
Given the example of creating an appointment calendar, let's start by defining your Appointment model:
from django.db import models class Appointment(models.Model): starts_at = models.DateTimeField() ends_at = models.DateTimeField() meeting_with = models.TextField() notes = models.TextField() def __str__(self): return "{} - {}: Meeting with {} ({})".format(self.starts_at, self.ends_at, self.meeting_with, self.notes)
Notice that in Django models, you define the columns as class attributes, using a Python object known as a descriptor. Descriptors allow you to work with attributes (such as appointment.starts_at), but for methods to be fired in the back. In the case of database models, Django uses the descriptors to retrieve, save, update and delete your data in the database.
The one actual instance method in the above code is __str__, which every Python object can use to define how it gets turned into a string. Django uses the __str__ method to present your models.
Django provides a large number of field types that you can use in your models, matching (to a large degree) the column types available in most popular databases. For example, the above model uses two DateTimeFields and two TextFields. As you can imagine, these are mapped to the DATETIME and TEXT columns in SQL. These field definitions not only determine what type of column is defined in the database, but also the way in which Django's admin interface and forms allow users to enter data. In addition to TextField, you can have BooleanFields, EmailFields (for e-mail addresses), FileFields (for uploading files) and even GenericIPAddressField, among others.
Beyond choosing a field type that's appropriate for your data, you also can pass one or more options that modify how the field behaves. For example, DateField and DateTimeField allow you to pass an "auto_now" keyword argument. If passed and set to True, Django automatically will set the field to the current time when a new record is stored. This isn't necessarily behavior that you always will want, but it is needed frequently enough that Django provides it. That's true for the other fields, as well—they provide options that you might not always need, but that really can come in handy.
Migrations
So, now you have a model! How can you start to use it? Well, first you somehow need to translate your model into SQL that your database can use. This means, before continuing any further, you need to tell Django what database you're using. This is done in your project's configuration file; in my case, that would be atfproject/atfproject/settings.py. That file defines a number of variables that are used throughout Django. One of them is DATABASES, a dictionary that defines the databases used in your project. (Yes, it is possible to use more than one, although I'm not sure if that's normally such a good idea.)
By default, the definition of DATABASES is:
DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'), } }
In other words, Django comes, out of the box, defined to use SQLite. SQLite is a wonderful database for most purposes, but it is woefully underpowered for a real, production-ready database application that will be serving the general public. For such cases, you'll want something more powerful, such as my favorite database, PostgreSQL. Nevertheless, for the purposes of this little experiment here, you can use SQLite.
One of the many advantages of SQLite is that it uses one file for each database; if the file exists, SQLite reads the data from there. And if the file doesn't yet exist, it is created upon first use. Thus, by using SQLite, you're able to avoid any configuration.
However, you still somehow need to convert your Python code to SQL definitions that SQLite can use. This is done with "migrations".
Now, if you're coming from the world of Ruby on Rails, you are familiar with the idea of migrations—they describe the changes made to the database, such that you easily can move from an older version of the database to a newer one. I remember the days before migrations, and they were significantly less enjoyable—their invention really has made Web development easier.
Migrations are latecomers to the world of Django. There long have been external libraries, such as South, but migrations in Django itself are relatively new. Rails users might be surprised to find that in Django, developers don't create migrations directly. Rather, you tell Django to examine your model definitions, to compare those definitions with the current state of the database and then to generate an appropriate migration.
Given that I just created a model, I go back into the project's root directory, and I execute:
django-admin.py makemigrations
This command, which you execute in the project's root directory, tells Django to look at the "atfapp" application, to compare its models with the database and then to generate migrations.
Now, if you encounter an error at this point (and I often do!), you should double-check to make sure your application has been added to the project. It's not sufficient to have your app in the Django project's directory. You also must add it to INSTALLED_APPS, a tuple in the project's settings.py. For example, in my case, the definition looks like this:
INSTALLED_APPS = ( 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'atfapp' )
The output of
makemigrations on my system looks like this:
Migrations for 'atfapp': 0001_initial.py: - Create model Appointment
In other words, Django now has described the difference between the current state of the database (in which "Appointment" doesn't exist) and the final state, in which there will be an "Appointment" table. If you're curious to see what this migration looks like, you can always look in the atfapp/migrations directory, in which you'll see Python code.
Didn't I say that the migration will describe the needed database updates in SQL? Yes, but the description originally is written in Python. This allows you, at least in theory, to migrate to a different database server, if and when you want to do so.
Now that you have the migrations, it's time to apply them. In the project's root directory, I now write:
django-admin.py migrate
And then see:
Operations to perform: Apply all migrations: admin, contenttypes, auth, atfapp, sessions Running migrations: Applying contenttypes.0001_initial... OK Applying auth.0001_initial... OK Applying admin.0001_initial... OK Applying atfapp.0001_initial... OK Applying sessions.0001_initial... OK
The above shows that the "atfapp" initial migration was run. But where did all of these other migrations come from? The answer is simple. Django's user model and other built-in models also are described using migrations and, thus, are applied along with mine, if that hasn't yet happened in my Django project.
You might have noticed that each migration is given a number. This allows Django to keep track of the history of the migrations and also to apply more than one, if necessary. You can create a migration, then create a new migration and then apply both of them together, if you want to keep the changes separate.
Or, perhaps more practically, you can work with other people on a project, each of whom is updating the database. Each of them can create their own migrations and commit them into the shared Git repository. If and when you retrieve the latest changes from Git, you'll get all of the migrations from your coworkers and then can apply them to your app.
Migrating Further
Let's say that you modify your model. How do you create and apply a new migration? The answer actually is fairly straightforward. Modify the model and ask Django to create an appropriate migration. Then you can run the newly created migration.
So, let's add a new field to the Appointment model, "minutes", to keep track of what happened during the meeting. I add a single line to the model, such that the file now looks like this:
from django.db import models class Appointment(models.Model): starts_at = models.DateTimeField() ends_at = models.DateTimeField() meeting_with = models.TextField() notes = models.TextField() minutes = models.TextField() # New line here! def __str__(self): return "{} - {}: Meeting with {} ({})".format(self.starts_at, self.ends_at, self.meeting_with, self.notes)
Now I once again run
makemigrations, but this time, Django is
comparing the current definition of the model with the current state
of the database. It seems like a no-brainer for Django to deal with,
and it should be, except for one thing: Django defines columns, by
default, to forbid NULL values. If I add the "minutes" column, which
doesn't allow NULL values, I'll be in trouble for existing
rows. Django thus asks me whether I want to choose a default value to
put in this field or if I'd prefer to stop the migration before it
begins and to adjust my definitions.
One of the things I love about migrations is that they help you avoid stupid mistakes like this one. I'm going to choose the first option, indicating that "whatever" is the (oh-so-helpful) default value. Once I have done that, Django finishes with the migration's definition and writes it to disk. Now I can, once again, apply the pending migrations:
django-admin.py migrate
And I see:
Operations to perform: Apply all migrations: admin, contenttypes, auth, atfapp, sessions Running migrations: Applying atfapp.0002_appointment_minutes... OK
Sure enough, the new migration has been applied!
Of course, Django could have guessed as to my intentions. However, in this case and in most others, Django follows the Python rule of thumb in that it's better to be explicit than implicit and to avoid guessing.
Conclusion
Django's models allow you to create a variety of different fields in a database-independent way. Moreover, Django creates migrations between different versions of your database, making it easy to iterate database definitions as a project moves forward, even if there are multiple developers working on it.
In my next article, I plan to look at how you can use models that you have defined from within your Django. | https://www.linuxjournal.com/content/djangos-migrations-make-it-easy-define-and-update-your-database-schema?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+linuxjournalcom+%28Linux+Journal+-+The+Original+Magazine+of+the+Linux+Community%29 | CC-MAIN-2018-39 | refinedweb | 2,245 | 55.74 |
23 April 2010 15:29 [Source: ICIS news]
(Releads and updates throughout.)
LONDON(ICIS news)—CEPSA Quimica has declared force majeure (FM) on phenol and acetone from it plant in Huelva, Spain, a company source confirmed on Friday.
Earlier in the day CEPSA Quimica said that it had detected major corrosion on its newest production line, line three, and was assessing whether to declare force majeure on phenol and acetone.
“We had detected major corrosion on the line and tried to make some emergency repairs,” the company source said. “But this does not guarantee the safety of the line so there will be an enquiry and investigation which will take a long time.”
CEPSA had been talking with its legal department about whether to put customers on allocation, possibly followed by force majeure, or both.
Earlier, ?xml:namespace>
In correspondence
"When you hear half and hour ago that the major producer has put customers on allocation and now this - it will be like a bomb shell," the source said.
For more on acetone | http://www.icis.com/Articles/2010/04/23/9353542/cepsa-declares-immediate-fm-on-phenol-acetone-from-huelva.html | CC-MAIN-2014-41 | refinedweb | 173 | 58.52 |
>><<
The “Software”
Decided against DDWRT, since I can’t determine from the website whether or not I could make changes to the source if I needed to. That being the case, I found OpenWRT and am impressed. I followed the instructions for building my own image, but so far that was unnecessary as I haven’t changed a thing, so I won’t bother with details.
Next I used the ASUS “Firmware Restoration” utility to load openwrt-brcm-2.4-squashfs.trx to the router. This was a bit difficult because I had a wireless connection as well as the wired connection to the router active on my laptop and the wl520gu somehow managed to assign itself an IP on the 192.168.24.0 network used on the wireless, instead of the 192.168.1.0 network configured on the wired nic. I figured this out by loading up wireshark; after making this determination, changed the IP of the wired interface on my laptop also to a 192.168.24.x address and used “route” to add a route to the wl520gu on the wired nic’s IP (192.168.24.10).
route add 192.168.24.49 mask 255.255.255.255 192.168.24.10
OpenWRT configuration
After loading the custom firmware, I connected my laptop to one of the LAN ports on the wl520gu and got an 192.168.1.x IP. The wl520gu initializes with wireless off and a static IP of 192.168.1.1 and I want to configure the wl520 as a wireless client. To do this, I followed these steps:
From PC:
telnet 192.168.1.1
at which point I see:
BusyBox v1.14.4 (2009-11-25 22:41:41 EST) built-in shell (ash) Enter 'help' for a list of built-in commands. _______ ________ __ | |.-----.-----.-----.| | | |.----.| |_ | - || _ | -__| || | | || _|| _| |_______|| __|_____|__|__||________||__| |____| |__| W I R E L E S S F R E E D O M KAMIKAZE (bleeding edge, r18540) ------------------ * 10 oz Vodka Shake well with ice and strain * 10 oz Triple sec mixture into 10 shot glasses. * 10 oz lime juice Salute! --------------------------------------------------- root@OpenWrt:~#
Now from the telnet window:
root@OpenWrt:~# vi /etc/config/wireless config wifi-device wl0 option type broadcom option channel 6 # REMOVE THIS LINE TO ENABLE WIFI: # option disabled 1 config wifi-iface option device wl0 option network lan option mode sta option ssid hundred option encryption none
Mode sta specifies wireless client, hundred is the ssid of my wireless network, and notice the commenting out of “option disabled 1″. Now to drop the static IP:
root@OpenWrt:~# vi /etc/config/network
note changes to “lan” config:
config 'interface' 'lan' option 'type' 'bridge' option 'ifname' 'eth0.0' option 'proto' 'dhcp' #option 'ipaddr' '192.168.1.1' #option 'netmask' '255.255.255.0' #option 'dns' #option 'gateway'
Disable telnet and enable ssh by setting a root passwd:
root@OpenWrt:~# passwd
I also installed X-Wrt, but their wiki is fairly informative and its not necessary for the rest of what’s here, so I’m going to skip that step. After a reboot, I can access the wl520gu from ssh wirelessly; just type “reboot” at the prompt and then check out your dhcp leases to figure out what IP your new wireless client got.
Talking to the Arduino
Since I wanted to communicate with the Arduino using the wl520gu’s serial connection, I started by using ser2net. Actually, first I had to also disable the serial terminal!
root@OpenWrt:~# vi /etc/inittab #tts/0::askfirst:/bin/ash --login #ttyS0::askfirst:/bin/ash --login
The version of openwrt I installed has serial at /dev/tts/0, but I commented out both tts/0 and ttyS0 for good measure. Now onto installing ser2net.
root@OpenWrt:~# opkg update root@OpenWrt:~# opkg install ser2net root@OpenWrt:~# vi /etc/ser2net.conf
I tried communication with the Arduino at 115200 at first, but there were odd errors, so I reverted to 38400 which now works rather well. I added this line to the conf file:
3008:telnet:0:/dev/tts/0:38400 NONE 1STOPBIT 8DATABITS LOCAL -RTSCTS
After this setup, I can telnet to the wl520gu and communicate with my Arduino!
telnet 192.168.24.101 3008
Arduino SW
This is nothing fancy and requires quite a bit of improvement, but for now this is what it is. I have a thermistor connected to my Arduino, so I’m going to setup serial commands to read the temperature. Also decided to have a “start” command, because at boot the wl520gu dumps info to the serial port and a response from the Arduino caused boot to fail.
#include <LiquidCrystal.h> #include <string.h> // LiquidCrystal display with: // rs on pin 12 // rw on pin 11 // enable on pin 10 // d4, d5, d6, d7 on pins 5, 4, 3, 2 LiquidCrystal lcd(12, 11, 10, 9, 8, 3, 2); int analogPin = 0; int val; void setup() { // Print a message to the LCD. //lcd.print("hello, world!"); delay(500); Serial.begin(38400); } void loop() { #define MAX_CMD_LENGTH 20 char text[17]; double ri = 0.0986176265; double temp; static char cmds[MAX_CMD_LENGTH]; static int cmdIdx = 0; static int celsius, fahrenheit; static char ready=0; int count = 0; long R; // only update temp read every 256ms (thereabouts) if ((count % 256) == 0) { val = analogRead(analogPin); // read the input pin lcd.clear(); lcd.setCursor(0,0); sprintf(text, "ADC read= %d", val); lcd.print(text); lcd.setCursor(0,1); R = ((long)val * 10000)/(1023-val); temp = 34350 / (log(R / ri)); celsius = (int)temp - 2730; fahrenheit = (celsius*9)/5 + 320; sprintf(text, " Temp=% 3d.%dF", fahrenheit/10, fahrenheit%10); lcd.print(text); } if (Serial.available()) { int incomingByte = 0; while (incomingByte != -1) { // read the incoming byte: incomingByte = Serial.read(); if (incomingByte == -1) continue; if (cmdIdx+1 >= MAX_CMD_LENGTH) { cmdIdx = 0; } switch (incomingByte) { case 13: cmdIdx = 0; break; case '?': cmds[cmdIdx]=0; cmdIdx = 0; if (strcasecmp(cmds, "QWERTYUIOP")==0) { ready =1; Serial.println("ready"); } if (ready) { if (strcasecmp(cmds, "TEMPF")==0) { sprintf(text, "TEMPF:%d.%dF", fahrenheit/10, fahrenheit%10); Serial.println(text); } if (strcasecmp(cmds, "TEMPC")==0) { sprintf(text, "TEMPC:%d.%dC", celsius/10, celsius%10); Serial.println(text); } if (strcasecmp(cmds, "ADC")==0) { sprintf(text, "ADC:%d", val); Serial.println(text); } } break; default: cmds[cmdIdx++]=(incomingByte&0xFF); } //Serial.print("Incoming: "); //Serial.println(incomingByte, DEC); } } ++count; delay(1); }
So after boot the sequence will be:
wl520->arduino: ?qwertyuiop? arduino->wl520: ready wl520->arduino: ?tempf? arduino->wl520: TEMPF:xx.xF wl520->arduino: ?tempc? arduino->wl520: TEMPC:xx.xC ...
Obviously the protocol needs some work, but this was step 1.
WL520GU Scripts
Now to get the temp values directly from a script on the wl520. I usually would use Perl for this, and as a matter of fact, this is pretty much my first Bourne shell script. The hope is to get a rudimentary lock to prevent simultaneous access via a cron job and a cgi script:
root@OpenWrt:~# vi /root/currtemp.sh while [ "$(ls -A /root/currtemp.lck 2> NUL)" ] ; do sleep 1s done mkdir /root/currtemp.lck 2> NUL touch /root/currtemp.lck/$$.lck (echo ?tempf? > /dev/tts/0) && temp=$(grep -m 1 TEMPF /dev/tts/0 | cut -d : -f 2) temp2=$(date | cut -d " " -f 4) echo $temp2 = $temp rm -r /root/currtemp.lck
I wanted to use cron to read the temperature regularly and store it to a log file, but cron was disabled, so the following was necessary:
root@OpenWrt:~# /etc/init.d/cron enable
And here’s my crontab:
root@OpenWrt:~# crontab -e #.---------------- minute (0 - 59) #| .------------- hour (0 - 23) #| | .---------- day of month (1 - 31) #| | | .------- month (1 - 12) OR jan,feb,mar,apr ... #| | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,f #| | | | | #* * * * * command to be executed 0,20,40 * * * * /root/gettemp.sh 0 0 * * * echo -n "" > /var/log/temp.log 0,5,10,15,20,25,30,35,40,45,50,55 * * * * rdate time-a.nist.gov
Ahh .. and since it’s in the crontab, should mention that the wl520 loses time quickly .. I’m guessing this is because it is supposed to run at 240MHz, and yet I see reports that the open source firmwares are only able to run the clock at 200MHz. I installed ntpd via opkg, but didn’t see that it was working, so I force a time update using rdate from the crontab.
And here’s /root/gettemp.sh:
/root/currtemp.sh >> /var/log/temp.log
Not sure if httpd was configured before I installed x-wrt; I’m guessing not, so that may be a pre-requisite, but here’s my shell script for providing some access to my temperature log:
root@OpenWrt:~# vi /www/cgi-bin/temp.sh #!/bin/sh echo -en "Content-Type: text/html\r\n\r\n"; cat <<EOF <html> <head> <title>Temperature Page; made possible by an Arduino</title> <!-- meta http-equiv="refresh" content="5"; URL=/cgi-bin/temp.sh" --> </head> EOF echo \<body\> echo \<h1\>Temperature Log\</h1\> echo -en "<h2>Current</h2>"; echo \<hr\> one="-" two="-" echo "<table border=1>" for i in `/root/currtemp.sh` ; do three="$two" two="$one"; one="$i" if [ $two = "=" ] ; then echo -en "<tr><td>\r\n" echo -en " $three </td><td>\r\n"; echo -en " $one " echo -en "</td></tr>\r\n" fi done echo "</table>"; echo -en "<h2>History</h2>"; echo \<hr\> one="-" two="-" echo "<table border=1>" for i in `cat /var/log/temp.log` ; do three="$two"; two="$one"; one="$i"; if [ $two = "=" ] ; then echo -en "<tr><td>\r\n" echo -en " $three </td><td>\r\n" echo -en " $one " echo -en " </td></tr>\r\n" fi done echo "</table>" echo \<hr\> echo \</body\>\</html\>
.. and to make the log accessible to the public:
root@OpenWrt:~# vi /etc/httpd.conf /cgi-bin/webif/:root:$p$root /cgi-bin/webif/:admin:$p$root /cgi-bin/:: .asp:text/html .svg:image/svg+xml .png:image/png .gif:image/gif .jpg:image/jpg .js:application/x-javascript
Example Use:
From ssh:
From browser:
References:
Obviously I didn’t figure all of this out on my own. Here are some references to material I used along the way!
#1 by Adam on January 27, 2010 - 11:15 am
Quote
I’m curious. Which model of the arduino are you using? And when you say that you connected the TX and RX from the router to the arduino, did you connect it directly to the USB port on the Arduino, or did you use some of the digital pins on the arduino? It’s kind of hard to infer from the picture because the shield is in the way.
#2 by tim on January 27, 2010 - 12:13 pm
Quote
The arduino is an arduino duemilanove and the TX and RX from the router were connected directly to digital pins 0 & 1. Although I don’t see that I stated it explicitly, I haven’t had time to find a way to configure the serial at boot (actually haven’t done more than look at the output since making this post) .. currently it takes a telnet to the ser2net port after a reboot before the port is configured correctly to use any of the scripts.
#3 by displacedtexan on April 1, 2010 - 11:52 am
Quote
Thanks so much! I have wanted to do something very similar with my arduino and 520gu. I was stuck on how to get serial info from my arduino using a shell script. Your temp=$(grep -m 1 TEMPF /dev/tts/0 | cut -d : -f 2) code was perfect.
Two quick questions… you install ser2net, but I don’t think you ever used it… is that correct?
Also, the 520gu serial is at 3.3v and the arduino is 5v. Do you foresee problems with that setup?
Thanks again!
#4 by Marcus Porter on May 2, 2010 - 2:54 pm
Quote
Have you actually been able to use the arduino IDE or avrdude to program the arduino with the set2net exposed serial port?
I’m doing something similar and have been able to talk to the arduino via a windows virtual serial port driver, but have been unable to actually program it, possibly due to the DTR pulsing that the bootloader wants.
I would like to eventually be able to reprogram the arduino’s remotely.
#5 by tim on May 2, 2010 - 4:37 pm
Quote
Hey Marcus, I had thought about trying that myself, but I never actually looked into how the programming worked. For my eventual purposes it won’t be that big of a deal to have to program via USB, but it would definitely be more convenient to program via the wireless.
#6 by tim on May 2, 2010 - 4:41 pm
Quote
displacedtexan – I was using ser2net to configure the serial port to the correct baudrate .. it was a hack, but it also allowed for simple testing from a telnet prompt on the pc. As for the 3.3V/5V difference, it would definitely be better to either use a 3.3V arduino or put a level-shifter between the two. It’s quite possible that I could have used a higher baud rate if I had resolved that difference, but in it’s current state it works just fine.
Pingback: Connecting Arduino to WL-520GU via Serial Port | Maverick Geek
Trackback: Alexander7
Trackback: quick and easy ways to make money
Trackback: butt
Pingback: Connecting Arduino to WL-520GU via Serial Port | MaverickGeek
Trackback: Anonymous | http://takenapart.com/?p=3 | CC-MAIN-2018-26 | refinedweb | 2,260 | 63.29 |
Created attachment 584285 [details]
Screenshot of the initial menu partially localized
Just installed the latest build on an Asus Eee Pad: Mozilla/5.0 (Android; Linux armv7l; rv:12.0a1) Gecko/20111225 Firefox/12.0a1 Fennec/12.0a1.
The first time you start the browser, some strings in the top right menu are in English:
* Bookmark or Remove (checkbox)
* Request Desktop Site
If I open "Impostazioni" (Settings) I get all strings in English; the same happens if I try to use "Elimina impostazioni sito" (Clear Site Settings).
Rotating the screen is enough to make all strings display localized as expected.
The only things that remain in English are:
* the whole start page's content (Top Sites, etc.)
* Request Desktop Site/Request Mobile Site
As for the second string, I searched MXR but couldn't find any obvious explanation.
Similar behavior on a Samsung Galaxy S II with 2.3.3, so it's not just related to Honeycomb.
In this case the number of strings that always remain in English after a screen rotation is larger: only "Bookmark" and "Other" are translated, and the menu displayed by pressing "Other" is completely in English.
With the 20111226 build, "Request Desktop Site" is displayed localized; usually it changes after you create a new tab or switch to a different one.
Also, all buttons on my Samsung Galaxy S II with 2.2.3 are localized after a screen rotation.
The only remaining problem is the one described in the summary: initially, Firefox is a mix of English and localized strings.
Sriram - Is this an inflation issue? Maybe something is racing?
I'm having the same problem for es-ES (tested on both an HTC Desire and an Xperia Neo V, GB). Is there any chance this bug gets attention in the near future? I cannot test my translation properly, and I'd love to do it in the early stages of the Aurora cycle.
Stas, can you confirm this?
I can confirm this bug for Russian locale on HTC Desire with CyanogenMod 7.1.0 installed.
(In reply to Guillermo López (:willyaranda) from comment #4)
> I cannot test my translation properly, and I'd love to do it in the early
> stages of aurora cycle.
Rotating the screen doesn't work for you?
Also, if you can reproduce it, and look at adb logcat for anything surprising?
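A minimal sketch of the triage step suggested here, assuming the dump has been saved from the device with `adb logcat -d > logcat.txt`; the sample file created below is a hypothetical stand-in so the filter itself can be demonstrated without a connected device:

```shell
# Hypothetical stand-in for a real device dump (on a device you would run:
#   adb logcat -d > logcat.txt).
printf 'D/GeckoApp( 210): onCreate\nE/GeckoApp( 210): locale resource missing\nI/ActivityManager: ok\n' > logcat.txt

# Keep only lines tagged GeckoApp or mentioning "locale":
grep -iE 'GeckoApp|locale' logcat.txt
```

On a real device, piping `adb logcat` straight into the same `grep` avoids the intermediate file.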
(In reply to flod (Francesco Lodolo) from comment #7)
> (In reply to Guillermo López (:willyaranda) from comment #4)
> > I cannot test my translation properly, and I'd love to do it in the early
> > stages of aurora cycle.
> Rotating the screen doesn't work for you?
nope, menu pop-up stays in a mixed state (everything except "Bookmarks" ("Marcadores") in English if I rotate.
But in the Settings menu, rotation changes the language to es-ES.
(In reply to Axel Hecht [:Pike] from comment #8)
> Also, if you can reproduce it, and look at adb logcat for anything
> surprising?
Anything suspicious, no ERRORs in GeckoApp or related.
Well, if I rotate a few times the phone, it *maybe* change the language. Now I've seen everything translated in menu pop-up except "Bookmarks" (so inverted from my previous comments).
But still, not working with a simple rotation.
*** Bug 717533 has been marked as a duplicate of this bug. ***
I know this is bugspam, but I just read the announcement in dev.l10n by Jeff about localizing Fennec11 (Native) and I think this bug should be fixed ASAP (or at least before transplanting aurora->beta if beta will go to the market as usual).
I can't test my language properly (I can blindly test comparing with en-US, but this is not ideal) and I've found a few string that doesn't fit in the screen and I needed to shorten them.
If this bug only truly affects multilanguage APKs, then we could use the new single language APKs to test languages:
(In reply to Mark Finkle (:mfinkle) from comment #13)
> If this bug only truly affects multilanguage APKs, then we could use the new
> single language APKs to test languages:
>
>-
> android-l10n/
Ok, thanks for pointing this to me. Anyway, are you going to ship multilocale builds for Firefox for Android in the market? Or just single locales depending on the location provided by the Market?
Thanks!
We'll need the multi-apk for the amazon market, so we can't really ignore this bug..
Even if we're going to ship single locale builds, I think it would be useful to give this bug a clear priority.
(In reply to flod (Francesco Lodolo) from comment #16)
>.
>
> android_strings.dtd#4
Should be fixed by bug 712970
Since I'm unable to make multi-locale builds, I couldn't try testing this.
From what I remember from initial code (from when the screenshots where taken):
1. Bookmarks had two different strings ("Bookmark" and "Remove") which were changed based on whether the site was bookmarked or not. This was changing in code in java. Now, only one string "Bookmark" is being used.
2. I am not sure about "Request Desktop site". But I don't find that menu anymore.
Now, when the value is changed in code, the android tries to get "localized" string always. In this case, the localized string would be from en-US as the device might have been running on English (US) phones. Note: This is device wide locale.
Android cannot get localized strings based on Fennec's language (as the device's locale is still different). Fennec should request it to be run on a different locale than the system's, so that Android can do it for us. However, our current code doesn't support this. We only have "one" strings.xml file in res/values/ -- which will be the default resource used.
I am not sure if multiple resource folders (of form ab-CD) are generated and placed inside res/values. If they aren't then changing strings cannot happen.
Still, from the screenshots, and the recent code changes to menu, I believe this problem wouldn't exist anymore.
I've tried latest nightly and the problem is still there. Sometimes English, sometimes Spanish…
(In reply to Sriram Ramasubramanian [:sriram] from comment #18)
> Still, from the screenshots, and the recent code changes to menu, I believe
> this problem wouldn't exist anymore.
Wrong ;-)
Request Desktop site is gone, menus changed structure but the problem is still the same (ALL preferences and dialogs are displayed in English until you do a screen rotation, sometimes more than one).
I could see the preferences start in English and change to the language after rotation. But the menus were working fine for me.
I am unable to build a multi-locale clone. I tried setting the device language to French and loading the app (ideally getLocale() should return French, though resources will be in english).
Both for preferences and awesome-screen, when the activity first loads it says English. On rotation, it says French. I am unable to understand what is causing the issue for the same. - This piece of code is confusing me. Does this set the locale to entire application or just the activity?
Created attachment 592969 [details] [diff] [review]
Patch
Phew! Finally with Mark's help we found the issue :)
The "setSelectedLocale()" by making it start in en-US (as the locale is reset).
With this fix, fennec wouldn't have issues with multi-locales.
Comment on attachment 592969 [details] [diff] [review]
Patch
># HG changeset patch
># User Sriram Ramasubramanian <sriram@mozilla.com>
># Date 1327980911 28800
># Node ID 4fd7aa4c5622c199a0ac061122447eafddb956b0
># Parent bfeeb813aef2dfe25a74343034664127d954b274
>Bug 713464: Application Locale should not be reset from CPP.
>
>diff --git a/mobile/android/base/GeckoAppShell.java b/mobile/android/base/GeckoAppShell.java
>--- a/mobile/android/base/GeckoAppShell.java
>+++ b/mobile/android/base/GeckoAppShell.java
>@@ -1142,38 +1142,44 @@ public class GeckoAppShell
> ConnectivityManager cm = (ConnectivityManager)
> GeckoApp.mAppContext.getSystemService(Context.CONNECTIVITY_SERVICE);
> if (cm.getActiveNetworkInfo() == null)
> return false;
> return true;
> }
>
> public static void setSelectedLocale(String localeCode) {
>+ /* Bug 713464: This */
>+
> /* We're not using this, not need to save it (see bug 635342)
Can you add a trailing */ at the end of this line?
Pushed to inbound with required changes.
Sweet.
Should there be a follow-up to comment out the locale setting code in, too?
Leaving open for comment 26.
(In reply to Axel Hecht [:Pike] from comment #26)
> Sweet.
>
> Should there be a follow-up to comment out the locale setting code in
>
> GeckoThread.java#l86, too?
The locale setting code in GeckoThread.java does not harm now. I would like to keep it as it is until we know for sure the multi-locales are working fine.
Comment on attachment 592969 [details] [diff] [review]
Patch
[Approval Request Comment]
Regression caused by (bug #): Bug 635432
User impact if declined:
Multi-locale builds will show Preferences screen in English and then change to system's locale on rotation. This will cause confusion to users.
Testing completed (on m-c, etc.): Landed on m-c on 01/31
Risk to taking this patch (and alternatives if risky): None. Gecko is aware of the locale that the device is in. We are just blocking it from overwriting the locale.
String changes made by this patch: None.
(In reply to Sriram Ramasubramanian [:sriram] from comment #18)
> Since I'm unable to make multi-locale builds, I couldn't try testing this.
why are you unable to make multi-locale builds?
Somehow different locales aren't being packed when i build and do | make package |. Not sure what am i doing wrongly.
you need to re-pack for multilocale builds, using this script
After the merge-%; chrome-% dance for a few locales, you call
make package MOZ_CHROME_MULTILOCALE="de fr pl it zu" AB_CD=multi
Comment on attachment 592969 [details] [diff] [review]
Patch
[Triage Comment]
Approved for Aurora 12 and Beta 11.
I believe this still needs to be fixed on Beta 11.
mozilla-aurora build 2012-02-03 nl confirmed fixed
sorry can not confirm it, i did not installed the multi version i noticed today.
i can not find a aurora multi build. | https://bugzilla.mozilla.org/show_bug.cgi?id=713464 | CC-MAIN-2016-26 | refinedweb | 1,675 | 64.71 |
A.
Download the latest xfilesBinary or xfilesSource archive.
Download jpython.jar if you want to do scripting (see scripting section below). jpython.jar is also needed to recompile the source, though small changes to the source will let it recompile without it. jpython.jar is not needed if you do not want to script.
Download the nativeFile archive only if you want native Unix link detection (see discussion at end). You probably do not need this. will work. Similarly, the Xfiles program can live anywhere. If you follow the installation below you will need to launch the program from the directory where it resides (this does not restrict its function). Changes to the shell files to make it run from any directory are evident.
The program and this documentation refer to client and server machines and directories. These are interchangable -- the server merely refers to the machine that the server is running on (see below).
xfiles.jar java archive containing the program xfilesClient shell program to launch the client gui xfilesServer shell program to launch the server jpython.jar optional, needed for scripting
On my Redhat 5 system, all I do is run netcfg and turn on one of the configured interfaces. Doing /sbin/ifconfig <interface> on would probably work too.
You may need to enable the hosts in the Unix .rhosts file.
If you know networking please send me more authoritative(spell?) instructions
for what's needed here.
xfilesServerThis should print out the server hostname, then "XfilesServer is running".
If you get an error that says "java.lang.ClassNotFoundException: <name of class>", one of the paths in the xfilesServer file is not set correctly.
xfilesClient duckpond /usr/jfk/devl /home/jfk/DevlThe command above will compare the file tree starting at /usr/jfk/devl on the client machine with the file tree /home/jfk/Devl on the machine 'duckpond'.
When the GUI comes up, select a directory and press the start button.
The client partially scans the client file tree at startup to allow you to select a sub-tree of the specified root if desired.
To save you time Xfiles first scans the whole tree before reporting any differences (this may take a while); all differences are then reported consecutively.
After synchronizing one directory, you can select another in the GUI and press start again. Currently the GUI file tree does not update to reflect deletions in earlier runs, however (see the TODO section).
Xfiles writes a file XFILES.LOG listing the selected actions.
xfiles.py can define the following functions:
Please e-mail problems, successes, fixes, and fears
to: zilla@computer.org
Send email with subject line XFILES to be notified of updates.
import java from java.io import File import java.lang.Runtime runtime = java.lang.Runtime.getRuntime() # ignore files that end with these strings skipextensions = ['RCS', ',v', '.o', '.so', '.a', '.class', '.jar'] # return 1 if xfiles should visit this path, else 0 # def pathFilter(path): print 'pathFilter(%s)' % path if path[len(path)-1] == '~': # emacs backup file return 0 if path == 'so_locations': return 0 spath = java.lang.String(path) for ext in skipextensions: if spath.endsWith(ext): return 0 return 1 # called before copying over a file # (check out from RCS if appropriate) # def preCopy(path): name = filename(path) spath = filedir(path) spath = spath + '/RCS/' print 'name = %s' % name if exists(spath): # RCS/ exists spath = spath + name + ',v' print 'spath = %s' % spath if exists(spath): # RCS/file,v exists docmd('co -l -f %s' % path) # called after copying over a file # (check in to RCS if appropriate) # def postCopy(path): name = filename(path) spath = filedir(path) spath = spath + '/RCS/' print 'name = %s' % name if exists(spath): # RCS/ exists spath = spath + name + ',v' print 'spath = %s' % spath if exists(spath): # RCS/file,v exists docmd('ci -u -f -mXfiles_copy_checkin %s' % path) # helper commands def docmd(cmd): if 1: print cmd pid = runtime.exec_(cmd) pid.waitFor() def filedir(path): result = File(path).getParent() if not result: if isabs(path): result = path # Must be root else: result = "" return result def filename(path): return File(path).getName() def exists(path): return File(path).exists() def isabs(path): return File(path).isAbsolute()
Because Xfiles traverses a directory tree, it needs to be able to distinguish between "real" files and links (aliases) so as to avoid an infinite loop in the case where a link points to a directory above itself. There are two approaches to this, and you need to select which one you will use:
For most purposes it will probably be fine to use the built-in code. Read the appendix Links/Aliases/Shortcuts in Java for more details on this issue. | http://www.ibiblio.org/pub/linux/system/backup/xfiles.html | crawl-002 | refinedweb | 775 | 63.8 |
Yes, the dot regex matches whitespace characters when using Python’s
re module.
Consider the following example:
import re string = 'The Dot Regex Matches Whitespace Characters' match = re.findall('.', string) print(match) ''' ['T', 'h', 'e', ' ', 'D', 'o', 't', ' ', 'R', 'e', 'g', 'e', 'x', ' ', 'M', 'a', 't', 'c', 'h', 'e', 's', ' ', 'W', 'h', 'i', 't', 'e', 's', 'p', 'a', 'c', 'e', ' ', 'C', 'h', 'a', 'r', 'a', 'c', 't', 'e', 'r', 's'] '''
Try it yourself in our interactive Python shell (click “run”):
The dot matches all characters in the
string–including whitespaces. You can see that there are many whitespace characters
' ' among the matched characters.
Need more info? Watch the simple tutorial video about the dot regex if you need more clarifications:
Note that the dot matches whitespace characters in all other regular expression languages I have found on the web—no matter the programming language or framework.
Do you want to master the regex superpower? Check out my new book The Smartest Way to Learn Regular Expressions in Python with the innovative 3-step approach for active learning: (1) study a book chapter, (2) solve a code puzzle, and (3) watch an educational chapter video.. | https://blog.finxter.com/does-the-dot-regex-match-whitespace-characters-in-python/ | CC-MAIN-2021-43 | refinedweb | 194 | 64.54 |
At long last, here is the code! Use it in good health (:
There are some example runs of the program below it as well.
#include
#include
#include
// )
{
// ‘m’ and ‘r’ are mixing constants generated offline.
// They’re not really ‘magic’, they just happen to work well.
const unsigned int m = 0x5bd1e995;
const int r = 24;
// Initialize the hash to a ‘random’;
}
struct SShuffler
{
public:
SShuffler(unsigned int numItems, unsigned int seed)
{
// initialize our state
m_numItems = numItems;
m_index = 0;
m_seed = seed;
// calculate next power of 4. Needed sice the balanced feistel network needs
// an even number of bits to work with
m_nextPow4 = 4;
while (m_numItems > m_nextPow4)
m_nextPow4 *= 4;
// find out how many bits we need to store this power of 4
unsigned int numBits = 0;
unsigned int mask = m_nextPow4 – 1;
while(mask)
{
mask = mask >> 1;
numBits++;
}
// calculate our left and right masks to split our indices for the feistel
// network
m_halfNumBits = numBits / 2;
m_rightMask = (1 << m_halfNumBits) - 1; m_leftMask = m_rightMask << m_halfNumBits; } void Restart() { Restart(m_seed); } void Restart(unsigned int seed) { // store the seed we were given m_seed = seed; // reset our index m_index = 0; } // Get the next index in the shuffle. Returning false means the shuffle // is finished and you should call Restart() if you want to start a new one. bool Shuffle(unsigned int &shuffleIndex) { // m_index is the index to start searching for the next number at while (m_index < m_nextPow4) { // get the next number shuffleIndex = NextNumber(); // if we found a valid index, return success! if (shuffleIndex < m_numItems) return true; } // end of shuffled list if we got here. return false; } // Get the previous index in the shuffle. Returning false means the shuffle // hit the beginning of the sequence bool ShuffleBackwards(unsigned int &shuffleIndex) { while (m_index > 1)
{
// get the last number
shuffleIndex = LastNumber();
// if we found a valid index, return success!
if (shuffleIndex < m_numItems) return true; } // beginning of shuffled list if we got here return false; } private: unsigned int NextNumber() { unsigned int ret = EncryptIndex(m_index); m_index++; return ret; } unsigned int LastNumber() { unsigned int lastIndex = m_index - 2; unsigned int ret = EncryptIndex(lastIndex); m_index--; return ret; } unsigned int EncryptIndex(unsigned int index) { // break our index into the left and right half unsigned int left = (index & m_leftMask) >> m_halfNumBits;
unsigned int right = (index & m_rightMask);
// do 4 feistel rounds
for (int index = 0; index < 4; ++index) { unsigned int newLeft = right; unsigned int newRight = left ^ (MurmurHash2(&right, sizeof(right), m_seed) & m_rightMask); left = newLeft; right = newRight; } // put the left and right back together to form the encrypted index return (left << m_halfNumBits) | right; } private: // precalculated values unsigned int m_nextPow4; unsigned int m_halfNumBits; unsigned int m_leftMask; unsigned int m_rightMask; // member vars unsigned int m_index; unsigned int m_seed; unsigned int m_numItems; // m_index assumptions: // 1) m_index is where to start looking for next valid number // 2) m_index - 2 is where to start looking for last valid number }; // our songs that we are going to shuffle through const unsigned int g_numSongs = 10; const char *g_SongList[g_numSongs] = { " 1. Head Like a Hole", " 2. Terrible Lie", " 3. Down in It", " 4. Sanctified", " 5. Something I Can Never Have", " 6. Kinda I Want to", " 7. Sin", " 8. That's What I Get", " 9. The Only Time", "10. Ringfinger" }; int main(void) { // create and seed our shuffler. If two similar numbers are hashed they should give // very different results usually, so for a seed, we can hash the time in seconds, // even though that number should be really similar from run to run unsigned int currentTime = time(NULL); unsigned int seed = MurmurHash2(¤tTime, sizeof(currentTime), 0x1337beef); SShuffler shuffler(g_numSongs, seed); // shuffle play the songs printf("Listen to Pretty Hate Machine (seed = %u)\r\n", seed); unsigned int shuffleIndex = 0; while(shuffler.Shuffle(shuffleIndex)) printf("%s\r\n",g_SongList[shuffleIndex]); system("pause"); return 0; } [/cpp]
| http://blog.demofox.org/2013/07/06/fast-lightweight-random-shuffle-functionality-fixed/ | CC-MAIN-2017-22 | refinedweb | 614 | 53.04 |
6 Hostname and Hostid¶
The
hostname and
hostid are the two basic computer identifiers used in MOSEK license files. The
hostname is just the standard host name and
hostid is usually identical to the MAC address of a network card.
Command line
The easiest way to obtain
hostname and
hostid is to open the shell, go to the directory with MOSEK binaries (
<MSKHOME>/mosek/10.0/tools/platform/<PLATFORM>/bin/) and run the command
mosek -f
It will produce output similar to
MOSEK Version 8.1.0.23 (Build date: 2017-8-24 15:37:04) Copyright (c) MOSEK ApS, Denmark. WWW: mosek.com Platform: Linux/64-X86 FlexLM Version : 11.14 Hostname : myoptserver Host ID : "b083fa34ad2c" License path : /home/mosekuser/mosek/mosek.lic Operating system variables LD_LIBRARY_PATH : *** No input file specfied. No optimization is performed. Return code - 0 [MSK_RES_OK]
Python
If you only installed MOSEK in Python (via Conda, Pip or otherwise) then you can get the same output by running the following code. Note, however, that for floating licenses you will still have to download the full MOSEK distribution package to obtain the license server binaries.
import mosek, sys env = mosek.Env() env.set_Stream(mosek.streamtype.log, sys.stdout.write) env.echointro(1)
Other ways
If you cannot run MOSEK at this point, other methods to obtain the
hostname and
hostid are outlined below.
6.1 The Hostname¶
To obtain the host name open a shell and execute the command:
hostname
6.2 The Host ID¶
A purchased MOSEK license is tied to a particular computer via a unique identifier called a host ID. Usually the host ID is identical to the MAC address of a network card. Therefore, the machine needs to be equipped with a network card. However, an actual network connection is not needed as the licensing system requires only the number encoded in the network card.
Important
Please follow the instructions below, and NOT use the shell command
hostid.
6.2.1 Windows: How to get the Host ID¶
In the Start Menu under All Programs select Mosek Optimization Tools 10.0 and click on Generate HOSTID. MOSEK will display the hostname and the host ID and generate a file named
hostid.txt in the user’s home directory e.g
%UserProfile%\hostid.txt
Please provide the
hostid.txt file whenever the host ID is requested.
6.2.2 Linux: How to get the Host ID¶
To use the license manager the Linux standard base 3.0 must be installed. This package is called
lsb-base or
lsb in most Linux distributions..
Troubleshooting
If you get an error similar to:
./lmutil: No such file or directory
then most likely the Linux Standard Base
lsb package is not installed.
6.2.3 macOS: How to get the Host ID¶. | https://docs.mosek.com/10.0/licensing/hostid-hostname.html | CC-MAIN-2022-27 | refinedweb | 464 | 66.33 |
This page describes how to use the in-game editor overlay with its command line interface and documents many internal game variables (cvars).
Index
Editor Overlay
Access the editor overlay either by going straight to Sandbox Mode from the Main Menu, or within gameplay by pressing “\”. The overlay is primarily command driven, but it has a number of direct keybindings, which are described in this section. Mouse over the console window to enter commands. The overlay is always in either FLY, COMMAND, or EDIT mode.
Read this Introduction to Editor Mode first.
Keybindings
- ‘1’ to switch to FLY mode (takes control of selected ship). Same controls as in-game flying.
- ‘2’ to switch to COMMAND mode, which allows selecting and manipulating ships. Within command mode, left click to select ships and right click to set ship destinations. This is the default mode.
- ‘3’ to switch to EDIT mode, which allows selecting and manipulating individual blocks. Same controls as the in-game ship editor.
- WASD pans the camera in command and edit mode
- Double click to move the deploy location, which appears as a blue circle overlaid with an X. Commands which spawn things will spawn at this location.
- ‘p’ to freeze the game simulation
- ‘o’ to single step the game simulation
- ‘v’ hides the overlay (toggle)
- ‘/’ repeats the last command
- ESC, ‘~’, and ‘\’ exit command mode
- Ctrl-s runs the “level_save” command
- ‘[‘ and ‘]’ adjust the size of the console
- ‘{‘ and ‘}’ adjust the time factor (speed up, slow down the game simulation)
Selected ships have green boxes around them. Certain commands (export, ssave) only work on the primary selection, which additionally has a blue box around it. Standard and primary selection work in all modes – in fly mode the ship under control is considered selected.
Sectors are delimited by green lines. Sandbox mode has a single large sector. In the main game, 9 sectors are loaded at a time. Commands which manipulate sectors (gen, region, level_save, etc.) work on the sector in the middle of the screen.
Cvars reference
The CVARS are internal variables that control many aspects of Reassembly. Cvars can be modified immediately within the console via the “cvar” command, or by modifying the cvars.txt in the save directory. cvars.txt is loaded at game startup.
For convenience, this guide omits the ‘k’ prefix that appears in front of all cvar names: e.g. kBigTimeStep is listed as BigTimeStep, kDamageReproduceCooldown as DamageReproduceCooldown, etc.
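As an illustration, a cvars.txt in the save directory might contain lines like the following (the names are real cvars, but the values shown are illustrative, not defaults):

kSandboxSize = 10000
kAgentCount = 50
kBloomIntensity = 0.5

Note that cvars.txt uses the full names including the ‘k’ prefix, and is only read at game startup; use the in-console “cvar” command to change values immediately.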
Time Step:
BigTimeStep, TimeStep and SuperTimeStep
The timestep commands determine how often certain aspects of the AI runs. The time unit is in seconds and the smaller you make these amounts, the more taxing it will be on the processor.
DamageDefendTime
This represents the amount of time it takes for a damaged enemy to cease pursuing you.
DamageReproduceCooldown
This represents the amount of time it will take for an enemy to reproduce a child ship after being damaged.
BlockBigTimeStep & BlockSuperTimeStep
Controls how often the AI makes a decision with regards to blocks. This does not include Launcher or Tractor Beam blocks. BlockSuperTimeStep checks on Seed blocks only.
Fleets:
FleetComposeFuzzyP
Scale at which to slightly randomize between ships with similar P when composing fleets.
Sandbox
ConsoleLines
Sets how many lines are in the Console when loaded.
SandboxSize
Sets the radial size of the sandbox.
GarbageCollectEnable
Removes dead ships, and blocks that leave the sandbox bounding area.
Reproduce:
AIParentReproduceCooldown
After a ship is created, this variable tells the AI how long to wait until the reproduce ability is enabled.
AIParentPFleetRatio
This variable determines how large of a fleet AI spawner ships will attempt to maintain. All excess ships spawned will be released as rogue ships, which will not be considered children of the fleet. The ratio works within this formula:
Mothership P * kAIParentPFleetRatio = Fleet size
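As a worked example (the values are hypothetical, not defaults): with kAIParentPFleetRatio = 0.5, a mothership with P = 1000 will attempt to maintain a fleet totalling 500 P:

1000 (Mothership P) * 0.5 (kAIParentPFleetRatio) = 500 (Fleet size)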
Pathfinding:
AIPathMaxQueries
The AI pathfinder will give up if a path requires more than this number of turns and straight-line segments. Making this variable higher equates to more CPU calculations.
AIPathTimeout
This variable represents the time, in seconds, that a path will be considered as valid. It’s an upper bound, meaning that the AI will attempt to repath every X seconds.
AI Targeting:
SensorRangeMultiplier
Allow globally increasing ship sensor range (set it to e.g. 2, 3).
AITargetMin & AITargetThreshold
This combination of variables determines what ships the AI targets. Both of these variables refer to a ship’s deadliness.
The AI will not target anything with deadliness below AITargetMin unless AITargetThreshold * the ship’s own P is less than AITargetMin.
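In pseudocode, the rule above works out to something like this (a sketch of the described behavior, not the engine’s actual code):

targetable = (target.deadliness >= AITargetMin) or (AITargetThreshold * ship.P < AITargetMin)

In other words, weak targets are only considered by ships that are themselves small enough.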
AutoTargetRadius
This variable controls the units of distance around the cursor that a ship will be targeted. Decreasing this requires more accuracy from the player.
AutofireSpreadWeight
The larger this variable is, the more likely your ship’s turrets are to spread fire across multiple targets rather than concentrating on smarter choices.
BadAimErrorAngle
Angle, in radians, that controls the BAD AIM flag.
Agents:
The term “Agents” refers to a ship that is user-created, that will appear within your game as you play.
AgentCount
This represents the number of Agents that spawn when a new game is created.
AgentMaxDeadly & AgentMaxShips & AgentMinDeadly
These three variables control the size and deadliness of an agent fleet. When spawning an Agent fleet, the game will continue to produce ships until either the Max Ships or Max Deadly variable is reached.
AgentMinSpawnDist
During the initial map generation, an Agent fleet will not be placed any closer to the player’s ship than this variable, because Agents tend to be more dangerous than most of the game’s starter ships.
AgentSpeed
Controls Agent movement speed in the universe.
AgentDirectory
Sets the directory from which to pull custom agents.
Effects:
BeamGlow & BeamHalo
These two variables affect how lasers look. Both control a width in distance units, and each controls a different part of the beam. The bigger the number, the more visible the laser.
BloomBlocks
Controls the bloom effect for blocks
BloomBlurRadius
This is the size, in pixels, of the blur effect.
BloomBrightness
This variable controls the bloom brightness.
BloomIntensity
This variable (between 0 and 1) controls how much bloom will be on.
BloomRadius
This is the size, in pixels, of the bloom effect.
BloomResFactor
This controls the bloom associated with weaponry blocks.
BloomScale
This variable controls how many pixels should be used for the bloom effect.
BloomTonemap
Turns on/off tonemapping.
BlurFactor & BlurMenuRadius
These variables control the size of the blur used for background and menu.
BlurMinDepth
If objects in the z dimension meet or exceed this value, the blur effect will turn on.
DisruptIndicatorAlpha & DisruptIndicatorSize
Control the size of the effect that appears on the screen edges when the player ship is hit.
DopplerFactor
Controls the SFX Doppler effect.
DiscoverDistance
Controls the size of the dotted line around objective objects. Allows the player to activate objectives more easily.
ParticleExplosionColor0
Allows you to set the color of explosions.
ParticleSmokeTime
Allows you to set how long smoke particles last (when blocks are damaged).
ParticleFireColor0
Allows you to set the color of fire (when blocks are damaged).
Blocks:
HealMaxDist
Controls the maximum number of blocks through which a healing laser can heal from point of contact with entity.
BlockElasticity & BlockFriction
These two variables control the physics of blocks: namely the elasticity and friction as the names suggest.
BlockExplodeChance
This variable controls the probability that a block will create an explosion (upon its destruction), causing further damage. It operates on a “1 in X” chance-based system, meaning that the higher the number, the lower the chances of an explosion occurring. Setting this to 1 will pretty much cause chaos.
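For example (the value here is illustrative, not the default):

kBlockExplodeChance = 20

With this setting, each destroyed block has a 1 in 20 (5%) chance of exploding; raising the value to 100 drops that to 1%, while setting it to 1 makes every destroyed block explode.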
BlockImpulseDamage & BlockSolveDamage
These two variables control the impact damage for blocks. Impulse uses mass and velocity to calculate the damage to a block while Solve represents the minimum amount of impact damage that will be given.
BlockOverlap
This variable grants a certain amount of overlap to blocks during construction. Setting to 0 may break the game.
CommandHaloDeadliness & CommandHaloSize
These two variables control the light that emits from the Command block. Halo Deadliness controls the size as proportional to P value while Halo Size controls the size as proportional to the block size.
Memory:
BlockMemoryPoolSize
This variable controls how much memory is allocated for block storage. The default, 400kb, should be more than sufficient.
MemPoolMaxChain
This controls the total number of memory pools. The default is more than the vast majority of players will need.
Camera:
CameraAutoZoom & CameraEnablePan
Enable and disable the automatic camera movement.
CameraPanTime
This controls the auto pan feature.
CameraWheelZSpeed
Controls the mouse wheel sensitivity of the camera’s zoom movements.
Taking Images:
CleanBackground
Turning this on allows for a black background. Useful for taking screenshots.
ClusterImageSize
This variable controls the size of images for DumpShipImages.
DumpShipImages
When turned on, the game will take and store a picture of every ship upon start.
Resource Collection:
CollectAnimationTime & CollectBlinkTime
These control how long it takes the resource animation to complete.
DeferredCollectInterval & DeferredCollectRadius
Controls how often the resources are collected and how the grouping of collections works.
Construction Editor
ConstructorBlockLimit
This number controls how many blocks can be placed in the editor.
ConstructorViewBounds
The view bounds control the minimum and maximum view sizes for the ship editor.
Fonts
DefaultFontFile, FallbackFontFile, MonoFontFile, SymbolFontFile & TitleFontFile
Each of these allow you to specify a specific and game-recognizable font for the specified purpose.
Utility
kWriteJSON
Writes blocks.json to the Reassembly data directory.
WriteBlocks
Writes blocks.lua to the Reassembly data directory.
ModExportFactionStart
Controls mod export faction ID start
ModExportBlockStart
Controls mod export block ID start
PortRenderNormals
Helps modders debug port normals.
Command Reference
Commands
help [COMMAND]:
Lists documentation for commands.
Usage Example
> help add
NOTE: This would list the two commands ADD and ADDBLOCKS, along with their variables and their proper syntax.
find [SEARCH STRING]:
List commands that match the search string you type. Also searches the command help string.
sopen <NAME>:
Open a block cluster by name.
ssave [NAME]
Save the selected block cluster using a name of your choosing.
palette <FACTION>:
Palette draws all unique blocks used to build a given faction. The primary difference between PALETTE and MINPALETTE is that PALETTE will also draw variants of the sized blocks.
minpalette <FACTION>:
Minpalette draws the minimum number of blocks for a given faction. The primary difference between PALETTE and MINPALETTE is that PALETTE will also draw variants of the sized blocks.
fleetpalette <FACTION>:
Type the command along with the faction number to see a row of deactivated ships, summing up all ships in your chosen faction. Protip: you can type “fleetp” as a shortcut.
activate:
This command turns on the AI for the selected ship. This requires a ship be present and selected to work and effectively makes it start moving, shooting, and reacting to other objects.
deactivate:
The opposite of the ACTIVATE command, this command turns off the AI for the selected ship. This requires a ship be present and selected to work and effectively makes it stop moving, shooting, and reacting to other objects.
block
Spawn a block at the cursor. Blocks are parsed in the same format as in blocks.lua. You can also specify just a block ID to spawn it directly.
Usage Example
> block 3
> block { features=thruster, shape=octagon }
command
Modify command/ai fields. See the definition of “Command” and “ECommandFlags” appendix on the Docs page.
Usage Example
> command {flags=ATTACK, sensorRadius=9999, resources=9999}
Modding Tools
reload [VARIABLES]
Reload various modding data files, including block definitions, cvars, and shaders.lua
refaction <FACTION>:
REFACTION takes a ship from one faction and reinterprets it, using the blocks of another faction. Will try its best to make the change, but will not do it if comparable blocks are not available. The command will attempt to suggest a compatible faction if your selected faction will not work.
Usage Example
> refaction 12
recolor <COLOR0> <COLOR1>:
Recolor whatever object is selected to the two hand-picked colors you provide. Please use the following format for each color: 0xRRBBGG, the same as used in HTML.
Usage Example
> recolor 0x010199 0x990101
Sector/World
fleet <FACTION> [P TOTAL] [COUNT]:
At the cursor position, spawn a faction of ships meeting your set criteria of Power count and Number. The first variable tells which faction (by number of faction), the second variable is the P total (the approximate amount of P you’d like to see used to generate your fleet), and the number of ships you’d like to see appear. Note that your P is spread out among all the ship count, so a large ship count will build smaller ships against your P total.
Usage Example
> fleet 8 200000 30
ship <SHIP NAME>:
Summon a ship by name. Use the TAB button to see what ships are available.
add <SHIP REGEX> [COUNT]:
Spawn and scatter ship objects throughout the sector. Requires a ship name, defaults to a number of 1.
Usage Example
> add 8_supercorvette 5
asteroid <SIZE> <SIDES 1> [SIDES 2] [SIDES 3]
Generate asteroids using up to 3 different shape types. First determine the size (how many blocks the asteroid will be comprised of). The next three variables all determine the shapes of the blocks that make up the asteroid. The variables mean the number of sides.
Usage Example
> asteroid 50 3 5 8
tiling <SIZE> <TYPE>
Generate a tiling asteroid as large as you’d like. Something to drive around, or through, depending on the mining abilities of your spaceship. The SIZE variable is the number of blocks. The TYPE variable represents programmed tile sets assigned numbers 0 through 6.
Usage Example
> tiling 100 3
plant <TYPE> [SUBTYPE] [SIZE]
Nearly identical to the APLANT command with the following differences: you’re only creating a single plant and that plant doesn’t require a surface, it spawns at the cursor position.
Usage Example
> plant 2 3 50
aplant <TYPE> [SUBTYPE] [SIZE] [COUNT]
Spawn a random assortment of plants using the following variables. Note that this command attempts to place plants given available surfaces in the active sector, but is not always successful due to surface area availability and the shape of the plant generated. Use of APLANT may require multiple attempts before a success is noticed.
Note also that types 1, 2 and 3 correspond to green, blue and pink, respectively, in increasing order of resource generation.
TYPE
1 (blue), 2 (pink), or 3 (green). These colors only distinguish how often resources are produced, from least to most frequent. If a higher-valued number is used, the variable defaults to 1.
SUBTYPE
This represents the color of the flowering elements.
SIZE
This represents how many blocks the plant will have. Larger plants have a lower probability of successful generation.
COUNT
Number of plants to place in the play area. A higher count request will merely make more attempts to place a plant, and will not place copies of a specific plant.
Usage Example
> aplant 2 3 65 100
fill <FLAGS> <FILLPERC> <SIZE>:
Fill your sector with a customized asteroid field. The first variable are the flags. Use the TAB button to see what asteroid types are available. The second variable is the sector fill percentage. The third variable is the size of each asteroid in number of blocks.
Usage Example
> fill EXPLOSIVE 20 10
penrose [BLOCK SCALE] [ITERATIONS]:
Generate a nifty penrose asteroid. The penrose is, for all intents and purposes, circular in shape, but is comprised of a set of tiles placed in the very nifty Penrose tiling sequence. Note that both values should be kept to 9 or less.
Usage Example
> penrose 4 5
region <FACTION>:
This command uses the world generation system to fill the current sector with ships and asteroids for the supplied faction based on the region specifications in regions.lua.
target [SIZE] [HEALTH] [SHIELD]:
Generate a test dummy to attack and taunt with all your muster. The HEALTH variable gives health PER BLOCK, so be careful not to generate a target dummy that will overpower your weak little ships. The SHIELD variable merely turns on (1) or off (0).
Usage Example
> target 100 10000 1
Level File Manipulation
level_save [NUMBER]:
Save your current level as a numbered LUA file. Defaults to the currently loaded level file, which in turn defaults to 0.
revert [LEVEL]:
Revert back to previous save or load a desired save state by specifying the level number.
gen [LEVEL]:
Use the GEN command to randomly generate a level (similar to going to a wormhole) or callout a saved file to load it.
clear
Deletes everything in the sector.
Export
export <NAME> [AUTHOR]:
Export your selected ship with this command. Type the ship’s name as well as your name (or penname) to generate a LUA file with the ship’s information. The command line will display where you can go to retrieve your ship.
Usage Example
> export Bootsy Fernando
import [PATH]
Use the IMPORT command in conjunction with a ship file path (LUA file), to pop a ship into your level.
IMPORT can also load fleet files (usually .lua.gz) as downloaded from the wormhole feed.
agent
Spawn a complete agent fleet. The word “agent” refers to a complete fleet of ships designed by one player – parallel to the player’s own fleet. The command requires no further specifications. Use this command several times consecutively for an instant AI battle.
upload
This command will upload your fleet to the Reassembly server. They’ll fly out into the ether and begin destroying other Reassembly players, who themselves are completely unaware of what fate awaits them.
This is similar to the functionality used when entering a wormhole.
constructor [FACTION NAME]
Open the ship constructor with the faction of your choice. Immediately begin editing ships in that faction.
Usage Example
> constructor 7
Utility
options [NAME=VAL]
Control the game’s options from the command line. Not nearly as easy as just going to the Options menu, but some folks like a challenge.
Usage Example
> options musicVolume=90
debug [TYPE]
Use the debug command in conjunction with one or more of the available types to toggle the debug information on that type. Use the TAB button after typing “debug” to see available options.
AI – toggle AI debugging overlay
PROFILE|STUTTER – toggle performance graphs
Usage Example
> debug ai
zonefeatures [TYPE]
Toggle global flags for the current gameplay simulation zone. Available flags:
- RESOURCES – zone has resource packets. Block regeneration rate is also reduced with resources disabled.
- SPACERS – a physics optimization that replaces the collision shape for each block with a convex hull under certain conditions. See also “debug SPACERS”,
- BACKGROUND – enable background stars and halo graphics. Will draw a grid when background graphics are disabled.
- PARTICLES – enable particle effects (thruster trails, explosions, etc.).
- UNDO – enable snapshotting of rectangles of zone state before editing operations to support undo (used in sandbox by default).
- UNDO_ALL – enable snapshotting of the entire zone on editing operations (used in ship constructor by default).
- NODEBRIS – destroy all detached blocks when a block cluster splits instead of just disconnecting them.
Usage Example
> zonefeatures SPACERS
screenshot
Take a screenshot. Also you can use F2.
Scripting and Utility
exit
The EXIT function is similar to the \ key in that it quits the command line.
quit
Quit to desktop. Do not pass go. Do not collect $200.
freeze
Very similar to simply pressing the “P” button on the keyboard, FREEZE stops the game in its tracks.
cursor <X> <Y>:
Set an absolute position for the cursor using x/y coordinates.
Usage Example
> cursor 100 250
rcursor <X> <Y>:
Short for "relative cursor", it's related to the CURSOR command. Move the cursor a precise number of steps to a new location. Use the mouse and double-click feature if you like to live dangerously!
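Usage Example (the offsets are arbitrary illustrative values)
> rcursor 100 -50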
repeat <TIMES> <COMMAND>:
Repeat a single command multiple times. It could be useful…
Usage Example
> repeat 3 aplant 2 2 50 10
view [ENABLE]
Turning this on (set to 1) will make the console disappear. Pressing any other key will reenable the console. The quickkey command of “v” will also open/close the console.
God Powers
explode [RADIUS] [DAMAGE]
If your sector is looking a little too smug for its own good, the explode command is an excellent solution. If in the Sandbox, set your cursor position, else just define a radius and amount of damage to see some chaos in action.
Usage Example
> explode 1000 1000
god
Use the GOD command to make your ship (or the selected ship) invincible.
noclip
Turn off collision detection for a selected ship. It’s like being a ghost ship!!!! Oooooooooooooooooh
resource [QUANTITY]
Adds a specified number of resources to the cursor position.
wormhole
Use the WORMHOLE command to slap a big ol’ swirling sucky thing on the screen… then fly your butt into it.
reveal
Sick of exploring? This nifty command shows where everything is on the map! Cheating is fun!
conquer
Unlocks all sectors in favor of your faction, but does not destroy all opposing ships. Essentially, this action redraws the map, hiding the possessors of all sectors from you.
Fonts/Text
write <CHARS> [NAME]:
Always wanted to write terrible poetry in a font of purple triangular shapes with blue plants growing out of each letter? Well, now you can. Simply create your fancy font using the SFONT command and then type out your poorly selected words using atrocious spelling and grammar using the WRITE command. You’re welcome.
Usage Example
> write "It was a dark and stormy night" myfont
sfont <CHARS> [NAME]:
Create and save your own font using Reassembly blocks. It’s actually kinda nifty. Select an object to save it as a font.
Usage Example
> sfont a myfont
wfile <filename> [FONT_NAME]:
Always wanted to read The Great Gatsby written in a font of purple triangular shapes with blue plants growing out of each letter? Well, now you can. Simply create your fancy font using the SFONT command and then point the console in the direction of your great American novel using the WFILE command.
Usage Example
> wfile c:/mahbooks/bucketlist/tehgats.txt myfont
Tournament
pool
Runs each ship against all other selected ships.
bracket
Start a single-elimination tournament bracket with all selected ships.
There are some Cvars not shown here, primarily the physics controls and button options. Things like KSpeedOfSound, KhasDemoButton, KHeadlessMode and KSpinnerRate for example. Is there another full documentation or wiki i can refer to to sate my curiosity?
The cvars that are not documented are not very interesting if you aren’t developing Reassembly :-D.
It is possible to use a command to unlock all factions, including modded ones? Or do you have to go into configs to do that?
You have to edit configs but it’s pretty simple. Just add the faction numbers to “unlock.lua” in the save directory. It should look something like
{factions={2, 3, 4, 11, 12, 15}}
Please expand on the functionality of each zoneFeature, thanks. Also, how would one go about spawning the gold farmer plants surrounding the asteroids spawned with asteroid/fill/region? For example I’m making a mod that uses plants and I’d like to test their resources in sandbox first.
I added a description of each flag to the description for the zonefeature command. For testing plants, you can use the “add” command to a lot of a plant design quickly, using the same logic as the world generator. For example, “add 5_crop 100” will try to add 100 of the crop plants, placing them randomly. You can also use the “region” command to actually read your regions.lua file and fill the sector based on the description, which is ideal for testing a mod.
how do you generate a level it says gen level 10 failed and so on for a few different numbers why wont it work?
Try the “region” command – e.g. “region 8”.
You can also generate custom levels by combining “fill”, “aplant”, “fleet”, and similar commands.
How do I select another ship in console mode?
Whatever, Got it.
See the (newly added) Editor Overlay section.
When I save my recolored ships the new colors aren’t saved. How do I save my new colors?
How are you saving the ship, and which command are you using to recolor?
I think he’s using ssave instead of export | http://www.anisopteragames.com/sandbox-console-docs/ | CC-MAIN-2020-29 | refinedweb | 4,027 | 65.62 |
* Zachary Amsden <zach@vmware.com> wrote:

> > but in exchange you broke all of 32-bit with CONFIG_PARAVIRT=y.
> > Which means you did not even build-test it on 32-bit, let alone boot
> > test it...
>
> Why are we rushing so much to do 64-bit paravirt that we are breaking
> working configurations? If the developement is going to be this
> chaotic, it should be done and tested out of tree until it can
> stabilize.

what you see is a open feedback cycle conducted on lkml. People send
patches for arch/x86, and we tell them if it breaks something. The bug
was found before i pushed out the x86.git devel tree (and the fix is
below - but this shouldnt matter to you because the bug never hit a
public x86.git tree).

	Ingo

Index: linux/include/asm-x86/paravirt.h
===================================================================
--- linux.orig/include/asm-x86/paravirt.h
+++ linux/include/asm-x86/paravirt.h
@@ -619,6 +619,7 @@ static inline void write_cr4(unsigned lo
 	PVOP_VCALL1(pv_cpu_ops.write_cr4, x);
 }
 
+#ifdef CONFIG_X86_64
 static inline unsigned long read_cr8(void)
 {
 	return PVOP_CALL0(unsigned long, pv_cpu_ops.read_cr8);
@@ -628,6 +629,7 @@ static inline void write_cr8(unsigned lo
 {
 	PVOP_VCALL1(pv_cpu_ops.write_cr8, x);
 }
+#endif
C++
Documentation for IB Computing Exercises
- Editor and Development Environment
- Compiling and Running
- "for" loops
- Ex 1: Estimating pi by series summation (3 marks)
- Functions
- Ex 2: Finding roots using the Bisection method (3 marks)
- Ex 3: Matrices and Graphics (3 marks)
- The vector class
- Using class
- Longer Exercises
- Further Reading
This course consists of 3 exercises on this page (which are mostly revision) followed by some longer problems. You should read the handout before your first session. You can begin the work before the start of the lab session if you wish. The lab sessions are compulsory and there are penalties for lateness. To receive the qualification mark (12 marks) you need to complete at least 4 exercises. You should complete - and get marked - at least one problem in each lab session. A maximum of 16 marks are available. Students who have programmed before are advised to try the 4 longer exercises.
The CUED Tutorial Guide to C++ Programming has more details about the language, and the 1AC++Examples folder (in your home folder) contains over 10 examples that we gave you last year that illustrate various aspects of the C++ language.
Start sessions in the DPO by clicking on the icon at the top of the screen and then the "CUED 2nd Year" option to run the Start IBComputing option. This will give you some files and icons that you need for the lab.
Editor and Development Environment [ back to contents]
Use the little icon in the Favorites section of the Applications menu to start a new window. For more details, see the CUED new user guide.
Compiling and Running [ back to contents]
You need the initial "./" - just typing the program name won't work. Also note that clicking on the foo icon won't work either in this situation. It will run the program but it won't create a window for it first, so you won't see any text output or have the chance to input anything.
Syntax of for loops [ back to contents]
Often you want an integer index to loop through consecutive range of values (for instance, for iterating through the elements of an array). When you do this using a for loop you need to ask yourself
- What are the bounds of the interval? For instance, when looping through an array. Here is an example for an array of size 10 called v:
for(unsigned int i = 0; i < 10; ++i) {
    // current element is v[i]
}
Somehow or other you need to become fluent with for loops. Test yourself now
For-loop quiz
When you've done that successfully you're ready to try this exercise
Estimating pi by series summation ("for" loop) [ back to contents]
This way of calculating pi is far more efficient than the pen-dropping method you used in year 1.
- Write a program to print out the first N (1 <= N <= 100) terms in the series 1/i2. The number of terms, N, is to be input by the user at the keyboard.
- Modify your program so that for each value of the index i, it evaluates the sum of the first i terms.
- The sum of this series can be shown (Leonhard Euler (1707-1783)) to converge to pi2/6. Make another modification to your program so that at each iteration the estimate of pi is printed instead of the sum. How good is the estimate of pi after N=100 iterations? How many iterations are needed to get an estimate of pi which is accurate to 2 decimal places after rounding?
- Begin by writing a simple program that prompts the user for the number of terms, N, in the range 1 <= N <= 100. Deal appropriately with situations where the user types invalid integers in. Remember to declare and initialise all the variables that you use. Compile and test your program before going any further.
- Write a for loop to print out the value of the index (counter) i and the value of the ith term:
iterm = 1.0/(i*i);
from i=1 to i=N. Compile and test your program
- Modify the program so that at each iteration (i.e. for each value of i) it computes and prints the sum of the first i terms. Compile and test your program
- An estimate for the value of pi can be obtained from the sum of N terms, (stored in a variable called sum, for example) by computing sqrt(6.0*sum). To use the mathematical library function sqrt() you must remember to include the header file cmath at the top of your program. Print out the estimate of pi at each iteration. Compile and test your program
- How good is the estimate of pi after N=100 iterations? Change the maximum value of N so that a better accuracy can be achieved (e.g. N<300).
Functions [ back to contents]
You created some functions in the 1st year. If you're still unsure about the basics read
- the Frequently Ask Question about functions
- More about Function (from the 1A notes)
- Functions - from the Tutorial Guide
Passing by reference
If you want a C++ function to modify a variable that it's given, pass the variable by reference (add '&' to the argument type). The following code shows 2 functions that differ according to whether the parameter is passed by reference. Make sure you understand the implications of this.
void value_func(int param)
{
    param += 2;
}

void ref_func(int& param)
{
    param += 2;
}

int main()
{
    int x = 2;
    value_func(x); // x is still 2
    ref_func(x);   // x is now 4
}
See the FAQ entry for more details.
Overloading
In C++ it's possible to have 2 functions with the same name as long as they take different parameters. This is called overloading. For example, you can have functions with these prototypes
void process(int x); void process(int x, int y);
and there won't be any ambiguity if in your code you call them using
process(3); process(3, 5);
- the appropriate version of the function will always be called.
Somehow or other you need to become fluent with using functions! Test yourself now
Function quiz
Finding roots using the Bisection method (writing functions) [ back to contents]
The problem of finding the square root of a number, c, is a special case
of finding the root of a non-linear equation of the form f(x)=0 where
f(x) = c - x2. We would like to find values of x such that f(x) = 0.
A simple method consists of trying to find 2 values of x where the function's value has different signs. We would then know that one solution lies somewhere between these values. For example: If f(a) * f(b) < 0 and a < b then the solution x must lie between these values: i.e. a < x < b. We could then try to narrow the range and hopefully converge on the true value of the root. This is the basis of the so-called Bisection method.
The Bisection method is an iterative scheme (i.e. repetition of a simple
pattern) in which the interval is
halved after each iteration to give the approximate location of the
root. After i iterations the root
(let's call it xi, i.e. x after i iterations) must lie between ai and bi and an approximation for the root is given by pi = (ai + bi)/2. The error ei between the approximation and the true root is
bounded by ei = (bi - ai)/2 = (b1 - a1)/2^i.
At each iteration the sign of the functions f(ai) and f(pi)
are tested and if f(ai) * f(pi) < 0 the root must
lie in the half-range ai < x < pi. Alternatively the root lies in
the other half (see figure).
We can thus update the new lower and upper bound for the root using this logic
if f(ai) * f(pi) < 0
then ai+1 = ai and bi+1 = pi
else ai+1 = pi and bi+1 = bi
Unless you understand this theory you won't be able to write the problem, so re-read the explanation if necessary, or read Wikipedia's Bisection page. Then
- Write a function that computes the square root of a number in the range 1 < x <= 100 with an accuracy of 10-4 using the Bisection method. The Math library sqrt function must not be used.
- Test your function by calling it from a program that prompts the user for a single number and displays the result.
- Modify this program so that it computes the square roots of numbers from 1 to 10. Compare your results with the answers given by the sqrt() mathematical library function.
The square root of a number will be found by calling a user-defined function to implement one of the iterative algorithms (i.e. repetition of a pattern of actions) described above. You are required to define a function to find the square root of a number i.e. f(x) = c - x2. Your solution is to be accurate to 5 decimal places. Since the number input by the user is between 1 and 100 the root will satisfy 0 < x <= 10 and a valid initial guess for the lower and upper bound for the solution will always be a1 = 0.1 and b1 = 10.1.
The error after i iterations will be 10/2^i. To produce a solution which is accurate to 5 decimal places we will need more than 20 iterations.
- Start with a very simple program which prompts the user for the value of a real number in the range 1 < c <= 100. Deal appropriately with situations where the user types invalid values in. Compile it. Test it.
- Define a function MySquareRoot(), which is passed the number (i.e. single parameter of type float) and returns the approximate value of its square root (i.e. return value type is float). The C++ code for the function should be placed below the body of main(). Begin the implementation of the function by typing in the function header and the opening and closing braces. For example:
float MySquareRoot(float square)
{
    // Body of function definition
}
- Inside the body of the function (i.e. after the opening brace):
- You will need to declare local variables to store the values of ai, bi, pi and f(ai) * f(pi). For example: lower, upper, root and sign. Initialize the values of lower and upper to 0.1 and 10.1 respectively.
- Set up a loop using the while or for repetition control statements to repeat the following algorithm (Bisection method) at least 20 times.
- In each execution of the loop:
- Estimate the value of the root as the average of the lower and upper bounds. Store this value in variable root.
- Evaluate the function at the current value of lower (i.e. ai) and at the current estimate of the root, (pi).
- Evaluate f(ai) * f(pi) and store this value in variable sign.
- Depending on the value of sign update the lower and upper bounds by the bisection method described above.
- The function must end with a return statement to pass back the approximate value of the square root.
- Declare the function by including the function prototype before main(). Compile your program to make sure you have not made any typing or syntax errors.
- Call your function from your program with the (actual) parameter, e.g. number. The return value of the function is to be assigned to a variable (e.g. squareRoot) which should also be declared in main():
squareRoot = MySquareRoot(number);
Test it by calculating the square root of 2 (which is 1.41421) etc. or comparing the result with that given by the library function, sqrt(). To use the mathematical library function you must remember to include the header file cmath at the top of your program.
- Set up a loop in the main routine to call the function 10 times to calculate the square roots of the integers 1,2, ... 10.
Matrices and Graphics [ back to contents]
You're now going to transform shapes using matrix multiplication. Graphics aren't part of C++, so that part of the code will be done for you (you'll use OpenGL as you did in the 1st year and with the Mars Lander). Matrices will be implemented as 2D arrays, which you met in the first year too.
First, a little theory. If the coordinates (x,y) of a 2D point are in a column vector v, then the coordinates of the point when rotated theta anti-clockwise about the origin can be calculated using matrix multiplication as follows
       [ cos(theta)  -sin(theta) ]
newv = [ sin(theta)   cos(theta) ] * v
To scale the values by xscale in the x direction and yscale in the y direction, you can do
       [ xscale    0   ]
newv = [   0    yscale ] * v
If v is a matrix with 2 rows and n columns, then each column can represent a point and the 2-by-n resulting matrix will contain all the transformed coordinates. The calculations (but not the graphics) are shown in the Advanced Topics section of our "Tutorial Guide to C++ Programming".
The following code displays a colourful triangle on the screen. Your task is to transform this triangle. Start by rotating it 4 times by 45 degrees clockwise to get the output shown on the right. You need to create a ComputeMatrix function to calculate the transform matrix, and a RotateCoordinates function to perform the rotation.
// Call this file graphics.cc. Compile using
// g++ -I/usr/local/include -I/usr/include/GL -L/usr/local/lib -o graphics graphics.cc -lglue -lglut -lGLU -lGL
// Run it by typing
// ./graphics

#include <iostream>
#include <cmath>
#include <unistd.h> // for sleep
#include "glue.h"

using namespace std;

// Call the transform routines and draw the graphics.
void mygraphics(int w, int h)
{
    float triangleCoords[2][3]={{0,    0.33, 0 },
                                {0.33, 0.33, 0 }};
    float matrix [2][2];
    float angle=-M_PI/4; // in radians

    // Set coordinate system to be -1>x>1; -1>y>1 with (-1,-1) bottom left
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();

    // Uncomment the next line of code that calls ComputeMatrix
    // to set the elements in the matrix appropriately.
    // ComputeMatrix(matrix,angle);

    // Rotate by "angle" radians 4 times
    for (int i=0;i<4;i++) {
        // Uncomment the next line of code that calls RotateCoordinates
        // to rotate the coordinates in the triangleCoords array
        // RotateCoordinates(matrix, triangleCoords);

        // Draw a triangle, with blue, green and red vertices
        glBegin(GL_TRIANGLES);
        glColor3f(0.0, 0.0, 1.0); // blue
        glVertex2f(triangleCoords[0][0], triangleCoords[1][0]);
        glColor3f(0.0, 1.0, 0.0); // green
        glVertex2f(triangleCoords[0][1], triangleCoords[1][1]);
        glColor3f(1.0, 0.0, 0.0); // red
        glVertex2f(triangleCoords[0][2], triangleCoords[1][2]);
        glEnd();
        glFlush();
        sleep(1); // pause for 1 second.
    }
}

int main()
{
    glueWindow();
    graphicsfunction (mygraphics);
    glueGo();
}
- Copy and compile the provided code. Exit using Ctrl-C
- Look at how ComputeMatrix and RotateCoordinates are going to be called, and write the protoypes for these functions (note that arrays are always passed by reference)
- Write the ComputeMatrix function to initialise the elements of the transform matrix (you might want to print the values out to see if they're correct). Compile the code.
- Write the RotateCoordinates code to change the coordinates of the triangle's vertices. Remember not to overwrite the values in triangleCoords until the original values are no longer needed.
- Uncomment the calls to ComputeMatrix and RotateCoordinates (you might want to print the transformed coordinates out to see if they're correct). Compile and run the code.
You could change the angle, the number of repetitions, or try shrinking the triangle as you spin it. Look on the Web to see how to perform other transformations using matrices, and how to combine transformations.
That ends the revision stage. The rest of the material on this page is needed for the later exercises.
The vector class [ back to contents]
When C++ was developed from C the designers realised that arrays were a weakness and introduced vectors. In the exercises to come you will need to store variables in containers.
- C++ has dozens of functions that can be used with vectors to sum, sort, shuffle and search the elements. Here's a simple example
#include <vector>     // needed for vector
#include <algorithm>  // needed for reverse
using namespace std;

int main()
{
    vector<int> v(3); // Declare a vector of 3 ints
    v[0] = 7;
    v[1] = v[0] + 3;
    v[2] = v[0] + v[1];
    reverse(v.begin(), v.end());
}
Note that this uses the begin() and end() member functions of vector to describe how much of the vector to reverse (in this case all of it)
Using class [ back to contents]
You can pass class variables to functions, and use them as return values. You can also declare a vector with class elements. The following code creates a vector of 7 Persons and then sets the age of the 4th person to 47.
vector<Person> people(7);
people[3].age = 47;
Longer Exercises [ back to contents]
- Exercise 4 (3 marks)
- Exercise 5 (4 marks)
- Exercise 6 (4 marks)
- Exercise 7 (5 marks)
Further Reading [ back to contents]
- 1A C++ coursework, Michaelmas
- CUED's C++ page
- CUED's C++ Frequently Asked Questions
- 1B CUED C++ crib
- Unix from the command line
- © 2010 University of Cambridge Department of Engineering
Information provided by Roberto Cipolla and Ethan Eade (Last updated: December 2010)
- Privacy
- Accessibility | http://www-h.eng.cam.ac.uk/help/languages/C++/1BC++/index.php?mode=beginner | CC-MAIN-2017-51 | refinedweb | 2,938 | 61.67 |
NAME
lp - line printer devices
SYNOPSIS
#include <linux/lp.h>
CONFIGURATION
lp[0–2] are character devices for the parallel line printers; they have major number 6 and minor number 0–2. The minor numbers correspond to the printer port base addresses 0x03bc, 0x0378 and 0x0278. Usually they have mode 220 and are owned by root and group lp. You can use printer ports either with polling or with interrupts. Interrupts are recommended when high traffic is expected, for example, for laser printers. For usual dot matrix printers polling will usually be enough. The default is polling.
DESCRIPTION
The following ioctl(2) calls are supported: int ioctl(int fd, LPTIME, int arg) Sets the amount of time that the driver sleeps before rechecking the printer when the printer’s buffer appears to be filled to arg. If you have a fast printer, decrease this number; if you have a slow printer then increase it. This is in hundredths of a second, the default 2 being 0.02 seconds. It only influences the polling driver. int ioctl(int fd, LPCHAR, int arg) Sets the maximum number of busy-wait iterations which the polling driver does while waiting for the printer to get ready for receiving a character to arg. If printing is too slow, increase this number; if the system gets too slow, decrease this number. The default is 1000. It only influences the polling driver. int ioctl(int fd, LPABORT, int arg) If arg is 0, the printer driver will retry on errors, otherwise it will abort. The default is 0. int ioctl(int fd, LPABORTOPEN, int arg) If arg is 0, open(2) will be aborted on error, otherwise error will be ignored. The default is to ignore it. int ioctl(int fd, LPCAREFUL, int arg) If arg is 0, then the out-of-paper, offline and error signals are required to be false on all writes, otherwise they are ignored. The default is to ignore them. int ioctl(int fd, LPWAIT, int arg) Sets the number of busy waiting iterations to wait before strobing the printer to accept a just-written character, and the number of iterations to wait before turning the strobe off again, to arg. The specification says this time should be 0.5 microseconds, but experience has shown the delay caused by the code is already enough. For that reason, the default value is 0. This is used for both the polling and the interrupt driver. int ioctl(int fd, LPSETIRQ, int arg) This ioctl(2) requires superuser privileges. It takes an int containing the new IRQ as argument. As a side effect, the printer will be reset. When arg is 0, the polling driver will be used, which is also default. 
int ioctl(int fd, LPGETIRQ, int *arg) Stores the currently used IRQ in arg. int ioctl(int fd, LPGETSTATUS, int *arg) Stores the value of the status port in arg. The bits have the following meaning: LP_PBUSY inverted busy input, active high LP_PACK unchanged acknowledge input, active low LP_POUTPA unchanged out-of-paper input, active high LP_PSELECD unchanged selected input, active high LP_PERRORP unchanged error input, active low Refer to your printer manual for the meaning of the signals. Note that undocumented bits may also be set, depending on your printer. int ioctl(int fd, LPRESET) Resets the printer. No argument is used.
FILES
/dev/lp*
SEE ALSO
chmod(1), chown(1), mknod(1), lpcntl(8), tunelp(8)
COLOPHON
This page is part of release 3.15 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. | http://manpages.ubuntu.com/manpages/jaunty/man4/lp.4.html | CC-MAIN-2014-35 | refinedweb | 604 | 65.52 |
CodeIgniter error logging set to 1:
/* |-------------------------------------------------------------------------- | Error Logging Threshold |-------------------------------------------------------------------------- | | If you have enabled error logging, you can set an error threshold to | determine what gets logged. Threshold options are: | You can enable error logging by setting a threshold over zero. The | threshold determines what gets logged. Threshold options are: | | 0 = Disables logging, Error logging TURNED OFF | 1 = Error Messages (including PHP errors) | 2 = Debug Messages | 3 = Informational Messages | 4 = All Messages | | For a live site you'll usually only enable Errors (1) to be logged otherwise | your log files will fill up very fast. | */ $config['log_threshold'] = 1;
Is it possible to suppress error messages for functions that are preceeded by a @?
For example, I have somewhere a need to detect if a string is serialized or not. I'm using code similar to the following:
public function is_serialized($data) { return (@unserialize($data) !== false); }
Unfortunately, every time that $data is not serialized, it clogs up my error log even though I am using the @ operator. | https://www.daniweb.com/programming/web-development/threads/456742/suppress-errors-in-codeigniter-error-log | CC-MAIN-2018-47 | refinedweb | 164 | 51.28 |
- NAME
- VERSION
- SYNOPSIS
- ABOUT
- CHANGES FROM PREVIOUS VERSION
- ATTRIBUTES
- METHODS
- AUTHOR
- BUGS
- SUPPORT
- SEE ALSO
- CONTRIBUTORS
- LICENSE AND COPYRIGHT
NAME
Net::API::Gett - Perl bindings for Ge.tt API
VERSION
Version 1.04
SYNOPSIS();
ABOUT
Gett is a clutter-free file sharing service that allows its users to share up to 2 GB of files for free. They recently implemented a REST API; this is a binding for the API. See for full details and how to get an API key.
CHANGES FROM PREVIOUS VERSION
This library is more encapsulated. Share functions which act on shares are in the Net::API::Gett::Share object namespace, and likewise with Ge.tt files. Future versions of this library will modify the Request and User objects to be roles rather than objects.
ATTRIBUTES
- user
Net::API::Gett::User object.
has_user()predicate.
- request
Net::API::Gett::Request object.
METHODS
Unless otherwise noted, these methods die if an error occurs or if they get a response from the API which is not successful. If you need to handle errors more gracefully, use Try::Tiny to catch fatal errors.
- new()
Constructs a new object. Optionally accepts:
A Ge.tt API key, email, and password, or,
A Ge.tt refresh token, or,
A Ge.tt access token().
Share functions
All of these functions cache Net::API::Gett::Share objects. Retrieve objects from the cache using the
shares method. Use the
get_share method to update a cache entry if it is stale.
Retrieves all share information for the given user. Takes optional scalar integers
offsetand
limitparameters, respectively.
Returns an unordered list of Net::API::Gett::Share objects.
Retrieves (and/or refreshes cached) information about a specific single share. Requires a
sharenameparameter..
File functions
- get_file()
Returns a Net::API::Gett::File object given a
sharenameand a
fileid. Does not require an access token to call.
- upload_file()
This method uploads a file to Gett. The following key/value pairs are valid:
filename (required)
What to call the uploaded file when it's inside of the Gett service.
sharename (optional)
Where to store the uploaded file. If not specified, a new share will be automatically created.
title (optional)
If specified, this value is used when creating a new share to hold the file. It will not change the title of an existing share. See the
update()method on the share object to do that.
content (optional)
A synonym for
contents. (Yes, I've typo'd this too many times.) Anything in
contentshas precedent, if they're both specified.
contents (optional)
A representation of the file's contents. This can be one of:
A buffer (See note below)
An IO::Handle object
A FILEGLOB
A pathname to a file to be read
If not specified, the
filenameparameter is treated as a pathname. This attempts to be DWIM, in the sense that if
contentscontains a value which is not a valid filename, it treats
contentsas a buffer and uploads that data.
encoding
An encoding scheme for the file content. By default it uses
:raw. See
perldoc -f binmodefor more information about encodings.
chunk_size.
AUTHOR
Mark Allen,
<mrallen1 at yahoo dot com>
BUGS
Please report any bugs or feature requests to
bug-net-api-get::API::Gett
You can also look for information at:
RT: CPAN's request tracker (report bugs here)
AnnoCPAN: Annotated CPAN documentation
CPAN Ratings
MetaCPAN
GitHub
SEE ALSO
CONTRIBUTORS
Thanks to the following for patches:
Keedi Kim ()
Alexander Ost
LICENSE AND COPYRIGHT
This program is free software; you can redistribute it and/or modify it under the terms of either: the GNU General Public License as published by the Free Software Foundation; or the Artistic License.
See for more information. | https://metacpan.org/pod/release/MALLEN/Net-API-Gett-1.04/lib/Net/API/Gett.pm | CC-MAIN-2016-30 | refinedweb | 610 | 58.69 |
PEP 591 – Adding a final qualifier to typing
- Author:
- Michael J. Sullivan <sully at msully.net>, Ivan Levkivskyi <levkivskyi at gmail.com>
- BDFL-Delegate:
- Guido van Rossum <guido at python.org>
- Discussions-To:
- Typing-SIG list
- Status:
- Accepted
- Type:
- Standards Track
- Created:
- 15-Mar-2019
- Python-Version:
- 3.8
- Post-History:
-
- Resolution:
- Typing-SIG message
Abstract
Motivation
The
final decorator
The current
typing module lacks a way to restrict the use of
inheritance or overriding at a typechecker level. This is a common
feature in other object-oriented languages (such as Java), and is
useful for reducing the potential space of behaviors of a class,
easing reasoning..
The
Final annotation
The current
typing module lacks a way to indicate that a variable
will not be assigned to. This is a useful feature in several
situations:
- Preventing unintended modification of module and class level constants and documenting them as constants in a checkable way.
- Creating a read-only attribute that may not be overridden by subclasses. (
@propertycan make an attribute read-only but does not prevent overriding)
- Allowing a name to be used in situations where ordinarily a literal is expected (for example as a field name for
NamedTuple, a tuple of types passed to
isinstance, or an argument to a function with arguments of
Literaltype (PEP 586)).
Specification
The
final decorator
The
typing.final decorator is used to restrict the use of
inheritance and overriding.
A type checker should prohibit any class decorated with
@final
from being subclassed and any method decorated with
@final from
being overridden in a subclass. The method decorator version may be
used with all of instance methods, class methods, static methods, and properties.
For example:
from typing import final @final class Base: ... class Derived(Base): # Error: Cannot inherit from final class "Base" ...
and:
from typing import final class Base: @final def foo(self) -> None: ... class Derived(Base): def foo(self) -> None: # Error: Cannot override final attribute "foo" # (previously declared in base class "Base") ...
For overloaded methods,
@final should be placed on the
implementation (or on the first overload, for stubs):
from typing import Any, overload class Base: @overload def method(self) -> None: ... @overload def method(self, arg: int) -> int: ... @final def method(self, x=None): ...
It is an error to use
@final on a non-method function.
The
Final annotation
The
typing.Final type qualifier is used to indicate that a
variable or attribute should not be reassigned, redefined, or overridden.
Syntax
Final may be used in one of several forms:
- With an explicit type, using the syntax
Final[<type>]. Example:
ID: Final[float] = 1
- With no type annotation. Example:
ID: Final = 1
The typechecker should apply its usual type inference mechanisms to determine the type of
ID(here, likely,
int). Note that unlike for generic classes this is not the same as
Final[Any].
- In class bodies and stub files you can omit the right hand side and just write
ID: Final[float]. If the right hand side is omitted, there must be an explicit type argument to
Final.
- Finally, as
self.id: Final = 1(also optionally with a type in square brackets). This is allowed only in
__init__methods, so that the final instance attribute is assigned only once when an instance is created.
Semantics and examples
The two main rules for defining a final name are:
- There can be at most one final declaration per module or class for a given attribute. There can’t be separate class-level and instance-level constants with the same name.
- There must be exactly one assignment to a final name.
This means a type checker should prevent further assignments to final names in type-checked code:
from typing import Final RATE: Final = 3000 class Base: DEFAULT_ID: Final = 0 RATE = 300 # Error: can't assign to final attribute Base.DEFAULT_ID = 1 # Error: can't override a final attribute
Note that a type checker need not allow
Final declarations inside loops
since the runtime will see multiple assignments to the same variable in
subsequent iterations.
Additionally, a type checker should prevent final attributes from being overridden in a subclass:
from typing import Final class Window: BORDER_WIDTH: Final = 2.5 ... class ListView(Window): BORDER_WIDTH = 3 # Error: can't override a final attribute
A final attribute declared in a class body without an initializer must
be initialized in the
__init__ method (except in stub files):
class ImmutablePoint: x: Final[int] y: Final[int] # Error: final attribute without an initializer def __init__(self) -> None: self.x = 1 # Good
Type checkers should infer a final attribute that is initialized in
a class body as being a class variable. Variables should not be annotated
with both
ClassVar and
Final.
Final may only be used as the outermost type in assignments or variable
annotations. Using it in any other position is an error. In particular,
Final can’t be used in annotations for function arguments:
x: List[Final[int]] = [] # Error! def fun(x: Final[List[int]]) -> None: # Error! ...
Note that declaring a name as final only guarantees that the name will
not be re-bound to another value, but does not make the value
immutable. Immutable ABCs and containers may be used in combination
with
Final to prevent mutating such values:
x: Final = ['a', 'b'] x.append('c') # OK y: Final[Sequence[str]] = ['a', 'b'] y.append('x') # Error: "Sequence[str]" has no attribute "append" z: Final = ('a', 'b') # Also works
Type checkers should treat uses of a final name that was initialized with a literal as if it was replaced by the literal. For example, the following should be allowed:
from typing import NamedTuple, Final X: Final = "x" Y: Final = "y" N = NamedTuple("N", [(X, int), (Y, int)])
Reference Implementation
The mypy [1] type checker supports
Final and
final. A
reference implementation of the runtime component is provided in the
typing_extensions [2] module.
Rejected/deferred Ideas
The name
Const was also considered as the name for the
Final
type annotation. The name
Final was chosen instead because the
concepts are related and it seemed best to be consistent between them.
We considered using a single name
Final instead of introducing
final as well, but
@Final just looked too weird to us.
A related feature to final classes would be Scala-style sealed classes, where a class is allowed to be inherited only by classes defined in the same module. Sealed classes seem most useful in combination with pattern matching, so it does not seem to justify the complexity in our case. This could be revisited in the future.
It would be possible to have the
@final decorator on classes
dynamically prevent subclassing at runtime. Nothing else in
typing
does any runtime enforcement, though, so
final will not either.
A workaround for when both runtime enforcement and static checking is
desired is to use this idiom (possibly in a support module):
if typing.TYPE_CHECKING: from typing import final else: from runtime_final import final
References
This document has been placed in the public domain.
Source:
Last modified: 2022-01-21 11:03:51 GMT | https://peps.python.org/pep-0591/ | CC-MAIN-2022-27 | refinedweb | 1,169 | 54.02 |
pls give me source code jtable netbeans code
how to show data in jtable pls provide me source code netbeans
I need the correct and genuine response of the code for the Jtable from Java programming Expert
Why should we set the Column names that we don't use it here.
Good one but the column names are missing in GUI!
Post your Comment
Disabling User Edits in a JTable Component
Disabling User Edits in a JTable Component
... to disable
the user edits in a JTable component means editing is not allow to user... in all JTable
in every previous sections but now you will learn a JTable
Packing a JTable Component
Packing a JTable Component
...; a JTable by adjusting it in the center.
Description of program:
This program helps you in packing a JTable component.
For this you will need a JTable having
Creating a JTable Component
Creating a JTable Component
Now you can easily create a JTable component.
Here, the procedure for creating a JTable component is given with the brief
description
restrict jtable editing
){
return false;
}
}
Disabling User Edits in a JTable Component...restrict jtable editing How to restrict jtable from editing or JTable disable editing?
public class MyTableModel extends
Moving a Column in JTable
Moving a Column in JTable
This section describes, how to move a column in JTable
component. Moving is a very simple method that moves the data
Setting Cell Values in JTable
values in JTable component. For this you must have the some previous knowledge
about JTable. A cell is known as the format of a row and a
column in ...
Setting Cell Values in JTable
JTable populate with resultset.
JTable populate with resultset. How to diplay data of resultset using JTable?
JTable is component of java swing toolkit. JTable class...); //Creating object of JTable
JscrollPane is used for providing the facility
Changing the Name of Column in a JTable
the name
of column in JTable component. You have learnt the JTable
containing ...
Changing the Name of Column in a JTable
... the name in the name of column in a JTable in this
following example
Setting Tool Tips on Cells in a JTable
the tool tips
in the cells in a JTable component. So, you will be able to know... tips on returned JTable component. For
getting the JComponent object use... Setting Tool Tips on Cells in a JTable
problem with JTable - Swing AWT
inside the JTable should be checked.
details:
for creating JTable... TableCellRenderer{
public Component getTableCellRendererComponent( JTable t... to swings.i was having an assignment like i need to create a JTable
Creating a Scrollable JTable
Creating a Scrollable JTable : Swing Tutorials ... section, you will learn how to
create a scrollable JTable component. When any table has large volume of
data, the use of scrollbar is applied in
Setting Grid Line in JTable
Setting Grid Line in JTable
In the earlier section you have learnt for creating a
simple JTable that contains predefined grid line with black color. But in
this Java
JTable
JTable need to add values to a JTable having 4 coloumns ,2 of them are comboboxes
Removing a Column from a JTable
from a JTable component that uses the table model. Removing a column from
a JTable...
Removing a Column from a JTable
... to be deleted from the JTable.
Description of program:
This program helps
Java JTable
the JTable class and a
subclass of JComponent. It is a user-interface component...
Java JTable
JComponent.component is more flexible Java
Swing component that allows the user
Create a Custom Cell Renderer in a JTable
cell
renderer in a JTable component. Here, first of all you will know about the cell
renderer in JTable. The cell renderer is the component of JTable...
Create a Custom Cell Renderer in a JTable
how to print JInternal frame component ?
how to print JInternal frame component ? hello sir,
i devalop a swing application .but problem is that how display report & print it.
some data prefech from database & keep to jtable but how it is print with table
Shading Columns in JTable
in JTable. In JTable component
the shading columns are the simplest way...) in JTable component that overrides the
prepareRenderer() method. The table calls... Shading Columns in JTable
how to show data in jtable netbeansamitesh kumar August 26, 2012 at 7:01 PM
pls give me source code jtable netbeans code
jtable creating source codeamitesh kumar August 26, 2012 at 7:06 PM
how to show data in jtable pls provide me source code netbeans
Creating and using JTable in Java programmingAhmed Bedru September 25, 2012 at 7:32 PM
I need the correct and genuine response of the code for the Jtable from Java programming Expert
Column namesEhsan November 8, 2012 at 2:17 AM
Why should we set the Column names that we don't use it here.
The Missing Column NamesSiddharth Polisiti January 21, 2014 at 8:28 PM
Good one but the column names are missing in GUI!
Post your Comment | http://www.roseindia.net/discussion/18230-Creating-a-JTable-Component.html | CC-MAIN-2014-23 | refinedweb | 831 | 72.36 |
It’s pretty easy to create a web component with Vue.js and then consume that web component in a Vue.js app. I’m interested in this as a way to plug in custom user interfaces in the Airavata Django Portal, specifically custom experiment input editors. Using web components as the plugin mechanism allows extensions to be written using any or no framework. But to build a proof-of-concept I decided to build the web component using Vue.js
vue-cli makes it easy to create a Web Component build of a Vue.js component. Just run
vue-cli-service build --target wc src/components/MyComponent.vue
This creates a output files in
dist/ called
my-component.js and
my-component.min.js. It also creates a
demo.html file that demonstrates how to load and use the web component. To see this in action, let’s create a simple "Hello World" component and then build and load it.
First, install vue-cli. Then run the following (note: the following assumes
yarn is installed, but you can use
npm instead):
vue create hello-world cd hello-world yarn build --target wc src/components/HelloWorld.vue
Now open
dist/demo.html in a web browser. On macOS you can do:
open dist/demo.html
You should see the vue-cli default Vue component boilerplate.
demo.html looks like this:
<meta charset="utf-8"> <title>hello-world demo</title> <script src=""></script> <script src="./hello-world.js"></script> <hello-world></hello-world>
This loads Vue.js as a global object and the built web component script. The
./hello-world.js script registers the web component so it is immediately availble for use as shown at the bottom:
<hello-world></hello-world>.
So that’s how to build a Vue.js web component and how to load it in a basic web page. But how would you load it in a Vue.js application and integrate it? There are a few things to keep in mind.
vue-cli externalizes the Vue dependency
When you load a Vue.js web component you’ll need to make it available in the global scope, that is, a property of the
window object. In your Vue.js app, before you load the web component, you’ll need to do something like:
import Vue from "vue"; if (!window.Vue) { window.Vue = Vue; }
Using dynamic imports
You can of course import the web component using a script tag, but I feel like in a Vue.js web component it’s more natural to use the dynamic import function.
const webComponentURL = "..."; // or wherever it lives import(/* webpackIgnore: true */ webComponentURL);
The
/* webpackIgnore: true */ is necessary because otherwise Webpack will try to use the import statement at build time to generated an optimized, code-splitted build.
Vue.config.ignoredElements
When you reference custom elements in Vue.js templates, you need to let Vue.js know to ignore them and not expect them to be Vue.js components. Otherwise, Vue.js will generate a warning because it will appear to it that either the developer mistyped the Vue.js component name or that the component wasn’t registered.
For the Airavata Django Portal, what I’ve done is define a prefix (as a regular expression) that will be ignored ("adpf" stands for Airavata Django Portal Framework):
Vue.config.ignoredElements = [ // Custom input editors that have a // tag name starting with "adpf-plugin-" // Vue will ignore and not warn about /^adpf-plugin-/, ]
Dynamically reference web component in Vue.js template
We’ve seen how to use a web component in a Vue.js template: you just use the tag name, like the
demo.html example above. But how would you dynamically reference a web component? You can do that with the special ‘is’ attribute, which the Vue.js special component tag also supports.
<template> <component is="tagName"/> </template> <script> export default { //... data() { return { "tagName": "hello-world" } } } </script>
Handling web component events
Web component events are handled a little differently from Vue.js events. First, with Vue.js events you can emit an event with a value which will be passed as the first argument to event handler (see for an example). This doesn’t quite work with web components. Instead, the emitted event will have a
detail attribute which is an array of the event values. So instead of expecting the first argument to be the event value, the handler should expect the event object as the first argument and then check its
detail attribute for the event value.
webComponentValueChanged: function(e) { if (e.detail && e.detail.length && e.detail.length > 0) { this.data = e.detail[0]; } }
Second, and maybe I’m doing something wrong, but when I have my Vue.js component emit an "input" event, I end up getting two "input" events, one from the Vue.js component and a native "input" event. Perhaps it is more correct to say that when the Vue.js app listens for the "input" event on the web component it ends up getting the native and custom Vue.js "input" events. I was able to prevent the native "input" event with the
.stop modifier.
<template> <!-- .stop added to prevent native InputEvent from being dispatched along with custom 'input' event --> <input type="text" : </template> <script> export default { name: "simple-text-input", // ... methods: { onInput(e) { this.$emit("input", e.target.value); }, }, }; </script>
Still to do
You can see the code for the web component here:. Here is the commit for integrating this into the Airavata Django Portal:
This is a pretty basic proof-of-concept. Things I still want to do:
- Verify the web component can be published to and loaded from a CDN or some other public registry, for example,.
- Integrate validation by using the InputEditorMixin. (note: this is Vue.js specific but similar mixins or utilities could be developed for other frameworks.) This mixin automatically runs validation, but the custom input editor could augment this with any custom validation as required. The way we’ve designed the input editors is that the input editor components own the validation of the values, however, most of the validation is metadata driven and not usually implemented in the input editor component.
- Unify some code in the InputEditorContainer. Essentially, as much as possible I don’t want to have two code paths, one for internal Vue components and one for web components, although as pointed out above, event handling is a little different between the two.
- Create a higher level component to load the web components. This higher level component would use
window.customElement.get(tagName)to see if the component is already loaded.
- This is more Airavata Django Portal specific, but some input editors need to generate and/or upload an input file. I need to think about how to provide an API that web components can use to easily upload files. File input editors need to register the uploaded file and get back an identifier (called a data product URI) that is then returned as the value (as opposed to string input editors which need to edit the string value and just return the same). | https://marcus.4christies.com/2020/09/creating-and-using-vue-js-web-components/ | CC-MAIN-2021-21 | refinedweb | 1,188 | 58.08 |
Question :
I made a game. I love playing it, and I would like to distribute it to my friends without them having to install Python and Pygame on their computers.
I did a lot of research on Py2Exe and PyInstaller. I looked through many tutorials, fixes, and error reports, but none of them seemed to help me.
PyInstaller is useless because it doesn’t like fonts in Pygame, and py2exe wouldn’t compile the built-in modules, so I found Pygame2exe, which is just a premade setup script for use with py2exe that includes pygame and fonts. It supposedly builds fine, but the exe is unusable… I get the error:
“Microsoft Visual C++ Runtime Library
Runtime Error!
Program C:…\dist\Worm Game.exe
This application has requested the Runtime to terminate in an unusual
way. Please contact the application’s support team for more
information.”
I just don’t get it… Why can’t I compile this game!!!
Here is the game code, made with Python 2.7:
import pygame
import random
import os

pygame.init()

class Worm:
    def __init__(self, surface):
        self.surface = surface
        self.x = surface.get_width() / 2
        self.y = surface.get_height() / 2
        self.length = 1
        self.grow_to = 50
        self.vx = 0
        self.vy = -1
        self.body = []
        self.crashed = False
        self.color = 255, 255, 0

    def event(self, event):
        if event.key == pygame.K_UP:
            if self.vy != 1:
                self.vx = 0
                self.vy = -1
            else:
                a = 1
        elif event.key == pygame.K_DOWN:
            if self.vy != -1:
                self.vx = 0
                self.vy = 1
            else:
                a = 1
        elif event.key == pygame.K_LEFT:
            if self.vx != 1:
                self.vx = -1
                self.vy = 0
            else:
                a = 1
        elif event.key == pygame.K_RIGHT:
            if self.vx != -1:
                self.vx = 1
                self.vy = 0
            else:
                a = 1

    def move(self):
        self.x += self.vx
        self.y += self.vy
        if (self.x, self.y) in self.body:
            self.crashed = True
        self.body.insert(0, (self.x, self.y))
        if (self.grow_to > self.length):
            self.length += 1
        if len(self.body) > self.length:
            self.body.pop()

    def draw(self):
        x, y = self.body[0]
        self.surface.set_at((x, y), self.color)
        x, y = self.body[-1]
        self.surface.set_at((x, y), (0, 0, 0))

    def position(self):
        return self.x, self.y

    def eat(self):
        self.grow_to += 25

class Food:
    def __init__(self, surface):
        self.surface = surface
        self.x = random.randint(10, surface.get_width() - 10)
        self.y = random.randint(10, surface.get_height() - 10)
        self.color = 255, 255, 255

    def draw(self):
        pygame.draw.rect(self.surface, self.color, (self.x, self.y, 3, 3), 0)

    def erase(self):
        pygame.draw.rect(self.surface, (0, 0, 0), (self.x, self.y, 3, 3), 0)

    def check(self, x, y):
        if x < self.x or x > self.x + 3:
            return False
        elif y < self.y or y > self.y + 3:
            return False
        else:
            return True

    def position(self):
        return self.x, self.y

font = pygame.font.Font(None, 25)
GameName = font.render("Worm Eats Dots", True, (255, 255, 0))
GameStart = font.render("Press Any Key to Play", True, (255, 255, 0))

w = 500
h = 500
screen = pygame.display.set_mode((w, h))

GameLoop = True
while GameLoop:
    MenuLoop = True
    while MenuLoop:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                pygame.quit()
            elif event.type == pygame.KEYDOWN:
                MenuLoop = False
        screen.blit(GameName, (180, 100))
        screen.blit(GameStart, (155, 225))
        pygame.display.flip()
    screen.fill((0, 0, 0))
    clock = pygame.time.Clock()
    score = 0
    worm = Worm(screen)
    food = Food(screen)
    running = True
    while running:
        worm.move()
        worm.draw()
        food.draw()
        if worm.crashed:
            running = False
        elif worm.x <= 0 or worm.x >= w - 1:
            running = False
        elif worm.y <= 0 or worm.y >= h - 1:
            running = False
        elif food.check(worm.x, worm.y):
            score += 1
            worm.eat()
            print "Score %d" % score
            food.erase()
            food = Food(screen)
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                pygame.quit()
            elif event.type == pygame.KEYDOWN:
                worm.event(event)
        pygame.display.flip()
        clock.tick(200)
    if not os.path.exists("High Score.txt"):
        fileObject = open("High Score.txt", "w+", 0)
        highscore = 0
    else:
        fileObject = open("High Score.txt", "r+", 0)
        fileObject.seek(0, 0)
        highscore = int(fileObject.read(2))
    if highscore > score:
        a = 1
    else:
        fileObject.seek(0, 0)
        if score < 10:
            fileObject.write("0" + str(score))
        else:
            fileObject.write(str(score))
        highscore = score
    fileObject.close()
    screen.fill((0, 0, 0))
    ScoreBoarda = font.render(("You Scored: " + str(score)), True, (255, 255, 0))
    if highscore == score:
        ScoreBoardb = font.render("NEW HIGHSCORE!", True, (255, 255, 0))
        newscore = 1
    else:
        ScoreBoardb = font.render(("High Score: " + str(highscore)), True, (255, 255, 0))
        newscore = 0
    Again = font.render("Again?", True, (255, 255, 0))
    GameOver = font.render("Game Over!", True, (255, 255, 0))
    screen.blit(GameName, (180, 100))
    screen.blit(GameOver, (200, 137))
    screen.blit(ScoreBoarda, (190, 205))
    if newscore == 0:
        screen.blit(ScoreBoardb, (190, 235))
    elif newscore == 1:
        screen.blit(ScoreBoardb, (175, 235))
    screen.blit(Again, (220, 365))
    pygame.draw.rect(screen, (0, 255, 0), (200, 400, 40, 40), 0)
    pygame.draw.rect(screen, (255, 0, 0), (260, 400, 40, 40), 0)
    LEFT = font.render("L", True, (0, 0, 0))
    RIGHT = font.render("R", True, (0, 0, 0))
    screen.blit(LEFT, (215, 415))
    screen.blit(RIGHT, (275, 415))
    pygame.display.flip()
    loop = True
    while loop:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                pygame.quit()
            elif event.type == pygame.MOUSEBUTTONDOWN:
                x, y = event.pos
                if x > 200 and x < 240 and y > 400 and y < 440:
                    loop = False
                elif x > 260 and x < 300 and y > 400 and y < 440:
                    GameLoop = False
                    loop = False
            elif event.type == pygame.KEYDOWN:
                if event.key == pygame.K_LEFT:
                    loop = False
                elif event.key == pygame.K_RIGHT:
                    GameLoop = False
                    loop = False
    screen.fill((0, 0, 0))
pygame.quit()
Answer #1:
I met this problem, too. After investigation, I found that the runtime error is caused by the font. I noticed that you also used None as the font name. Remember that there’s a notice about fonts on the pygame2exe page, just below “Changes by arit:”, saying that we should use a “fontname.ttf” file in place of None and put that fontname.ttf in a folder where the exe can find it. For example, you can use “freesansbold.ttf” instead of None when creating the font, and place freesansbold.ttf in the folder where the exe file lives. Hope that helps.
Answer #2:
I do not think there is anything wrong with your code. In fact, it compiles well and I played it as well. Nice game.
I would suggest looking into the following in your computer,
- Have you installed all Microsoft Updates
- Look through the programs (Control Panel – Programs & Features) and see whether the latest Microsoft Visual C++ libraries are present.
I think if the above two are properly in place, it should work fine.
I tested this on a machine with the following conf:
1. Windows 7 with all the security patches updated.
Answer #3:
My answer:
After a few weeks (I had this problem even before), I’m happy to say that I solved this problem! 🙂
1st part of my problem ():
I solved it by editing the setup.py script, adding an “excludes” section to it. That resulted in a successful build of the executable file!
Modified setup.py script:
from distutils.core import setup
import py2exe

setup(windows=['source_static.py'],
      options={
          "py2exe": {
              "excludes": ["OpenGL.GL", "Numeric", "copyreg", "itertools.imap",
                           "numpy", "pkg_resources", "queue", "winreg",
                           "pygame.SRCALPHA", "pygame.sdlmain_osx"],
          }
      }
)
So, if you have similar issues, just put those “missing” modules into this “excludes” line.
2nd part:
After I succeeded in making the executable file, I had the next problem: “The application has requested the Runtime to terminate it in an unusual way. Please contact…“. After days and days of searching and thinking about how to solve this other problem, I found a way to do it. I couldn’t believe that the problem was so absurd. The problem was in my code, with the font definition:
font1 = pygame.font.SysFont(None, 13)
After changing “None” to some system font name (for example “Arial”; it must be a string) and rebuilding, I couldn’t believe that my .exe file worked!
font1 = pygame.font.SysFont("Arial", 13)
Of course, you can use your own font, but you must specify its path and define it in your program.
So for all of you who are experiencing these issues, try these steps and I hope that you will succeed.
I really hope that this will help you, because I’ve lost days and weeks trying to solve these problems. I even tried making my .exe file with all versions of python and pygame, with many other .exe builders and setup scripts, but without luck. Besides these problems, I had many other problems before but I found answers to them on stackoverflow.com.
I’m happy that I found a way to solve these problems and to help you if you are facing the same ones.
Small tips (things I’ve also done):
1st: update your Microsoft Visual C++ library to the latest one.
2nd: if you have images, fonts, or similar files that your executable program needs, include them in the dist folder (where your .exe file has been created).
3rd: when you are making your .exe file, include all needed files to the folder where your setup.py script is (all files and directories that your main script uses).
Used Python 2.7 x64, pygame and py2exe. | https://discuss.dizzycoding.com/pygame2exe-errors-that-i-cant-fix/ | CC-MAIN-2022-33 | refinedweb | 1,566 | 80.38 |
JabChapter 9
From WikiContent
Revision as of 06:10, 28 May 2009
antivirus update ] [ asian free symbol ] automotive latch [ exchange rate australian ] [ pictures of the money of asia ] australian cancer society [ anastasia the lost princess ] [ mautofied ] domain [ update norton antivirus ] [ african animal drawing ]ÃÂâÃÂÃÂÃÂÃÂtheÃÂÃÂÃÂî MINDSTORMSÃÂâÃÂÃÂÃÂâ.ÃÂâÃÂÃÂÃÂÃÂthe ÃÂâÃÂÃÂÃÂÃÂitÃÂâÃÂÃÂÃÂÃÂGroupÃÂâÃÂÃÂÃÂÃÂqmacro@jabber.com/jarltkÃÂâÃÂÃÂÃÂÃÂÃÂâÃÂÃÂÃÂÃÂno other user is in the room "cellar"
- with that nicknameÃÂâÃÂÃÂÃÂÃÂand the conference component registers the entry.
It does this by sending qmacro the presence of all the room occupants, including that of himself:

RECV: <presence to='qmacro@jabber.com/jarltk'
      from='cellar@conf.merlix.dyndns.org/roscoe'/>
RECV: <presence to='qmacro@jabber.com/jarltk'
      from='cellar@conf.merlix.dyndns.org/flash'/>
RECV: <presence to='qmacro@jabber.com/jarltk'
      from='cellar@conf.merlix.dyndns.org/deejay'/>
These presence elements are also sent to the other room occupants so they know that deejay is present.
Conference component-generated notification
In addition to the presence elements sent for each room occupant, a general roomwide message noting that someone with the nickname deejay just entered the room is sent out by the component as a type='groupchat' message to all the room occupants:
RECV: <message to='qmacro@jabber.com/jarltk' type='groupchat'
      from='cellar@conf.merlix.dyndns.org'>
  <body>deejay has become available</body>
</message>
The text "has become available" used in the body of the message is taken directly from the Action Notices definitions, part of the Conferencing component instance configuration described in Section 4.10.3. Note that the identity of the room itself is simply a generic version of the JID that the room occupants use to enter:
cellar@conf.merlix.dyndns.org
Roomwide chat
Once the user with the nickname roscoe sees someone enter the room, he sends a greeting, and qmacro waves back:
RECV: <message to='qmacro@jabber.com/jarltk'
      from='cellar@conf.merlix.dyndns.org/roscoe' type='groupchat' cnu=''>
  <body>hi qmacro</body>
</message>

SEND: <message to='cellar@conf.merlix.dyndns.org' type='groupchat'>
  <body>/me waves to everyone</body>
</message>
As with the notification message, each message is a groupchat-type message. The one received appears to come from cellar@conf.merlix.dyndns.org/roscoe, which is the JID representing the user in the room with the nickname roscoe. This way, roscoe's real JID is never sent to qmacro.

The message deejay sends is addressed to the room's identity cellar@conf.merlix.dyndns.org, and contains a message that starts with /me. This is simply a convention that is understood by clients that support conferencing, meant to represent an action and displayed thus:
* deejay waves to everyone
{{Note|Ignore the cnu attribute; it's put there and used by the component and should never make it out to the client endpoints. The attribute name is a short name for the conference user and refers to the internal structure that represents a conference room occupant within the component.}}
One-on-one chat
The Conferencing component also supports a one-on-one chat mode, which is just like normal chat mode (where messages with the type chat are exchanged) except that the routing goes through the component. The intended recipient of a conference-routed chat message is identified by his room JID. So in this example:
RECV: <message to='qmacro@jabber.com/jarltk'
      from='cellar@conf.merlix.dyndns.org/flash' type='chat'>
  <body>Is that you, qmacro?</body>
  <thread>jarl1998911094</thread>
</message>
the user nicknamed flash actually addressed the chat message to the JID:
cellar@conf.merlix.dyndns.org/deejay
which arrived at the Conferencing component (the hostname conf.merlix.dyndns.org causes the <message/> element to be routed there). The component then looked up internally who deejay really was (qmacro@jabber.com/jarltk) and sent the message on. This way, the recipient of a conference-routed message never discovers the real JID of the sender. In all other ways, the actual <message/> element is like any other <message/> element; in this case, it contains a message <body/> and a chat <thread/>. (See Section 5.4.1 for details on the <message/> element.)
Leaving the room
In the same way that room entrance is effected by sending an available presence (remember, a <presence/> element without an explicit type attribute is understood to represent type='available'), leaving a room is achieved by doing the opposite:
RECV: <presence to='qmacro@jabber.com/jarltk' type='unavailable'
      from='cellar@conf.merlix.dyndns.org/roscoe'/>
The people in the conference room are sent a message that roscoe has left the room by the unavailable presence packet. This is by and large for the benefit of each user's client, so that the room occupant list can be updated. The component also sends out a verbal notification, in the same way as it sends a verbal notification out when someone joins:
RECV: <message to='qmacro@jabber.com/jarltk' type='groupchat'
      from='cellar@conf.merlix.dyndns.org'>
  <body>roscoe has left</body>
</message>
Like the join notification, the text for the leave notification ("has left") comes directly from the component instance configuration described in Section 4.10.3.
The Script's Scope
The Keyword Assistant (keyassist) script will be written in Python using the Jabberpy library. As mentioned earlier, the script will perform the following tasks:
- Connect to a predetermined Jabber server
- Join a predetermined conference room
- Sit there quietly, listening to the conversation
- Take simple commands from people to watch for, or stop watching for, particular words or phrases uttered in the room
- Relay the context of those words or phrases to whoever requested them, if heard

In addition to setting the identity of the Jabber server and the conference room in variables, we'll also need to keep track of which users ask the assistant for words and phrases.
We'll use a dictionary (hash in Perl terms), as shown in Example 9-3, because we want to manage the data in there by key, the JID of those users that the script will be assisting. Having a look at what this dictionary will look like during the lifetime of this script will help us to visualize what we're trying to achieve.
Typical contents of the Keyword Assistant's dictionary
{
  'dj@gnu.pipetree.com/home': { 'http:': 1, 'ftp:': 1 },
  'piers@jabber.org/work': { 'Perl': 1, 'Java': 1, 'SAP R/3': 1 },
  'cellar@conf.merlix.dyndns.org/roscoe': { 'dialback': 1 }
}
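The two-level structure can be exercised with plain dictionary operations. Here is a minimal modern-Python sketch of the same bookkeeping (the helper names mirror the script's addword/delword, but this is an illustration, not the script itself):

```python
# Two-level dictionary: JID -> {word or phrase -> 1}; the inner dict is
# used as a set so that removal by key is cheap and simple.
keywords = {}

def addword(jid, word):
    # Create the per-JID dictionary on first use, then record the word
    keywords.setdefault(jid, {})[word] = 1

def delword(jid, word):
    # Remove a word for a JID if it is actually being watched
    if jid in keywords and word in keywords[jid]:
        del keywords[jid][word]

addword('dj@gnu.pipetree.com/home', 'http:')
addword('dj@gnu.pipetree.com/home', 'ftp:')
delword('dj@gnu.pipetree.com/home', 'ftp:')
print(sorted(keywords['dj@gnu.pipetree.com/home']))  # ['http:']
```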
We can see from the contents of the dictionary in Example 9-3 that three people have asked the script to look out for words and phrases. Two of those peopleÃÂâÃÂÃÂÃÂÃÂdj and piersÃÂâÃÂÃÂÃÂÃÂhave interacted with the script directly by sending a normal (or chat) <message/>. The other person, with the conference nickname roscoe, is in the "cellar" room and has sent the script a message routed through the Conference component in the same way that flash sent qmacro a private message in Example 9-2: the JID of the sender belongs to (has the hostname set to) the conference component. Technically, there's nothing to distinguish the three JIDs here; it's just that we know from the name that conf.merlix.dyndns.org is the name that identifies such a component.
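A JID of this shape splits mechanically into node, domain, and resource. A small sketch (not part of the original script; the JID is taken from the example above):

```python
def split_jid(jid):
    # node@domain/resource -> (node, domain, resource); parts may be empty
    node, _, rest = jid.partition('@')
    domain, _, resource = rest.partition('/')
    return node, domain, resource

# The domain is what routes the element to the Conferencing component;
# the resource is the occupant's nickname within the room.
print(split_jid('cellar@conf.merlix.dyndns.org/roscoe'))
```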
If we dissect the dictionary, we can see that:
- dj wants to be notified if any web or FTP URLs are mentioned.
- piers is interested in references to two of his favorite languages and his favorite business software solution.
- roscoe is interested in any talk about dialback.

We said we'd give the script a little bit of intelligence. This was a reference to the ability for users to interact with the script while it runs, rather than having to give the script a static list of words and phrases in a configuration file. dj, piers, and roscoe have done this by sending the script messages (directly, not within the room) with simple keyword commands, such as:
dj: "watch http:"
script: "ok, watching for http:"
dj: "watch gopher:"
script: "ok, watching for gopher:"
dj: "watch ftp:"
script: "ok, watching for ftp:"
dj: "ignore gopher:"
script: "ok, now ignoring gopher:"
...
piers: "list"
script: "watching for: Perl, Java, SAP R/3"
...
roscoe: "stop"
script: "ok, I've stopped watching"
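Each command is just a verb plus an optional argument, so splitting the message body on the first run of whitespace is enough to separate them. A sketch in modern Python (the script itself uses the string module's split for the same effect):

```python
def parse_command(body):
    # 'watch SAP R/3' -> ('watch', 'SAP R/3'); 'list' -> ('list', None)
    parts = body.split(None, 1)
    verb = parts[0] if parts else ''
    arg = parts[1] if len(parts) > 1 else None
    return verb, arg

print(parse_command('watch SAP R/3'))  # ('watch', 'SAP R/3')
print(parse_command('list'))           # ('list', None)
```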
The keyassist Script
Example 9-4 shows the keyassist script in its entirety. The script is described in detail, step by step, in the next section.
The keyassist Python script
import jabber
from string import split, join, find
import sys

keywords = {}

def addword(jid, word):
    if not keywords.has_key(jid):
        keywords[jid] = {}
    keywords[jid][word] = 1

def delword(jid, word):
    if keywords.has_key(jid):
        if keywords[jid].has_key(word):
            del keywords[jid][word]

def messageCB(con, msg):

    type = msg.getType()
    if type == None:
        type = 'normal'

    # Deal with commands
    if type == 'normal' or type == 'chat':
        jid = str(msg.getFrom())
        message = split(msg.getBody(), None, 1)
        reply = ""
        if message[0] == 'watch' and len(message) > 1:
            addword(jid, message[1])
            reply = "ok, watching for " + message[1]
        elif message[0] == 'ignore' and len(message) > 1:
            delword(jid, message[1])
            reply = "ok, now ignoring " + message[1]
        elif message[0] == 'list':
            if keywords.has_key(jid):
                reply = "watching for: " + join(keywords[jid].keys(), ", ")
        elif message[0] == 'stop':
            if keywords.has_key(jid):
                del keywords[jid]
            reply = "ok, I've stopped watching"
        if reply:
            con.send(msg.build_reply(reply))

    # Scan room talk
    if type == 'groupchat':
        message = msg.getBody()
        for jid in keywords.keys():
            for word in keywords[jid].keys():
                if find(message, word) >= 0:
                    con.send(jabber.Message(jid, word + ": " + message))

def presenceCB(con, prs):

    # Abort on nickname conflict when joining the room
    if str(prs.getFrom()) == roomjid and prs.getType() == 'error':
        error = prs.asNode().getTag('error')
        print "Error: " + error.getData()
        con.disconnect()
        sys.exit(0)

    # Remove keyword list for groupchat correspondent
    if prs.getType() == 'unavailable':
        jid = str(prs.getFrom())
        if keywords.has_key(jid):
            del keywords[jid]

Server = 'gnu.mine.nu'
Username = 'kassist'
Password = 'pass'
Resource = 'py'

Room = 'jdev'
ConfServ = 'conference.jabber.org'
Nick = 'kassist'

con = jabber.Client(host=Server, debug=0, log=0)
con.connect()
con.auth(Username, Password, Resource)

con.setMessageHandler(messageCB)
con.setPresenceHandler(presenceCB)

con.send(jabber.Presence())

roomjid = Room + '@' + ConfServ + '/' + Nick
print "Joining " + Room
con.send(jabber.Presence(to=roomjid))

while(1):
    con.process(5)
Dissecting the keyassist Script
Taking keyassist step by step, the first section is probably familiar if you've seen the previous Python-based scripts in Section 8.1 and Section 8.3, both in Chapter 8.
import jabber
from string import split, join, find
import sys
Here, all of the functions and libraries that we'll need are brought in. We'll use the find function from the string library to help with the keyword searching.
Next, we declare the dictionary. This will hold a list of the words that the script will look for, as defined by each person, as shown in Example 9-3.
keywords = {}
Maintaining the keyword dictionary
To maintain this dictionary, we will use two subroutines to add words to and remove words from a user's word list. These subroutines are called when a command such as watch or ignore is recognized in the callback subroutine that handles incoming <message/> elements:
def addword(jid, word):
    if not keywords.has_key(jid):
        keywords[jid] = {}
    keywords[jid][word] = 1

def delword(jid, word):
    if keywords.has_key(jid):
        if keywords[jid].has_key(word):
            del keywords[jid][word]
A string representation of the JID (in jid) of the correspondent giving the command is passed to the subroutines along with the word or phrase specified (in word) by the user. The dictionary has two levels: the first level is keyed by the JID, and the second by word or phrase. We use a dictionary, rather than an array, at the second level simply to make removal of words and phrases easier.
Message callback
Next, we define the callback to handle incoming <message/> elements:
def messageCB(con, msg):
    type = msg.getType()
    if type == None:
        type = 'normal'
As usual, we're expecting the message callback to be passed the connection object (in con) and the message object itself (msg). How this callback is to proceed is determined by the type of message received. We determine the type (taken from the <message/> element's type attribute) and store it in the variable called type. Remember that if no type attribute is present, a message type of normal is assumed. (See Section 5.4.1.1 for details of <message/> attributes.)
The two types of incoming messages we're expecting this script to receive are those conveying the room's conversationÃÂâÃÂÃÂÃÂÃÂin groupchat-type messagesÃÂâÃÂÃÂÃÂÃÂand those over which the commands such as watch and ignore are carried, which we expect in the form of normal- or chat-type messages.
The first main section of the messageCB handler deals with incoming commands:
# Deal with commands
if type == 'normal' or type == 'chat':
If the <message/> element turns out to be of the type in which we're expecting a potential command, we want to determine the JID of the correspondent who sent that message. Calling the getFrom() method will return us a JID object. What we need is the string representation of that, which can be determined by calling the str() function on that JID object:
jid = str(msg.getFrom())
Then we grab the content of the message by calling the getBody() on the msg object and split the whole thing on the first bit of whitespace. This should be enough for us to distinguish a command (watch, ignore, and so on) from the keywords. After the split, the first element (index 0) in the message array will be the command, and the second element (index 1) will be the word or phrase, if given. At this stage, we also declare an empty reply:
message = split(msg.getBody(), None, 1)
reply = ""
Now it's time to determine if what the script was sent made sense as a command:

if message[0] == 'watch' and len(message) > 1:
    addword(jid, message[1])
    reply = "ok, watching for " + message[1]
elif message[0] == 'ignore' and len(message) > 1:
    delword(jid, message[1])
    reply = "ok, now ignoring " + message[1]
elif message[0] == 'list':
    if keywords.has_key(jid):
        reply = "watching for: " + join(keywords[jid].keys(), ", ")
elif message[0] == 'stop':
    if keywords.has_key(jid):
        del keywords[jid]
    reply = "ok, I've stopped watching"
We go through a series of checks, taking appropriate action for the supported commands:
- watch: Watch for a particular word or phrase.
- ignore: Stop watching for a particular word or phrase.
- list: List the words and phrases currently being watched.
- stop: Stop watching altogether; remove the list of words and phrases.

The addword() and delword() functions defined earlier are used here, as well as other simpler functions; one that lists the words and phrases for a particular JID:
keywords[jid].keys()
and one that removes them:
del keywords[jid]
If there was something recognizable for the script to do, we get it to reply appropriately:
if reply:
    con.send(msg.build_reply(reply))
The build_reply() function creates a reply out of a message object by setting to to the value of the original <message/> element's from attribute and preserving the element type attribute and <thread/> tag, if present. The <body/> of the reply object (which is just a <message/> element) is set to whatever is passed in the function call; in this case, it's the text in the reply variable.
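In effect, build_reply() swaps the addressing and carries the conversation context over. The behaviour can be sketched with plain dictionaries (the field names here are illustrative; they are not the Jabberpy API):

```python
def build_reply(msg, body):
    # Address the reply to the original sender, preserving type and thread
    reply = {'to': msg['from'], 'body': body}
    if 'type' in msg:
        reply['type'] = msg['type']
    if 'thread' in msg:
        reply['thread'] = msg['thread']
    return reply

incoming = {'from': 'cellar@conf.merlix.dyndns.org/flash', 'type': 'chat',
            'thread': 'jarl1998911094', 'body': 'Is that you, qmacro?'}
print(build_reply(incoming, 'yes!'))
```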
Now that we've dealt with incoming commands, we need another section in the message callback subroutine to scan for the words and phrases. The target texts for this scanning will be the snippets of room conversation, which arrive at the callback in the form of groupchat-type <message/> elements:
# Scan room talk
if type == 'groupchat':
    message = msg.getBody()
The message variable holds the string we need to scan; it's just a case of checking for each of the words or phrases on behalf of each of the users who have asked:
for jid in keywords.keys():
    for word in keywords[jid].keys():
        if find(message, word) >= 0:
            con.send(jabber.Message(jid, word + ": " + message))
If we get a hit, we construct a new Message object, passing the JID of the person for whom the string has matched (in the jid variable) and the notification, consisting of the word or phrase that was found (in word) and the context in which it was found (the sentence uttered, in message). Once found and constructed, the <message/> is sent to that user. By default, the Message constructor specifies no type attribute, so the user is sent a "normal" message.
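The scan is a plain substring search, one per watcher per word: find() returns -1 when the word is absent, so any result >= 0 counts as a hit. A self-contained sketch of the same double loop (sample JIDs and words taken from Example 9-3):

```python
keywords = {
    'dj@gnu.pipetree.com/home': {'http:': 1, 'ftp:': 1},
    'piers@jabber.org/work': {'Perl': 1},
}

def hits(message):
    # Collect (jid, word) for every watched word found in the room message
    return [(jid, word)
            for jid, words in keywords.items()
            for word in words
            if message.find(word) >= 0]

print(hits('grab it from http://example.org with Perl'))
```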
Presence callback
Having dealt with the incoming <message/> elements, we turn to the <presence/> elements. Most of those we receive in this conference room will be notifications from people entering and leaving the room, as shown in Example 9-2. We want to perform housekeeping on our keywords dictionary so the entries don't become stale. We also want to deal with the potential problem of conflicting nicknames. Let's look at that first.
We want to check for the possibility of nickname conflict problems that may occur when we enter the room, and the chosen nickname (flash) is already taken.
Remembering that a conflict notification will look something like this:
<presence to='qmacro@jabber.com/jarltk'
    from='cellar@conf.merlix.dyndns.org/flash' type='error'>
  <error code='409'>Conflict</error>
</presence>
we test for the receipt of a <presence/> element with the following:

if str(prs.getFrom()) == roomjid and prs.getType() == 'error':
The <presence/> element will appear to be sent from the JID that we constructed for the initial room entry negotiation (in the roomjid variable further down in the script); for example:
jdev@conference.jabber.org/kassist
We compare this value to the value of the incoming <presence/>'s from attribute, and also make sure that the type attribute is set to error. If it is, we want to extract the details from the <error/> tag that will be contained as a direct child of the <presence/>. The Jabberpy library currently doesn't offer a direct high-level function to get at this tag from the Presence object (in prs), but we can strip away the presence object "mantle" and get at the underlying object, which is a neutral "node": a Jabber element, or XML fragment, without any preconceived ideas of what it is (and therefore without any accompanying high-level methods such as getBody() or setPriority()).
{{Note|If this seems a little cryptic, just think of it like this: each of the Presence, Message, and IQ classes is a subclass of the base class Protocol, which represents elements generically.}}

The asNode() method gives us what we need: a Protocol object representation of the <presence/> element. From this we can get to the <error/> tag and its contents. If we find that we do have a nickname conflict, we abort by disconnecting from the Jabber server and ending the script.
The general idea is that this script will run indefinitely and notify the users on a continuous basis, so we need to do a spot of keyword housekeeping. No presence subscription relationships are built (mostly to keep the script small and simple; you could adapt the mechanism from the recipe in Section 8.3 if you wanted to make this script sensitive to presence), so notifications will get queued up for the user if he is offline with the use of the mod_offline module of the Jabber Session Manager (JSM). This makes a lot of sense for the most part; however, we still want to have the script send notifications even if the user is offline. Additionally, a command could be sent to the script to watch for a keyword or phrase from a user within the room. We would receive the command from a JID like this:
jdev@conference.jabber.org/nickname
This is a transient JID, in that it represents a user's presence in the jdev room for a particular session. If a word is spotted by the script hours or days later, there's a good chance that the user has left the room, making the JID invalid as a recipient. Although the JID is technically valid and will reach the conferencing component, there will be no real user JID that it is paired up with. Potentially worse, the room occupant's identity JID may be assigned to someone else at a later stage, if the original user left and a new user entered choosing the same nickname the original user had chosen. See the upcoming sidebar titled "Transient and Nonexistent JIDs" for a short discussion of the difference between a transient JID and a nonexistent JID.
So as soon as we notice a user leave the room we're in, which will be indicated through a <presence/> element conveying that occupant's unavailability, we should remove any watched-for words and phrases from the dictionary:
# Remove keyword list for groupchat correspondent
if prs.getType() == 'unavailable':
    jid = str(prs.getFrom())
    if keywords.has_key(jid):
        del keywords[jid]
As before, we obtain the string representation of the JID using the str() function on the JID object that represents the presence element's sender, obtained via the getFrom() method.
{{Sidebar|Transient and Nonexistent JIDs
What happens when you send a message to a "transient" conference room JID? Superficially, the same as when you send one to a nonexistent JID. But there are some subtle differences.
A transient JID is one that reflects a user's alternate identity in the context of the Conferencing component. When you construct and send a message to a conference transient JID, it goes first to the conference component because of the hostname in the JID that identifies that component, for example:
jdev@conference.jabber.org/qmacro
The hostname conference.jabber.org is what the jabberd backbone uses to route the element. As mentioned earlier, the Conferencing component will relay a message to the real JID that belongs to the user currently in a room hosted by that component.
While the component itself is usually persistent, the room occupants (and so their transient JIDs) are not. When a message is sent to the JID jdev@conference.jabber.org/qmacro and there is no room occupant in the jdev room with the nickname qmacro, the message will still reach its first destinationÃÂâÃÂÃÂÃÂÃÂthe componentÃÂâÃÂÃÂÃÂÃÂbut be rejected at that stage, as shown in Example 9-5.
A message to a nonexistent transient JID is rejected
SEND: <message to='jdev@conference.jabber.org/qmacro'>
  <body>Hello there</body>
</message>

RECV: <message to='dj@gnu.mine.nu/jarl'
      from='jdev@conference.jabber.org/qmacro' type='error'>
  <body>Hello there</body>
  <error code='404'>Not Found</error>
</message>
Although the rejection (the "Not Found" error) is the same as if a message had been sent to a JSM user that didn't exist, the difference is that the transient user always had the potential to exist, whereas the JSM user never did. Of course, if the JID referred to a nonexistent Jabber server, then the error returned wouldn't be a "Not Found" error 404, but an "Unable to resolve hostname" error 502.
}}
The main script
Now that we have the subroutines and callbacks set up, all we need to do is define the Jabber server and room information:
Server = 'gnu.mine.nu'
Username = 'kassist'
Password = 'pass'
Resource = 'py'
Room = 'jdev'
ConfServ = 'conference.jabber.org'
Nick = 'kassist'
The kassist user can be set up simply by using the reguser script presented in Section 7.4:
$ ./reguser gnu.mine.nu username=kassist password=pass
[Attempt] (kassist) Successful registration
$
In the same way as in previous recipes' scripts, a connection attempt is made, followed by an authentication attempt:

con = jabber.Client(host=Server, debug=0, log=0)
con.connect()
con.auth(Username, Password, Resource)
Then the message and presence callbacks messageCB() and presenceCB() are registered with the connection object in con:
con.setMessageHandler(messageCB)
con.setPresenceHandler(presenceCB)
After sending initial presence, informing the JSM (and anyone who might be subscribed to kassist's presence) of the assistant's availability:
con.send(jabber.Presence())
we also construct (from the Room, ConfServ, and Nick variables) and send the <presence/> element for negotiating entry to the jdev room hosted by the Conferencing component at conference.jabber.org:
roomjid = Room + '@' + ConfServ + '/' + Nick
print "Joining " + Room
con.send(jabber.Presence(to=roomjid))
The con.send() function will send a <presence/> element that looks like this:
SEND: <presence to='jdev@conference.jabber.org/kassist'/>
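The roomjid is plain string assembly, which is easy to check in isolation:

```python
Room = 'jdev'
ConfServ = 'conference.jabber.org'
Nick = 'kassist'

# room@conference-host/nickname is the JID used to negotiate room entry
roomjid = Room + '@' + ConfServ + '/' + Nick
print(roomjid)  # jdev@conference.jabber.org/kassist
```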
We're sending available presence to the room, to negotiate entry, but what about the initial presence? Why do we send that too if there are no users who will be subscribed to the kassist JID? If no initial presence is sent, the JSM will merely store up any <message/> elements destined for kassist, as it will think the JID is offline.
The processing loop
Once everything has been set up, we simply need to have the script sit back and wait for incoming packets and handle them appropriately. For this, we simply call the process() function every 5 seconds to look for elements arriving on the XML stream:
while(1):
    con.process(5)
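The process()-plus-callbacks pattern can be sketched without a network connection: a tiny dispatcher queues incoming stanzas and hands each one to the handler registered for its kind. The class and method names below are illustrative stand-ins, not the Jabberpy API:

```python
from collections import deque

class FakeClient:
    """Stand-in for the connection object: stanzas are queued locally and
    process() dispatches them to registered handlers, as con.process() would."""
    def __init__(self):
        self.handlers = {}
        self.inbox = deque()

    def set_handler(self, kind, fn):
        self.handlers[kind] = fn

    def process(self):
        # Drain the queue, routing each stanza to its handler
        while self.inbox:
            kind, stanza = self.inbox.popleft()
            if kind in self.handlers:
                self.handlers[kind](self, stanza)

seen = []
con = FakeClient()
con.set_handler('message', lambda c, s: seen.append('msg:' + s))
con.set_handler('presence', lambda c, s: seen.append('prs:' + s))
con.inbox.extend([('message', 'hello'), ('presence', 'unavailable')])
con.process()
print(seen)  # ['msg:hello', 'prs:unavailable']
```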
Connecting Devices to Jabber
LEGO MINDSTORMS. What a great reason to dig out that box of LEGO bricks you haven't touched in years. When I found out that LEGO was bringing out a programmable brick, the RCX,[2] I went to my favorite toy shop and purchased the set. In addition to the RCX (shown in Figure 9-1), the MINDSTORMS set comes with an infrared (IR) port and an IR tower, which you can connect to the serial port of your PC, a battery compartment,[3] motors, touch and light sensors, and various LEGO Technic parts.
There are plenty of ways to interact with the RCX. The MINDSTORMS Robotics Invention System (RIS)™ set comes with Windows software with which you can build programs by moving blocks of logic around graphically on the screen and chaining them together. In addition, various efforts on the parts of talented individuals have come up with many different ways to program the RCX. The Unofficial Guide to LEGO® MINDSTORMS™ Robots (O'Reilly & Associates, Inc., 1999) tells you all you need to know about programming the RCX. What's important to know for this recipe is detailed in Programming the RCX.
{{Sidebar|title=Programming the RCX}}
There are two approaches to programming the RCX. One approach is to write a program on your PC, download it to the RCX, and start and stop the program using the buttons on the RCX itself.
The other approach is to control the RCX directly from a program that you write and execute on your PC, sending control signals and receiving sensor values over the IR connection.
Both approaches have their merits. How appropriate each one is boils down to one thing: connections. On the one hand, building autonomous machines that find their way around the kitchen to scare the cat and bring you a sandwich calls for the first approach, when, once you've downloaded the program to the RCX, you can dispense with any further connection with your PC because the entire logic is situated in your creation. On the other hand, if you want to build a physical extension to a larger system that, for example, has a connection to the Internet, the second approach is likely to be more fruitful, because you can essentially use the program that runs on your PC and talks to the RCX over the IR link as a conduit, a proxy of sorts, to other programs and systems that can be reached over the network. We're going to use the second approach.
The RIS software that comes as standard centers around an ActiveX control. While there are plenty of ways to talk to the RCX without using this control (the book mentioned earlier describes many of these ways), the features offered by the control, Spirit.ocx, are fine for many a project. And with Perl's Win32::OLE module, we can interact with this ActiveX control without having to resort to Visual Basic.
What We're Going to Do
Everyone knows that one of the virtues of a programmer is laziness. We're going to extend this virtue (perhaps a little too far) and enhance it with a hacker's innate ability to combine two favorite pastimes, programming and playing with LEGO, to build contrived but fun devices.
Often being a key part of a programmer's intake, coffee figures highly on the daily agenda. It's important to have a good cup of coffee to keep the brain cells firing, but it's even more important to know whether there's actually any coffee left in the pot. Going over to the coffeepot to find out is time away from the keyboard and therefore time wasted. So let's put the RCX to good use and build a device to tell us, via Jabber, whether the coffeepot has enough for another cup.
In building the device, a light sensor was connected to the RCX to "see" the level of coffee in the pot. Since the coffeepot is made of glass, light passes through it unless the coffee gets in the way, thus creating a simple binary switch:
- No (or a small amount of) light measured: there's coffee in the pot.
- Some (or a larger amount of) light: there's no coffee in the pot.

We want to be able to send the availability of coffee to all interested parties in a way that their off-the-shelf Jabber clients can easily understand and display.
Figure 9-2 shows the LEGO MINDSTORMS device in action. The brick mounted on the gantry is the light sensor, which extends to the glass coffeepot; a wire runs from it to the connector on the RCX. Behind the RCX is the IR tower, which is connected to the PC.
Remembering that <presence/> elements are a simple way of broadcasting information about availability and that they contain a <status/> tag to describe the detail or context of that availability (see Section 5.4.2 for details on the <presence/> element), we have a perfect mechanism that's ready to be used. What's more, most, if not all, of the off-the-shelf Jabber client implementations will display the content of the <status/> tag in the client user's roster next to the JID to which it applies. Figure 9-3 shows how the content of the <status/> tag is displayed as a hovering "tooltip" in WinJab.
Here's what we need to do:
- Step 1: Set up the RCX
  We need to set the RCX up, with the light sensor, so that it's close enough to the coffeepot to take reliable and consistent light readings. Luckily the serial cable that comes with the MINDSTORMS set and connects to the IR tower is long enough to stretch from the computer to within the infrared line of sight to the RCX.
- Step 2: Make the correct calibrations
  There are bound to be differences in ambient light, sensitivity of the light sensor, and how strong you make your coffee. So we need a way of calibrating the setup, so that we can find the appropriate "pivot point" light reading value that lies between the two states of coffee and no coffee.
- Step 3: Set up a connection to Jabber
  We need a connection to a Jabber server and a client account there. We can set one up using the reguser script from Section 7.4. We also need the script to honor presence subscription requests from users who want to be informed of the coffee state.
- Step 4: Set up a sensor poll/presence push loop
  Once the RCX has been set up, the sensor calibrations taken, and the connection has been made, we need to monitor the light sensor on the RCX at regular intervals. At each interval, we determine the coffee state by comparing the value received from the sensor with the pivot point determined in the calibration step and send any change in that state as a new availability <presence/> element containing an appropriate description in the <status/> tag.
The Coffee Script
We're going to use Perl and the Net::Jabber libraries to build the script shown in Example 9-6. Perl allows us a comfortable way to interact with an ActiveX control, through the Win32::OLE module, so let's have a look at the coffee script as a whole, then we'll go back and look at the script in detail.
The coffee script, written in Perl
use Net::Jabber qw(Client);
use Win32::OLE;
use Getopt::Std;
use strict;
my %opts;
getopt('ls', \%opts);
my $current_status = -1;
my @status;
$status[NOCOFFEE] = 'xa/Coffeepot is empty';
$status[COFFEE]   = '/Coffee is available!';
my $rcx = &setup_RCX(SENSOR);
# Either calibrate if no parameters given, or run with the
# parameter given as -l, which will be taken as the pivot
# between coffee and no coffee
&calibrate($rcx) unless defined($opts{'l'});
# Determine initial status (will be either 0 or 1)
my $s = &set_status($rcx->Poll(9, SENSOR));
my $jabber = &setup_Jabber(SERVER, PORT, USERNAME, PASSWORD, RESOURCE, $s);
# Main loop: check Jabber and RCX
while (1) {
    defined($jabber->Process(GRAIN))
        or die "The connection to the Jabber server was broken\n";
    my $s = &set_status($rcx->Poll(9, SENSOR));
    &set_presence($jabber, $s) if defined $s;
}
# Set up Jabber client connection, sending initial presence
sub setup_Jabber {
    ...
}
sub set_presence {
    my ($connection, $s) = @_;
    my $presence = Net::Jabber::Presence->new();
    my ($show, $status) = split("/", $status[$s], 2);
    $presence->SetPresence( show => $show, status => $status );
    print $status, "\n";
    $connection->Send($presence);
}
# ... (InPresence, setup_RCX, and calibrate subroutines elided;
# they are described in the step-by-step walkthrough below) ...
sub set_status {
    my $val = shift;
    my $new_status = $val < $opts{'l'} ? COFFEE : NOCOFFEE;
    if ($new_status != $current_status) {
        $current_status = $new_status;
        return $current_status;
    } else {
        return undef;
    }
}
Examining the Coffee Script Step by Step
Now that we've seen the coffee script as a whole, let's examine it step by step to see how it works.
Declaring the modules, constants, and variables
We first declare the packages we're going to use. In addition to Net::Jabber and Win32::OLE, we're going to use Getopt::Std, which affords us a comfortable way of accepting and parsing command-line options. We also want to use the strict pragma, which should keep us from making silly coding mistakes by not allowing undeclared variables and the like.
We specify Client on the usage declaration for the Net::Jabber package to specify what should be loaded. The package is a large and comprehensive set of modules, and only some of those are relevant for what we wish to do in the script: build and work with a Jabber client connection. Other module sets are pulled in by specifying Component or Server.
use Net::Jabber qw(Client);
use Win32::OLE;
use Getopt::Std;
use strict;
We're going to allow the command-line options -l and -s, which perform the following tasks:
- No options specified (or just the -s option): run in calibration mode.
  When we run the script for the first time, we need to perform the calibration and read values from the sensor to determine a midpoint value. A number above the midpoint signifies the presence of light and therefore the absence of coffee; below signifies the absence of light and therefore the presence of coffee. This step is necessary because not every environment (ambient light, sensitivity of the light sensor, and so on) will be the same. The upper and lower values, representing lightness and darkness, respectively, will vary across different environments. The point is to obtain a value in between these upper and lower values (the midpoint) with which we can compare a light value read at any particular time. If we don't specify any options, the script will start up automatically in calibration mode:

  C:\temp> perl coffee.pl

  Figure 9-4 shows the script run in calibration mode. The values displayed, one each second, represent the values read from the light sensor. When the sensor was picking up lots of light, the values were around 60. When the sensor was moved in front of some coffee, the values went down to around 45. Based upon this small test, the pivot point value was 50, somewhere in between those two values.
- -l: specify the pivot value.
  Once we've determined a pivot point value, we run the script and tell it this pivot value with the -l (light pivot) option:

  C:\temp> perl coffee.pl -l 50
- -s: specify the sensor number.
  The RCX, shown in Figure 9-1, has three connectors to which you can attach sensors. They're the three gray 2-by-2 pieces, labeled 1, 2, and 3, near the top of the brick. The script assumes you've attached the light sensor to the one marked 1, which internally is 0. If you attach it to either of the other two, you can specify the connector using the -s (sensor) option with a value of 1 (for the middle connector) or 2 (for the rightmost connector), like this:

  C:\temp> perl coffee.pl -l 50 -s 2

  You can specify the -s option when running in calibration or normal modes.
{{Figure|title=Running coffee in calibration mode|image=0596002025-jab_0904.png}}

The options, summarized in Table 9-1, are defined with the Getopt::Std function:
my %opts;
getopt('ls', \%opts);
Next comes a list of constants, which describe:

- The script's Jabber relationship, including the server it will connect to and the username, password, and resource it will connect with.
- The representation of the two states of coffee and no coffee, which will be used to determine the content of the <status/> tag sent along inside any <presence/> element emitted.
- The identification of the connector to which the light sensor is attached and the polling granularity of the sensor poll/presence push loop described earlier. This last item is measured in seconds.
The last part of the script's setup deals with the coffee state:
my $current_status = -1;
my @status;
$status[NOCOFFEE] = 'xa/Coffeepot is empty';
$status[COFFEE]   = '/Coffee is available!';
We use a two-element array (@status) to represent the two possible coffee states. The value of each array element is a two-part string, with each part separated by a slash (/). Each of these parts will be transmitted in a <presence/> element, with the first part (which is empty in the element representing the COFFEE state) representing the presence <show/> value and the second part representing the presence <status/> value. Example 9-7 shows what a <presence/> element looks like when built up with values to represent the NOCOFFEE state.
A presence element representing the NOCOFFEE state
<presence>
  <show>xa</show>
  <status>Coffeepot is empty</status>
</presence>
Most Jabber clients use different icons in the roster to represent different <show/> values. In this case, we will use xa for no coffee and a blank (which represents "online" or "available") for coffee to trigger the icon change.
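The two-part encoding can be demonstrated with a small Python sketch (the script itself is Perl; this only illustrates the split on the first slash):

```python
# Sketch of the script's two-part state encoding: "<show>/<status>",
# split on the first slash only. Index 0 is NOCOFFEE, index 1 is COFFEE,
# matching the Perl @status array.
status = ['xa/Coffeepot is empty', '/Coffee is available!']

def presence_parts(state):
    show, text = status[state].split('/', 1)
    return show, text

# NOCOFFEE yields show 'xa'; COFFEE yields an empty <show/> ("available").
```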
Initialization and calibration
Whenever we need to talk to the RCX, some initialization is required via the ActiveX control. That's the same whether we're going to calibrate or poll for values. The setup_RCX() function takes a single argument, the identification of which connector the light sensor is connected to, and performs the initialization, which is described later in Section 9.2.3.8. The function returns a handle on the Win32::OLE object that represents the ActiveX control, which in turn represents the RCX via the IR tower:
my $rcx = &setup_RCX(SENSOR);
If the -l option is not specified, it means we're going to be running calibration. So we call the calibrate() function to do this for us. We pass the RCX handle (in $rcx) so the calibration can run properly:
# Either calibrate if no parameters given, or run with the
# parameter given as -l, which will be taken as the pivot
# between coffee and no coffee
&calibrate($rcx) unless defined($opts{'l'});
As with the setup_RCX() function, calibrate() is described later.
Calibration mode will be terminated by ending the script with Ctrl-C, so the next thing we come across is the call to the function set_status(), which represents the first stage in the normal script mode; set_status() is used to determine the initial coffee status.
A value is retrieved by calling the ActiveX control's Poll() function. (Table 9-2 lists the ActiveX control's functions and properties used in this script.) We specify that we're after a sensor value (the 9 as the first argument) from the sensor attached to the connector indicated by the SENSOR constant:
# Determine initial status (will be either 0 or 1)
my $s = &set_status($rcx->Poll(9, SENSOR));
The value retrieved will be something along the lines of one of the values displayed when the script was run in calibration mode. It is passed to the set_status() function, which determines whether it is above or below the pivot value and whether the new status differs from the current one. If it does (and in this first call it will, because $current_status starts at -1, which represents neither the COFFEE nor the NOCOFFEE state), that status will be returned; otherwise, undef will be returned.
Connecting to the Jabber server
At this stage, we're ready to connect to the Jabber server. The call to setup_Jabber() does this for us, returning a handle to the Jabber connection object that we store in $jabber. This handle will be used later in the script to send out <presence/> elements. The $jabber variable contains a reference to a Net::Jabber::Client object. This is the equivalent of the con variable used in the earlier Python scripts to hold the jabber.Client object and the ConnectionBean object (cb) in the earlier Java script.[4]
my $jabber = &setup_Jabber(SERVER, PORT, USERNAME, PASSWORD, RESOURCE, $s);
In addition to passing the constants needed for the client connection to the Jabber server, we pass the initial coffee status, held in $s. We'll have a look at what the setup_Jabber() function does with this initial status a bit later when we get to the function's definition.
Sensor poll/presence push loop
Now that we've set everything up, determined the initial coffee status, and connected to the Jabber server, we're ready to start the main loop:
# Main loop: check Jabber and RCX
while (1) {
    defined($jabber->Process(GRAIN))
        or die "The connection to the Jabber server was broken\n";
    my $s = &set_status($rcx->Poll(9, SENSOR));
    &set_presence($jabber, $s) if defined $s;
}
The while (1) loop is a bit of a giveaway. This script won't stop until you force it to by entering Ctrl-C, but that's essentially what we want. In the loop, we call the Process() method on the Jabber connection object in $jabber.
Process() is the equivalent of the Jabberpy's process() method in the Python scripts. Process() waits around for up to the number of seconds specified as the single argument (or not at all if no argument is specified) for XML to appear on the stream connection from the Jabber server. If complete fragments do appear, callbacks, defined in the connection object, are called with the elements (<iq/>, <message/>, and <presence/>) that the fragments represent. This is in the same way as, for example, callbacks are used in the Python scripts using the Jabberpy library. The setup_Jabber(), which will be discussed in the next section, is where the callback definition is made.
Net::Jabber's Process() method returns undef if the connection to the Jabber server is terminated while waiting for XML. The undef value is dealt with appropriately by ending the script.
The GRAIN constant, set to 1 second in the script's setup section, is used to specify how long to wait for any packets from the Jabber server. For the most part, we're not expecting to receive much incoming Jabber traffic: the occasional presence subscription (or unsubscription) request perhaps (see later), but other than that, the only packets traveling over the connection to the Jabber server will be availability <presence/> packets representing coffee state changes, sent from the script. This delay is normally set to 1 second. And because that's a comfortable polling interval for the light sensor, we can set that within the same loop.
Calling the ActiveX control's Poll() again with the same arguments as before ("get a sensor value from the sensor attached to the SENSORth connector"), we pass the value to the set_status() to determine the coffee state. If the state was different from last time (if $s receives a value and not undef), then we want to emit a <presence/> element to reflect that state. We achieve this by calling the set_presence() function, passing it the connection object and the state.
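The poll/compare/push cycle just described can be sketched in Python, with the RCX poll and the Jabber send mocked out; the names and the pivot value of 50 are illustrative assumptions, not the book's code:

```python
# Mocked-up sketch of the sensor poll/presence push loop: only state
# *changes* cause a presence push. The RCX poll is replaced by a list of
# readings and the Jabber send by a callback.
COFFEE, NOCOFFEE = 1, 0
PIVOT = 50

def run_loop(readings, send):
    current = -1  # neither state yet, so the first reading always pushes
    for val in readings:
        new = COFFEE if val < PIVOT else NOCOFFEE
        if new != current:
            current = new
            send(current)

pushed = []
run_loop([45, 44, 46, 60, 61, 44], pushed.append)
print(pushed)  # -> [1, 0, 1]: coffee, then empty, then coffee again
```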
The setup_Jabber() function
Here we define the setup_Jabber() function, which is called to set up the connection to the Jabber server and authenticate with a predefined user:
# Set up Jabber client connection, sending initial presence
sub setup_Jabber {
    ...
}
First, we instantiate a new Net::Jabber::Client object. Net::Jabber distinguishes between client- and component-based connections to Jabber; the component-based equivalent of this class is Net::Jabber::Component. The Connect() method is passed arguments that specify the hostname and port of the Jabber server to connect to. It returns a 0 status if the connection could not be made.
We can register handlers for Jabber elements received over the XML stream carried by the connection we just made. Here we are interested in incoming presence subscription or unsubscription requests, as we'll see in the definition of the InPresence() function.
The single method, SetCallBacks(), does what the collective jabber.Client methods setPresenceHandler(), setMessageHandler(), and setIqHandler() do in a single call, taking a list of element types and subroutine references, in the form of a hash.
After registering the callback for <presence/> elements, it's time to authenticate, passing the username, password, and resource defined in the list of constants at the start of the script. If authentication is successful, the result of the call to the AuthSend() method is a single string with the value ok. If not, that value is replaced with an error code and the descriptive text is available in a further string. (This is why we catch the results of a call in an array, called @result.) A complete list of Jabber error codes and texts can be found in Table 5-3.
Why RosterGet()? We're not subscribing to anyone, and we're not really interested in anything but the values we're polling from our brick. So theoretically there's no reason to make a request to retrieve our roster from the server. However, because we want the script to receive and process subscription and unsubscription requests, we need to request the roster beforehand; otherwise, the JSM won't send such requests to us. See Section 8.3.3.4 in Chapter 8 for an explanation as to why.
Once we've requested the roster, so as to receive presence subscription and unsubscription requests, the job is almost done. The last thing to do in setting up the Jabber connection is to send initial availability information. The setup_Jabber() function receives the initial coffee status as the last argument in the call (in $initial_status), which it passes on to the function that sends a <presence/> element, set_presence(). Along with the initial coffee status, we also send the $connection object that represents the connection to the Jabber server that we've just established (referred to outside of this function with the $jabber variable). This is so the set_presence() function can use the connection handle to send the element down the stream.
The set_presence() function
This function is used by setup_Jabber() to send the script's (and therefore the coffee's) initial presence. It's also used within the main sensor poll/presence push loop to send further presence packets if the coffee's state changes.
sub set_presence {
    my ($connection, $s) = @_;
    my $presence = Net::Jabber::Presence->new();
    my ($show, $status) = split("/", $status[$s], 2);
    $presence->SetPresence( show => $show, status => $status );
    print $status, "\n";
    $connection->Send($presence);
}
On receipt of the Jabber connection object and the coffee status, which will be 0 (NOCOFFEE) or 1 (COFFEE), set_presence() constructs a new Net::Jabber::Presence object. This object represents a <presence/> element, upon which we can make method calls to hone the element as we wish. SetPresence() is one of these methods, with which we can set values for each of the <show/> and <status/> tags. We retrieve the values for each of these tags by pulling the strings from the appropriate member of the @status array, as described earlier in Section 9.2.3.1.
We print the coffee's status (remember, this function is called only when the status changes, not every time the sensor is polled) and send the newly built <presence/> element down the XML stream to the Jabber server. This is accomplished by passing the presence object as an argument to the Send() method of the connection object in $connection. This works in the same way as the send() function in Jabberpy and the send() function in JabberBeans. Everyone who has subscribed to the script user's presence, and who is available, will receive the coffee status information.
Figure 9-3 shows the status information received in the WinJab client. The string sent in the <status/> tag is shown in the tooltip that appears when the mouse hovers over the "coffee" roster item.
The InPresence() subroutine
Our presence handler, the callback subroutine InPresence(), honors requests for subscription and unsubscription to the script user's (and therefore the coffee's) presence. This callback is designed to work in the same way as the presenceCB() callback in the Python recipe described in Section 8.3.
However, while the Python Jabberpy library hands to the callbacks a jabber.Client object and the element to be handled, the Perl Net::Jabber library hands over a session ID and the element to be handled. Don't worry about the session ID here; it's related to functionality for building Jabber servers, not clients, and we can and should ignore it for the purposes of this recipe. What is important is the element to be handled, which appears as the second argument passed to the subroutine collected by the $presence variable from $_[1].
What is common between the two libraries is that the element that is passed to be handled as the subject of the callback is an instance of the class that the callback represents. In other words, a callback is used to handle <presence/> elements, and the element received is an instance of the Net::Jabber::Presence class (just as the element received by a Jabberpy presence callback is an instance of the jabber.Presence class).
# Handle presence subscription and unsubscription requests
sub InPresence {
    ...
}
With an object in $presence, we can get information from the element using data retrieval methods such as those used here: GetFrom() and GetType(), which extract the values from the from and type attributes of the <presence/> element, respectively.
If the <presence/> element type represents a subscription request (type='subscribe'), we unquestioningly honor the request, by sending back an affirmative reply. The Reply() method of the presence object is one of a number of high-level functions that make it possible to turn elements around and send them back. In this case, the method replaces the value of the <presence/>'s to attribute with the value of the from attribute, and preserves its id. It also allows us to pass arguments as if we were calling the SetPresence() method described earlier. Rather than set the <show/> and <status/> tags as we did earlier in the set_presence() function, we merely set the element's type attribute to subscribed or unsubscribed, depending on the request.
So, with an incoming <presence/> element in $presence that looks like this:
<presence from='qmacro@jabber.org/office' type='subscribe'
          to='coffee@merlix.dyndns.org' id='21'/>
calling the Reply() method would cause the element in $presence to change to this:
<presence to='qmacro@jabber.org/office' type='subscribed'
          id='21'/>
Remember, the from attribute on elements originating from the client is set by the server, not by the client. The script doesn't ask for a subscription to the user's presence in return. The script isn't interested in whether the people who have subscribed to its presence are available; its purpose is to let people know whether there's any coffee left in the pot.
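The effect of Reply() can be illustrated with a small Python sketch that models the element as a dictionary (an illustration only, not Net::Jabber's implementation, which works on presence objects):

```python
# Dictionary-based illustration of what Reply() does to a subscription
# request: address the reply to the requester, preserve the id, and set
# the new type. The from attribute is left for the server to set.
def reply(presence, new_type):
    return {'to': presence['from'], 'type': new_type, 'id': presence['id']}

incoming = {'from': 'qmacro@jabber.org/office', 'type': 'subscribe',
            'to': 'coffee@merlix.dyndns.org', 'id': '21'}
print(reply(incoming, 'subscribed'))
# -> {'to': 'qmacro@jabber.org/office', 'type': 'subscribed', 'id': '21'}
```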
The setup_RCX() function
This function is called once every time the script is started and is required to initialize the RCX:

sub setup_RCX {
    ...
}
A Win32::OLE object representing the RCX's ActiveX control Spirit is instantiated. A Win32::OLE function is used to suppress warnings, and the RCX is initialized by setting the COM port to COM1 for serial communications. The sensor type and mode are set for the light sensor attached to the connector identified by the value passed into the $sensor variable. Table 9-2 shows us that sensor type 3 represents Light, and sensor mode 2 specifies a Transitional measurement mode, the upshot of which is that the values returned on a poll are all within a certain restricted range, which makes it easier to decide whether there's any coffee in the pot.
We return the Win32::OLE RCX object to be used elsewhere in the script for calibration and polling.
The calibrate() function
The calibrate() function is called if the script is started without the -l option. This function simply prints a message, waits for the user to press Enter, and then goes into a gentle loop, emitting whatever value was polled from the light sensor so the user can determine the pivot point:

sub calibrate {
    ...
}
The output produced from this function can be seen in Figure 9-4.
The set_status() function
The set_status() function receives the latest light value as polled from the sensor and compares it with the pivot value. If the status defined in $new_status is different from the current status (in $current_status), then the current status is updated and returned; otherwise, undef is returned:
sub set_status {
    my $val = shift;
    my $new_status = $val < $opts{'l'} ? COFFEE : NOCOFFEE;
    if ($new_status != $current_status) {
        $current_status = $new_status;
        return $current_status;
    } else {
        return undef;
    }
}
If this function returns a status value, a new <presence/> element is generated and emitted by the script. Otherwise, there's no change ("the coffee's still there," or "there's still no coffee!") and nothing happens.
An RSS News Agent
While the Jabber clients available off the shelf are oriented toward receiving (and sending) messages from other people, the possibilities don't stop there, as is clear from the recipes we've seen already. In this recipe, we're going to build a Jabber component that retrieves news items from various sources on the Web and sends them on to Jabber users who have expressed an interest in receiving them. We're going to use the Web for our news sources, but they could just as easily be sources within a corporate intranet. The key thing is that the sources are available in a readily parseable format.
RSS (RDF[5] Site Summary or, alternatively, Really Simple Syndication) is an XML format used for describing the content of a web site, where that site typically contains news items, diary entries, event information, or generally anything that grows, item by item, over time. A classic application of RSS is to describe a news site such as JabberCentral (). JabberCentral's main page (see Figure 9-5) consists of a number of news items, in the "Recent News" section, about Jabber and its developer community. These items appear in reverse chronological order, and each one is succinct, sharing a common set of properties:
- Title: Each item has a title ("JabberCon Update 11:45am - Aug 20").
- Short description: Each item contains a short piece of text describing the content and context of the news story ("JabberCon Update - Monday Morning").
- Link to main story: The short description should be enough to help the reader decide if he wants to read the whole item. If he does, there's a link ("Read More") to the news item itself.
{{Figure|title=JabberCentral's main page|image=0596002025-jab_0905.png}}

It is this collection of item-level properties that are summarized in an RSS file. The formality of the XML structure makes it a straightforward matter for:
- Automating the retrieval of story summaries for inclusion in other sites (syndication)
- Combining these items with items from other similar sources (aggregation)
- Checking to see whether there is any new content (new items) since the last visit

Example 9-8 shows what the RSS XML for JabberCentral's news items shown in Figure 9-5 looks like.
RSS source for JabberCentral
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE rss PUBLIC "-//Netscape Communications//DTD RSS 0.91//EN" "">
<rss version="0.91">
<channel>
<title>JabberCentral</title>
<description>
JabberCentral is the premiere Jabber end-user news and support site. Many Jabber developers are actively involved at JabberCentral to provide fresh and authoritative information for users.
</description>

<language>en-us</language>
<link></link>
<copyright>Copyright 2001, Aspect Networks</copyright>

<image>
  <url></url>
  <title>JabberCentral</title>
  <link></link>
</image>

<item>
  <title>JabberCon Update 11:45am - Aug 20</title>
  <link>998329970</link>
  <description>JabberCon Update - Monday Morning</description>
</item>

<item>
  <title>Jabcast Promises Secure Jabber Solutions</title>
  <link>998061331</link>
  <description>Jabcast announces their intention to release security plugins with their line of products and services.</description>
</item>
... (more items) ...
</channel>
</rss>
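To make the item-level structure concrete, here is a minimal Python sketch that pulls each item's properties out of an RSS 0.91 document using only the standard library; the embedded sample document and its example.com link are assumptions for illustration:

```python
# Minimal sketch: extract each item's title, link, and description from
# an RSS 0.91 document with the standard library's ElementTree parser.
import xml.etree.ElementTree as ET

rss = """<rss version="0.91"><channel>
  <title>JabberCentral</title>
  <item>
    <title>JabberCon Update 11:45am - Aug 20</title>
    <link>http://www.example.com/news/998329970</link>
    <description>JabberCon Update - Monday Morning</description>
  </item>
</channel></rss>"""

root = ET.fromstring(rss)
items = [(i.findtext('title'), i.findtext('link'), i.findtext('description'))
         for i in root.findall('./channel/item')]
print(items[0][0])  # -> JabberCon Update 11:45am - Aug 20
```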
The structure is very straightforward. Each RSS file describes a channel, which is defined as follows:
- Channel information
- The channel header information includes the channel's title
- (<title/>), short description
- (<description/>), main URL (<link/>),
- and so on. The channel in this case is JabberCentral.
- Channel image: Often RSS information is rendered into HTML to provide a concise "current index" summary of the channel it describes. An image can be used in that summary rendering, and its definition is held in the <image/> section of the file.
- Channel items: The bulk of the RSS file content is made up of the individual <item/> sections, each of which reflects an item on the site that the channel represents. We can see in Example 9-8 that the first <item/> tag:

<item>
  <title>JabberCon Update 11:45am - Aug 20</title>
  <link> </link>
  <description>JabberCon Update - Monday Morning</description>
</item>

describes the most recent news item shown on JabberCentral's main page - "JabberCon Update 11:45am - Aug 20." Each of the news item properties is contained within that <item/> tag: the title (<title/>), short description (<description/>), and link to main story (<link/>).
- Channel interactive feature: There is a possibility for each channel to describe an interactive feature on the site it represents; often this is a search engine fronted by a text input field and Submit button. The interactive feature section of an RSS file is used to describe how that mechanism is to work (the name of the input field and the Submit button and the URL to invoke when the button is clicked, for example). This is so HTML renderings of the site can include the feature otherwise available only on the original site.
{{Note|This interactive feature definition is not shown in the RSS example here.}}
RSS information lends itself very well to various methods of viewing. There are custom "headline viewer" clients available - focused applications that allow you to select from a vast array of RSS sources and have links to items displayed on your desktop (so, yes, the personal newspaper - of sorts - is here!). There are also possibilities for having RSS items scroll by on your desktop control bar.
And then there's Jabber. As described in Section 5.4.1, the Jabber <message/> element can represent something that looks suspiciously like an RSS item. The message type "headline" defines a message that carries news headline information. In this case, the <message/> element itself is usually embellished with an extension, qualified by the jabber:x:oob namespace (described in Section 6.3.8). Example 9-9 shows what the element would look like if the first news item from the JabberCentral site were carried in a headline message.
A headline message carrying a JabberCentral news item
<message type='headline' to='dj@qmacro.dyndns.org'>
  <subject>JabberCon Update 11:45am - Aug 20</subject>
  <body>JabberCon Update - Monday Morning</body>
  <x xmlns='jabber:x:oob'>
    <url> </url>
    <desc>JabberCon Update - Monday Morning</desc>
  </x>
</message>
The jabber:x:oob namespace carries the crucial parts of the RSS item. Clients, such as WinJab and Jarl, can understand this extension and display the content in a clickable list of headlines, each representing a single RSS item, similar to the headline viewer clients mentioned earlier.
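To make the mapping from RSS item to headline message concrete, here is a small Python sketch that builds such a <message/> element. The JID and URL are example values, and a real newsagent would send this over its Jabber connection rather than print it.

```python
import xml.etree.ElementTree as ET

def headline_message(to, title, description, url):
    """Build a Jabber 'headline' <message/> carrying one RSS item."""
    msg = ET.Element("message", type="headline", to=to)
    ET.SubElement(msg, "subject").text = title
    ET.SubElement(msg, "body").text = description
    # Writing xmlns as a plain attribute is fine for illustration;
    # a real client library would manage namespaces itself.
    x = ET.SubElement(msg, "x", xmlns="jabber:x:oob")
    ET.SubElement(x, "url").text = url
    ET.SubElement(x, "desc").text = description
    return ET.tostring(msg, encoding="unicode")

print(headline_message(
    "dj@qmacro.dyndns.org",
    "JabberCon Update 11:45am - Aug 20",
    "JabberCon Update - Monday Morning",
    "http://example.com/998329970",  # made-up URL; the real one was elided
))
```

The <subject/> and <body/> carry the human-readable parts, while the jabber:x:oob extension carries the formal item metadata that headline-aware clients render as a clickable list.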
Of course, we could send RSS items to clients in nonheadline type messages:
<message to='dj@qmacro.dyndns.org'>
  <subject>JabberCon Update 11:45am - Aug 20</subject>
  <body>JabberCon Update - Monday Morning</body>
</message>
where the complete item information is transmitted in a combination of the <subject/> and <body/> tags. This works, but the clients can only display the message, as with any other message. However, if we send formalized metadata, the value of the message content increases enormously. (Figure 9-8 shows Jarl displaying RSS-sourced news headlines.)
Distributing RSS-sourced headlines over Jabber to standard Jabber clients is a great combination of off-the-shelf technologies. In fact, we'll see in the next section that it's not just standard Jabber clients that fit the bill; we'll write a Jabber-based headline viewer to show that not all Jabber clients are, nor should they be, made equal.
Writing the News Agent
We're going to write an RSS news agent, which we'll simply call newsagent. The newsagent is a mechanism that checks predefined sources for new RSS items and sends (or pushes) them to people who are interested in receiving them. For the sake of simplicity, we'll define the list of RSS sources in newsagent itself. See Section 9.3.4 later in this chapter for details on how to further develop this script.
The newsagent script as a component
Until now, the examples we've used, such as cvsmsg, HostAlive, and keyassist (shown in Chapter 8), have all existed as Jabber clients. That is, they've performed a service while connected to the Jabber server via the JSM. There's nothing wrong with this. Indeed, it's more than just fine to build Jabber-based mechanisms using a Jabber client stub connection; that way, your script, through its identity - the user JID - can avail itself of all the IM-related functions that the JSM offers - presence, storage and forwarding of messages, and so on. Perhaps even more interesting is that the mechanism needs only an account, a username, and a password on a Jabber server to be part of the big connected picture.
However, we know from Chapter 4 that there are other entities that connect to Jabber to provide services. These entities are called components. You can look at components as philosophically less "transient" than their client-connected brethren and also closer to the Jabber server in terms of function and connection.
We know from Section 4.1.1 that there are various ways to connect a component: library load, STDIO, and TCP sockets. The first two dictate that the component will be located on the same host as the jabberd backbone to which it connects, although a Jabber server could consist of a collection of jabberds running on separate hosts. The TCP sockets connection type uses a socket connection between the component and the jabberd backbone, over which streamed XML documents are exchanged (in the same way they are exchanged in a client connection). It allows us to run components on any host and connect them to a Jabber server running on another host if we wish. This approach is the most desirable because of the connection flexibility. But it's not just the flexibility that matters: because the component is abstracted away from the Jabber server core libraries, it's up to us to decide how the component should be written. All the component has to do to get the Jabber server to cooperate is to establish the socket connection as described in the component instance configuration, perform an authenticating handshake, and correctly exchange XML stream headers.
Let's review how a TCP socket-based component connects. We'll base the review on what we're actually going to have to do to get newsagent up and running.
First, we have to tell the Jabber server that it is to expect an incoming socket connection attempt, which it is to accept. We do this by defining a component instance definition (or "description" - see Section 4.2.1) for our component. We include this definition in the main Jabber server configuration file, usually called jabber.xml. Example 9-10 shows a component instance definition for the RSS news agent, known as rss.qmacro.dyndns.org.
A component instance definition for the RSS news agent
<service id='rss.qmacro.dyndns.org'>
  <accept>
    <ip>localhost</ip>
    <port>5999</port>
    <secret>secret</secret>
  </accept>
</service>
The name of the host on which the main Jabber server is running is qmacro.dyndns.org; it just so happens that the plan is to run the RSS news agent component on the same host. We give it a unique name (rss.qmacro.dyndns.org) to enable the jabberd backbone, or hub, to distinguish it from other components and to be able to route elements to it.
An alternate way of writing the component instance definition is shown in Example 9-11. The difference is simply in the way we specify the name. In Example 9-10, we specified an id in the <service/> tag with the value rss.qmacro.dyndns.org. In the absence of any <host/> tag specification in the definition, this id value is used by the jabberd routing logic as the identification for the component when determining where elements addressed with that destination should be sent. In Example 9-11, we have an explicit <host/> specification that will be used instead; we simply identify the service with an id attribute value of rss. In this latter case, it doesn't really matter from an addressability point of view what we specify as the value for the id attribute.
An alternative instance definition for the RSS news agent
<service id='rss'>
  <host>rss.qmacro.dyndns.org</host>
  <accept>
    <ip>localhost</ip>
    <port>5999</port>
    <secret>secret</secret>
  </accept>
</service>
The instance definition contains all the information the Jabber server needs. We can tell from the <accept/> tag that this definition describes a TCP sockets connection. The socket connection detail is held in the <ip/> and <port/> tags. In this case, as we're going to run the RSS News Agent component on the same host as the Jabber server itself, we might as well kill two related birds - performance and security - with one stone by specifying the local loopback address in the <ip/> tag:[6]
- Performance: Connecting over the loopback device, as opposed to a real network interface, will give us a slight performance boost.
- Security: Accepting only on the loopback device is a simple security measure that leaves one less port open to the world.
The <secret/> tag holds the secret that the connecting component must present in the authentication handshake. How the secret is specified is described later on in this section.
Now let's look at the component's view of things. It will need to establish a socket connection to 127.0.0.1:5999. Once that connection has been established, jabberd will be expecting it to announce itself by sending its XML document stream header. Example 9-12 shows a typical stream header that the component will need to send.
The RSS component's stream header
SEND: <?xml version='1.0'?>
<stream:stream xmlns='jabber:component:accept'
               xmlns:stream='http://etherx.jabber.org/streams'
               to='rss.qmacro.dyndns.org'>
This matches the description of a Jabber XML stream header (also known as a stream "root," as it's the root tag of the XML document) from Section 5.3. The namespace that is specified as the one qualifying the content of the stream is jabber:component:accept. This namespace matches the component connection method (TCP sockets) and the significant tag name in the component instance definition (<accept/>). Likewise, the namespace jabber:component:exec matches the STDIO component connection method and the significant tag name in its component instance definition format (<exec/>) - see Section 4.1.3.3. The value specified in the to attribute matches the hostname by which the component instance is identified in its definition.
After receiving a valid stream header, jabberd responds with a similar root to head up its own XML document stream going in the opposite direction (from server to component). A typical response to the header (Example 9-12) received from the server by the component is shown in Example 9-13.
The server's stream header reply
RECV: <?xml version='1.0'?>
<stream:stream xmlns:stream='http://etherx.jabber.org/streams'
               xmlns='jabber:component:accept'
               from='rss' id='3B8E3540'>
The stream header sent in response shows that the server is confirming the component instance's identification as rss. This reflects whatever was specified in the <service/> tag's id attribute of the component instance definition. Here, the value of the id attribute was rss as in Example 9-11.
It also contains an ID for the component instance itself (id='3B8E3540'). This ID is a random string shared between both connecting parties; the value is used in the next stage of the connection attempt - the authenticating handshake.
The digest authentication method for clients connecting to the JSM is described in Section 7.3.1.2. This method uses a similar shared random string. On receipt of the server's stream header, the component takes the ID and prepends it onto the secret that it must authenticate itself with. It then creates a NIST SHA-1 message digest (in a hexadecimal format) of that value:
SHA1_HEX(ID + SECRET)
After the digest is created, it is sent in a <handshake/> element as the first XML fragment following the root:
SEND: <handshake id="1">14d437033d7735f893d509c002194be1c69dc500</handshake>
On receipt of this authentication request, jabberd combines the ID value with the value from the <secret/> tag in the component instance definition and performs the same digest algorithm. If the digests match, the component is deemed to have authenticated itself correctly, and it is then sent back an empty <handshake/> tag in confirmation:
<handshake/>
The component may commence sending (and being sent) elements.
If the component sends an invalid handshake value - the secret may be wrong or the digest may not have been calculated correctly - the connection is closed: jabberd sends a stream error, ending the conversation:
RECV: <stream:error>Invalid handshake</stream:error>
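The component's side of the handshake is a one-liner in most languages. Here is a Python sketch using the stream ID and secret from the examples above (NIST SHA-1, hex-encoded); whether the result matches the digest shown earlier depends on the actual ID/secret pair used there.

```python
import hashlib

def handshake_digest(stream_id, secret):
    """SHA1_HEX(ID + SECRET), as sent in the <handshake/> element."""
    return hashlib.sha1((stream_id + secret).encode("utf-8")).hexdigest()

# Stream ID and secret taken from the examples in this section.
print(handshake_digest("3B8E3540", "secret"))
```

The server performs the same computation from its <secret/> configuration value and compares; any mismatch ends the stream.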
Working out who gets what newsfeeds
Definitions of the RSS sources are held within the newsagent itself, but there's no reference to who might want to receive new items from which sources. We need a way for the component to accept requests, from users, that say things like:
"I'd like to have pointers to new items from Slashdot sent to me, please."
or:
"I'd also like pointers to new items on Jon Udell's site, please."
or even:
"Whoa, information overflow! Stop all my feeds!"
There's a common theme that binds together components such as the Jabber User Directory (JUD) and the transports to other IM systems such as Yahoo! and ICQ. This theme is known as registration. We've seen this before in the form of user registration, described in Section 6.5.2. This is the process of creating a new account with the JSM. Registration with a service such as the JUD or an IM transport, however, follows a similar process, and both types of registration have one thing in common: the jabber:iq:register namespace.
The jabber:iq:register namespace is used to qualify the exchange of information during a registration process. The registration process to create a new user account with the JSM uses the jabber:iq:register namespace to qualify registration data exchanged. The registration process with the JSM to modify the account details (name, email address, and so on) also uses jabber:iq:register to qualify the account amendment data exchanged. Both types of registration requests are addressed to the JSM. The difference, which allows the JSM to distinguish between what is being requested, is that no session is active on the stream between client and server in the new user registration process, whereas in the account amendment process, a session is active. This is also mentioned in Section 7.2.2.5 in Chapter 7.
The jabber:iq:register namespace is described in Section 6.2.11 in Chapter 6. It shows us how a typical conversation between requester and responder takes place:
- The client sends an IQ-get: "How do I register?"
- The component sends an IQ-result: "Here's how: follow these instructions to fill in these fields."
- The client then sends an IQ-set with values in the fields: "OK, here's my registration request."
- To which the component responds with another IQ-result: "Looks fine. Your registration details have been stored."
It's clear that this sort of model will lend itself well to the process of allowing users to make requests to receive pointers to new items from RSS sources chosen from a list. Example 9-14 shows this conversational model in Jabber XML. There are many fields that can be used in a registration request; the description in Section 6.2.11 in Chapter 6 includes a few of these - <name/>, <first/>, <last/>, and <email/> - but there are more. We'll take the <text/> field to accept the name of an RSS source when a user attempts to register his interest to receive pointers to new items from that source. The conversational model is shown from the component's perspective.
A registration conversation for RSS sources
"How do I register?"
RECV: <iq type='get' id='JCOM_3' to='rss.qmacro.dyndns.org' from='dj@qmacro.dyndns.org/basement'>
  <query xmlns='jabber:iq:register'/>
</iq>
"Here's how:"
SEND: <iq id='JCOM_3' type='result' to='dj@qmacro.dyndns.org/basement' from='rss.qmacro.dyndns.org'>
  <query xmlns='jabber:iq:register'>
    <instructions>Choose an RSS source from: Slashdot, Jon Udell[, ...]</instructions>
    <text/>
  </query>
</iq>
"OK, here's my registration request:"
RECV: <iq type='set' id='JCOM_5' to='rss.qmacro.dyndns.org' from='dj@qmacro.dyndns.org/basement'>
  <query xmlns='jabber:iq:register'>
    <text>Slashdot</text>
  </query>
</iq>
"Looks fine. Your registration details have been stored."
SEND: <iq id='JCOM_5' type='result' to='dj@qmacro.dyndns.org/basement' from='rss.qmacro.dyndns.org'>
  <query xmlns='jabber:iq:register'>
    <text>Slashdot</text>
  </query>
</iq>
After some time passes...
"Whoa, information overflow! Stop all my feeds!"
RECV: <iq id='JCOM_11' to='rss.qmacro.dyndns.org' type='set' from='dj@qmacro.dyndns.org/basement'>
  <query xmlns='jabber:iq:register'>
    <remove/>
  </query>
</iq>
"OK, you've been removed. All feeds stopped."
SEND: <iq id='JCOM_11' to='dj@qmacro.dyndns.org/basement' type='result' from='rss.qmacro.dyndns.org'>
  <query xmlns='jabber:iq:register'>
    <remove/>
  </query>
</iq>
A lightweight persistent storage system - DataBase Manager (DBM) - is used for the user/source registrations, to keep the script fairly simple.
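The book's newsagent is written in Perl; purely as an illustration of the idea, here is the same registration store sketched with Python's dbm module. The JIDs, source names, and the temporary file path are example values, and sources are stored as a comma-joined string (fine for these simple source names).

```python
import dbm.dumb as dbm  # pure-Python DBM backend, available everywhere
import os
import tempfile

# Hypothetical storage location; a real script would pick its own file.
db_path = os.path.join(tempfile.mkdtemp(), "registrations")

def register(jid, source):
    """Record that `jid` wants new items from `source`."""
    with dbm.open(db_path, "c") as db:
        sources = set(db.get(jid, b"").decode().split(",")) - {""}
        sources.add(source)
        db[jid] = ",".join(sorted(sources)).encode()

def unregister(jid):
    """'Stop all my feeds!' - drop every registration for `jid`."""
    with dbm.open(db_path, "c") as db:
        if jid.encode() in db:
            del db[jid.encode()]

def sources_for(jid):
    """Return the sorted list of sources `jid` has registered for."""
    with dbm.open(db_path, "c") as db:
        return sorted(s for s in db.get(jid, b"").decode().split(",") if s)

register("dj@qmacro.dyndns.org", "Slashdot")
register("dj@qmacro.dyndns.org", "Jon Udell")
print(sources_for("dj@qmacro.dyndns.org"))
```

The IQ-set and <remove/> handlers in the registration conversation map directly onto register() and unregister().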
The bigger question here is: how will the user know he can register to a particular RSS feed? Or more importantly: how can he determine if the RSS News Agent system exists? Most clients, having connected to the server and established a session with the JSM, make a request for a list of agents (old terminology) or services (new terminology) available from the Jabber server with the following IQ-get method:
SEND: <iq id="wjAgents" to="qmacro.dyndns.org" type="get">
  <query xmlns="jabber:iq:agents"/>
</iq>
The response to the request looks like this:
RECV: <iq id='wjAgents' to='dj@qmacro.dyndns.org/basement' type='result' from='qmacro.dyndns.org'>
  <query xmlns='jabber:iq:agents'>
    <agent jid='conf.qmacro.dyndns.org'>
      <name>Public Chatrooms</name>
      <service>public</service>
      <groupchat/>
    </agent>
    <agent jid='users.jabber.org'>
      <name>Jabber User Directory</name>
      <service>jud</service>
      <search/>
      <register/>
    </agent>
  </query>
</iq>
which reflects the contents of the <browse/> section in the JSM configuration as shown in Example 9-15.
The JSM configuration's browse section
<browse>
  <conference type="public" jid="conf.qmacro.dyndns.org" name="Public Chatrooms"/>
  <service type="jud" jid="users.jabber.org" name="Jabber User Directory">
    <ns>jabber:iq:search</ns>
    <ns>jabber:iq:register</ns>
  </service>
</browse>
If we add a stanza that describes the component for the RSS News Agent to the <browse/> section of the JSM configuration:
<service type="rss" jid="rss.qmacro.dyndns.org" name="RSS News Agent">
  <ns>jabber:iq:register</ns>
</service>
we get an extra section in the jabber:iq:agents response from the server:
<agent jid='rss.qmacro.dyndns.org'>
  <name>RSS News Agent</name>
  <service>rss</service>
  <register/>
</agent>
The client-side effect of the agents response is exactly what we're looking for. Figure 9-6 shows WinJab's Agents menu displaying a summary of what it received in response to its jabber:iq:agents query.
We can see that the stanza for the RSS news agent was present in the <browse/> section and the component is faithfully displayed in the agent list, along with Public Chatrooms and Jabber User Directory. In the main window of the screenshot we can see the Supported Namespaces list; it contains the namespace that we specified in the stanza. By specifying:
<ns>jabber:iq:register</ns>
we're effectively telling the client that the component will support a registration conversation.
But that's not all - we've advertised the RSS news agent in the <browse/> section of the configuration for the JSM on the Jabber server running on qmacro.dyndns.org. That's why we got the information about the RSS news agent when we connected as user dj to qmacro.dyndns.org (see the window's titlebar in Figure 9-6). You may have noticed something odd about the definition of the other two agents, or services, in the <browse/> section earlier or in the corresponding jabber:iq:agents IQ response. Let's take a look at this response again, this time with the extra detail about the component:
RECV: <iq id='wjAgents' to='dj@qmacro.dyndns.org/basement' type='result' from='qmacro.dyndns.org'>
  <query xmlns='jabber:iq:agents'>
    <agent jid='rss.qmacro.dyndns.org'>
      <name>RSS News Agent</name>
      <service>rss</service>
      <register/>
    </agent>
    <agent jid='conf.qmacro.dyndns.org'>
      <name>Public Chatrooms</name>
      <service>public</service>
      <groupchat/>
    </agent>
    <agent jid='users.jabber.org'>
      <name>Jabber User Directory</name>
      <service>jud</service>
      <search/>
      <register/>
    </agent>
  </query>
</iq>
While the jid attribute values for the RSS news agent and Public Chatrooms agents show that they are components connected to the Jabber server (i.e., they both have JIDs in the qmacro.dyndns.org "space," and so are connected to the Jabber server running at qmacro.dyndns.org), the jid attribute for the Jabber User Directory points to a name in the jabber.org "space"! This is a side effect of the power and foresight of Jabber's architectural design. If we connect a component - whether it's one we've built ourselves or one we've downloaded - we can give it an internal or external identity when we describe it in the jabber.xml configuration.
Example 9-10 and Example 9-11 show two examples of an instance definition for the RSS news agent component. Both specify potentially external identities. If the hostname rss.qmacro.dyndns.org is a valid and resolvable hostname, the component can be reached from anywhere, not just from within the Jabber server to which it is connected. If the hostname wasn't resolvable by the outside world, by having a simple name such as rss, it could be reached only from the Jabber server to which it was connected.
So let's say rss.qmacro.dyndns.org is a valid and resolvable hostname. If your client is connected to a Jabber server running on yourserver.org, this is what would happen if you were to send a registration request (an <iq/> element with a query qualified by the jabber:iq:register namespace) addressed to rss.qmacro.dyndns.org:
- Packet reaches JSM on yourserver.org: You send the IQ from your client, which is connected to your Jabber server's JSM. This is where the packet first arrives.
- Internal routing tables consulted: yourserver.org's jabberd looks in its list of internally registered destinations and doesn't find rss.qmacro.dyndns.org in there.
- Name resolved and routing established: yourserver.org's dnsrv (Hostname Resolution) service is used to resolve rss.qmacro.dyndns.org's address. Then, according to dnsrv's instance configuration (specifically the <resend>s2s</resend> part - see Section 4.9), the IQ is routed on to the s2s (Server to Server) component.
- Server-to-server connection established: yourserver.org establishes a connection to qmacro.dyndns.org via s2s and sends the IQ across the connection.
- Packet arrives at the RSS News Agent component on qmacro.dyndns.org: jabberd on qmacro.dyndns.org routes the packet correctly to rss.qmacro.dyndns.org.

So, what do we learn from this? As exemplified by the reference to the JUD running at users.jabber.org that comes predefined in the standard jabber.xml with the 1.4.1 version of the Jabber server, you can specify references to services, components, on other Jabber servers. If you take this RSS News Agent script and run it against your own Jabber server, there's no reason why you can't share its services with your friends who run their own Jabber servers.
The key is not the reference in the <browse/> section; it is the resolvability of component names as hostnames and the ability of Jabber servers to route packets to each other. The stanza in <browse/> just makes it easier for clients to automatically know about and be able to interact with services in general. Even if a service offered by a public component weren't described in the result of a jabber:iq:agents query, that wouldn't stop you from reaching it. Another client with an agent browser is Gabber (shown in Figure 9-7), a GTK-based Jabber client that allows you to specify a Jabber server name, in the Server to Browse field, so that you can direct the jabber:iq:agents queries to whatever server you want.
A good example of the distinction between the definition of a component within a <browse/> section and that component's reachability is the version query shown in Example 9-16. Regardless of whether the conference component at gnu.mine.nu was listed in the <browse/> section of qmacro.dyndns.org's JSM, the user dj was able to make a version query by specifying the component's address (a valid and resolvable hostname) in the IQ-get's to attribute.
A Conferencing component responds to a version query
SEND: <iq type='get' to='conf.gnu.mine.nu'>
  <query xmlns='jabber:iq:version'/>
</iq>
RECV: <iq type='result' to='dj@qmacro.dyndns.org/study' from='conf.gnu.mine.nu'>
  <query xmlns='jabber:iq:version'>
    <name>conference</name>
    <version>0.4</version>
    <os>Linux 2.2.13</os>
  </query>
</iq>
Polling the RSS sources
Next, we need some way of "interrupting" the process of checking for incoming elements and dispatching them to the callbacks, while we retrieve the RSS data and check for new items. Since we're writing this component in Perl, we could use the alarm() feature to set an alarm and have a subroutine invoked, to poll the RSS sources, when the alarm goes off. However, this recipe uses the Jabber::Connection library, which negates the need for an external alarm. Instead, we need to take the following steps each time we want to poll the RSS sources:
- Try to retrieve the source from the URL.
- Attempt to parse the source's XML.
- Go through the items until we come across one we've seen before; the ones we go through until then are deemed to be new. (We need a special case the first time around, so that we don't flood everyone with every item of a source the first time it is retrieved.)
- For new items, look in the registration database for the users that have registered for that source, construct a headline message like the one shown in Example 9-9, and send it to those users.
- Remember the first of the new items, so that we don't go beyond it next time.
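The "stop at the first previously seen item" step is the only subtle one. As a sketch (in Python rather than the script's Perl, with made-up item titles):

```python
def new_items(items, last_seen_title, first_poll):
    """Return the items that appeared since the last poll.

    items           -- newest-first list of item titles from the feed
    last_seen_title -- title remembered from the previous poll (or None)
    first_poll      -- True the first time a source is retrieved
    """
    if first_poll:
        return []  # don't flood everyone on the first retrieval
    fresh = []
    for title in items:
        if title == last_seen_title:
            break  # everything from here on was seen last time
        fresh.append(title)
    return fresh

feed = ["story-3", "story-2", "story-1"]  # newest first
print(new_items(feed, "story-1", first_poll=False))  # → ['story-3', 'story-2']
```

After sending headlines for the fresh items, the agent would remember `feed[0]` as the new high-water mark for the next poll.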
Other Differences Between Client and Component Programming
There are many differences between programming a component and programming a client. We're already aware of many of the major ones, described earlier in Section 9.3.1.1. There are, however, also more subtle differences that we need to bear in mind.
Components, unlike clients, do not connect to the JSM. They connect as a peer of the JSM. This means not only that they cannot partake of IM features made available by JSM's modules (see Section 4.4.4 for a list of these modules) but also that they must do more for themselves. This isn't as bad as it seems. Take store and forward, for example, a feature provided by the JSM's mod_offline module. While a message sent to a component won't be stored and forwarded if that component is not connected, a message sent from a component to a client will get stored and forwarded if the client is offline, because the message will be routed to the JSM (as specified by the [hostname] in the address), which can decide what action to take. Messages can be passed directly to the client if the user is online or can be stored and forwarded later when they're back online.
When constructing an element as a client, we should not specify a from attribute before it is sent; this is added by the JSM as it arrives to prevent JID spoofing. Because a component does not connect through the JSM, no "from-stamping" takes place; the component itself must stamp the element with a from attribute.
The addressing of a component is also slightly different. Client addresses reflect the fact that they're connected to the JSM, always having the form (with the resource being optional):

[user]@[hostname]/[resource]

while the basic address form of a component is simply:

[hostname]

This doesn't mean to say that the address of a component cannot have a user or a resource part. It's just that
all elemen | http://commons.oreilly.com/wiki/index.php?title=JabChapter_9&diff=24436&oldid=24434 | CC-MAIN-2014-15 | refinedweb | 14,737 | 50.57 |
Input method plugin for Qt5 not working
Hi my friends.
I have an input plugin named "libgcinplatforminputcontextplugin.so" inside ...../plugins/platforminputcontexts (Linux), and if I run a Qt5 program it does not activate this plugin after an "export QT_IM_MODULE=gcin". This directory is right, because if I run a Qt5 program after "export QT_IM_MODULE=vkim" it will activate another input method called "vkim". If the vkim plugin is not in this directory, the "vkim" input system also will not work. So this brings me to the question: why is the "gcin" plugin not working?
"gcin"'s main interface is written inside the header "gcin-qt5.h"; its interface is:
#include <QtGui/qpa/qplatforminputcontextplugin_p.h>
#include <QtCore/QStringList>
#include "qgcinplatforminputcontext.h"

class QGcinPlatformInputContextPlugin : public QPlatformInputContextPlugin
{
    Q_OBJECT
public:
    Q_PLUGIN_METADATA(IID "org.qt-project.Qt.QPlatformInputContextFactoryInterface.5.1" FILE "gcin.json")
    QStringList keys() const;
    QGcinPlatformInputContext *create(const QString& system, const QStringList& paramList);
};
Does somebody understand input contexts and can help me?
Could it be some other type of problem?
Inside the code of "keys()" and "create(...)" I write a QMessageBox::information(0, 0, "...") but it never shows anything after:
export QT_IM_MODULE=gcin
./some_qt5_program
Thanks for any help.
Daniel
Hi
Using
export QT_DEBUG_PLUGINS=1
I discovered where the problem is. Qt cannot find a library that this plugin depends on.
I resolved it partially by copying this "*.so" to the same folder as the executable.
Better would be to know which environment variable needs to be set so that Qt can find it, or to know where Qt goes to search for this library.
greetings, Daniel
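For anyone hitting the same thing, a minimal recipe (the library path is only an example - use wherever the plugin's dependency actually lives):

```shell
# Ask Qt to explain plugin loading, and point the dynamic linker at the
# directory holding the plugin's dependency. The path below is only an
# example - use wherever the missing library actually lives.
export QT_DEBUG_PLUGINS=1
export QT_IM_MODULE=gcin
export LD_LIBRARY_PATH="/usr/local/lib/gcin${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$QT_IM_MODULE"
```

Then launch the Qt program (or designer) from that same shell so it inherits the variables; with QT_DEBUG_PLUGINS=1, Qt prints exactly which plugin and dependency failed to resolve.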
- SGaist Lifetime Qt Champion
Hi,
LD_LIBRARY_PATH is probably what you are looking for
Hi SGaist.
I will give it a try. Maybe I will set this environment variable in the .profile file or somewhere else.
For the time being, I simply copied this *.so file to the installation folder of Qt:
/home/daniel/Qt/5.5/gcc_64
and it works, except for "designer", but I never use it.
Thanks for your idea.
Daniel
Now I tried with
export LD_LIBRARY_PATH ...
designer
... and it works (designer opens). So I will simply set this variable at the beginning of my session.
Thanks. Daniel
Hi
This is interesting, because I have 2 different places in which I install Qt, one in /opt and another in /home/daniel; some programs refer to one library place (/home/daniel/Qt/5.5/gcc_64/lib) and "designer" refers to /opt/Qt/5.5/gcc_64/lib.
I do not need to set LD_LIBRARY_PATH.
After copying this one file to /opt/Qt/5.5/gcc_64/lib, "designer" also works.
Daniel | https://forum.qt.io/topic/58924/input-method-plugin-for-qt5-not-working | CC-MAIN-2018-13 | refinedweb | 415 | 53.27 |
{-# LANGUAGE Rank2Types, TypeFamilies #-}
-----------------------------------------------------------------------------
-- |
-- Module      : Numeric.RAD
-- Copyright   : (c) Edward Kmett 2010
-- License     : BSD3
-- Maintainer  : ekmett@gmail.com
-- Stability   : experimental
-- Portability : GHC only
--
-- Reverse Mode Automatic Differentiation via overloading to perform
-- nonstandard interpretation that replaces original numeric type with
-- a bundle that contains a value of the original type and the tape that
-- will be used to recover the value of the sensitivity.
--
-- This package uses StableNames internally to recover sharing information from
-- the tape to avoid combinatorial explosion, and thus runs asymptotically faster
-- than it could without such sharing information, but the use of side-effects
-- contained herein is benign.
--
-- The API has been built to be close to the design of 'Numeric.FAD' from the 'fad' package
-- by Barak Pearlmutter and Jeffrey Mark Siskind and contains portions of that code, with
-- minor liberties taken.
-- ----------------------------------------------------------------------------- module Numeric.RAD ( -- * First-Order Reverse Mode Automatic Differentiation RAD , lift -- * First-Order Differentiation Operators , diffUU , diffUF , diff2UU , diff2UF -- * Common access patterns , diff , diff2 , jacobian , jacobian2 , grad , grad2 -- * Optimization Routines , zeroNewton , inverseNewton , fixedPointNewton , extremumNewton , argminNaiveGradient ) where import Prelude hiding (mapM) import Control.Applicative (Applicative(..),(<$>)) import Control.Monad.ST import Control.Monad (forM_) import Data.List (foldl') import Data.Array.ST import Data.Array import Data.Ix import Text.Show import Data.Graph (graphFromEdges', topSort, Vertex) import Data.Reify (reifyGraph, MuRef(..)) import qualified Data.Reify.Graph as Reified import Data.Traversable (Traversable, mapM) import System.IO.Unsafe (unsafePerformIO) newtype RAD s a = RAD (Tape a (RAD s a)) data Tape a t = Literal a | Var a Int | Binary a a a t t | Unary a a t instance Show a => Show (RAD s a) where showsPrec d = disc1 (showsPrec d) -- | The 'lift' function injects a primal number into the RAD data type with a 0 derivative. -- If reverse-mode AD numbers formed a monad, then 'lift' would be 'return'. lift :: a -> RAD s a lift = RAD . Literal {-# INLINE lift #-} primal :: RAD s a -> a primal (RAD (Literal y)) = y primal (RAD (Var y _)) = y primal (RAD (Binary y _ _ _ _)) = y primal (RAD (Unary y _ _)) = y {-# INLINE primal #-} var :: a -> Int -> RAD s a var a v = RAD (Var a v) -- TODO: A higher-order data-reify -- mapDeRef :: (Applicative f) => (forall a . 
Num a => RAD s a -> f (u a)) -> a -> f (Tape a (u a)) instance MuRef (RAD s a) where type DeRef (RAD s a) = Tape a mapDeRef f (RAD (Literal a)) = pure (Literal a) mapDeRef f (RAD (Var a v)) = pure (Var a v) mapDeRef f (RAD (Binary a jb jc x1 x2)) = Binary a jb jc <$> f x1 <*> f x2 mapDeRef f (RAD (Unary a j x)) = Unary a j <$> f x on :: (a -> a -> c) -> (b -> a) -> b -> b -> c on f g a b = f (g a) (g b) instance Eq a => Eq (RAD s a) where (==) = (==) `on` primal instance Ord a => Ord (RAD s a) where compare = compare `on` primal instance Bounded a => Bounded (RAD s a) where maxBound = lift maxBound minBound = lift minBound unary_ :: (a -> a) -> a -> RAD s a -> RAD s a unary_ f _ (RAD (Literal b)) = RAD (Literal (f b)) unary_ f g b = RAD (Unary (disc1 f b) g b) {-# INLINE unary_ #-} unary :: (a -> a) -> (a -> a) -> RAD s a -> RAD s a unary f _ (RAD (Literal b)) = RAD (Literal (f b)) unary f g b = RAD (Unary (disc1 f b) (disc1 g b) b) {-# INLINE unary #-} binary_ :: (a -> a -> a) -> a -> a -> RAD s a -> RAD s a -> RAD s a binary_ f _ _ (RAD (Literal b)) (RAD (Literal c)) = RAD (Literal (f b c)) binary_ f gb gc b c = RAD (Binary (f vb vc) gb gc b c) where vb = primal b; vc = primal c {-# INLINE binary_ #-} -- binary_ with partials binary :: (a -> a -> a) -> (a -> a) -> (a -> a) -> RAD s a -> RAD s a -> RAD s a binary f _ _ (RAD (Literal b)) (RAD (Literal c)) = RAD (Literal (f b c)) binary f gb gc b c = RAD (Binary (f vb vc) (gb vc) (gc vb) b c) where vb = primal b; vc = primal c {-# INLINE binary #-} disc1 :: (a -> b) -> RAD s a -> b disc1 f x = f (primal x) {-# INLINE disc1 #-} disc2 :: (a -> b -> c) -> RAD s a -> RAD s b -> c disc2 f x y = f (primal x) (primal y) {-# INLINE disc2 #-} disc3 :: (a -> b -> c -> d) -> RAD s a -> RAD s b -> RAD s c -> d disc3 f x y z = f (primal x) (primal y) (primal z) {-# INLINE disc3 #-} from :: Num a => RAD s a -> a -> RAD s a from (RAD (Literal a)) x = RAD (Literal x) from a x = RAD (Unary x 1 a) fromBy :: Num a => RAD s a -> RAD s 
a -> Int -> a -> RAD s a fromBy (RAD (Literal a)) _ _ x = RAD (Literal x) fromBy a delta n x = RAD (Binary x 1 (fromIntegral n) a delta) instance (Num a, Enum a) => Enum (RAD s a) where succ = unary_ succ 1 pred = unary_ pred 1 toEnum = lift . toEnum fromEnum = disc1 fromEnum -- the enumerated results vary with the lower bound and so their derivatives reflect that enumFrom a = from a <$> disc1 enumFrom a enumFromTo a b = from a <$> disc2 enumFromTo a b -- these results vary with respect to both the lower bound and the delta between that and the second argument enumFromThen a b = zipWith (fromBy a delta) [0..] $ disc2 enumFromThen a b where delta = b - a enumFromThenTo a b c = zipWith (fromBy a delta) [0..] $ disc3 enumFromThenTo a b c where delta = b - a instance Num a => Num (RAD s a) where fromInteger = lift . fromInteger (+) = binary_ (+) 1 1 (-) = binary_ (-) 1 (-1) negate = unary_ negate (-1) (*) = binary (*) id id -- incorrect if the argument is complex abs = unary abs signum signum = lift . signum . 
primal -- notComplex :: Num a => a -> Bool -- notComplex x = s == 0 || s == 1 || s == -1 -- where s = signum x instance Real a => Real (RAD s a) where toRational = disc1 toRational instance RealFloat a => RealFloat (RAD s a) where floatRadix = disc1 floatRadix floatDigits = disc1 floatDigits floatRange = disc1 floatRange decodeFloat = disc1 decodeFloat encodeFloat m e = lift (encodeFloat m e) scaleFloat n = unary_ (scaleFloat n) (scaleFloat n 1) isNaN = disc1 isNaN isInfinite = disc1 isInfinite isDenormalized = disc1 isDenormalized isNegativeZero = disc1 isNegativeZero isIEEE = disc1 isIEEE exponent x | m == 0 = 0 | otherwise = n + floatDigits x where (m,n) = decodeFloat x significand x = unary_ significand (scaleFloat (- floatDigits x) 1) x atan2 (RAD (Literal x)) (RAD (Literal y)) = RAD (Literal (atan2 x y)) atan2 x y = RAD (Binary (atan2 vx vy) (vy*r) (-vx*r) x y) where vx = primal x vy = primal y r = recip (vx^2 + vy^2) instance RealFrac a => RealFrac (RAD s a) where properFraction (RAD (Literal a)) = (w, RAD (Literal p)) where (w, p) = properFraction a properFraction a = (w, RAD (Unary p 1 a)) where (w, p) = properFraction (primal a) truncate = disc1 truncate round = disc1 truncate ceiling = disc1 truncate floor = disc1 truncate instance Fractional a => Fractional (RAD s a) where (/) = binary (/) recip id -- recip = unary recip (const . negate . (^2)) fromRational r = lift $ fromRational r instance Floating a => Floating (RAD s a) where pi = lift pi exp = unary exp exp log = unary log recip sqrt = unary sqrt (recip . (2*) . sqrt) RAD (Literal x) ** RAD (Literal y) = lift (x ** y) x ** y = RAD (Binary vz (vy*vz/vx) (vz*log vx) x y) where vx = primal x vy = primal y vz = vx ** vy sin = unary sin cos cos = unary cos (negate . sin) asin = unary asin (recip . sqrt . (1-) . (^2)) acos = unary acos (negate . recip . sqrt . (1-) . (^2)) atan = unary atan (recip . (1+) . (^2)) sinh = unary sinh cosh cosh = unary cosh sinh asinh = unary asinh (recip . sqrt . (1+) . 
(^2)) acosh = unary acosh (recip . sqrt . (-1+) . (^2)) atanh = unary atanh (recip . (1-) . (^2)) -- back propagate sensitivities along the tape. backprop :: (Ix t, Ord t, Num a) => (Vertex -> (Tape a t, t, [t])) -> STArray s t a -> Vertex -> ST s () backprop vmap ss v = do case node of Unary _ g b -> do da <- readArray ss i db <- readArray ss b writeArray ss b (db + g*da) Binary _ gb gc b c -> do da <- readArray ss i db <- readArray ss b writeArray ss b (db + gb*da) dc <- readArray ss c writeArray ss c (dc + gc*da) _ -> return () where (node, i, _) = vmap v runTape :: Num a => (Int, Int) -> RAD s a -> Array Int a runTape vbounds tape = accumArray (+) 0 vbounds [ (id, sensitivities ! ix) | (ix, Var _ id) <- xs ] where Reified.Graph xs start = unsafePerformIO $ reifyGraph tape (g, vmap) = graphFromEdges' (edgeSet <$> filter nonConst xs) sensitivities = runSTArray $ do ss <- newArray (sbounds xs) 0 writeArray ss start 1 forM_ (topSort g) $ backprop vmap ss return ss sbounds ((a,_):as) = foldl' (\(lo,hi) (b,_) -> (min lo b, max hi b)) (a,a) as edgeSet (i, t) = (t, i, successors t) nonConst (_, Literal{}) = False nonConst _ = True successors (Unary _ _ b) = [b] successors (Binary _ _ _ b c) = [b,c] successors _ = [] -- this isn't _quite_ right, as it should allow negative zeros to multiply through -- but then we have to know what an isNegativeZero looks like, and that rather limits -- our underlying data types we can permit. -- this approach however, allows for the occasional cycles to be resolved in the -- dependency graph by breaking the cycle on 0 edges. -- test x = y where y = y * 0 + x -- successors (Unary _ db b) = edge db b [] -- successors (Binary _ db dc b c) = edge db b (edge dc c []) -- successors _ = [] -- edge 0 x xs = xs -- edge _ x xs = x : xs d :: Num a => RAD s a -> a d r = runTape (0,0) r ! 0 d2 :: Num a => RAD s a -> (a,a) d2 r = (primal r, d r) -- | The 'diffUU' function calculates the first derivative of a -- scalar-to-scalar function. 
diffUU :: Num a => (forall s. RAD s a -> RAD s a) -> a -> a diffUU f a = d $ f (var a 0) -- | The 'diffUF' function calculates the first derivative of -- scalar-to-nonscalar function. diffUF :: (Functor f, Num a) => (forall s. RAD s a -> f (RAD s a)) -> a -> f a diffUF f a = d <$> f (var a 0) -- diffMU :: Num a => (forall s. [RAD s a] -> RAD s a) -> [a] -> [a] -> a -- TODO: finish up diffMU and their ilk -- avoid dependency on MTL newtype S a = S { runS :: Int -> (a,Int) } instance Monad S where return a = S (\s -> (a,s)) S g >>= f = S (\s -> let (a,s') = g s in runS (f a) s') bind :: Traversable f => f a -> (f (RAD s a), (Int,Int)) bind xs = (r,(0,s)) where (r,s) = runS (mapM freshVar xs) 0 freshVar a = S (\s -> let s' = s + 1 in s' `seq` (RAD (Var a s), s')) unbind :: Functor f => f (RAD s b) -> Array Int a -> f a unbind xs ys = fmap (\(RAD (Var _ i)) -> ys ! i) xs -- | The 'diff2UU' function calculates the value and derivative, as a -- pair, of a scalar-to-scalar function. diff2UU :: Num a => (forall s. RAD s a -> RAD s a) -> a -> (a, a) diff2UU f a = d2 $ f (var a 0) -- | Note that the signature differs from that used in Numeric.FAD, because while you can always -- 'unzip' an arbitrary functor, not all functors can be zipped. diff2UF :: (Functor f, Num a) => (forall s. RAD s a -> f (RAD s a)) -> a -> f (a, a) diff2UF f a = d2 <$> f (var a 0) -- | The 'diff' function is a synonym for 'diffUU'. diff :: Num a => (forall s. RAD s a -> RAD s a) -> a -> a diff = diffUU -- | The 'diff2' function is a synonym for 'diff2UU'. diff2 :: Num a => (forall s. RAD s a -> RAD s a) -> a -> (a, a) diff2 = diff2UU -- requires the input list to be finite in length grad :: (Traversable f, Num a) => (forall s. f (RAD s a) -> RAD s a) -> f a -> f a grad f as = unbind s (runTape bounds $ f s) where (s,bounds) = bind as -- compute the primal and gradient grad2 :: (Traversable f, Num a) => (forall s. 
f (RAD s a) -> RAD s a) -> f a -> (a, f a) grad2 f as = (primal r, unbind s (runTape bounds r)) where (s,bounds) = bind as r = f s -- | The 'jacobian' function calcualtes the Jacobian of a -- nonscalar-to-nonscalar function, using m invocations of reverse AD, -- where m is the output dimensionality. When the output dimensionality is -- significantly greater than the input dimensionality you should use 'Numeric.FAD.jacobian' instead. jacobian :: (Traversable f, Functor g, Num a) => (forall s. f (RAD s a) -> g (RAD s a)) -> f a -> g (f a) jacobian f as = unbind s . runTape bounds <$> f s where (s, bounds) = bind as -- | The 'jacobian2' function calcualtes both the result and the Jacobian of a -- nonscalar-to-nonscalar function, using m invocations of reverse AD, -- where m is the output dimensionality. -- 'fmap snd' on the result will recover the result of 'jacobian' jacobian2 :: (Traversable f, Functor g, Num a) => (forall s. f (RAD s a) -> g (RAD s a)) -> f a -> g (a, f a) jacobian2 f as = row <$> f s where (s, bounds) = bind as row a = (primal a, unbind s (runTape bounds a)) -- | The 'zeroNewton' function finds a zero of a scalar function using -- Newton's method; its output is a stream of increasingly accurate -- results. (Modulo the usual caveats.) -- -- TEST CASE: -- @take 10 $ zeroNewton (\\x->x^2-4) 1 -- converge to 2.0@ -- -- TEST CASE -- :module Data.Complex Numeric.RAD -- @take 10 $ zeroNewton ((+1).(^2)) (1 :+ 1) -- converge to (0 :+ 1)@ -- zeroNewton :: Fractional a => (forall s. RAD s a -> RAD s a) -> a -> [a] zeroNewton f x0 = iterate (\x -> let (y,y') = diff2UU f x in x - y/y') x0 -- | The 'inverseNewton' function inverts a scalar function using -- Newton's method; its output is a stream of increasingly accurate -- results. (Modulo the usual caveats.) -- -- TEST CASE: -- @take 10 $ inverseNewton sqrt 1 (sqrt 10) -- converge to 10@ -- inverseNewton :: Fractional a => (forall s. 
RAD s a -> RAD s a) -> a -> a -> [a] inverseNewton f x0 y = zeroNewton (\x -> f x - lift y) x0 -- | The 'fixedPointNewton' function find a fixedpoint of a scalar -- function using Newton's method; its output is a stream of -- increasingly accurate results. (Modulo the usual caveats.) fixedPointNewton :: Fractional a => (forall s. RAD s a -> RAD s a) -> a -> [a] fixedPointNewton f = zeroNewton (\x -> f x - x) -- | The 'extremumNewton' function finds an extremum of a scalar -- function using Newton's method; produces a stream of increasingly -- accurate results. (Modulo the usual caveats.) extremumNewton :: Fractional a => (forall s t. RAD t (RAD s a) -> RAD t (RAD s a)) -> a -> [a] extremumNewton f x0 = zeroNewton (diffUU f) x0 -- | The 'argminNaiveGradient' function performs a multivariate -- optimization, based on the naive-gradient-descent in the file -- @stalingrad\/examples\/flow-tests\/pre-saddle-1a.vlad@ from the -- VLAD compiler Stalingrad sources. Its output is a stream of -- increasingly accurate results. (Modulo the usual caveats.) -- This is /O(n)/ faster than 'Numeric.FAD.argminNaiveGradient' argminNaiveGradient :: (Fractional a, Ord a) => (forall s. [RAD s a] -> RAD s a) -> [a] -> [[a]] argminNaiveGradient f x0 = let gf = grad f loop x fx gx eta i = -- should check gx = 0 here let x1 = zipWith (+) x (map ((-eta)*) gx) fx1 = lowerFU f x1 gx1 = gf x1 in if eta == 0 then [] else if (fx1 > fx) then loop x fx gx (eta/2) 0 else if all (==0) gx then [] -- else if fx1 == fx then loop x1 fx1 gx1 eta (i+1) else x1:(if (i==10) then loop x1 fx1 gx1 (eta*2) 0 else loop x1 fx1 gx1 eta (i+1)) in loop x0 (lowerFU f x0) (gf x0) 0.1 0 {- lowerUU :: (forall s. RAD s a -> RAD s b) -> a -> b lowerUU f = primal . f . lift lowerUF :: Functor f => (forall s. RAD s a -> f (RAD s b)) -> a -> f b lowerUF f = fmap primal . f . lift lowerFF :: (Functor f, Functor g) => (forall s. f (RAD s a) -> g (RAD s b)) -> f a -> g b lowerFF f = fmap primal . f . 
fmap lift -} lowerFU :: Functor f => (forall s. f (RAD s a) -> RAD s b) -> f a -> b lowerFU f = primal . f . fmap lift | http://hackage.haskell.org/package/rad-0.1.6/docs/src/Numeric-RAD.html | CC-MAIN-2014-41 | refinedweb | 2,680 | 61.19 |
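The exported operators above can be exercised with a short driver. This is a sketch, not part of the module itself: it assumes the `rad` package builds and that `Numeric.RAD` is on the load path; the expected values follow from the derivative rules defined in the instances (e.g. the product rule in the `Num` instance).

```haskell
-- Hypothetical driver module exercising Numeric.RAD (not part of the library).
import Numeric.RAD

main :: IO ()
main = do
    -- diff: d/dx (x^2 + 3x) at x = 2 should be 2*2 + 3 = 7
    print (diff (\x -> x ^ 2 + 3 * x) (2 :: Double))
    -- diff2: value and derivative of sin at 0, i.e. (0.0, 1.0)
    print (diff2 sin (0 :: Double))
    -- grad: gradient of f [x, y] = x * y at [1, 2] should be [2, 1]
    print (grad (\[x, y] -> x * y) [1, 2 :: Double])
    -- zeroNewton: a stream of iterates converging to a zero of x^2 - 4
    print (take 5 (zeroNewton (\x -> x ^ 2 - 4) (1 :: Double)))
```

Note the rank-2 types in the signatures: each function argument must be polymorphic in the tape parameter `s`, which prevents RAD values from leaking out of a differentiation scope.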